Hello list,

I have a problem with my Heartbeat and STONITH configuration:
when I disconnect both private links, both nodes get STONITHed.

Is there a configuration option to add a delay (something like a
sleep()) before STONITH executes, so that only one node fences the other?

I have two HP DL380 servers, each with an iLO card.
The iLO card of Node A is connected directly to eth4 on Node B, and
vice versa. (The advantage of this is that I can use the same setup
for both cards: an AutoYaST installation with SLES 10 SP2.)

For STONITH I use the external/riloe plugin.
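One workaround I have seen discussed (this is not a stock Heartbeat option, so treat it as a sketch) is to wrap the fencing plugin in a small script that sleeps before destructive actions, and to configure a longer delay on one node than on the other, so that in a full split-brain one node always wins the shoot-out. The wrapper itself, the STONITH_DELAY variable, and the plugin path in the comment are all hypothetical, not part of external/riloe:

```shell
#!/bin/sh
# Hypothetical wrapper around external/riloe: delay destructive actions so
# that the node with the shorter delay fences first in a split-brain.
# STONITH_DELAY would be set differently on each node (e.g. 0 on nodea,
# 15 on nodeb); both values and the plugin path below are assumptions.
ACTION=${1:-status}
DELAY=${STONITH_DELAY:-10}

case "$ACTION" in
  poweroff|reset|off)
    sleep "$DELAY"   # give the peer's fencing request a head start
    ;;
esac

# A real wrapper would now hand off to the actual plugin, e.g.:
#   exec /usr/lib/stonith/plugins/external/riloe "$ACTION"   (path assumed)
echo "delayed $ACTION by ${DELAY}s, handing off to external/riloe"
```

The non-destructive actions (status, gethosts, etc.) are passed through without delay so monitoring is unaffected.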

Output of rpm -qa | grep heartbeat:
heartbeat-pils-2.1.3-0.9
heartbeat-2.1.3-0.9
heartbeat-ldirectord-2.1.3-0.9
heartbeat-stonith-2.1.3-0.9
yast2-heartbeat-2.13.13-0.3
heartbeat-cmpi-2.1.3-0.9


A snippet of my cib.xml:

      <primitive id="resource_shutdown_nodea" class="stonith" type="external/riloe" provider="heartbeat">
        <instance_attributes id="resource_shutdown_nodea_instance_attrs">
          <attributes>
            <nvpair id="resource_shutdown_nodea_instance_attrs_hostlist" name="hostlist" value="nodea"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_hostname" name="ilo_hostname" value="10.0.2.1"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_user" name="ilo_user" value="user"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_password" name="ilo_password" value="XXXX"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_protocol" name="ilo_protocol" value="2.0"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_powerdown_method" name="ilo_powerdown_method" value="button"/>
            <nvpair id="resource_shutdown_nodea_instance_attrs_ilo_can_reset" name="ilo_can_reset" value="1"/>
            <nvpair id="resource_shutdown_nodea_attr_target_role" name="target_role" value="started"/>
          </attributes>
        </instance_attributes>
        <operations/>
      </primitive>
      <primitive id="resource_shutdown_nodeb" class="stonith" type="external/riloe" provider="heartbeat">
        <instance_attributes id="resource_shutdown_nodeb_instance_attrs">
          <attributes>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_hostlist" name="hostlist" value="nodeb"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_hostname" name="ilo_hostname" value="10.0.2.1"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_user" name="ilo_user" value="user"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_password" name="ilo_password" value="XXXX"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_protocol" name="ilo_protocol" value="2.0"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_powerdown_method" name="ilo_powerdown_method" value="button"/>
            <nvpair id="resource_shutdown_nodeb_instance_attrs_ilo_can_reset" name="ilo_can_reset" value="1"/>
            <nvpair id="resource_shutdown_nodeb_attr_target_role" name="target_role" value="started"/>
          </attributes>
        </instance_attributes>
        <operations/>
      </primitive>

...
    <constraints>
      <rsc_location id="location_shutdown_nodea" rsc="resource_shutdown_nodea">
        <rule id="prefered_location_shutdown_nodea" score="-INFINITY">
          <expression attribute="#uname" id="prefered_location_shutdown_nodea_uname" operation="eq" value="nodea"/>
        </rule>
      </rsc_location>
      <rsc_location id="location_shutdown_nodeb" rsc="resource_shutdown_nodeb">
        <rule id="prefered_location_shutdown_nodeb" score="-INFINITY">
          <expression attribute="#uname" id="prefered_location_shutdown_nodeb_uname" operation="eq" value="nodeb"/>
        </rule>
      </rsc_location>
    </constraints>


And here is some logging:

NodeA:

Jun  4 07:46:31 nodea tengine: [5535]: info: te_connect_stonith: Attempting connection to fencing daemon...
Jun  4 07:46:31 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:31 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:31 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:31 nodea crmd: [8177]: info: update_dc: Set DC to nodea (2.0)
Jun  4 07:46:31 nodea crmd: [8177]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  4 07:46:31 nodea crmd: [8177]: info: do_state_transition: All 1 cluster nodes responded to the join offer.
Jun  4 07:46:31 nodea crmd: [8177]: info: update_attrd: Connecting to attrd...
Jun  4 07:46:31 nodea cib: [8173]: info: sync_our_cib: Syncing CIB to all peers
Jun  4 07:46:31 nodea attrd: [8176]: info: attrd_local_callback: Sending full refresh
Jun  4 07:46:31 nodea cib: [8173]: info: log_data_element: cib:diff: - <cib epoch="5" dc_uuid="04517348-6706-4d1c-8d8d-043441cdf70b" num_updates="25"/>
Jun  4 07:46:31 nodea cib: [8173]: info: log_data_element: cib:diff: + <cib epoch="6" dc_uuid="2db0aaed-6cbb-4187-9555-d1af5d76ba42" num_updates="1"/>
Jun  4 07:46:31 nodea cib: [5537]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jun  4 07:46:31 nodea cib: [5537]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jun  4 07:46:31 nodea cib: [5537]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
Jun  4 07:46:31 nodea cib: [5537]: info: write_cib_contents: Wrote version 0.6.2 of the CIB to disk (digest: 169f7716593f62cb48760cb104a9fe72)
Jun  4 07:46:32 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:32 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:32 nodea cib: [5537]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jun  4 07:46:32 nodea cib: [5537]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: /var/lib/heartbeat/crm/cib.xml.sig.last)
Jun  4 07:46:32 nodea crmd: [8177]: info: update_dc: Set DC to nodea (2.0)
Jun  4 07:46:32 nodea crmd: [8177]: info: do_dc_join_ack: join-1: Updating node state to member for nodea
Jun  4 07:46:32 nodea crmd: [8177]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  4 07:46:32 nodea crmd: [8177]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jun  4 07:46:32 nodea tengine: [5535]: info: te_connect_stonith: Connected
Jun  4 07:46:32 nodea tengine: [5535]: info: update_abort_priority: Abort priority upgraded to 1000000
Jun  4 07:46:32 nodea tengine: [5535]: info: update_abort_priority: 'DC Takeover' abort superceeded
Jun  4 07:46:32 nodea pengine: [5536]: WARN: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  4 07:46:32 nodea pengine: [5536]: WARN: determine_online_status_fencing: Node nodeb (04517348-6706-4d1c-8d8d-043441cdf70b) is un-expectedly down
Jun  4 07:46:32 nodea pengine: [5536]: info: determine_online_status_fencing: ha_state=dead, ccm_state=false, crm_state=online, join_state=down, expected=member
Jun  4 07:46:32 nodea pengine: [5536]: WARN: determine_online_status: Node nodeb is unclean
Jun  4 07:46:32 nodea pengine: [5536]: info: determine_online_status: Node nodea is online
Jun  4 07:46:32 nodea pengine: [5536]: notice: group_print: Resource Group: group_test
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_OracleInstance (ocf::heartbeat:OracleInstance): Stopped
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_clusterip (ocf::heartbeat:IPaddr2): Stopped
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_oralsnr (ocf::heartbeat:oralsnr): Stopped
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_test (ocf::heartbeat:test): Stopped
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_shutdown_nodea (stonith:external/riloe): Started nodeb
Jun  4 07:46:32 nodea pengine: [5536]: notice: native_print: resource_shutdown_nodeb (stonith:external/riloe): Started nodea
Jun  4 07:46:32 nodea pengine: [5536]: WARN: native_color: Resource resource_OracleInstance cannot run anywhere
Jun  4 07:46:32 nodea pengine: [5536]: WARN: native_color: Resource resource_clusterip cannot run anywhere
Jun  4 07:46:32 nodea haclient: on_event: from message queue: evt:cib_changed
Jun  4 07:46:32 nodea pengine: [5536]: WARN: native_color: Resource resource_oralsnr cannot run anywhere
Jun  4 07:46:32 nodea pengine: [5536]: WARN: native_color: Resource resource_test cannot run anywhere
Jun  4 07:46:32 nodea pengine: [5536]: WARN: native_color: Resource resource_shutdown_nodea cannot run anywhere
Jun  4 07:46:32 nodea pengine: [5536]: WARN: custom_action: Action resource_shutdown_nodea_stop_0 on nodeb is unrunnable (offline)
Jun  4 07:46:32 nodea pengine: [5536]: WARN: custom_action: Marking node nodeb unclean
Jun  4 07:46:32 nodea pengine: [5536]: notice: NoRoleChange: Leave resource resource_shutdown_nodeb (nodea)
Jun  4 07:46:32 nodea pengine: [5536]: WARN: stage6: Scheduling Node nodeb for STONITH
Jun  4 07:46:32 nodea pengine: [5536]: info: native_stop_constraints: resource_shutdown_nodea_stop_0 is implicit after nodeb is fenced
Jun  4 07:46:32 nodea crmd: [8177]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
Jun  4 07:46:32 nodea tengine: [5535]: info: unpack_graph: Unpacked transition 0: 4 actions in 4 synapses
Jun  4 07:46:32 nodea tengine: [5535]: info: te_pseudo_action: Pseudo action 8 fired and confirmed
Jun  4 07:46:32 nodea tengine: [5535]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Jun  4 07:46:32 nodea tengine: [5535]: info: te_fence_node: Executing poweroff fencing operation (12) on nodeb (timeout=30000)
Jun  4 07:46:32 nodea pengine: [5536]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/heartbeat/pengine/pe-warn-0.bz2
Jun  4 07:46:32 nodea stonithd: [8175]: info: client tengine [pid: 5535] want a STONITH operation POWEROFF to node nodeb.
Jun  4 07:46:32 nodea pengine: [5536]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun  4 07:46:32 nodea stonithd: [8175]: info: stonith_operate_locally::2368: sending fencing op (POWEROFF) for nodeb to device external (rsc_id=resource_shutdown_nodeb, pid=5538)
Jun  4 07:46:35 nodea su: (to root) root on none
Jun  4 07:46:35 nodea su: (to root) root on none
Jun  4 07:46:35 nodea shutdown[5631]: shutting down for system halt
Jun  4 07:46:35 nodea init: Switching to runlevel: 0
Jun 4 07:46:36 nodea heartbeat: [4781]: info: killing /usr/lib/heartbeat/mgmtd -v process group 8178 with signal 15
Jun  4 07:46:36 nodea mgmtd: [8178]: info: mgmtd is shutting down
Jun 4 07:46:36 nodea mgmtd: [8178]: ERROR: Connection to the CIB terminated... exiting
Jun  4 07:46:36 nodea haclient: on_event:evt:disconnected
Jun 4 07:46:36 nodea heartbeat: [4781]: info: killing /usr/lib/heartbeat/crmd process group 8177 with signal 15
Jun  4 07:46:36 nodea crmd: [8177]: info: crm_shutdown: Requesting shutdown
Jun  4 07:46:36 nodea crmd: [8177]: WARN: do_log: [[FSA]] Input I_SHUTDOWN from crm_shutdown() received in state (S_TRANSITION_ENGINE)
Jun  4 07:46:36 nodea crmd: [8177]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Jun  4 07:46:36 nodea crmd: [8177]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jun  4 07:46:36 nodea crmd: [8177]: info: do_shutdown_req: Sending shutdown request to DC: nodea
Jun  4 07:46:36 nodea crmd: [8177]: info: do_shutdown_req: Processing shutdown locally
Jun  4 07:46:36 nodea crmd: [8177]: info: handle_shutdown_request: Creating shutdown request for nodea
Jun  4 07:46:36 nodea tengine: [5535]: info: extract_event: Aborting on shutdown attribute for 2db0aaed-6cbb-4187-9555-d1af5d76ba42
Jun  4 07:46:36 nodea tengine: [5535]: info: update_abort_priority: Abort priority upgraded to 1000000
Jun  4 07:46:36 nodea tengine: [5535]: info: update_abort_priority: Abort action 0 superceeded by 2


NodeB:

Jun  4 07:46:15 nodeb heartbeat: [4756]: WARN: node nodea: is dead
Jun  4 07:46:15 nodeb heartbeat: [4756]: info: Link nodea:bond1 dead.
Jun  4 07:46:15 nodeb ccm: [8146]: debug: quorum plugin: majority
Jun  4 07:46:15 nodeb ccm: [8146]: debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
Jun  4 07:46:15 nodeb ccm: [8146]: debug: total_node_count=2, total_quorum_votes=200
Jun  4 07:46:15 nodeb ccm: [8146]: debug: quorum plugin: twonodes
Jun  4 07:46:15 nodeb ccm: [8146]: debug: cluster:linux-ha, member_count=1, member_quorum_votes=100
Jun  4 07:46:15 nodeb ccm: [8146]: debug: total_node_count=2, total_quorum_votes=200
Jun  4 07:46:15 nodeb ccm: [8146]: info: Break tie for 2 nodes cluster
Jun  4 07:46:15 nodeb crmd: [8151]: notice: crmd_ha_status_callback: Status update: Node nodea now has status [dead]
Jun  4 07:46:15 nodeb cib: [8147]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Jun  4 07:46:15 nodeb cib: [8147]: info: mem_handle_event: no mbr_track info
Jun  4 07:46:15 nodeb cib: [8147]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Jun  4 07:46:15 nodeb cib: [8147]: info: mem_handle_event: instance=3, nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=3
Jun  4 07:46:15 nodeb cib: [8147]: info: cib_ccm_msg_callback: LOST: nodea
Jun  4 07:46:15 nodeb cib: [8147]: info: cib_ccm_msg_callback: PEER: nodeb
Jun  4 07:46:15 nodeb crmd: [8151]: info: mem_handle_event: Got an event OC_EV_MS_INVALID from ccm
Jun  4 07:46:15 nodeb crmd: [8151]: info: mem_handle_event: no mbr_track info
Jun  4 07:46:15 nodeb crmd: [8151]: info: mem_handle_event: Got an event OC_EV_MS_NEW_MEMBERSHIP from ccm
Jun  4 07:46:15 nodeb crmd: [8151]: info: mem_handle_event: instance=3, nodes=1, new=0, lost=1, n_idx=0, new_idx=1, old_idx=3
Jun  4 07:46:15 nodeb crmd: [8151]: info: crmd_ccm_msg_callback: Quorum (re)attained after event=NEW MEMBERSHIP (id=3)
Jun  4 07:46:15 nodeb crmd: [8151]: info: ccm_event_detail: NEW MEMBERSHIP: trans=3, nodes=1, new=0, lost=1 n_idx=0, new_idx=1, old_idx=3
Jun  4 07:46:15 nodeb crmd: [8151]: info: ccm_event_detail: CURRENT: nodeb [nodeid=1, born=3]
Jun  4 07:46:15 nodeb crmd: [8151]: info: ccm_event_detail: LOST: nodea [nodeid=0, born=2]
Jun  4 07:46:15 nodeb tengine: [8232]: WARN: match_down_event: No match for shutdown action on 2db0aaed-6cbb-4187-9555-d1af5d76ba42
Jun  4 07:46:15 nodeb tengine: [8232]: info: extract_event: Stonith/shutdown of 2db0aaed-6cbb-4187-9555-d1af5d76ba42 not matched
Jun  4 07:46:15 nodeb tengine: [8232]: info: update_abort_priority: Abort priority upgraded to 1000000
Jun  4 07:46:15 nodeb crmd: [8151]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE origin=route_message ]
Jun  4 07:46:15 nodeb crmd: [8151]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jun  4 07:46:15 nodeb haclient: on_event:evt:cib_changed
Jun  4 07:46:15 nodeb haclient: on_event:evt:cib_changed
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  4 07:46:15 nodeb pengine: [8233]: info: determine_online_status: Node nodeb is online
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: determine_online_status_fencing: Node nodea (2db0aaed-6cbb-4187-9555-d1af5d76ba42) is un-expectedly down
Jun  4 07:46:15 nodeb pengine: [8233]: info: determine_online_status_fencing: ha_state=dead, ccm_state=false, crm_state=online, join_state=down, expected=member
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: determine_online_status: Node nodea is unclean
Jun  4 07:46:15 nodeb pengine: [8233]: notice: group_print: Resource Group: group_test
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_OracleInstance (ocf::heartbeat:OracleInstance): Stopped
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_clusterip (ocf::heartbeat:IPaddr2): Stopped
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_oralsnr (ocf::heartbeat:oralsnr): Stopped
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_test (ocf::heartbeat:test): Stopped
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_shutdown_nodea (stonith:external/riloe): Started nodeb
Jun  4 07:46:15 nodeb pengine: [8233]: notice: native_print: resource_shutdown_nodeb (stonith:external/riloe): Started nodea
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: native_color: Resource resource_OracleInstance cannot run anywhere
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: native_color: Resource resource_clusterip cannot run anywhere
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: native_color: Resource resource_oralsnr cannot run anywhere
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: native_color: Resource resource_test cannot run anywhere
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: native_color: Resource resource_shutdown_nodeb cannot run anywhere
Jun  4 07:46:15 nodeb pengine: [8233]: notice: NoRoleChange: Leave resource resource_shutdown_nodea (nodeb)
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: custom_action: Action resource_shutdown_nodeb_stop_0 on nodea is unrunnable (offline)
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: custom_action: Marking node nodea unclean
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: stage6: Scheduling Node nodea for STONITH
Jun  4 07:46:15 nodeb pengine: [8233]: info: native_stop_constraints: resource_shutdown_nodeb_stop_0 is implicit after nodea is fenced
Jun  4 07:46:15 nodeb crmd: [8151]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
Jun  4 07:46:15 nodeb tengine: [8232]: info: unpack_graph: Unpacked transition 3: 4 actions in 4 synapses
Jun  4 07:46:15 nodeb tengine: [8232]: info: te_pseudo_action: Pseudo action 10 fired and confirmed
Jun  4 07:46:15 nodeb tengine: [8232]: info: te_pseudo_action: Pseudo action 11 fired and confirmed
Jun  4 07:46:15 nodeb pengine: [8233]: WARN: process_pe_message: Transition 3: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/heartbeat/pengine/pe-warn-4.bz2
Jun  4 07:46:15 nodeb tengine: [8232]: info: te_fence_node: Executing poweroff fencing operation (12) on nodea (timeout=30000)
Jun  4 07:46:15 nodeb pengine: [8233]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Jun  4 07:46:15 nodeb stonithd: [8149]: info: client tengine [pid: 8232] want a STONITH operation POWEROFF to node nodea.
Jun  4 07:46:16 nodeb stonithd: [8149]: info: stonith_operate_locally::2368: sending fencing op (POWEROFF) for nodea to device external (rsc_id=resource_shutdown_nodea, pid=5559)
Jun  4 07:46:20 nodeb su: (to root) root on none
Jun  4 07:46:21 nodeb su: (to root) root on none
Jun  4 07:46:21 nodeb shutdown[5671]: shutting down for system halt
Jun  4 07:46:21 nodeb init: Switching to runlevel: 0
Jun 4 07:46:22 nodeb heartbeat: [4756]: info: killing /usr/lib/heartbeat/mgmtd -v process group 8152 with signal 15
Jun  4 07:46:22 nodeb mgmtd: [8152]: info: mgmtd is shutting down
Jun 4 07:46:22 nodeb mgmtd: [8152]: ERROR: Connection to the CIB terminated... exiting
Jun  4 07:46:22 nodeb haclient: on_event:evt:disconnected
Jun 4 07:46:22 nodeb heartbeat: [4756]: info: killing /usr/lib/heartbeat/crmd process group 8151 with signal 15
Jun  4 07:46:22 nodeb crmd: [8151]: info: crm_shutdown: Requesting shutdown
Jun  4 07:46:22 nodeb crmd: [8151]: WARN: do_log: [[FSA]] Input I_SHUTDOWN from crm_shutdown() received in state (S_TRANSITION_ENGINE)
Jun  4 07:46:22 nodeb crmd: [8151]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Jun  4 07:46:22 nodeb crmd: [8151]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jun  4 07:46:22 nodeb crmd: [8151]: info: do_shutdown_req: Sending shutdown request to DC: nodeb
Jun  4 07:46:22 nodeb crmd: [8151]: info: do_shutdown_req: Processing shutdown locally
Jun  4 07:46:22 nodeb crmd: [8151]: info: handle_shutdown_request: Creating shutdown request for nodeb
Jun  4 07:46:22 nodeb tengine: [8232]: info: extract_event: Aborting on shutdown attribute for 04517348-6706-4d1c-8d8d-043441cdf70b
Jun  4 07:46:22 nodeb tengine: [8232]: info: update_abort_priority: Abort priority upgraded to 1000000
Jun  4 07:46:22 nodeb tengine: [8232]: info: update_abort_priority: Abort action 0 superceeded by 2

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems