Hello all,
I am running a two-node cluster with pacemaker 1.1.6 and corosync 1.4.1.
I have both nodes configured to use an iSCSI LUN for sbd. I can force
a node reset by running:
sbd -d $SBD_DEVICE message <node> reset
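(For reference, since the raw sbd messaging path works, the other thing I checked was that both nodes have slots on the device and that the slot names match each node's uname -n exactly, since sbd matches on the node name. A sketch of the checks, using the same $SBD_DEVICE:)

```shell
# List the slots allocated on the shared sbd device; both cluster node
# names should appear here.
sbd -d $SBD_DEVICE list

# The slot names must match this output exactly on each node, otherwise
# the external/sbd plugin cannot map cluster node -> sbd slot.
uname -n
```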
However, both crm and stonith_admin fail to fence the other node.
Here is the log output from 'crm node fence sdgxen-3' (taken on
sdgxen-2):
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: ERROR:
remote_op_query_timeout: Query 42660b6d-a63a-4a1f-885e-37d9d15f1742
for sdgxen-3 timed out
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: ERROR: remote_op_timeout:
Action reboot (42660b6d-a63a-4a1f-885e-37d9d15f1742) for sdgxen-3
timed out
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: info: remote_op_done:
Notifing clients of 42660b6d-a63a-4a1f-885e-37d9d15f1742 (reboot of
sdgxen-3 from 469ca96d-d0c9-4895-a8f2-7ba39b0efd16 by (null)): 0,
rc=-8
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: info:
stonith_notify_client: Sending st_fence-notification to client
1149/8325e730-827a-4719-b6cf-f455bcc4ea00
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: tengine_stonith_callback:
StonithOp <remote-op state="0" st_target="sdgxen-3" st_op="reboot" />
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: tengine_stonith_callback:
Stonith operation 8953/8:8954:0:54f92431-259c-4ab7-aec9-c5f0a21d695e:
Operation timed out (-8)
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: ERROR:
tengine_stonith_callback: Stonith of sdgxen-3 failed (-8)... aborting
transition.
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: abort_transition_graph:
tengine_stonith_callback:427 - Triggered transition abort (complete=0)
: Stonith failed
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: update_abort_priority:
Abort priority upgraded from 0 to 1000000
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: update_abort_priority:
Abort action done superceeded by restart
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: ERROR: tengine_stonith_notify:
Peer sdgxen-3 could not be terminated (reboot) by <anyone> for
sdgxen-2 (ref=42660b6d-a63a-4a1f-885e-37d9d15f1742): Operation timed
out
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: run_graph:
====================================================
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: notice: run_graph: Transition
8954 (Complete=2, Pending=0, Fired=0, Skipped=2, Incomplete=0,
Source=/var/lib/pengine/pe-warn-1263.bz2): Stopped
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: te_graph_trigger:
Transition 8954 is now complete
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_state_transition:
State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [
input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_state_transition: All
2 cluster nodes are eligible to run resources.
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_pe_invoke: Query 9044:
Requesting the current CIB: S_POLICY_ENGINE
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_pe_invoke_callback:
Invoking the PE: query=9044, ref=pe_calc-dc-1322043115-8985, seq=172,
quorate=1
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: WARN: pe_fence_node: Node
sdgxen-3 will be fenced because termination was requested
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: WARN:
determine_online_status: Node sdgxen-3 is unclean
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: WARN: stage6: Scheduling
Node sdgxen-3 for STONITH
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: notice: LogActions: Leave
stonith-sbd (Started sdgxen-2)
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: WARN: process_pe_message:
Transition 8955: WARNINGs found during PE processing. PEngine Input
stored in: /var/lib/pengine/pe-warn-1263.bz2
Nov 23 10:11:55 sdgxen-2 pengine: [1148]: notice: process_pe_message:
Configuration WARNINGs found during PE processing. Please run
"crm_verify -L" to identify issues.
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_state_transition:
State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: unpack_graph: Unpacked
transition 8955: 4 actions in 4 synapses
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: do_te_invoke: Processing
graph 8955 (ref=pe_calc-dc-1322043115-8985) derived from
/var/lib/pengine/pe-warn-1263.bz2
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: te_pseudo_action: Pseudo
action 5 fired and confirmed
Nov 23 10:11:55 sdgxen-2 crmd: [1149]: info: te_fence_node: Executing
reboot fencing operation (8) on sdgxen-3 (timeout=60000)
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: info:
initiate_remote_stonith_op: Initiating remote operation reboot for
sdgxen-3: 25495ad5-8307-475c-885f-4999bca287c0
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: info:
can_fence_host_with_device: stonith-sbd can not fence sdgxen-3:
dynamic-list
Nov 23 10:11:55 sdgxen-2 stonith-ng: [1144]: info: stonith_command:
Processed st_query from sdgxen-2: rc=0
Nov 23 10:11:55 sdgxen-2 sbd: [1136]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Logs from sdgxen-3 (the node that should have been fenced):
Nov 23 10:11:53 sdgxen-3 stonith-ng: [1057]: info: stonith_command:
Processed st_query from sdgxen-2: rc=0
Nov 23 10:11:53 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:54 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:55 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:56 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:57 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:58 sdgxen-3 sbd: [1046]: info: Latency: 1 on disk
/dev/mapper/qa-test-sbd
Nov 23 10:11:59 sdgxen-3 stonith-ng: [1057]: info: stonith_command:
Processed st_query from sdgxen-2: rc=0
---
After much Googling and reading past linux-ha discussions, this looks
like it could be an issue with my configuration. However, I can't find
anything wrong with my configuration after running:
# stonith -t external/sbd sbd_device=/dev/mapper/qa-test-sbd -l
sdgxen-2
sdgxen-3
# stonith -t external/sbd sbd_device=/dev/mapper/qa-test-sbd -S
info: external/sbd device OK.
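(One more data point: the "can_fence_host_with_device: stonith-sbd can not fence sdgxen-3: dynamic-list" line above suggests stonith-ng's dynamic host-list query doesn't return sdgxen-3. A hedged sketch of how I tried to confirm that from the stonith-ng side, assuming the standard stonith_admin options:)

```shell
# Ask stonith-ng which devices it believes can fence sdgxen-3; if
# stonith-sbd does not show up here, the dynamic host list is the problem.
stonith_admin --list sdgxen-3

# Re-run the fence attempt with verbose output to watch the query exchange.
stonith_admin --reboot sdgxen-3 --verbose
```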
Relevant portions of crm config:
primitive stonith-sbd stonith:external/sbd \
        meta is-managed="true" target-role="Started"
property stonith-enabled="true" \
        stonith-timeout="60s" \
        stonith-action="reboot"
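(I also double-checked the on-disk sbd timeouts against the cluster's stonith-timeout, since if msgwait on the device is close to or above 60s the fence operation can time out before sbd ever reports success. A sketch of the check, using my device path:)

```shell
# Dump the sbd header from the shared device; the msgwait timeout shown
# here should be comfortably below stonith-timeout (60s in my config).
sbd -d /dev/mapper/qa-test-sbd dump
```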
Lastly, running crm_verify -L -V yields:
crm_verify[6130]: 2011/11/23_10:36:13 WARN: pe_fence_node: Node
sdgxen-3 will be fenced because termination was requested
crm_verify[6130]: 2011/11/23_10:36:13 WARN: determine_online_status:
Node sdgxen-3 is unclean
crm_verify[6130]: 2011/11/23_10:36:13 WARN: stage6: Scheduling Node
sdgxen-3 for STONITH
Warnings found during check: config may not be valid
I think crm_verify is complaining because it expects sdgxen-3 to have
been reset, and that isn't happening.
Any help would be greatly appreciated.
Thanks,
-Hal
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems