Yes, sorry, I took the logs from the same bash session by mistake... here are the
correct logs.
Yes, fencing of xstha1 has a 10s delay so that I give it precedence; fencing of
xstha2 has a 1s delay, so xstha2 will be fenced (stonith'd) earlier.
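For reference, these delays come from pcmk_delay_base on the two fencing devices.
A rough sketch in crm shell syntax (the IPMI parameters are placeholders, not my
real values; the location rules only reflect that each device runs on the
opposite node, as the logs below show):

primitive xstha1-stonith stonith:external/ipmi \
        params hostname=xstha1 ipaddr=<xstha1-ipmi> userid=<user> passwd=<secret> \
               interface=lanplus pcmk_delay_base=10s
primitive xstha2-stonith stonith:external/ipmi \
        params hostname=xstha2 ipaddr=<xstha2-ipmi> userid=<user> passwd=<secret> \
               interface=lanplus pcmk_delay_base=1s
location xstha1-stonith-not-on-self xstha1-stonith -inf: xstha1
location xstha2-stonith-not-on-self xstha2-stonith -inf: xstha2

These pcmk_delay_base values are what show up as "base=10s" and "base=1s" in the
stonith-ng lines of the logs below.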
During the short time before xstha2 got powered off, I saw it had time to bring
up the NFS IP (I saw a duplicated IP on xstha1).
And because the configuration has "order zpool_data_order inf: zpool_data (
xstha1_san0_IP )", that means xstha2 had imported the zpool for a short time
before being fenced, and this must never happen.
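To spell out the inference, the relevant pieces look roughly like this in crm
shell (the order line is from the actual configuration; the colocation line is
only an illustration of how the IP and the pool are kept on the same node):

order zpool_data_order inf: zpool_data ( xstha1_san0_IP )
colocation ip_with_pool inf: xstha1_san0_IP zpool_data

Since the IP can only start after zpool_data on the same node, a duplicated IP
on xstha2 means the pool import had already happened on xstha2.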
What suggests to me that the resources were started on xstha2 (with the duplicated
IP as a side effect) are these log portions from xstha2.
These tell me it could not stop the resources on xstha1 (correct, it couldn't
contact xstha1):
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
zpool_data_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
These tell me xstha2 took control of resources that were actually running on
xstha1:
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move
xstha1_san0_IP ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha2_san0_IP
(Started xstha2)
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move
zpool_data ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha1-stonith
(Started xstha2)
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Stop
xstha2-stonith ( xstha1 ) due to node availability
The last stonith request is also the last log entry because xstha2 was killed by
xstha1 before the 10s delay on its fencing of xstha1 expired (xstha1's own action
against xstha2 only waits 1s, so xstha1 wins the race), which is what I wanted.
Gabriele
Sonicle S.r.l. : http://www.sonicle.com
Music: http://www.gabrielebulfon.com
eXoplanets : https://gabrielebulfon.bandcamp.com/album/exoplanets
----------------------------------------------------------------------------------
From: Andrei Borzenkov <[email protected]>
To: [email protected]
Date: 17 December 2020 6.38.33 CET
Subject: Re: [ClusterLabs] Antw: [EXT] delaying start of a resource
On 16.12.2020 17:56, Gabriele Bulfon wrote:
> Thanks, here are the logs; there is info about how it tried to start the
> resources on the nodes.
Both logs are from the same node.
> Keep in mind that node1 was already running the resources, and I simulated a
> problem by bringing down the HA interface.
>
There is no attempt to start resources in these logs. The logs end with a
stonith request. As this node had a 10s delay, it was probably successfully
eliminated by the other node, but there are no logs from the other node.
Dec 16 15:08:07 [660] xstorage1 corosync notice [TOTEM ] A processor failed,
forming new configuration.
Dec 16 15:08:07 [660] xstorage1 corosync notice [TOTEM ] The network interface
is down.
Dec 16 15:08:08 [660] xstorage1 corosync notice [TOTEM ] A new membership
(127.0.0.1:408) was formed. Members left: 2
Dec 16 15:08:08 [660] xstorage1 corosync notice [TOTEM ] Failed to receive the
leave message. failed: 2
Dec 16 15:08:08 [710] attrd: info: pcmk_cpg_membership: Group
attrd event 2: xstha2 (node 2 pid 666) left via cluster exit
Dec 16 15:08:08 [707] cib: info: pcmk_cpg_membership: Group
cib event 2: xstha2 (node 2 pid 663) left via cluster exit
Dec 16 15:08:08 [710] attrd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 16 15:08:08 [687] pacemakerd: info: pcmk_cpg_membership: Group
pacemakerd event 2: xstha2 (node 2 pid 662) left via cluster exit
Dec 16 15:08:08 [708] stonith-ng: info: pcmk_cpg_membership: Group
stonith-ng event 2: xstha2 (node 2 pid 664) left via cluster exit
Dec 16 15:08:08 [687] pacemakerd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 16 15:08:08 [708] stonith-ng: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 16 15:08:08 [710] attrd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 16 15:08:08 [660] xstorage1 corosync notice [QUORUM] Members[1]: 1
Dec 16 15:08:08 [710] attrd: notice: attrd_peer_remove: Removing all
xstha2 attributes for peer loss
Dec 16 15:08:08 [708] stonith-ng: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 16 15:08:08 [712] crmd: info: pcmk_cpg_membership: Group
crmd event 2: xstha2 (node 2 pid 668) left via cluster exit
Dec 16 15:08:08 [660] xstorage1 corosync notice [MAIN ] Completed service
synchronization, ready to provide service.
Dec 16 15:08:08 [687] pacemakerd: info: pcmk_cpg_membership: Group
pacemakerd event 2: xstha1 (node 1 pid 687) is member
Dec 16 15:08:08 [710] attrd: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 16 15:08:08 [712] crmd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 16 15:08:08 [710] attrd: notice: reap_crm_member: Purged 1 peer
with id=2 and/or uname=xstha2 from the membership cache
Dec 16 15:08:08 [708] stonith-ng: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 16 15:08:08 [710] attrd: info: pcmk_cpg_membership: Group
attrd event 2: xstha1 (node 1 pid 710) is member
Dec 16 15:08:08 [707] cib: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 16 15:08:08 [687] pacemakerd: info: pcmk_quorum_notification: Quorum
retained | membership=408 members=1
Dec 16 15:08:08 [707] cib: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 16 15:08:08 [708] stonith-ng: notice: reap_crm_member: Purged 1 peer
with id=2 and/or uname=xstha2 from the membership cache
Dec 16 15:08:08 [712] crmd: info: peer_update_callback: Client
xstha2/peer now has status [offline] (DC=true, changed=4000000)
Dec 16 15:08:08 [687] pacemakerd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:08 [707] cib: info: crm_reap_dead_member:
Removing node with name xstha2 and id 2 from membership cache
Dec 16 15:08:08 [708] stonith-ng: info: pcmk_cpg_membership: Group
stonith-ng event 2: xstha1 (node 1 pid 708) is member
Dec 16 15:08:08 [707] cib: notice: reap_crm_member: Purged 1 peer
with id=2 and/or uname=xstha2 from the membership cache
Dec 16 15:08:08 [707] cib: info: pcmk_cpg_membership: Group
cib event 2: xstha1 (node 1 pid 707) is member
Dec 16 15:08:08 [687] pacemakerd: info: mcp_cpg_deliver: Ignoring
process list sent by peer for local node
Dec 16 15:08:08 [712] crmd: info: controld_delete_node_state:
Deleting transient attributes for node xstha2 (via CIB call 65) |
xpath=//node_state[@uname='xstha2']/transient_attributes
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Forwarding cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/65)
Dec 16 15:08:08 [712] crmd: warning: match_down_event: No reason to
expect node 2 to be down
Dec 16 15:08:08 [712] crmd: notice: peer_update_callback:
Stonith/shutdown of xstha2 not matched
Dec 16 15:08:08 [712] crmd: info: abort_transition_graph:
Transition aborted: Node failure | source=peer_update_callback:300 complete=true
Dec 16 15:08:08 [712] crmd: info: pcmk_cpg_membership: Group
crmd event 2: xstha1 (node 1 pid 712) is member
Dec 16 15:08:08 [712] crmd: notice: do_state_transition: State
transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL
origin=abort_transition_graph
Dec 16 15:08:08 [712] crmd: info: pcmk_quorum_notification: Quorum
retained | membership=408 members=1
Dec 16 15:08:08 [712] crmd: notice: crm_update_peer_state_iter: Node
xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:08 [712] crmd: info: peer_update_callback: Cluster
node xstha2 is now lost (was member)
Dec 16 15:08:08 [712] crmd: warning: match_down_event: No reason to
expect node 2 to be down
Dec 16 15:08:08 [712] crmd: notice: peer_update_callback:
Stonith/shutdown of xstha2 not matched
Dec 16 15:08:08 [712] crmd: info: abort_transition_graph:
Transition aborted: Node failure | source=peer_update_callback:300 complete=true
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Completed cib_delete operation for section
//node_state[@uname='xstha2']/transient_attributes: OK (rc=0,
origin=xstha1/crmd/65, version=0.46.19)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/66)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/68)
Dec 16 15:08:08 [707] cib: info: cib_perform_op: Diff: ---
0.46.19 2
Dec 16 15:08:08 [707] cib: info: cib_perform_op: Diff: +++
0.46.20 (null)
Dec 16 15:08:08 [707] cib: info: cib_perform_op: + /cib:
@num_updates=20
Dec 16 15:08:08 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crmd=offline,
@crm-debug-origin=peer_update_callback
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/66, version=0.46.20)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/68, version=0.46.20)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/71)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/72)
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Completed cib_modify operation for section nodes: OK (rc=0,
origin=xstha1/crmd/71, version=0.46.20)
Dec 16 15:08:08 [707] cib: info: cib_perform_op: Diff: ---
0.46.20 2
Dec 16 15:08:08 [707] cib: info: cib_perform_op: Diff: +++
0.46.21 (null)
Dec 16 15:08:08 [707] cib: info: cib_perform_op: + /cib:
@num_updates=21
Dec 16 15:08:08 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @crm-debug-origin=post_cache_update
Dec 16 15:08:08 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crm-debug-origin=post_cache_update,
@in_ccm=false
Dec 16 15:08:08 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/72, version=0.46.21)
Dec 16 15:08:09 [711] pengine: info: determine_online_status_fencing:
Node xstha1 is active
Dec 16 15:08:09 [711] pengine: info: determine_online_status: Node
xstha1 is online
Dec 16 15:08:09 [711] pengine: warning: pe_fence_node: Cluster node
xstha2 will be fenced: peer is no longer part of the cluster
Dec 16 15:08:09 [711] pengine: warning: determine_online_status: Node
xstha2 is unclean
Dec 16 15:08:09 [711] pengine: info: unpack_node_loop: Node 1 is
already processed
Dec 16 15:08:09 [711] pengine: info: unpack_node_loop: Node 2 is
already processed
Dec 16 15:08:09 [711] pengine: info: unpack_node_loop: Node 1 is
already processed
Dec 16 15:08:09 [711] pengine: info: unpack_node_loop: Node 2 is
already processed
Dec 16 15:08:09 [711] pengine: info: common_print: xstha1_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1
Dec 16 15:08:09 [711] pengine: info: common_print: xstha2_san0_IP
(ocf::heartbeat:IPaddr): Started xstha2 (UNCLEAN)
Dec 16 15:08:09 [711] pengine: info: common_print: zpool_data
(ocf::heartbeat:ZFS): Started xstha1
Dec 16 15:08:09 [711] pengine: info: common_print: xstha1-stonith
(stonith:external/ipmi): Started xstha2 (UNCLEAN)
Dec 16 15:08:09 [711] pengine: info: common_print: xstha2-stonith
(stonith:external/ipmi): Started xstha1
Dec 16 15:08:09 [711] pengine: info: pcmk__native_allocate:
Resource xstha1-stonith cannot run anywhere
Dec 16 15:08:09 [711] pengine: warning: custom_action: Action
xstha2_san0_IP_stop_0 on xstha2 is unrunnable (offline)
Dec 16 15:08:09 [711] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 16 15:08:09 [711] pengine: warning: custom_action: Action
xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 16 15:08:09 [711] pengine: warning: stage6: Scheduling Node xstha2
for STONITH
Dec 16 15:08:09 [711] pengine: info: native_stop_constraints:
xstha2_san0_IP_stop_0 is implicit after xstha2 is fenced
Dec 16 15:08:09 [711] pengine: info: native_stop_constraints:
xstha1-stonith_stop_0 is implicit after xstha2 is fenced
Dec 16 15:08:09 [711] pengine: notice: LogNodeActions: * Fence (off)
xstha2 'peer is no longer part of the cluster'
Dec 16 15:08:09 [711] pengine: info: LogActions: Leave xstha1_san0_IP
(Started xstha1)
Dec 16 15:08:09 [711] pengine: notice: LogAction: * Move
xstha2_san0_IP ( xstha2 -> xstha1 )
Dec 16 15:08:09 [711] pengine: info: LogActions: Leave zpool_data
(Started xstha1)
Dec 16 15:08:09 [711] pengine: notice: LogAction: * Stop
xstha1-stonith ( xstha2 ) due to node availability
Dec 16 15:08:09 [711] pengine: info: LogActions: Leave xstha2-stonith
(Started xstha1)
Dec 16 15:08:09 [711] pengine: warning: process_pe_message: Calculated
transition 4 (with warnings), saving inputs in
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-51.bz2
Dec 16 15:08:09 [712] crmd: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS
cause=C_IPC_MESSAGE origin=handle_response
Dec 16 15:08:09 [712] crmd: info: do_te_invoke: Processing
graph 4 (ref=pe_calc-dc-1608127689-39) derived from
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-51.bz2
Dec 16 15:08:09 [712] crmd: notice: te_fence_node: Requesting
fencing (off) of node xstha2 | action=1 timeout=60000
Dec 16 15:08:09 [708] stonith-ng: notice: handle_request: Client
crmd.712.e9eb875f wants to fence (off) 'xstha2' with device '(any)'
Dec 16 15:08:09 [708] stonith-ng: notice: initiate_remote_stonith_op:
Requesting peer fencing (off) targeting xstha2 |
id=e487e7cc-f333-edd6-94d2-f5ff1bfd9b3d state=0
Dec 16 15:08:09 [708] stonith-ng: info: dynamic_list_search_cb:
Refreshing port list for xstha2-stonith
Dec 16 15:08:09 [708] stonith-ng: info: process_remote_stonith_query:
Query result 1 of 1 from xstha1 for xstha2/off (1 devices)
e487e7cc-f333-edd6-94d2-f5ff1bfd9b3d
Dec 16 15:08:09 [708] stonith-ng: info: call_remote_stonith: Total
timeout set to 60 for peer's fencing targeting xstha2 for
crmd.712|id=e487e7cc-f333-edd6-94d2-f5ff1bfd9b3d
Dec 16 15:08:09 [708] stonith-ng: notice: call_remote_stonith:
Requesting that xstha1 perform 'off' action targeting xstha2 | for client
crmd.712 (72s, 0s)
Dec 16 15:08:09 [708] stonith-ng: notice: can_fence_host_with_device:
xstha2-stonith can fence (off) xstha2: dynamic-list
Dec 16 15:08:09 [708] stonith-ng: info: stonith_fence_get_devices_cb:
Found 1 matching devices for 'xstha2'
Dec 16 15:08:09 [708] stonith-ng: notice: schedule_stonith_command:
Delaying 'off' action targeting xstha2 on xstha2-stonith for 1s (timeout=60s,
requested_delay=0s, base=1s, max=1s)
Dec 16 15:08:12 [708] stonith-ng: notice: log_operation: Operation 'off'
[1273] (call 4 from crmd.712) for host 'xstha2' with device 'xstha2-stonith'
returned: 0 (OK)
Dec 16 15:08:12 [708] stonith-ng: notice: remote_op_done: Operation 'off'
targeting xstha2 on xstha1 for [email protected]: OK
Dec 16 15:08:12 [712] crmd: notice: tengine_stonith_callback: Stonith
operation 4/1:4:0:cc8faf12-ac24-cc9c-c212-effe6840ca76: OK (0)
Dec 16 15:08:12 [712] crmd: info: tengine_stonith_callback: Stonith
operation 4 for xstha2 passed
Dec 16 15:08:12 [712] crmd: info: crm_update_peer_expected:
crmd_peer_down: Node xstha2[2] - expected state is now down (was member)
Dec 16 15:08:12 [712] crmd: info: controld_delete_node_state:
Deleting all state for node xstha2 (via CIB call 76) |
xpath=//node_state[@uname='xstha2']/*
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/75)
Dec 16 15:08:12 [712] crmd: notice: tengine_stonith_notify: Peer
xstha2 was terminated (off) by xstha1 on behalf of crmd.712: OK |
initiator=xstha1 ref=e487e7cc-f333-edd6-94d2-f5ff1bfd9b3d
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/* to
all (origin=local/crmd/76)
Dec 16 15:08:12 [712] crmd: info: controld_delete_node_state:
Deleting all state for node xstha2 (via CIB call 78) |
xpath=//node_state[@uname='xstha2']/*
Dec 16 15:08:12 [712] crmd: notice: te_rsc_command: Initiating
start operation xstha2_san0_IP_start_0 locally on xstha1 | action 6
Dec 16 15:08:12 [712] crmd: info: do_lrm_rsc_op: Performing
key=6:4:0:cc8faf12-ac24-cc9c-c212-effe6840ca76 op=xstha2_san0_IP_start_0
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: ---
0.46.21 2
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: +++
0.46.22 (null)
Dec 16 15:08:12 [707] cib: info: cib_perform_op: + /cib:
@num_updates=22
Dec 16 15:08:12 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crm-debug-origin=send_stonith_update,
@join=down, @expected=down
Dec 16 15:08:12 [709] lrmd: info: log_execute: executing -
rsc:xstha2_san0_IP action:start call_id:26
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/75, version=0.46.22)
Dec 16 15:08:12 [712] crmd: info: cib_fencing_updated: Fencing
update 75 for xstha2: complete
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: ---
0.46.22 2
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: +++
0.46.23 (null)
Dec 16 15:08:12 [707] cib: info: cib_perform_op: --
/cib/status/node_state[@id='2']/lrm[@id='2']
Dec 16 15:08:12 [707] cib: info: cib_perform_op: + /cib:
@num_updates=23
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha2']/*: OK
(rc=0, origin=xstha1/crmd/76, version=0.46.23)
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/77)
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/* to
all (origin=local/crmd/78)
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/77, version=0.46.23)
Dec 16 15:08:12 [712] crmd: info: cib_fencing_updated: Fencing
update 77 for xstha2: complete
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha2']/*: OK
(rc=0, origin=xstha1/crmd/78, version=0.46.23)
Dec 16 15:08:12 [709] lrmd: notice: operation_finished:
xstha2_san0_IP_start_0:1286:stderr [ Converted dotted-quad netmask to CIDR as:
24 ]
Dec 16 15:08:12 [709] lrmd: info: log_finished: finished -
rsc:xstha2_san0_IP action:start call_id:26 pid:1286 exit-code:0 exec-time:424ms
queue-time:0ms
Dec 16 15:08:12 [712] crmd: notice: process_lrm_event: Result of start
operation for xstha2_san0_IP on xstha1: 0 (ok) | call=26
key=xstha2_san0_IP_start_0 confirmed=true cib-update=79
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/79)
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: ---
0.46.23 2
Dec 16 15:08:12 [707] cib: info: cib_perform_op: Diff: +++
0.46.24 (null)
Dec 16 15:08:12 [707] cib: info: cib_perform_op: + /cib:
@num_updates=24
Dec 16 15:08:12 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @crm-debug-origin=do_update_resource
Dec 16 15:08:12 [707] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='xstha2_san0_IP']/lrm_rsc_op[@id='xstha2_san0_IP_last_0']:
@operation_key=xstha2_san0_IP_start_0, @operation=start,
@crm-debug-origin=do_update_resource,
@transition-key=6:4:0:cc8faf12-ac24-cc9c-c212-effe6840ca76,
@transition-magic=0:0;6:4:0:cc8faf12-ac24-cc9c-c212-effe6840ca76, @call-id=26,
@rc-code=0, @last-run=1608127692, @last-rc-change=1608127692, @exec-time=424
Dec 16 15:08:12 [707] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha1/crmd/79, version=0.46.24)
Dec 16 15:08:12 [712] crmd: info: match_graph_event: Action
xstha2_san0_IP_start_0 (6) confirmed on xstha1 (rc=0)
Dec 16 15:08:12 [712] crmd: notice: run_graph: Transition 4
(Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-51.bz2): Complete
Dec 16 15:08:12 [712] crmd: info: do_log: Input I_TE_SUCCESS
received in state S_TRANSITION_ENGINE from notify_crmd
Dec 16 15:08:12 [712] crmd: notice: do_state_transition: State
transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd
Dec 16 15:08:17 [707] cib: info: cib_process_ping: Reporting our
current digest to xstha1: 12b5d0c73b7cc062864dd80352e00b6c for 0.46.24 (82626a0
0)
Dec 16 15:08:54 [642] xstorage2 corosync notice [TOTEM ] A processor failed,
forming new configuration.
Dec 16 15:08:56 [642] xstorage2 corosync notice [TOTEM ] A new membership
(10.100.100.2:408) was formed. Members left: 1
Dec 16 15:08:56 [642] xstorage2 corosync notice [TOTEM ] Failed to receive the
leave message. failed: 1
Dec 16 15:08:56 [666] attrd: info: pcmk_cpg_membership: Group
attrd event 2: xstha1 (node 1 pid 710) left via cluster exit
Dec 16 15:08:56 [663] cib: info: pcmk_cpg_membership: Group
cib event 2: xstha1 (node 1 pid 707) left via cluster exit
Dec 16 15:08:56 [662] pacemakerd: info: pcmk_cpg_membership: Group
pacemakerd event 2: xstha1 (node 1 pid 687) left via cluster exit
Dec 16 15:08:56 [642] xstorage2 corosync notice [QUORUM] Members[1]: 2
Dec 16 15:08:56 [662] pacemakerd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [666] attrd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [662] pacemakerd: info: pcmk_cpg_membership: Group
pacemakerd event 2: xstha2 (node 2 pid 662) is member
Dec 16 15:08:56 [642] xstorage2 corosync notice [MAIN ] Completed service
synchronization, ready to provide service.
Dec 16 15:08:56 [668] crmd: info: pcmk_cpg_membership: Group
crmd event 2: xstha1 (node 1 pid 712) left via cluster exit
Dec 16 15:08:56 [664] stonith-ng: info: pcmk_cpg_membership: Group
stonith-ng event 2: xstha1 (node 1 pid 708) left via cluster exit
Dec 16 15:08:56 [663] cib: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [668] crmd: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [666] attrd: notice: attrd_remove_voter: Lost attribute
writer xstha1
Dec 16 15:08:56 [664] stonith-ng: info: crm_update_peer_proc:
pcmk_cpg_membership: Node xstha1[1] - corosync-cpg is now offline
Dec 16 15:08:56 [662] pacemakerd: info: pcmk_quorum_notification: Quorum
retained | membership=408 members=1
Dec 16 15:08:56 [663] cib: notice: crm_update_peer_state_iter: Node
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [664] stonith-ng: notice: crm_update_peer_state_iter: Node
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [662] pacemakerd: notice: crm_update_peer_state_iter: Node
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:56 [668] crmd: info: peer_update_callback: Client
xstha1/peer now has status [offline] (DC=xstha1, changed=4000000)
Dec 16 15:08:56 [663] cib: info: crm_reap_dead_member:
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [666] attrd: info: attrd_start_election_if_needed:
Starting an election to determine the writer
Dec 16 15:08:56 [663] cib: notice: reap_crm_member: Purged 1 peer
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668] crmd: notice: peer_update_callback: Our
peer on the DC (xstha1) is dead
Dec 16 15:08:56 [663] cib: info: pcmk_cpg_membership: Group
cib event 2: xstha2 (node 2 pid 663) is member
Dec 16 15:08:56 [664] stonith-ng: info: crm_reap_dead_member:
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [662] pacemakerd: info: mcp_cpg_deliver: Ignoring
process list sent by peer for local node
Dec 16 15:08:56 [666] attrd: notice: crm_update_peer_state_iter: Node
xstha1 state is now lost | nodeid=1 previous=member source=crm_update_peer_proc
Dec 16 15:08:56 [664] stonith-ng: notice: reap_crm_member: Purged 1 peer
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668] crmd: info: controld_delete_node_state:
Deleting transient attributes for node xstha1 (via CIB call 18) |
xpath=//node_state[@uname='xstha1']/transient_attributes
Dec 16 15:08:56 [664] stonith-ng: info: pcmk_cpg_membership: Group
stonith-ng event 2: xstha2 (node 2 pid 664) is member
Dec 16 15:08:56 [666] attrd: notice: attrd_peer_remove: Removing all
xstha1 attributes for peer loss
Dec 16 15:08:56 [668] crmd: info: pcmk_cpg_membership: Group
crmd event 2: xstha2 (node 2 pid 668) is member
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_delete operation for section
//node_state[@uname='xstha1']/transient_attributes to all (origin=local/crmd/18)
Dec 16 15:08:56 [666] attrd: info: crm_reap_dead_member:
Removing node with name xstha1 and id 1 from membership cache
Dec 16 15:08:56 [668] crmd: notice: do_state_transition: State
transition S_NOT_DC -> S_ELECTION | input=I_ELECTION
cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback
Dec 16 15:08:56 [666] attrd: notice: reap_crm_member: Purged 1 peer
with id=1 and/or uname=xstha1 from the membership cache
Dec 16 15:08:56 [668] crmd: info: update_dc: Unset DC. Was xstha1
Dec 16 15:08:56 [666] attrd: info: pcmk_cpg_membership: Group
attrd event 2: xstha2 (node 2 pid 666) is member
Dec 16 15:08:56 [666] attrd: info: election_check: election-attrd
won by local node
Dec 16 15:08:56 [668] crmd: info: pcmk_quorum_notification: Quorum
retained | membership=408 members=1
Dec 16 15:08:56 [666] attrd: notice: attrd_declare_winner:
Recorded local node as attribute writer (was unset)
Dec 16 15:08:56 [668] crmd: notice: crm_update_peer_state_iter: Node
xstha1 state is now lost | nodeid=1 previous=member source=crm_reap_unseen_nodes
Dec 16 15:08:56 [668] crmd: info: peer_update_callback: Cluster
node xstha1 is now lost (was member)
Dec 16 15:08:56 [666] attrd: info: write_attribute: Processed 1
private change for #attrd-protocol, id=n/a, set=n/a
Dec 16 15:08:56 [668] crmd: info: election_check: election-DC won
by local node
Dec 16 15:08:56 [668] crmd: info: do_log: Input I_ELECTION_DC
received in state S_ELECTION from election_win_cb
Dec 16 15:08:56 [668] crmd: notice: do_state_transition: State
transition S_ELECTION -> S_INTEGRATION | input=I_ELECTION_DC
cause=C_FSA_INTERNAL origin=election_win_cb
Dec 16 15:08:56 [668] crmd: info: do_te_control: Registering TE
UUID: f340fcfc-17fa-ebf0-c5bf-8299546d41b6
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_delete operation for section
//node_state[@uname='xstha1']/transient_attributes: OK (rc=0,
origin=xstha2/crmd/18, version=0.46.19)
Dec 16 15:08:56 [668] crmd: info: set_graph_functions: Setting
custom graph functions
Dec 16 15:08:56 [668] crmd: info: do_dc_takeover: Taking over DC
status for this partition
Dec 16 15:08:56 [663] cib: info: cib_process_readwrite: We are
now in R/W mode
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_master operation for section 'all': OK (rc=0,
origin=local/crmd/19, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section cib to all (origin=local/crmd/20)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section cib: OK (rc=0,
origin=xstha2/crmd/20, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/22)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha2/crmd/22, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/24)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha2/crmd/24, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section crm_config to all
(origin=local/crmd/26)
Dec 16 15:08:56 [668] crmd: info: corosync_cluster_name: Cannot
get totem.cluster_name: CS_ERR_NOT_EXIST (12)
Dec 16 15:08:56 [668] crmd: info: join_make_offer: Making join-1
offers based on membership event 408
Dec 16 15:08:56 [668] crmd: info: join_make_offer: Sending join-1
offer to xstha2
Dec 16 15:08:56 [668] crmd: info: join_make_offer: Not making
join-1 offer to inactive node xstha1
Dec 16 15:08:56 [668] crmd: info: do_dc_join_offer_all: Waiting
on join-1 requests from 1 outstanding node
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section crm_config: OK (rc=0,
origin=xstha2/crmd/26, version=0.46.19)
Dec 16 15:08:56 [668] crmd: info: update_dc: Set DC to xstha2
(3.0.14)
Dec 16 15:08:56 [668] crmd: info: crm_update_peer_expected:
update_dc: Node xstha2[2] - expected state is now member (was (null))
Dec 16 15:08:56 [668] crmd: info: do_state_transition: State
transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED
cause=C_FSA_INTERNAL origin=check_join_state
Dec 16 15:08:56 [663] cib: info: cib_process_replace: Digest
matched on replace from xstha2: 4835352cb7b4920917d8beee219bc962
Dec 16 15:08:56 [663] cib: info: cib_process_replace:
Replaced 0.46.19 with 0.46.19 from xstha2
Dec 16 15:08:56 [668] crmd: info: controld_delete_node_state:
Deleting resource history for node xstha2 (via CIB call 31) |
xpath=//node_state[@uname='xstha2']/lrm
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_replace operation for section 'all': OK (rc=0,
origin=xstha2/crmd/29, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/30)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm
to all (origin=local/crmd/31)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/32)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section nodes: OK (rc=0,
origin=xstha2/crmd/30, version=0.46.19)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: ---
0.46.19 2
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: +++
0.46.20 (null)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: --
/cib/status/node_state[@id='2']/lrm[@id='2']
Dec 16 15:08:56 [663] cib: info: cib_perform_op: + /cib:
@num_updates=20
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm:
OK (rc=0, origin=xstha2/crmd/31, version=0.46.20)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: ---
0.46.20 2
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: +++
0.46.21 (null)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: + /cib:
@num_updates=21
Dec 16 15:08:56 [663] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crm-debug-origin=do_lrm_query_internal
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
/cib/status/node_state[@id='2']: <lrm id="2"/>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resources>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resource id="zpool_data" type="ZFS" class="ocf"
provider="heartbeat">
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="zpool_data_last_0"
operation_key="zpool_data_monitor_0" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="3:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:7;3:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="13" rc-code="7" op-status="0"
interval="0" last-run="1608127496" last-rc-change="1608127
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resource id="xstha1-stonith" type="external/ipmi"
class="stonith">
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha1-stonith_last_0"
operation_key="xstha1-stonith_start_0" operation="start"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="9:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:0;9:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="22" rc-code="0" op-status="0"
interval="0" last-run="1608127496" last-rc-change="160
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha1-stonith_monitor_25000"
operation_key="xstha1-stonith_monitor_25000" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="10:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:0;10:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="24" rc-code="0" op-status="0"
interval="25000" last-rc-change="1608
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resource id="xstha2-stonith" type="external/ipmi"
class="stonith">
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha2-stonith_last_0"
operation_key="xstha2-stonith_monitor_0" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="5:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:7;5:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="21" rc-code="7" op-status="0"
interval="0" last-run="1608127496" last-rc-change=
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resource id="xstha1_san0_IP" type="IPaddr"
class="ocf" provider="heartbeat">
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha1_san0_IP_last_0"
operation_key="xstha1_san0_IP_monitor_0" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="1:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:7;1:3:7:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="5" rc-code="7" op-status="0"
interval="0" last-run="1608127496" last-rc-change="
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_resource id="xstha2_san0_IP" type="IPaddr"
class="ocf" provider="heartbeat">
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
<lrm_rsc_op id="xstha2_san0_IP_last_0"
operation_key="xstha2_san0_IP_start_0" operation="start"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.14"
transition-key="7:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
transition-magic="0:0;7:3:0:cc8faf12-ac24-cc9c-c212-effe6840ca76"
exit-reason="" on_node="xstha2" call-id="23" rc-code="0" op-status="0"
interval="0" last-run="1608127497" last-rc-change="160
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resource>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm_resources>
Dec 16 15:08:56 [663] cib: info: cib_perform_op: ++
</lrm>
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha2/crmd/32, version=0.46.21)
Dec 16 15:08:56 [668] crmd: info: do_state_transition: State
transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED
cause=C_FSA_INTERNAL origin=check_join_state
Dec 16 15:08:56 [668] crmd: info: abort_transition_graph:
Transition aborted: Peer Cancelled | source=do_te_invoke:143 complete=true
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section nodes to all (origin=local/crmd/35)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section status to all (origin=local/crmd/36)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Forwarding cib_modify operation for section cib to all (origin=local/crmd/37)
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section nodes: OK (rc=0,
origin=xstha2/crmd/35, version=0.46.21)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: ---
0.46.21 2
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: +++
0.46.22 (null)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: + /cib:
@num_updates=22
Dec 16 15:08:56 [663] cib: info: cib_perform_op: +
/cib/status/node_state[@id='1']: @in_ccm=false, @crmd=offline,
@crm-debug-origin=do_state_transition, @join=down
Dec 16 15:08:56 [663] cib: info: cib_perform_op: +
/cib/status/node_state[@id='2']: @crm-debug-origin=do_state_transition
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section status: OK (rc=0,
origin=xstha2/crmd/36, version=0.46.22)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: ---
0.46.22 2
Dec 16 15:08:56 [663] cib: info: cib_perform_op: Diff: +++
0.46.23 (null)
Dec 16 15:08:56 [663] cib: info: cib_perform_op: + /cib:
@num_updates=23, @dc-uuid=2
Dec 16 15:08:56 [663] cib: info: cib_file_backup: Archived
previous version as /sonicle/var/cluster/lib/pacemaker/cib/cib-8.raw
Dec 16 15:08:56 [663] cib: info: cib_process_request:
Completed cib_modify operation for section cib: OK (rc=0,
origin=xstha2/crmd/37, version=0.46.23)
Dec 16 15:08:56 [663] cib: info: cib_file_write_with_digest: Wrote
version 0.46.0 of the CIB to disk (digest: 1ea3e3ee6c388f74623494869acf32d0)
Dec 16 15:08:56 [663] cib: info: cib_file_write_with_digest: Reading
cluster configuration file /sonicle/var/cluster/lib/pacemaker/cib/cib.4LaWbc
(digest: /sonicle/var/cluster/lib/pacemaker/cib/cib.5LaWbc)
Dec 16 15:08:56 [667] pengine: warning: unpack_config: Support for
stonith-action of 'poweroff' is deprecated and will be removed in a future
release (use 'off' instead)
Dec 16 15:08:56 [667] pengine: warning: pe_fence_node: Cluster node
xstha1 will be fenced: peer is no longer part of the cluster
Dec 16 15:08:56 [667] pengine: warning: determine_online_status: Node
xstha1 is unclean
Dec 16 15:08:56 [667] pengine: info: determine_online_status_fencing:
Node xstha2 is active
Dec 16 15:08:56 [667] pengine: info: determine_online_status: Node
xstha2 is online
Dec 16 15:08:56 [667] pengine: info: unpack_node_loop: Node 1 is
already processed
Dec 16 15:08:56 [667] pengine: info: unpack_node_loop: Node 2 is
already processed
Dec 16 15:08:56 [667] pengine: info: unpack_node_loop: Node 1 is
already processed
Dec 16 15:08:56 [667] pengine: info: unpack_node_loop: Node 2 is
already processed
Dec 16 15:08:56 [667] pengine: info: common_print: xstha1_san0_IP
(ocf::heartbeat:IPaddr): Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667] pengine: info: common_print: xstha2_san0_IP
(ocf::heartbeat:IPaddr): Started xstha2
Dec 16 15:08:56 [667] pengine: info: common_print: zpool_data
(ocf::heartbeat:ZFS): Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667] pengine: info: common_print: xstha1-stonith
(stonith:external/ipmi): Started xstha2
Dec 16 15:08:56 [667] pengine: info: common_print: xstha2-stonith
(stonith:external/ipmi): Started xstha1 (UNCLEAN)
Dec 16 15:08:56 [667] pengine: info: pcmk__native_allocate:
Resource xstha2-stonith cannot run anywhere
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
zpool_data_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: custom_action: Action
xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)
Dec 16 15:08:56 [667] pengine: warning: stage6: Scheduling Node xstha1
for STONITH
Dec 16 15:08:56 [667] pengine: info: native_stop_constraints:
xstha1_san0_IP_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667] pengine: info: native_stop_constraints:
zpool_data_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667] pengine: info: native_stop_constraints:
xstha2-stonith_stop_0 is implicit after xstha1 is fenced
Dec 16 15:08:56 [667] pengine: notice: LogNodeActions: * Fence (off)
xstha1 'peer is no longer part of the cluster'
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move
xstha1_san0_IP ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha2_san0_IP
(Started xstha2)
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move
zpool_data ( xstha1 -> xstha2 )
Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha1-stonith
(Started xstha2)
Dec 16 15:08:56 [667] pengine: notice: LogAction: * Stop
xstha2-stonith ( xstha1 ) due to node availability
Dec 16 15:08:56 [667] pengine: warning: process_pe_message: Calculated
transition 0 (with warnings), saving inputs in
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-6.bz2
Dec 16 15:08:56 [668] crmd: info: do_state_transition: State
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS
cause=C_IPC_MESSAGE origin=handle_response
Dec 16 15:08:56 [668] crmd: info: do_te_invoke: Processing
graph 0 (ref=pe_calc-dc-1608127736-14) derived from
/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-6.bz2
Dec 16 15:08:56 [668] crmd: notice: te_fence_node: Requesting
fencing (off) of node xstha1 | action=1 timeout=60000
Dec 16 15:08:56 [664] stonith-ng: notice: handle_request: Client
crmd.668.c46cefe4 wants to fence (off) 'xstha1' with device '(any)'
Dec 16 15:08:56 [664] stonith-ng: notice: initiate_remote_stonith_op:
Requesting peer fencing (off) targeting xstha1 |
id=3cdbf44e-e860-c100-95e0-db72cc63ae16 state=0
Dec 16 15:08:56 [664] stonith-ng: info: dynamic_list_search_cb:
Refreshing port list for xstha1-stonith
Dec 16 15:08:56 [664] stonith-ng: info: process_remote_stonith_query:
Query result 1 of 1 from xstha2 for xstha1/off (1 devices)
3cdbf44e-e860-c100-95e0-db72cc63ae16
Dec 16 15:08:56 [664] stonith-ng: info: call_remote_stonith: Total
timeout set to 60 for peer's fencing targeting xstha1 for
crmd.668|id=3cdbf44e-e860-c100-95e0-db72cc63ae16
Dec 16 15:08:56 [664] stonith-ng: notice: call_remote_stonith:
Requesting that xstha2 perform 'off' action targeting xstha1 | for client
crmd.668 (72s, 0s)
Dec 16 15:08:56 [664] stonith-ng: notice: can_fence_host_with_device:
xstha1-stonith can fence (off) xstha1: dynamic-list
Dec 16 15:08:56 [664] stonith-ng: info: stonith_fence_get_devices_cb:
Found 1 matching devices for 'xstha1'
Dec 16 15:08:56 [664] stonith-ng: notice: schedule_stonith_command:
Delaying 'off' action targeting xstha1 on xstha1-stonith for 10s (timeout=60s,
requested_delay=0s, base=10s, max=10s)
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/