> Having it started on one node is normal. Fence devices default to
> requires=quorum, meaning they can start on a new node even before the
> original node is fenced. It looks like that's what happened here, but
> something went wrong with the fencing, so the cluster assumes it's
> still active on the old node as well.

Okay -- from the previous e-mails I was thinking it was normal for it not to show
as running on all nodes. Could it be that, because I had already powered the VM
off, the fence command trying to power it off received a failure response?

> I'm not sure what went wrong with the fencing. Once fencing succeeds,
> the node should show up as offline without also being unclean. Anything
> interesting in the logs around the time of the fencing?

To test this out, I've tried a simpler setup with just the fencing resource and 
a VIP.
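
For reference, the test configuration is roughly the following (crm shell
syntax; the vCenter address, credential store path, and VIP address below are
placeholders rather than my real values):

------
primitive vfencing stonith:external/vcenter \
        params VI_SERVER="vcenter.example.com" \
               VI_CREDSTORE="/etc/vicredentials.xml" \
               HOSTLIST="d-gp2-dbpg0-1=d-gp2-dbpg0-1;d-gp2-dbpg0-2=d-gp2-dbpg0-2;d-gp2-dbpg0-3=d-gp2-dbpg0-3" \
               RESETPOWERON="0" \
        op monitor interval="60s"
primitive postgresql-master-vip ocf:heartbeat:IPaddr2 \
        params ip="10.124.164.200" cidr_netmask="24" \
        op monitor interval="10s"
------

With that configuration loaded, the cluster starts up as follows: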

------
Online: [ d-gp2-dbpg0-1 d-gp2-dbpg0-2 d-gp2-dbpg0-3 ]

Full list of resources:

 vfencing       (stonith:external/vcenter):     Started d-gp2-dbpg0-2
 postgresql-master-vip  (ocf::heartbeat:IPaddr2):       Started d-gp2-dbpg0-1

PCSD Status:
  d-gp2-dbpg0-1: Online
  d-gp2-dbpg0-2: Online
  d-gp2-dbpg0-3: Online
------

Looks good with the VIP up on node 1.  After this, I log into vSphere and power 
off the primary node, which results in this state:

------
Node d-gp2-dbpg0-1: UNCLEAN (offline)
Online: [ d-gp2-dbpg0-2 d-gp2-dbpg0-3 ]

Full list of resources:

 vfencing       (stonith:external/vcenter):     Started d-gp2-dbpg0-2
 postgresql-master-vip  (ocf::heartbeat:IPaddr2):       Started d-gp2-dbpg0-1 (UNCLEAN)
------

Here's the log from node 2 during that time:

------
[4246] d-gp2-dbpg0-2 corosyncnotice  [TOTEM ] A processor failed, forming new 
configuration.
[4246] d-gp2-dbpg0-2 corosyncnotice  [TOTEM ] A new membership 
(10.124.164.63:96) was formed. Members left: 1
[4246] d-gp2-dbpg0-2 corosyncnotice  [TOTEM ] Failed to receive the leave 
message. failed: 1
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: pcmk_cpg_membership: 
Node 1 left group cib (peer=d-gp2-dbpg0-1, counter=6.0)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: 
crm_update_peer_proc:        pcmk_cpg_membership: Node d-gp2-dbpg0-1[1] - 
corosync-cpg is now offline
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:     info: pcmk_cpg_membership: 
Node 1 left group attrd (peer=d-gp2-dbpg0-1, counter=6.0)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:   notice: 
crm_update_peer_state_iter:  crm_update_peer_proc: Node d-gp2-dbpg0-1[1] - 
state is now lost (was member)
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:     info: 
crm_update_peer_proc:        pcmk_cpg_membership: Node d-gp2-dbpg0-1[1] - 
corosync-cpg is now offline
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:   notice: 
crm_reap_dead_member:        Removing d-gp2-dbpg0-1/1 from the membership list
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:   notice: 
crm_update_peer_state_iter:  crm_update_peer_proc: Node d-gp2-dbpg0-1[1] - 
state is now lost (was member)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:   notice: reap_crm_member:     
Purged 1 peers with id=1 and/or uname=d-gp2-dbpg0-1 from the membership cache
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:   notice: 
crm_reap_dead_member:        Removing d-gp2-dbpg0-1/1 from the membership list
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: pcmk_cpg_membership: 
Node 2 still member of group cib (peer=d-gp2-dbpg0-2, counter=6.0)
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:   notice: reap_crm_member:     
Purged 1 peers with id=1 and/or uname=d-gp2-dbpg0-1 from the membership cache
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: pcmk_cpg_membership: 
Node 3 still member of group cib (peer=d-gp2-dbpg0-3, counter=6.1)
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:     info: pcmk_cpg_membership: 
Node 1 left group stonith-ng (peer=d-gp2-dbpg0-1, counter=6.0)
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:     info: pcmk_cpg_membership: 
Node 2 still member of group attrd (peer=d-gp2-dbpg0-2, counter=6.0)
May 18 20:36:16 [4284] d-gp2-dbpg0-2      attrd:     info: pcmk_cpg_membership: 
Node 3 still member of group attrd (peer=d-gp2-dbpg0-3, counter=6.1)
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
crm_update_peer_proc:        pcmk_cpg_membership: Node d-gp2-dbpg0-1[1] - 
corosync-cpg is now offline
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: 
crm_update_peer_state_iter:  crm_update_peer_proc: Node d-gp2-dbpg0-1[1] - 
state is now lost (was member)
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:     info: 
pcmk_quorum_notification:    Membership 96: quorum retained (2)
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:   notice: 
crm_update_peer_state_iter:  crm_reap_unseen_nodes: Node d-gp2-dbpg0-1[1] - 
state is now lost (was member)
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: 
crm_reap_dead_member:        Removing d-gp2-dbpg0-1/1 from the membership list
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:     info: pcmk_cpg_membership: 
Node 1 left group pacemakerd (peer=d-gp2-dbpg0-1, counter=5.0)
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: reap_crm_member:     
Purged 1 peers with id=1 and/or uname=d-gp2-dbpg0-1 from the membership cache
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:     info: pcmk_cpg_membership: 
Node 2 still member of group stonith-ng (peer=d-gp2-dbpg0-2, counter=6.0)
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:     info: 
crm_update_peer_proc:        pcmk_cpg_membership: Node d-gp2-dbpg0-1[1] - 
corosync-cpg is now offline
May 18 20:36:16 [4282] d-gp2-dbpg0-2 stonith-ng:     info: pcmk_cpg_membership: 
Node 3 still member of group stonith-ng (peer=d-gp2-dbpg0-3, counter=6.1)
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:     info: pcmk_cpg_membership: 
Node 2 still member of group pacemakerd (peer=d-gp2-dbpg0-2, counter=5.0)
May 18 20:36:16 [4278] d-gp2-dbpg0-2 pacemakerd:     info: pcmk_cpg_membership: 
Node 3 still member of group pacemakerd (peer=d-gp2-dbpg0-3, counter=5.1)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.5 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.6 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=6
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='1']:  @crm-debug-origin=peer_update_callback
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:     info: 
pcmk_quorum_notification:    Membership 96: quorum retained (2)
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:   notice: 
crm_update_peer_state_iter:  crm_reap_unseen_nodes: Node d-gp2-dbpg0-1[1] - 
state is now lost (was member)
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:     info: pcmk_cpg_membership: 
Node 1 left group crmd (peer=d-gp2-dbpg0-1, counter=6.0)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/164, version=0.35.6)
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:     info: 
crm_update_peer_proc:        pcmk_cpg_membership: Node d-gp2-dbpg0-1[1] - 
corosync-cpg is now offline
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:     info: pcmk_cpg_membership: 
Node 2 still member of group crmd (peer=d-gp2-dbpg0-2, counter=6.0)
May 18 20:36:16 [4286] d-gp2-dbpg0-2       crmd:     info: pcmk_cpg_membership: 
Node 3 still member of group crmd (peer=d-gp2-dbpg0-3, counter=6.1)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section nodes: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/168, version=0.35.6)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.6 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.7 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=7
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='1']:  @in_ccm=false, 
@crm-debug-origin=post_cache_update
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='2']:  @crm-debug-origin=post_cache_update
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='3']:  @crm-debug-origin=post_cache_update
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/169, version=0.35.7)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.7 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.8 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
-- /cib/status/node_state[@id='1']/transient_attributes[@id='1']
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=8
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_delete operation for section 
//node_state[@uname='d-gp2-dbpg0-1']/transient_attributes: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/171, version=0.35.8)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.8 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.9 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=9
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='1']:  @crmd=offline, 
@crm-debug-origin=peer_update_callback
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/172, version=0.35.9)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section nodes: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/176, version=0.35.9)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.9 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.10 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=10
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='1']:  @crm-debug-origin=do_state_transition, 
@join=down
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='2']:  @crm-debug-origin=do_state_transition
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib/status/node_state[@id='3']:  @crm-debug-origin=do_state_transition
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/177, version=0.35.10)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section cib: OK (rc=0, 
origin=d-gp2-dbpg0-3/crmd/178, version=0.35.10)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.10 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.11 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=11
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/attrd/10, version=0.35.11)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.11 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.12 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=12
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/attrd/11, version=0.35.12)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/attrd/12, version=0.35.12)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: --- 0.35.12 2
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
Diff: +++ 0.35.13 (null)
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_perform_op:      
+  /cib:  @num_updates=13
May 18 20:36:16 [4281] d-gp2-dbpg0-2        cib:     info: cib_process_request: 
Completed cib_modify operation for section status: OK (rc=0, 
origin=d-gp2-dbpg0-3/attrd/13, version=0.35.13)
May 18 20:36:23 d-gp2-dbpg0-2 stonith: [16244]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:36:23 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16243] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:36:23 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16243] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:36:23 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
internal_stonith_action_execute:     Attempt 2 to execute fence_legacy 
(reboot). remaining timeout is 57
May 18 20:36:27 d-gp2-dbpg0-2 stonith: [16265]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16264] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16264] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
update_remaining_timeout:    Attempted to execute agent fence_legacy (reboot) 
the maximum number of times (2) allowed
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:    error: log_operation:       
Operation 'reboot' [16264] (call 4 from crmd.4267) for host 'd-gp2-dbpg0-1' 
with device 'vfencing' returned: -201 (Generic Pacemaker error)
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16264 [ Performing: stonith -t external/vcenter -T reset d-gp2-dbpg0-1 
]
May 18 20:36:27 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16264 [ failed: d-gp2-dbpg0-1 5 ]
May 18 20:36:35 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: remote_op_done:      
Operation reboot of d-gp2-dbpg0-1 by <no-one> for 
crmd.4267@d-gp2-dbpg0-3.7d21135c: No route to host
May 18 20:36:41 d-gp2-dbpg0-2 stonith: [16323]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:36:41 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16322] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:36:41 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16322] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:36:41 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
internal_stonith_action_execute:     Attempt 2 to execute fence_legacy 
(reboot). remaining timeout is 57
May 18 20:36:45 d-gp2-dbpg0-2 stonith: [16344]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16343] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16343] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
update_remaining_timeout:    Attempted to execute agent fence_legacy (reboot) 
the maximum number of times (2) allowed
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:    error: log_operation:       
Operation 'reboot' [16343] (call 5 from crmd.4267) for host 'd-gp2-dbpg0-1' 
with device 'vfencing' returned: -201 (Generic Pacemaker error)
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16343 [ Performing: stonith -t external/vcenter -T reset d-gp2-dbpg0-1 
]
May 18 20:36:45 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16343 [ failed: d-gp2-dbpg0-1 5 ]
May 18 20:36:53 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: remote_op_done:      
Operation reboot of d-gp2-dbpg0-1 by <no-one> for 
crmd.4267@d-gp2-dbpg0-3.e29edf29: No route to host
May 18 20:36:59 d-gp2-dbpg0-2 stonith: [16383]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:36:59 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16382] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:36:59 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16382] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:36:59 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
internal_stonith_action_execute:     Attempt 2 to execute fence_legacy 
(reboot). remaining timeout is 57
May 18 20:37:03 d-gp2-dbpg0-2 stonith: [16404]: CRIT: external_reset_req: 
'vcenter reset' for host d-gp2-dbpg0-1 failed with rc 255
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16403] stderr: [ Died at /usr/lib/stonith/plugins/external/vcenter 
line 22. ]
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_action:  
fence_legacy[16403] stderr: [   ...propagated at 
/usr/lib/stonith/plugins/external/vcenter line 22. ]
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:     info: 
update_remaining_timeout:    Attempted to execute agent fence_legacy (reboot) 
the maximum number of times (2) allowed
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:    error: log_operation:       
Operation 'reboot' [16403] (call 6 from crmd.4267) for host 'd-gp2-dbpg0-1' 
with device 'vfencing' returned: -201 (Generic Pacemaker error)
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16403 [ Performing: stonith -t external/vcenter -T reset d-gp2-dbpg0-1 
]
May 18 20:37:03 [4282] d-gp2-dbpg0-2 stonith-ng:  warning: log_operation:       
vfencing:16403 [ failed: d-gp2-dbpg0-1 5 ]
May 18 20:37:11 [4282] d-gp2-dbpg0-2 stonith-ng:   notice: remote_op_done:      
Operation reboot of d-gp2-dbpg0-1 by <no-one> for 
crmd.4267@d-gp2-dbpg0-3.25d3b994: No route to host
[...continues indefinitely...]
------
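
The "Died at /usr/lib/stonith/plugins/external/vcenter line 22" messages look
like the plugin itself dying (a Perl die inside the agent) rather than vCenter
cleanly reporting a failed reset, so to narrow it down I'll try running by hand
the same reset that stonith-ng logs above. Roughly like this (when invoked
manually, the plugin parameters have to be passed on the command line; the
values below are placeholders for my real ones):

------
# Run as root on d-gp2-dbpg0-2; same operation the cluster attempts
stonith -t external/vcenter \
        VI_SERVER="vcenter.example.com" \
        VI_CREDSTORE="/etc/vicredentials.xml" \
        HOSTLIST="d-gp2-dbpg0-1=d-gp2-dbpg0-1" \
        RESETPOWERON="0" \
        -T reset d-gp2-dbpg0-1
------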

Thanks,
-- 
Casey