Quoting Klaus Wenninger <kwenn...@redhat.com>:

On 11/02/2016 06:32 PM, Ken Gaillot wrote:
On 10/26/2016 06:12 AM, Rainer Nerb wrote:
Hello all,

we're currently testing a 2-node cluster with 2 VMs and live migration
on CentOS 7.2 with Pacemaker 1.1.13-10, with the disks on iSCSI targets
and migration via the ssh method.

Live migration works if we issue "pcs resource move ...", "pcs cluster
standby", "pcs cluster stop" and even "systemctl rescue".
The latter only worked after adding the following additional
dependencies to pacemaker.service and leaving the management of those
services to systemd:

  * After/Requires=systemd-machined.service
  * After/Requires=systemd-machine-id-commit.service
  * After/Requires=remote-fs.target
  * After/Requires=libvirtd.service
  * After/Requires=iscsi.service
  * After/Requires=iscsid.service
  * After/Requires=sshd.service
This makes sense when clustered resources depend on services that aren't
themselves managed by the cluster. It depends on your situation, so it's
not something that Pacemaker can solve generically.
Our first approach was to use systemd resources, as there are no ocf: resource agents for iSCSI initiators or libvirtd in our distribution. But then migration failed even on "systemctl rescue".
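
In case it matters, that first attempt looked roughly like this - a
sketch from memory, with our resource names, not the exact final config:

  # clone the plain init services on both nodes ...
  pcs resource create p_libvirtd systemd:libvirtd clone
  pcs resource create p_iscsid systemd:iscsid clone
  # ... and only start the VMs after them
  pcs constraint order start p_libvirtd-clone then start kvm_samba2
  pcs constraint order start p_iscsid-clone then start kvm_samba2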

You may already be aware, but the easiest way to add such requirements
is to put them in a systemd unit override, e.g.
/etc/systemd/system/pacemaker.service.d/dependencies.conf.
Yes, that's how we implemented the additional dependencies.
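
Concretely, the drop-in just carries the unit list from above - a sketch
of ours; a "systemctl daemon-reload" is needed after editing it:

  # /etc/systemd/system/pacemaker.service.d/dependencies.conf
  [Unit]
  After=systemd-machined.service systemd-machine-id-commit.service
  After=remote-fs.target libvirtd.service iscsi.service iscsid.service
  After=sshd.service
  Requires=systemd-machined.service systemd-machine-id-commit.service
  Requires=remote-fs.target libvirtd.service iscsi.service iscsid.service
  Requires=sshd.service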

When shutting down or rebooting, migration fails, and not even a regular
shutdown of the VMs succeeds. Systemd seems to tear down the VMs by
terminating something they depend on.
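
For what it's worth, systemd tracks each qemu guest as a "machine" scope
via systemd-machined. A sketch of how one can inspect what a shutdown
would tear down - the scope name below is derived from the machine name
"qemu-2-samba2" that appears in the kvm02 log further down, written the
way systemd displays it:

  machinectl list
  systemctl --type=scope
  systemctl list-dependencies --reverse 'machine-qemu\x2d2\x2dsamba2.scope'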

Is this a known issue? Did we miss any further dependencies?
There was a shutdown issue when using systemd-class cluster resources
(systemd: instead of ocf:), but I believe that was fixed in the package
you're using, and it's probably not relevant here anyway.
Speaking of
https://github.com/ClusterLabs/pacemaker/pull/887/commits/6aae8542abedc755b90c8c49aa5c429718fd12f1?

It shouldn't be in CentOS 7.2, but I agree that as long as no
systemd resources are involved it wouldn't matter.


It does sound like there's another dependency, but I don't know what.

What log messages do you see on the failure?
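
E.g. - a sketch, assuming a persistent journal on the rebooted node and
the usual CentOS location of pacemaker's detail log:

  # previous boot, cluster- and libvirt-related units only
  journalctl -b -1 -u pacemaker -u libvirtd
  # the failed migrate_to as pacemaker saw it
  grep -e migrate_to -e VirtualDomain /var/log/cluster/corosync.log
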
See attached log files.

Tia
Rainer


-----------------------------------------------------------------------
IT Nerb GmbH
Lessingstraße 8
85098 Großmehring

Telefon        : +49 700 ITNERBGMBH
Telefax        : +49 8407 939 284
email          : i...@it-nerb.de
Internet       : www.it-nerb.de

Geschäftsführer: Rainer Nerb
Handelsregister: HRB 2592
HR-Gericht     : Ingolstadt
-----------------------------------------------------------------------
-----------------------------
[kvm01 pacemaker log, 11:11 reboot of kvm02 - migration fails]
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_expected:      handle_request: Node kvm02.test-cluster1[2] - 
expected state is now down (was member)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
handle_shutdown_request:       Creating shutdown request for 
kvm02.test-cluster1 (state=S_IDLE)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
attrd_peer_update:     Setting shutdown[kvm02.test-cluster1]: 0 -> 1478254299 
from kvm01.test-cluster1
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
write_attribute:       Sent update 14 with 2 changes for shutdown, id=<n/a>, 
set=(null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/attrd/14)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.48 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.49 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=49
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-shutdown']:
  @value=1478254299
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/attrd/14, version=0.23.49)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
attrd_cib_callback:    Update 14 for shutdown: OK (0)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
attrd_cib_callback:    Update 14 for shutdown[kvm02.test-cluster1]=1478254299: 
OK (0)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
attrd_cib_callback:    Update 14 for shutdown[kvm01.test-cluster1]=0: OK (0)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
abort_transition_graph:        Transition aborted by status-2-shutdown, 
shutdown=1478254299: Transient attribute change (modify cib=0.23.49, 
source=abort_unless_down:319, 
path=/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-shutdown'],
 1)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
do_state_transition:   State transition S_IDLE -> S_POLICY_ENGINE [ 
input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm02.test-cluster1 is shutting down
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm01.test-cluster1 is online
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba1      (ocf::heartbeat:VirtualDomain): Started kvm01.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba2      (ocf::heartbeat:VirtualDomain): Started kvm02.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: RecurringOp:   
 Start recurring monitor (10s) for kvm_samba2 on kvm01.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: stage6:        
Scheduling Node kvm02.test-cluster1 for shutdown
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: LogActions:    
Leave   kvm_samba1      (Started kvm01.test-cluster1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: LogActions:    
Migrate kvm_samba2      (Started kvm02.test-cluster1 -> kvm01.test-cluster1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: 
process_pe_message:    Calculated Transition 11: 
/var/lib/pacemaker/pengine/pe-input-658.bz2
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
do_state_transition:   State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE 
[ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: do_te_invoke:  
Processing graph 11 (ref=pe_calc-dc-1478254299-59) derived from 
/var/lib/pacemaker/pengine/pe-input-658.bz2
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 12: migrate_to kvm_samba2_migrate_to_0 
on kvm02.test-cluster1
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
create_operation_update:       cib_action_update: Updating resource kvm_samba2 
after migrate_to op pending (interval=0)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/91)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.49 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.50 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=50
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_migrate_to_0, @operation=migrate_to, 
@crm-debug-origin=cib_action_update, 
@transition-key=12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=-1:193;12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@call-id=-1, @rc-code=193, @op-status=-1, @last-run=1478254299, @l
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/91, version=0.23.50)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.50 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.51 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=51
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @crm-debug-origin=do_update_resource, 
@transition-magic=0:1;12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@call-id=13, @rc-code=1, @op-status=0, @exec-time=257, @exit-reason=samba2: 
migrate_to: Not active locally!
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        ++ 
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']:
  <lrm_rsc_op id="kvm_samba2_last_failure_0" 
operation_key="kvm_samba2_migrate_to_0" operation="migrate_to" 
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" 
transition-key="12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206" 
transition-magic="0:1;12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206" 
exit-reason="samba2: migrate_to: Not
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:  warning: 
status_from_rc:        Action 12 (kvm_samba2_migrate_to_0) on 
kvm02.test-cluster1 failed (target: 0 vs. rc: 1): Error
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
abort_transition_graph:        Transition aborted by kvm_samba2_migrate_to_0 
'modify' on kvm02.test-cluster1: Event failed 
(magic=0:1;12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206, cib=0.23.51, 
source=match_graph_event:381, 0)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_migrate_to_0 (12) confirmed on 
kvm02.test-cluster1 (rc=1)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
process_graph_event:   Detected action (11.12) 
kvm_samba2_migrate_to_0.13=unknown error: failed
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:  warning: 
status_from_rc:        Action 12 (kvm_samba2_migrate_to_0) on 
kvm02.test-cluster1 failed (target: 0 vs. rc: 1): Error
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
abort_transition_graph:        Transition aborted by kvm_samba2_migrate_to_0 
'create' on kvm02.test-cluster1: Event failed 
(magic=0:1;12:11:0:9d0841ad-adb6-44b4-a299-3cef745cc206, cib=0.23.51, 
source=match_graph_event:381, 0)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_migrate_to_0 (12) confirmed on 
kvm02.test-cluster1 (rc=1)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
process_graph_event:   Detected action (11.12) 
kvm_samba2_migrate_to_0.13=unknown error: failed
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: run_graph:     
Transition 11 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=6, 
Source=/var/lib/pacemaker/pengine/pe-input-658.bz2): Complete
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
do_state_transition:   State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE 
[ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm02.test-cluster1/crmd/16, version=0.23.51)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm02.test-cluster1 is shutting down
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm01.test-cluster1 is online
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:  warning: 
unpack_rsc_op_failure: Processing failed op migrate_to for kvm_samba2 on 
kvm02.test-cluster1: unknown error (1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:  warning: 
unpack_rsc_op_failure: Processing failed op migrate_to for kvm_samba2 on 
kvm02.test-cluster1: unknown error (1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba1      (ocf::heartbeat:VirtualDomain): Started kvm01.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba2      (ocf::heartbeat:VirtualDomain): FAILED kvm02.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: RecurringOp:   
 Start recurring monitor (10s) for kvm_samba2 on kvm01.test-cluster1
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: stage6:        
Scheduling Node kvm02.test-cluster1 for shutdown
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:     info: LogActions:    
Leave   kvm_samba1      (Started kvm01.test-cluster1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: LogActions:    
Recover kvm_samba2      (Started kvm02.test-cluster1 -> kvm01.test-cluster1)
Nov 04 11:11:39 [1936] kvm01.test-cluster1    pengine:   notice: 
process_pe_message:    Calculated Transition 12: 
/var/lib/pacemaker/pengine/pe-input-659.bz2
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
do_state_transition:   State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE 
[ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: do_te_invoke:  
Processing graph 12 (ref=pe_calc-dc-1478254299-61) derived from 
/var/lib/pacemaker/pengine/pe-input-659.bz2
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 2: stop kvm_samba2_stop_0 on 
kvm02.test-cluster1
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.51 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.52 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=52
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_stop_0, @operation=stop, 
@transition-key=2:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=0:0;2:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, @call-id=14, 
@rc-code=0, @exec-time=110
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm02.test-cluster1/crmd/17, version=0.23.52)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_stop_0 (2) confirmed on 
kvm02.test-cluster1 (rc=0)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 10: start kvm_samba2_start_0 on 
kvm01.test-cluster1 (local)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: do_lrm_rsc_op: 
Performing key=10:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206 
op=kvm_samba2_start_0
Nov 04 11:11:39 [1934] kvm01.test-cluster1       lrmd:     info: log_execute:   
executing - rsc:kvm_samba2 action:start call_id:17
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
te_crm_command:        Executing crm-event (13): do_shutdown on 
kvm02.test-cluster1
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_cpg_membership:   Node 2 left group crmd (peer=kvm02.test-cluster1, 
counter=4.0)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  Client kvm02.test-cluster1/peer now has status [offline] 
(DC=true, changed=4000000)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  Peer kvm02.test-cluster1 left us
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
erase_status_tag:      Deleting xpath: 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
peer_update_callback:  do_shutdown of kvm02.test-cluster1 (op 13) is complete
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_join:  crmd_peer_down: Node kvm02.test-cluster1[2] - join-2 
phase 4 -> 0
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_cpg_membership:   Node 1 still member of group crmd 
(peer=kvm01.test-cluster1, counter=4.0)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_delete operation for section 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes to master 
(origin=local/crmd/93)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/94)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.52 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.53 0652e5263821da5d38799d582aa48ce2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        -- 
/cib/status/node_state[@id='2']/transient_attributes[@id='2']
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=53
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_delete operation for section 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes: OK (rc=0, 
origin=kvm01.test-cluster1/crmd/93, version=0.23.53)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.53 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.54 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=54
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='2']:  @crmd=offline, 
@crm-debug-origin=peer_update_callback, @join=down, @expected=down
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/94, version=0.23.54)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
pcmk_cpg_membership:   Node 2 left group attrd (peer=kvm02.test-cluster1, 
counter=4.0)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:   notice: 
attrd_peer_remove:     Removing all kvm02.test-cluster1 attributes for 
attrd_peer_change_cb
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:11:39 [1935] kvm01.test-cluster1      attrd:     info: 
pcmk_cpg_membership:   Node 1 still member of group attrd 
(peer=kvm01.test-cluster1, counter=4.0)
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:     info: 
pcmk_cpg_membership:   Node 2 left group stonith-ng (peer=kvm02.test-cluster1, 
counter=4.0)
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:11:39 [1933] kvm01.test-cluster1 stonith-ng:     info: 
pcmk_cpg_membership:   Node 1 still member of group stonith-ng 
(peer=kvm01.test-cluster1, counter=4.0)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_shutdown_req:      Shutdown REQ from kvm02.test-cluster1
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_shutdown_req:      Shutdown REQ from kvm02.test-cluster1
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
pcmk_cpg_membership:   Node 2 left group cib (peer=kvm02.test-cluster1, 
counter=4.0)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
pcmk_cpg_membership:   Node 1 still member of group cib 
(peer=kvm01.test-cluster1, counter=4.0)
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_cpg_membership:   Node 2 left group pacemakerd (peer=kvm02.test-cluster1, 
counter=4.0)
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_cpg_membership:   Node 1 still member of group pacemakerd 
(peer=kvm01.test-cluster1, counter=4.0)
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:     info: 
mcp_cpg_deliver:       Ignoring process list sent by peer for local node
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_quorum_notification:      Membership 1080: quorum retained (1)
Nov 04 11:11:39 [1908] kvm01.test-cluster1 pacemakerd:   notice: 
crm_update_peer_state_iter:    crm_reap_unseen_nodes: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
[1710] kvm01.test-cluster1 corosyncnotice  [TOTEM ] A new membership 
(10.0.0.1:1080) was formed. Members left: 2
[1710] kvm01.test-cluster1 corosyncnotice  [QUORUM] Members[1]: 1
[1710] kvm01.test-cluster1 corosyncnotice  [MAIN  ] Completed service 
synchronization, ready to provide service.
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_quorum_notification:      Membership 1080: quorum retained (1)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
crm_update_peer_state_iter:    crm_reap_unseen_nodes: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  kvm02.test-cluster1 is now lost (was member)
Nov 04 11:11:39 [1937] kvm01.test-cluster1       crmd:   notice: 
peer_update_callback:  do_shutdown of kvm02.test-cluster1 (op 13) is complete
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/95)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section nodes to 
master (origin=local/crmd/98)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/99)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/95, version=0.23.54)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section nodes: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/98, version=0.23.54)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.54 2
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.55 (null)
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=55
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='2']:  @in_ccm=false, 
@crm-debug-origin=post_cache_update
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='1']:  
@crm-debug-origin=post_cache_update
Nov 04 11:11:39 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/99, version=0.23.55)
Nov 04 11:11:40 [1934] kvm01.test-cluster1       lrmd:     info: log_finished:  
finished - rsc:kvm_samba2 action:start call_id:17 pid:22726 exit-code:0 
exec-time:567ms queue-time:0ms
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: 
action_synced_wait:    Managed VirtualDomain_meta-data_0 process 22791 exited 
with rc=0
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:   notice: 
process_lrm_event:     Operation kvm_samba2_start_0: ok 
(node=kvm01.test-cluster1, call=17, rc=0, cib-update=100, confirmed=true)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/100)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.55 2
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.56 (null)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=56
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='1']:  
@crm-debug-origin=do_update_resource
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_start_0, @operation=start, 
@transition-key=10:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=0:0;10:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@call-id=17, @last-run=1478254299, @last-rc-change=1478254299, @exec-time=567, 
@queue-time=0
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/100, version=0.23.56)
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_start_0 (10) confirmed on 
kvm01.test-cluster1 (rc=0)
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 11: monitor kvm_samba2_monitor_10000 
on kvm01.test-cluster1 (local)
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: do_lrm_rsc_op: 
Performing key=11:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206 
op=kvm_samba2_monitor_10000
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: 
process_lrm_event:     Operation kvm_samba2_monitor_10000: ok 
(node=kvm01.test-cluster1, call=18, rc=0, cib-update=101, confirmed=false)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/101)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.56 2
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.57 (null)
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=57
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_monitor_10000']:
  @crm-debug-origin=do_update_resource, 
@transition-key=11:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=0:0;11:12:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@call-id=18, @last-rc-change=1478254300, @exec-time=174
Nov 04 11:11:40 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/101, version=0.23.57)
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_monitor_10000 (11) confirmed on 
kvm01.test-cluster1 (rc=0)
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:   notice: run_graph:     
Transition 12 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
Source=/var/lib/pacemaker/pengine/pe-input-659.bz2): Complete
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:     info: do_log:        
FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 04 11:11:40 [1937] kvm01.test-cluster1       crmd:   notice: 
do_state_transition:   State transition S_TRANSITION_ENGINE -> S_IDLE [ 
input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 04 11:11:45 [1929] kvm01.test-cluster1        cib:     info
-----------------------------
[kvm01 pacemaker log, 11:08 shutdown of kvm02 - migration succeeds]
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_expected:      handle_request: Node kvm02.test-cluster1[2] - 
expected state is now down (was member)
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: 
handle_shutdown_request:       Creating shutdown request for 
kvm02.test-cluster1 (state=S_IDLE)
Nov 04 11:08:19 [1935] kvm01.test-cluster1      attrd:     info: 
attrd_peer_update:     Setting shutdown[kvm02.test-cluster1]: 0 -> 1478254099 
from kvm01.test-cluster1
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.18 2
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.19 (null)
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=19
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-shutdown']:
  @value=1478254099
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm02.test-cluster1/attrd/7, version=0.23.19)
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: 
abort_transition_graph:        Transition aborted by status-2-shutdown, 
shutdown=1478254099: Transient attribute change (modify cib=0.23.19, 
source=abort_unless_down:319, 
path=/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-shutdown'],
 1)
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:   notice: 
do_state_transition:   State transition S_IDLE -> S_POLICY_ENGINE [ 
input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm02.test-cluster1 is shutting down
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: 
determine_online_status:       Node kvm01.test-cluster1 is online
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba1      (ocf::heartbeat:VirtualDomain): Started kvm01.test-cluster1
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: native_print:  
kvm_samba2      (ocf::heartbeat:VirtualDomain): Started kvm02.test-cluster1
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: RecurringOp:   
 Start recurring monitor (10s) for kvm_samba2 on kvm01.test-cluster1
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:   notice: stage6:        
Scheduling Node kvm02.test-cluster1 for shutdown
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:     info: LogActions:    
Leave   kvm_samba1      (Started kvm01.test-cluster1)
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:   notice: LogActions:    
Migrate kvm_samba2      (Started kvm02.test-cluster1 -> kvm01.test-cluster1)
Nov 04 11:08:19 [1936] kvm01.test-cluster1    pengine:   notice: 
process_pe_message:    Calculated Transition 7: 
/var/lib/pacemaker/pengine/pe-input-654.bz2
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: 
do_state_transition:   State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE 
[ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: do_te_invoke:  
Processing graph 7 (ref=pe_calc-dc-1478254099-31) derived from 
/var/lib/pacemaker/pengine/pe-input-654.bz2
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 12: migrate_to kvm_samba2_migrate_to_0 
on kvm02.test-cluster1
Nov 04 11:08:19 [1937] kvm01.test-cluster1       crmd:     info: 
create_operation_update:       cib_action_update: Updating resource kvm_samba2 
after migrate_to op pending (interval=0)
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/45)
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.19 2
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.20 (null)
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=20
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_migrate_to_0, @operation=migrate_to, 
@crm-debug-origin=cib_action_update, 
@transition-key=12:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=-1:193;12:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@call-id=-1, @rc-code=193, @op-status=-1, @last-run=1478254099, @las
Nov 04 11:08:19 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/45, version=0.23.20)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.20 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.21 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=21
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @crm-debug-origin=do_update_resource, 
@transition-magic=0:0;12:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, @call-id=13, 
@rc-code=0, @op-status=0, @exec-time=4483
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm02.test-cluster1/crmd/15, version=0.23.21)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_migrate_to_0 (12) confirmed on 
kvm02.test-cluster1 (rc=0)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 13: migrate_from 
kvm_samba2_migrate_from_0 on kvm01.test-cluster1 (local)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: do_lrm_rsc_op: 
Performing key=13:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206 
op=kvm_samba2_migrate_from_0
Nov 04 11:08:24 [1934] kvm01.test-cluster1       lrmd:     info: log_execute:   
executing - rsc:kvm_samba2 action:migrate_from call_id:12
VirtualDomain(kvm_samba2)[21154]:       2016/11/04_11:08:24 INFO: samba2: live 
migration from kvm02.test-cluster1 succeeded.
Nov 04 11:08:24 [1934] kvm01.test-cluster1       lrmd:     info: log_finished:  
finished - rsc:kvm_samba2 action:migrate_from call_id:12 pid:21154 exit-code:0 
exec-time:175ms queue-time:0ms
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
process_lrm_event:     Operation kvm_samba2_migrate_from_0: ok 
(node=kvm01.test-cluster1, call=12, rc=0, cib-update=46, confirmed=true)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/46)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.21 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.22 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=22
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_migrate_from_0, @operation=migrate_from, 
@transition-key=13:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=0:0;13:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, @call-id=12, 
@rc-code=0, @last-run=1478254104, @last-rc-change=1478254104, @exec-time=175, 
@migrate_sour
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/46, version=0.23.22)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_migrate_from_0 (13) confirmed on 
kvm01.test-cluster1 (rc=0)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 9: stop kvm_samba2_stop_0 on 
kvm02.test-cluster1
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.22 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.23 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=23
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='kvm_samba2']/lrm_rsc_op[@id='kvm_samba2_last_0']:
  @operation_key=kvm_samba2_stop_0, @operation=stop, 
@transition-key=9:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, 
@transition-magic=0:0;9:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206, @call-id=14, 
@last-run=1478254104, @last-rc-change=1478254104, @exec-time=57
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm02.test-cluster1/crmd/16, version=0.23.23)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_stop_0 (9) confirmed on 
kvm02.test-cluster1 (rc=0)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
te_crm_command:        Executing crm-event (15): do_shutdown on 
kvm02.test-cluster1
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
te_rsc_command:        Initiating action 11: monitor kvm_samba2_monitor_10000 
on kvm01.test-cluster1 (local)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: do_lrm_rsc_op: 
Performing key=11:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206 
op=kvm_samba2_monitor_10000
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_cpg_membership:   Node 2 left group crmd (peer=kvm02.test-cluster1, 
counter=2.0)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  Client kvm02.test-cluster1/peer now has status [offline] 
(DC=true, changed=4000000)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  Peer kvm02.test-cluster1 left us
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
erase_status_tag:      Deleting xpath: 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_delete operation for section 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes to master 
(origin=local/crmd/47)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
peer_update_callback:  do_shutdown of kvm02.test-cluster1 (op 15) is complete
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
crm_update_peer_join:  crmd_peer_down: Node kvm02.test-cluster1[2] - join-2 
phase 4 -> 0
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_cpg_membership:   Node 1 still member of group crmd 
(peer=kvm01.test-cluster1, counter=2.0)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/48)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.23 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.24 2032dcea3b7299e948f6cadaf56b473b
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        -- 
/cib/status/node_state[@id='2']/transient_attributes[@id='2']
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=24
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_delete operation for section 
//node_state[@uname='kvm02.test-cluster1']/transient_attributes: OK (rc=0, 
origin=kvm01.test-cluster1/crmd/47, version=0.23.24)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.24 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.25 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=25
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='2']:  @crmd=offline, 
@crm-debug-origin=peer_update_callback, @join=down, @expected=down
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/48, version=0.23.25)
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:     info: 
pcmk_cpg_membership:   Node 2 left group attrd (peer=kvm02.test-cluster1, 
counter=2.0)
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:   notice: 
attrd_peer_remove:     Removing all kvm02.test-cluster1 attributes for 
attrd_peer_change_cb
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:   notice: 
attrd_peer_change_cb:  Lost attribute writer kvm02.test-cluster1
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:08:24 [1935] kvm01.test-cluster1      attrd:     info: 
pcmk_cpg_membership:   Node 1 still member of group attrd 
(peer=kvm01.test-cluster1, counter=2.0)
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:     info: 
pcmk_cpg_membership:   Node 2 left group stonith-ng (peer=kvm02.test-cluster1, 
counter=2.0)
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:08:24 [1933] kvm01.test-cluster1 stonith-ng:     info: 
pcmk_cpg_membership:   Node 1 still member of group stonith-ng 
(peer=kvm01.test-cluster1, counter=2.0)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_shutdown_req:      Shutdown REQ from kvm02.test-cluster1
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
pcmk_cpg_membership:   Node 2 left group cib (peer=kvm02.test-cluster1, 
counter=2.0)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:   notice: 
crm_update_peer_state_iter:    crm_update_peer_proc: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:   notice: 
crm_reap_dead_member:  Removing kvm02.test-cluster1/2 from the membership list
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:   notice: 
reap_crm_member:       Purged 1 peers with id=2 and/or 
uname=kvm02.test-cluster1 from the membership cache
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
pcmk_cpg_membership:   Node 1 still member of group cib 
(peer=kvm01.test-cluster1, counter=2.0)
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_cpg_membership:   Node 2 left group pacemakerd (peer=kvm02.test-cluster1, 
counter=2.0)
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:     info: 
crm_update_peer_proc:  pcmk_cpg_membership: Node kvm02.test-cluster1[2] - 
corosync-cpg is now offline
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_cpg_membership:   Node 1 still member of group pacemakerd 
(peer=kvm01.test-cluster1, counter=2.0)
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:     info: 
mcp_cpg_deliver:       Ignoring process list sent by peer for local node
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
process_lrm_event:     Operation kvm_samba2_monitor_10000: ok 
(node=kvm01.test-cluster1, call=13, rc=0, cib-update=49, confirmed=false)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/49)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.25 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.26 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=26
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        ++ 
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='kvm_samba2']:
  <lrm_rsc_op id="kvm_samba2_monitor_10000" 
operation_key="kvm_samba2_monitor_10000" operation="monitor" 
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" 
transition-key="11:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206" 
transition-magic="0:0;11:7:0:9d0841ad-adb6-44b4-a299-3cef745cc206" 
on_node="kvm01.test-cluster1" call-id="13
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/49, version=0.23.26)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
match_graph_event:     Action kvm_samba2_monitor_10000 (11) confirmed on 
kvm01.test-cluster1 (rc=0)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
pcmk_quorum_notification:      Membership 1072: quorum retained (1)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
crm_update_peer_state_iter:    crm_reap_unseen_nodes: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: 
peer_update_callback:  kvm02.test-cluster1 is now lost (was member)
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
peer_update_callback:  do_shutdown of kvm02.test-cluster1 (op 15) is complete
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: run_graph:     
Transition 7 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
Source=/var/lib/pacemaker/pengine/pe-input-654.bz2): Complete
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:     info: do_log:        
FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Nov 04 11:08:24 [1937] kvm01.test-cluster1       crmd:   notice: 
do_state_transition:   State transition S_TRANSITION_ENGINE -> S_IDLE [ 
input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
[1710] kvm01.test-cluster1 corosyncnotice  [TOTEM ] A new membership 
(10.0.0.1:1072) was formed. Members left: 2
[1710] kvm01.test-cluster1 corosyncnotice  [QUORUM] Members[1]: 1
[1710] kvm01.test-cluster1 corosyncnotice  [MAIN  ] Completed service 
synchronization, ready to provide service.
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:     info: 
pcmk_quorum_notification:      Membership 1072: quorum retained (1)
Nov 04 11:08:24 [1908] kvm01.test-cluster1 pacemakerd:   notice: 
crm_update_peer_state_iter:    crm_reap_unseen_nodes: Node 
kvm02.test-cluster1[2] - state is now lost (was member)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/50)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section nodes to 
master (origin=local/crmd/53)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Forwarding cib_modify operation for section status to 
master (origin=local/crmd/54)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/50, version=0.23.26)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section nodes: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/53, version=0.23.26)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: --- 0.23.26 2
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        Diff: +++ 0.23.27 (null)
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib:  @num_updates=27
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='2']:  @in_ccm=false, 
@crm-debug-origin=post_cache_update
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_perform_op:        +  /cib/status/node_state[@id='1']:  
@crm-debug-origin=post_cache_update
Nov 04 11:08:24 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_request:   Completed cib_modify operation for section status: OK 
(rc=0, origin=kvm01.test-cluster1/crmd/54, version=0.23.27)
Nov 04 11:08:29 [1929] kvm01.test-cluster1        cib:     info: 
cib_process_ping:      Reporting our current digest to kvm01.test-cluster1: 
d91e7c3acf79967abe61648b7d5b6740 for 0.23.27 (0x17dc310 0)
-----------------------------
[kvm02 system log, 11:11 shutdown - systemd stops the VM itself]
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 17.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 18.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 21.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 19.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 20.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 24.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 23.
Nov  4 11:11:08 kvm02 systemd-logind: Removed session 22.
Nov  4 11:11:39 kvm02 systemd: Stopping Daemon for power management...
Nov  4 11:11:39 kvm02 systemd: Stopping Virtual Machine qemu-2-samba2.
Nov  4 11:11:39 kvm02 systemd: Stopping Disk Manager...
Nov  4 11:11:39 kvm02 systemd: Deactivating swap /dev/mapper/centos-swap...
Nov  4 11:11:39 kvm02 systemd: Stopped Stop Read-Ahead Data Collection 10s 
After Completed Startup.
Nov  4 11:11:39 kvm02 systemd: Stopping Stop Read-Ahead Data Collection 10s 
After Completed Startup.
Nov  4 11:11:39 kvm02 systemd: Stopping Session 13 of user root.
Nov  4 11:11:39 kvm02 systemd: Stopping Manage, Install and Generate Color 
Profiles...
Nov  4 11:11:39 kvm02 systemd: Stopping Session 14 of user root.
Nov  4 11:11:39 kvm02 systemd: Stopped Dump dmesg to /var/log/dmesg.
Nov  4 11:11:39 kvm02 systemd: Stopping Dump dmesg to /var/log/dmesg...
Nov  4 11:11:39 kvm02 systemd: Stopped target Graphical Interface.
Nov  4 11:11:39 kvm02 systemd: Stopping Graphical Interface.
Nov  4 11:11:39 kvm02 systemd: Stopped target Multi-User System.
Nov  4 11:11:39 kvm02 systemd: Stopping Multi-User System.
Nov  4 11:11:39 kvm02 systemd: Stopping libstoragemgmt plug-in server daemon...
Nov  4 11:11:39 kvm02 systemd: Stopping CUPS Printing Service...
Nov  4 11:11:39 kvm02 systemd: Stopped target Login Prompts.
Nov  4 11:11:39 kvm02 systemd: Stopping Login Prompts.
Nov  4 11:11:39 kvm02 systemd: Stopping Dynamic System Tuning Daemon...
Nov  4 11:11:39 kvm02 systemd: Stopping Avahi mDNS/DNS-SD Stack...
Nov  4 11:11:39 kvm02 systemd: Stopping Kernel Samepage Merging (KSM) Tuning 
Daemon...
Nov  4 11:11:39 kvm02 rsyslogd: [origin software="rsyslogd" swVersion="7.4.7" 
x-pid="21073" x-info="http://www.rsyslog.com"] exiting on signal 15.
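
If it helps, we can re-run the reboot with systemd debug logging to see
what exactly pulls the machine scope down - a sketch, using the
systemd-system.conf(5) drop-in mechanism:

  # /etc/systemd/system.conf.d/10-debug.conf
  [Manager]
  LogLevel=debug
  LogTarget=kmsg

  # re-read system.conf without a full reboot
  systemctl daemon-reexec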

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
