Sorry, the log was missed out of my previous mail; it is included below.

TA,
Ivan


On Mon, 2008-06-30 at 17:58 +1200, Ivan wrote:
> Hi,
> 
> Sorry to make so much noise about this subject, but I am desperate to fix
> my HASI cluster.
> 
> My suspicion is that the Xen RA is broken. When a VM gets migrated to the
> other node in a 2-node cluster, the migration itself is successful, but for
> some reason the VMs on the active node get stopped and then restarted just
> after the migration.
> 
> How come the log contains the line "ERROR: VM3: xm migrate to SERVER1
> succeeded"?
> 
> My guess is that the RA returns something other than "success", so the
> successful operation is considered "failed", which probably causes the
> resources to be restarted (stop/start) on the active node (see the rough
> sketch further down, after this quoted text).
> 
> BTW: today I downloaded some RPMs from the openSUSE Build Service:
> 
> http://download.opensuse.org/repositories/server:ha-clustering:UNSTABLE/SLE_10
> 
> That repository has the same Xen RA as my SLES10 SP2. I wonder whether SP2
> was shipped with the unstable RA?
> 
> Could somebody please shed some light on this weird behaviour? My VMs
> really need the live migration...
> 
> Thanks,
> Ivan
> 
> 
> 
> 
> 
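For what it's worth, here is a rough sketch of what I understand the
migrate_to path of an OCF resource agent has to do. The function and
variable names (Xen_Migrate_To, DOMAIN_NAME) are only illustrative, not
copied from the shipped Xen RA, and I am assuming the usual OCF shell
functions and return-code variables are sourced:

  # Illustrative sketch only, not the shipped heartbeat Xen RA.
  # Assumes the OCF shell functions (ocf_log, $OCF_SUCCESS, ...) are sourced.
  Xen_Migrate_To() {
      target_node="$1"
      if xm migrate "$DOMAIN_NAME" "$target_node" --live; then
          # The agent on my system apparently logs this success message at
          # the "err" level, which is why syslog shows lines like
          # "ERROR: VM1: xm migrate to SERVER1 succeeded."
          ocf_log info "$DOMAIN_NAME: xm migrate to $target_node succeeded."
          return $OCF_SUCCESS      # rc=0 is all that lrmd/crmd act on
      else
          ocf_log err "$DOMAIN_NAME: xm migrate to $target_node failed."
          return $OCF_ERR_GENERIC  # non-zero makes the CRM recover (stop/start)
      fi
  }

If the real handler returns something other than 0 on that path, that would
explain the stop/start I am seeing; if it does return 0, then the "ERROR"
prefix is just a mislabelled log message rather than an actual failure.
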
lrmd[6695]: 2008/06/29_21:37:44 WARN: G_SIG_dispatch: Dispatch function for 
SIGCHLD was delayed 240 ms (> 100 ms) before being called (GSource: 0x805dac0)
lrmd[6695]: 2008/06/29_21:37:44 info: G_SIG_dispatch: started at 1729279863 
should have started at 1729279839
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: - <cib 
epoch="23"/>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: + <cib 
epoch="24">
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +   
<configuration>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +     <nodes>
tengine[7344]: 2008/06/29_21:38:01 info: update_abort_priority: Abort priority 
upgraded to 1000000
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +       <node 
id="3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +         
<instance_attributes id="nodes-3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +           
<attributes>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +             
<nvpair id="standby-3ed49e19-fbb6-4cb2-8881-252b9bb768bc" name="standby" 
value="on"/>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +           
</attributes>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +         
</instance_attributes>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +       </node>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +     </nodes>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: +   
</configuration>
cib[6694]: 2008/06/29_21:38:01 info: log_data_element: cib:diff: + </cib>
crmd[6698]: 2008/06/29_21:38:01 info: do_state_transition: State transition 
S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE 
origin=route_message ]
crmd[6698]: 2008/06/29_21:38:01 info: do_state_transition: All 2 cluster nodes 
are eligible to run resources.
cib[9407]: 2008/06/29_21:38:01 info: retrieveCib: Reading cluster configuration 
from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
cib[9407]: 2008/06/29_21:38:01 info: retrieveCib: Reading cluster configuration 
from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
cib[9407]: 2008/06/29_21:38:01 info: retrieveCib: Reading cluster configuration 
from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
pengine[7345]: 2008/06/29_21:38:01 info: unpack_nodes: Node SERVER2 is in 
standby-mode
pengine[7345]: 2008/06/29_21:38:01 info: determine_online_status: Node SERVER2 
is standby
pengine[7345]: 2008/06/29_21:38:01 info: determine_online_status: Node SERVER1 
is online
pengine[7345]: 2008/06/29_21:38:01 notice: clone_print: Clone Set: STONITH
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     STONITH-child:0    
(stonith:external/ssh): Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     STONITH-child:1    
(stonith:external/ssh): Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: native_print: STONITH-iLO-SERVER1    
(stonith:external/riloe):       Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print: STONITH-iLO-SERVER2    
(stonith:external/riloe):       Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: clone_print: Clone Set: pingd
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     pingd-child:0      
(ocf::heartbeat:pingd): Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     pingd-child:1      
(ocf::heartbeat:pingd): Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: clone_print: Clone Set: evms
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     evms-child:0       
(ocf::heartbeat:EvmsSCC):       Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     evms-child:1       
(ocf::heartbeat:EvmsSCC):       Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: clone_print: Clone Set: configpool
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     configpool-child:0 
(ocf::heartbeat:Filesystem):    Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print:     configpool-child:1 
(ocf::heartbeat:Filesystem):    Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: native_print: VM1    
(ocf::heartbeat:Xen):   Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 notice: native_print: VM2    
(ocf::heartbeat:Xen):   Started SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: native_print: VM3    
(ocf::heartbeat:Xen):   Started SERVER2
pengine[7345]: 2008/06/29_21:38:01 WARN: native_color: Resource STONITH-child:0 
cannot run anywhere
pengine[7345]: 2008/06/29_21:38:01 WARN: native_color: Resource 
STONITH-iLO-SERVER1 cannot run anywhere
pengine[7345]: 2008/06/29_21:38:01 WARN: native_color: Resource pingd-child:0 
cannot run anywhere
pengine[7345]: 2008/06/29_21:38:01 WARN: native_color: Resource evms-child:0 
cannot run anywhere
pengine[7345]: 2008/06/29_21:38:01 WARN: native_color: Resource 
configpool-child:0 cannot run anywhere
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop 
STONITH-child:0
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource 
STONITH-child:1 (SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop 
STONITH-iLO-SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource 
STONITH-iLO-SERVER2     (SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop 
pingd-child:0
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource 
pingd-child:1   (SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop 
evms-child:0
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource 
evms-child:1    (SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop 
configpool-child:0
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource 
configpool-child:1      (SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Move  resource VM1     
(SERVER2 -> SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop VM1
pengine[7345]: 2008/06/29_21:38:01 notice: StartRsc:  SERVER1   Start VM1
pengine[7345]: 2008/06/29_21:38:01 notice: RecurringOp: SERVER1    
VM1_monitor_10000
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Leave resource VM2     
(SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: NoRoleChange: Move  resource VM3     
(SERVER2 -> SERVER1)
pengine[7345]: 2008/06/29_21:38:01 notice: StopRsc:   SERVER2   Stop VM3
pengine[7345]: 2008/06/29_21:38:01 notice: StartRsc:  SERVER1   Start VM3
pengine[7345]: 2008/06/29_21:38:01 notice: RecurringOp: SERVER1    
VM3_monitor_10000
pengine[7345]: 2008/06/29_21:38:01 notice: complex_migrate_reload: Migrating 
VM1 from SERVER2 to SERVER1
pengine[7345]: 2008/06/29_21:38:01 notice: complex_migrate_reload: Migrating 
VM3 from SERVER2 to SERVER1
crmd[6698]: 2008/06/29_21:38:01 info: do_state_transition: State transition 
S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE 
origin=route_message ]
pengine[7345]: 2008/06/29_21:38:01 WARN: process_pe_message: Transition 25: 
WARNINGs found during PE processing. PEngine Input stored in: 
/var/lib/heartbeat/pengine/pe-warn-0.bz2
pengine[7345]: 2008/06/29_21:38:01 info: process_pe_message: Configuration 
WARNINGs found during PE processing.  Please run "crm_verify -L" to identify 
issues.
cib[9407]: 2008/06/29_21:38:01 info: write_cib_contents: Wrote version 0.24.1 
of the CIB to disk (digest: 11fe4b84f165351e52d696590691868a)
cib[9407]: 2008/06/29_21:38:01 info: retrieveCib: Reading cluster configuration 
from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
tengine[7344]: 2008/06/29_21:38:01 info: unpack_graph: Unpacked transition 25: 
46 actions in 46 synapses
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 21 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 31 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 44 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 59 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
63: VM1_migrate_to_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
66: VM2_stop_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
68: VM3_migrate_to_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
26: pingd-child:0_stop_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
76: evms-child:0_pre_notify_stop_0 on SERVER2
crmd[6698]: 2008/06/29_21:38:01 info: do_lrm_rsc_op: Performing 
op=VM1_migrate_to_0 key=63:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
79: evms-child:1_pre_notify_stop_0 on SERVER1
lrmd[6695]: 2008/06/29_21:38:01 info: rsc:VM1: migrate_to
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
81: configpool-child:0_pre_notify_stop_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:01 info: send_rsc_command: Initiating action 
83: configpool-child:1_pre_notify_stop_0 on SERVER1
crmd[6698]: 2008/06/29_21:38:01 info: do_lrm_rsc_op: Performing 
op=VM3_migrate_to_0 key=68:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:01 info: rsc:VM3: migrate_to
crmd[6698]: 2008/06/29_21:38:01 info: do_lrm_rsc_op: Performing 
op=pingd-child:0_stop_0 key=26:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:01 info: rsc:pingd-child:0: stop
cib[9407]: 2008/06/29_21:38:01 info: retrieveCib: Reading cluster configuration 
from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
crmd[6698]: 2008/06/29_21:38:01 info: do_lrm_rsc_op: Performing 
op=evms-child:0_notify_0 key=76:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
crmd[6698]: 2008/06/29_21:38:01 info: do_lrm_rsc_op: Performing 
op=configpool-child:0_notify_0 key=81:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
crmd[6698]: 2008/06/29_21:38:01 info: process_lrm_event: LRM operation 
VM1_monitor_10000 (call=21, rc=-2) Cancelled 
crmd[6698]: 2008/06/29_21:38:01 info: process_lrm_event: LRM operation 
VM3_monitor_10000 (call=25, rc=-2) Cancelled 
crmd[6698]: 2008/06/29_21:38:01 info: process_lrm_event: LRM operation 
pingd-child:0_monitor_20000 (call=11, rc=-2) Cancelled 
crmd[6698]: 2008/06/29_21:38:01 info: process_lrm_event: LRM operation 
pingd-child:0_stop_0 (call=28, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:01 info: match_graph_event: Action 
pingd-child:0_stop_0 (26) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 32 
fired and confirmed
lrmd[6695]: 2008/06/29_21:38:01 info: rsc:configpool-child:0: notify
Filesystem[9446][9478]: 2008/06/29_21:38:01 INFO: Running notify for 
/dev/evms/san2/cfgpool on /etc/xen/vm
Filesystem[9446][9486]: 2008/06/29_21:38:01 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify: pre for stop
Filesystem[9446][9487]: 2008/06/29_21:38:01 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify active: SERVER1 
Filesystem[9446][9488]: 2008/06/29_21:38:01 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify stop: SERVER2 SERVER1 
Filesystem[9446][9489]: 2008/06/29_21:38:01 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify start: SERVER1 
Filesystem[9446][9491]: 2008/06/29_21:38:01 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: ignoring pre-notify for stop.
crmd[6698]: 2008/06/29_21:38:01 info: process_lrm_event: LRM operation 
configpool-child:0_notify_0 (call=30, rc=0) complete 
Xen[9409][9497]: 2008/06/29_21:38:01 INFO: VM3: Starting xm migrate to SERVER1
tengine[7344]: 2008/06/29_21:38:01 info: match_graph_event: Action 
configpool-child:0_pre_notify_stop_0 (81) confirmed on SERVER2 (rc=0)
Xen[9408][9502]: 2008/06/29_21:38:01 INFO: VM1: Starting xm migrate to SERVER1
tengine[7344]: 2008/06/29_21:38:01 info: match_graph_event: Action 
evms-child:1_pre_notify_stop_0 (79) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:01 info: match_graph_event: Action 
configpool-child:1_pre_notify_stop_0 (83) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:01 info: te_pseudo_action: Pseudo action 60 
fired and confirmed
lrmd[6695]: 2008/06/29_21:38:02 info: rsc:evms-child:0: notify
crmd[6698]: 2008/06/29_21:38:02 info: process_lrm_event: LRM operation 
evms-child:0_notify_0 (call=29, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:02 info: match_graph_event: Action 
evms-child:0_pre_notify_stop_0 (76) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:02 info: te_pseudo_action: Pseudo action 45 
fired and confirmed
Xen[9408][9768]: 2008/06/29_21:38:07 ERROR: VM1: xm migrate to SERVER1 
succeeded.
crmd[6698]: 2008/06/29_21:38:07 info: process_lrm_event: LRM operation 
VM1_migrate_to_0 (call=26, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:07 info: match_graph_event: Action VM1_stop_0 
(63) confirmed on SERVER2 (rc=0)
Xen[9409][10124]: 2008/06/29_21:38:09 ERROR: VM3: xm migrate to SERVER1 
succeeded.
crmd[6698]: 2008/06/29_21:38:09 info: process_lrm_event: LRM operation 
VM3_migrate_to_0 (call=27, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:09 info: match_graph_event: Action VM3_stop_0 
(68) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:25 info: match_graph_event: Action VM2_stop_0 
(66) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:25 info: te_pseudo_action: Pseudo action 57 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:25 info: send_rsc_command: Initiating action 
48: configpool-child:0_stop_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:25 info: send_rsc_command: Initiating action 
49: configpool-child:1_stop_0 on SERVER1
crmd[6698]: 2008/06/29_21:38:25 info: do_lrm_rsc_op: Performing 
op=configpool-child:0_stop_0 key=48:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:25 info: rsc:configpool-child:0: stop
crmd[6698]: 2008/06/29_21:38:25 info: process_lrm_event: LRM operation 
configpool-child:0_monitor_20000 (call=18, rc=-2) Cancelled 
Filesystem[10382][10412]: 2008/06/29_21:38:26 INFO: Running stop for 
/dev/evms/san2/cfgpool on /etc/xen/vm
Filesystem[10382][10422]: 2008/06/29_21:38:26 INFO: Trying to unmount 
/etc/xen/vm
Filesystem[10382][10437]: 2008/06/29_21:38:30 INFO: unmounted /etc/xen/vm 
successfully
Filesystem[10382][10450]: 2008/06/29_21:38:30 INFO: 
/sys/kernel/config/cluster/ocfs2/heartbeat/5928BFFEC45242D7B737712FB4E94DB8: 
Removing membership directory.
crmd[6698]: 2008/06/29_21:38:30 info: process_lrm_event: LRM operation 
configpool-child:0_stop_0 (call=31, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:30 info: match_graph_event: Action 
configpool-child:0_stop_0 (48) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:31 info: match_graph_event: Action 
configpool-child:1_stop_0 (49) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 58 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 61 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 62 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 42 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 53 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 54 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: send_rsc_command: Initiating action 
33: evms-child:0_stop_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 51 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: send_rsc_command: Initiating action 
50: configpool-child:1_start_0 on SERVER1
crmd[6698]: 2008/06/29_21:38:31 info: do_lrm_rsc_op: Performing 
op=evms-child:0_stop_0 key=33:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:31 info: rsc:evms-child:0: stop
crmd[6698]: 2008/06/29_21:38:31 info: process_lrm_event: LRM operation 
evms-child:0_stop_0 (call=32, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:31 info: match_graph_event: Action 
evms-child:0_stop_0 (33) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 43 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: te_pseudo_action: Pseudo action 46 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:31 info: send_rsc_command: Initiating action 
80: evms-child:1_post_notify_stop_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:32 info: match_graph_event: Action 
evms-child:1_post_notify_stop_0 (80) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:32 info: te_pseudo_action: Pseudo action 47 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:32 info: match_graph_event: Action 
configpool-child:1_start_0 (50) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:32 info: te_pseudo_action: Pseudo action 52 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:32 info: te_pseudo_action: Pseudo action 55 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:32 info: send_rsc_command: Initiating action 
82: configpool-child:1_post_notify_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:33 info: match_graph_event: Action 
configpool-child:1_post_notify_start_0 (82) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:33 info: te_pseudo_action: Pseudo action 56 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:33 info: send_rsc_command: Initiating action 
64: VM1_migrate_from_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:33 info: send_rsc_command: Initiating action 
67: VM2_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:33 info: send_rsc_command: Initiating action 
69: VM3_migrate_from_0 on SERVER1
tengine[7344]: 2008/06/29_21:38:33 info: send_rsc_command: Initiating action 
10: configpool-child:1_monitor_20000 on SERVER1
tengine[7344]: 2008/06/29_21:38:34 info: match_graph_event: Action VM1_start_0 
(64) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:34 info: send_rsc_command: Initiating action 
65: VM1_monitor_10000 on SERVER1
tengine[7344]: 2008/06/29_21:38:34 info: match_graph_event: Action VM3_start_0 
(69) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:34 info: send_rsc_command: Initiating action 
70: VM3_monitor_10000 on SERVER1
tengine[7344]: 2008/06/29_21:38:34 info: te_pseudo_action: Pseudo action 12 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:34 info: send_rsc_command: Initiating action 
16: STONITH-child:0_stop_0 on SERVER2
tengine[7344]: 2008/06/29_21:38:34 info: send_rsc_command: Initiating action 
23: STONITH-iLO-SERVER1_stop_0 on SERVER2
crmd[6698]: 2008/06/29_21:38:34 info: do_lrm_rsc_op: Performing 
op=STONITH-child:0_stop_0 key=16:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:34 info: rsc:STONITH-child:0: stop
crmd[6698]: 2008/06/29_21:38:34 info: do_lrm_rsc_op: Performing 
op=STONITH-iLO-SERVER1_stop_0 key=23:25:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:38:34 info: rsc:STONITH-iLO-SERVER1: stop
crmd[6698]: 2008/06/29_21:38:34 info: process_lrm_event: LRM operation 
STONITH-child:0_monitor_5000 (call=4, rc=-2) Cancelled 
crmd[6698]: 2008/06/29_21:38:34 info: process_lrm_event: LRM operation 
STONITH-iLO-SERVER1_monitor_30000 (call=7, rc=-2) Cancelled 
lrmd[10474]: 2008/06/29_21:38:34 info: Try to stop STONITH resource 
<rsc_id=STONITH-child:0> : Device=external/ssh
crmd[6698]: 2008/06/29_21:38:34 info: process_lrm_event: LRM operation 
STONITH-child:0_stop_0 (call=33, rc=0) complete 
lrmd[10475]: 2008/06/29_21:38:34 info: Try to stop STONITH resource 
<rsc_id=STONITH-iLO-SERVER1> : Device=external/riloe
crmd[6698]: 2008/06/29_21:38:34 info: process_lrm_event: LRM operation 
STONITH-iLO-SERVER1_stop_0 (call=34, rc=0) complete 
tengine[7344]: 2008/06/29_21:38:34 info: match_graph_event: Action 
STONITH-child:0_stop_0 (16) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:34 info: te_pseudo_action: Pseudo action 22 
fired and confirmed
tengine[7344]: 2008/06/29_21:38:34 info: match_graph_event: Action 
STONITH-iLO-SERVER1_stop_0 (23) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:38:35 info: match_graph_event: Action 
configpool-child:1_monitor_20000 (10) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:35 info: match_graph_event: Action 
VM1_monitor_10000 (65) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:35 info: match_graph_event: Action 
VM3_monitor_10000 (70) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:38 info: match_graph_event: Action VM2_start_0 
(67) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:38 info: send_rsc_command: Initiating action 
11: VM2_monitor_10000 on SERVER1
crmd[6698]: 2008/06/29_21:38:40 info: do_state_transition: State transition 
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE 
origin=route_message ]
tengine[7344]: 2008/06/29_21:38:40 info: match_graph_event: Action 
VM2_monitor_10000 (11) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:38:40 info: run_graph: Transition 25: 
(Complete=46, Pending=0, Fired=0, Skipped=0, Incomplete=0)
tengine[7344]: 2008/06/29_21:38:40 info: notify_crmd: Transition 25 status: 
te_complete - <null>
cib[6694]: 2008/06/29_21:39:54 info: cib_stats: Processed 56 operations 
(9285.00us average, 0% utilization) in the last 10min
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: - <cib 
epoch="24" num_updates="32">
tengine[7344]: 2008/06/29_21:40:51 info: update_abort_priority: Abort priority 
upgraded to 1000000
crmd[6698]: 2008/06/29_21:40:51 info: do_state_transition: State transition 
S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_IPC_MESSAGE 
origin=route_message ]
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -   
<configuration>
crmd[6698]: 2008/06/29_21:40:51 info: do_state_transition: All 2 cluster nodes 
are eligible to run resources.
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -     <nodes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -       <node 
id="3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -         
<instance_attributes id="nodes-3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -           
<attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -             
<nvpair value="on" id="standby-3ed49e19-fbb6-4cb2-8881-252b9bb768bc"/>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -           
</attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -         
</instance_attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -       </node>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -     </nodes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: -   
</configuration>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: - </cib>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: + <cib 
epoch="25" num_updates="1">
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +   
<configuration>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +     <nodes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +       <node 
id="3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +         
<instance_attributes id="nodes-3ed49e19-fbb6-4cb2-8881-252b9bb768bc">
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +           
<attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +             
<nvpair value="off" id="standby-3ed49e19-fbb6-4cb2-8881-252b9bb768bc"/>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +           
</attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +         
</instance_attributes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +       </node>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +     </nodes>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: +   
</configuration>
cib[6694]: 2008/06/29_21:40:51 info: log_data_element: cib:diff: + </cib>
cib[12003]: 2008/06/29_21:40:51 info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
cib[12003]: 2008/06/29_21:40:51 info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
cib[12003]: 2008/06/29_21:40:51 info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
pengine[7345]: 2008/06/29_21:40:51 info: determine_online_status: Node SERVER2 
is online
pengine[7345]: 2008/06/29_21:40:51 info: determine_online_status: Node SERVER1 
is online
pengine[7345]: 2008/06/29_21:40:51 notice: clone_print: Clone Set: STONITH
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     STONITH-child:0    
(stonith:external/ssh): Stopped 
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     STONITH-child:1    
(stonith:external/ssh): Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: native_print: STONITH-iLO-SERVER1    
(stonith:external/riloe):       Stopped 
pengine[7345]: 2008/06/29_21:40:51 notice: native_print: STONITH-iLO-SERVER2    
(stonith:external/riloe):       Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: clone_print: Clone Set: pingd
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     pingd-child:0      
(ocf::heartbeat:pingd): Stopped 
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     pingd-child:1      
(ocf::heartbeat:pingd): Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: clone_print: Clone Set: evms
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     evms-child:0       
(ocf::heartbeat:EvmsSCC):       Stopped 
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     evms-child:1       
(ocf::heartbeat:EvmsSCC):       Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: clone_print: Clone Set: configpool
cib[12003]: 2008/06/29_21:40:51 info: write_cib_contents: Wrote version 0.25.1 
of the CIB to disk (digest: a52652dcda33cb9dc8e6b90c5f90d1eb)
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     configpool-child:0 
(ocf::heartbeat:Filesystem):    Stopped 
pengine[7345]: 2008/06/29_21:40:51 notice: native_print:     configpool-child:1 
(ocf::heartbeat:Filesystem):    Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: native_print: VM1    
(ocf::heartbeat:Xen):   Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: native_print: VM2    
(ocf::heartbeat:Xen):   Started SERVER1
cib[12003]: 2008/06/29_21:40:51 info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
pengine[7345]: 2008/06/29_21:40:51 notice: native_print: VM3    
(ocf::heartbeat:Xen):   Started SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: StartRsc:  SERVER2   Start 
STONITH-child:0
pengine[7345]: 2008/06/29_21:40:51 notice: RecurringOp: SERVER2    
STONITH-child:0_monitor_5000
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource 
STONITH-child:1 (SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: StartRsc:  SERVER2   Start 
STONITH-iLO-SERVER1
pengine[7345]: 2008/06/29_21:40:51 notice: RecurringOp: SERVER2    
STONITH-iLO-SERVER1_monitor_30000
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource 
STONITH-iLO-SERVER2     (SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: StartRsc:  SERVER2   Start 
pingd-child:0
pengine[7345]: 2008/06/29_21:40:51 notice: RecurringOp: SERVER2    
pingd-child:0_monitor_20000
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource 
pingd-child:1   (SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: StartRsc:  SERVER2   Start 
evms-child:0
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource 
evms-child:1    (SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: StartRsc:  SERVER2   Start 
configpool-child:0
pengine[7345]: 2008/06/29_21:40:51 notice: RecurringOp: SERVER2    
configpool-child:0_monitor_20000
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource 
configpool-child:1      (SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource VM1     
(SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource VM2     
(SERVER1)
pengine[7345]: 2008/06/29_21:40:51 notice: NoRoleChange: Leave resource VM3     
(SERVER1)
cib[12003]: 2008/06/29_21:40:51 info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
crmd[6698]: 2008/06/29_21:40:51 info: do_state_transition: State transition 
S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE 
origin=route_message ]
lrmd[6695]: 2008/06/29_21:40:51 info: rsc:STONITH-iLO-SERVER1: start
crmd[6698]: 2008/06/29_21:40:51 info: do_lrm_rsc_op: Performing 
op=STONITH-iLO-SERVER1_start_0 key=20:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
pengine[7345]: 2008/06/29_21:40:51 info: process_pe_message: Transition 26: 
PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-25.bz2
tengine[7344]: 2008/06/29_21:40:51 info: unpack_graph: Unpacked transition 26: 
41 actions in 41 synapses
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 16 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
20: STONITH-iLO-SERVER1_start_0 on SERVER2
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 28 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 37 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 53 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
63: VM1_stop_0 on SERVER1
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
65: VM2_stop_0 on SERVER1
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
67: VM3_stop_0 on SERVER1
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
12: STONITH-child:0_start_0 on SERVER2
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
24: pingd-child:0_start_0 on SERVER2
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
75: evms-child:1_pre_notify_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
80: configpool-child:1_pre_notify_start_0 on SERVER1
crmd[6698]: 2008/06/29_21:40:51 info: do_lrm_rsc_op: Performing 
op=STONITH-child:0_start_0 key=12:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:40:51 info: rsc:STONITH-child:0: start
lrmd[12004]: 2008/06/29_21:40:51 info: Try to start STONITH resource 
<rsc_id=STONITH-iLO-SERVER1> : Device=external/riloe
crmd[6698]: 2008/06/29_21:40:51 info: do_lrm_rsc_op: Performing 
op=pingd-child:0_start_0 key=24:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:40:51 info: rsc:pingd-child:0: start
crmd[6698]: 2008/06/29_21:40:51 info: process_lrm_event: LRM operation 
pingd-child:0_start_0 (call=37, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:51 info: match_graph_event: Action 
pingd-child:0_start_0 (24) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
25: pingd-child:0_monitor_20000 on SERVER2
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 29 
fired and confirmed
crmd[6698]: 2008/06/29_21:40:51 info: do_lrm_rsc_op: Performing 
op=pingd-child:0_monitor_20000 key=25:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
crmd[6698]: 2008/06/29_21:40:51 info: process_lrm_event: LRM operation 
pingd-child:0_monitor_20000 (call=38, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:51 info: match_graph_event: Action 
pingd-child:0_monitor_20000 (25) confirmed on SERVER2 (rc=0)
lrmd[12005]: 2008/06/29_21:40:51 info: Try to start STONITH resource 
<rsc_id=STONITH-child:0> : Device=external/ssh
tengine[7344]: 2008/06/29_21:40:51 info: match_graph_event: Action 
evms-child:1_pre_notify_start_0 (75) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 38 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: te_pseudo_action: Pseudo action 35 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:51 info: send_rsc_command: Initiating action 
32: evms-child:0_start_0 on SERVER2
crmd[6698]: 2008/06/29_21:40:51 info: do_lrm_rsc_op: Performing 
op=evms-child:0_start_0 key=32:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:40:51 info: rsc:evms-child:0: start
heartbeat[4393]: 2008/06/29_21:40:51 WARN:  nodename not found for uuid
crmd[6698]: 2008/06/29_21:40:52 info: process_lrm_event: LRM operation 
STONITH-child:0_start_0 (call=36, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:52 info: match_graph_event: Action 
STONITH-child:0_start_0 (12) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:52 info: send_rsc_command: Initiating action 
13: STONITH-child:0_monitor_5000 on SERVER2
tengine[7344]: 2008/06/29_21:40:52 info: te_pseudo_action: Pseudo action 17 
fired and confirmed
crmd[6698]: 2008/06/29_21:40:52 info: do_lrm_rsc_op: Performing 
op=STONITH-child:0_monitor_5000 key=13:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
heartbeat[4393]: 2008/06/29_21:40:53 WARN:  nodename not found for uuid
tengine[7344]: 2008/06/29_21:40:53 info: match_graph_event: Action 
configpool-child:1_pre_notify_start_0 (80) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:40:53 info: te_pseudo_action: Pseudo action 54 
fired and confirmed
crmd[6698]: 2008/06/29_21:40:53 info: process_lrm_event: LRM operation 
STONITH-child:0_monitor_5000 (call=40, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:53 info: match_graph_event: Action 
STONITH-child:0_monitor_5000 (13) confirmed on SERVER2 (rc=0)
heartbeat[4393]: 2008/06/29_21:40:53 WARN:  nodename not found for uuid
crmd[6698]: 2008/06/29_21:40:56 info: process_lrm_event: LRM operation 
evms-child:0_start_0 (call=39, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:56 info: match_graph_event: Action 
evms-child:0_start_0 (32) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:56 info: te_pseudo_action: Pseudo action 36 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:56 info: te_pseudo_action: Pseudo action 39 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:56 info: send_rsc_command: Initiating action 
74: evms-child:0_post_notify_start_0 on SERVER2
tengine[7344]: 2008/06/29_21:40:56 info: send_rsc_command: Initiating action 
76: evms-child:1_post_notify_start_0 on SERVER1
crmd[6698]: 2008/06/29_21:40:56 info: do_lrm_rsc_op: Performing 
op=evms-child:0_notify_0 key=74:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:40:56 info: rsc:evms-child:0: notify
crmd[6698]: 2008/06/29_21:40:56 info: process_lrm_event: LRM operation 
evms-child:0_notify_0 (call=41, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:56 info: match_graph_event: Action 
evms-child:0_post_notify_start_0 (74) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:57 info: match_graph_event: Action 
evms-child:1_post_notify_start_0 (76) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:40:57 info: te_pseudo_action: Pseudo action 40 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:57 info: te_pseudo_action: Pseudo action 51 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:57 info: send_rsc_command: Initiating action 
47: configpool-child:0_start_0 on SERVER2
lrmd[6695]: 2008/06/29_21:40:57 info: rsc:configpool-child:0: start
crmd[6698]: 2008/06/29_21:40:57 info: do_lrm_rsc_op: Performing 
op=configpool-child:0_start_0 key=47:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
Filesystem[12157][12188]: 2008/06/29_21:40:57 INFO: Running start for 
/dev/evms/san2/cfgpool on /etc/xen/vm
Filesystem[12157][12200]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: Faking pre-notification on start.
Filesystem[12157][12201]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify: pre for start
crmd[6698]: 2008/06/29_21:40:57 info: process_lrm_event: LRM operation 
STONITH-iLO-SERVER1_start_0 (call=35, rc=0) complete 
Filesystem[12157][12202]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify active: SERVER1 
Filesystem[12157][12203]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify stop: 
Filesystem[12157][12204]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify start: SERVER2 
tengine[7344]: 2008/06/29_21:40:57 info: match_graph_event: Action 
STONITH-iLO-SERVER1_start_0 (20) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:57 info: send_rsc_command: Initiating action 
21: STONITH-iLO-SERVER1_monitor_30000 on SERVER2
crmd[6698]: 2008/06/29_21:40:57 info: do_lrm_rsc_op: Performing 
op=STONITH-iLO-SERVER1_monitor_30000 
key=21:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
Filesystem[12157][12210]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: post-processed active: SERVER1  SERVER2
Filesystem[12157][12212]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: I am node SERVER2.
Filesystem[12157][12213]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: heartbeat directory doesn't exist yet, 
creating.
Filesystem[12157][12215]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: Existing node list: 
Filesystem[12157][12216]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: Activating node: SERVER1
Filesystem[12157][12218]: 2008/06/29_21:40:57 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: Activating node: SERVER2
crmd[6698]: 2008/06/29_21:40:57 info: process_lrm_event: LRM operation 
configpool-child:0_start_0 (call=42, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:57 info: match_graph_event: Action 
configpool-child:0_start_0 (47) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:57 info: te_pseudo_action: Pseudo action 52 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:57 info: te_pseudo_action: Pseudo action 55 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:57 info: send_rsc_command: Initiating action 
79: configpool-child:0_post_notify_start_0 on SERVER2
tengine[7344]: 2008/06/29_21:40:57 info: send_rsc_command: Initiating action 
81: configpool-child:1_post_notify_start_0 on SERVER1
crmd[6698]: 2008/06/29_21:40:57 info: do_lrm_rsc_op: Performing 
op=configpool-child:0_notify_0 key=79:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
lrmd[6695]: 2008/06/29_21:40:57 info: rsc:configpool-child:0: notify
Filesystem[12243][12273]: 2008/06/29_21:40:58 INFO: Running notify for 
/dev/evms/san2/cfgpool on /etc/xen/vm
Filesystem[12243][12281]: 2008/06/29_21:40:58 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify: post for start
Filesystem[12243][12282]: 2008/06/29_21:40:58 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify active: SERVER1 
Filesystem[12243][12283]: 2008/06/29_21:40:58 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify stop: 
Filesystem[12243][12284]: 2008/06/29_21:40:58 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: notify start: SERVER2 
Filesystem[12243][12285]: 2008/06/29_21:40:58 INFO: 
5928BFFEC45242D7B737712FB4E94DB8: ignoring post-notify for start.
crmd[6698]: 2008/06/29_21:40:58 info: process_lrm_event: LRM operation 
configpool-child:0_notify_0 (call=44, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:58 info: match_graph_event: Action 
configpool-child:0_post_notify_start_0 (79) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:40:59 info: match_graph_event: Action 
configpool-child:1_post_notify_start_0 (81) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:40:59 info: te_pseudo_action: Pseudo action 56 
fired and confirmed
tengine[7344]: 2008/06/29_21:40:59 info: send_rsc_command: Initiating action 
48: configpool-child:0_monitor_20000 on SERVER2
crmd[6698]: 2008/06/29_21:40:59 info: do_lrm_rsc_op: Performing 
op=configpool-child:0_monitor_20000 
key=48:26:8d6ac63c-a693-4d5e-9b91-765f5f62c7cf)
crmd[6698]: 2008/06/29_21:40:59 info: process_lrm_event: LRM operation 
configpool-child:0_monitor_20000 (call=45, rc=0) complete 
tengine[7344]: 2008/06/29_21:40:59 info: match_graph_event: Action 
configpool-child:0_monitor_20000 (48) confirmed on SERVER2 (rc=0)
crmd[6698]: 2008/06/29_21:41:04 info: process_lrm_event: LRM operation 
STONITH-iLO-SERVER1_monitor_30000 (call=43, rc=0) complete 
tengine[7344]: 2008/06/29_21:41:04 info: match_graph_event: Action 
STONITH-iLO-SERVER1_monitor_30000 (21) confirmed on SERVER2 (rc=0)
tengine[7344]: 2008/06/29_21:41:15 info: match_graph_event: Action VM2_stop_0 
(65) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:15 info: send_rsc_command: Initiating action 
66: VM2_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:41:17 info: match_graph_event: Action VM1_stop_0 
(63) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:17 info: send_rsc_command: Initiating action 
64: VM1_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:41:18 info: match_graph_event: Action VM3_stop_0 
(67) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:18 info: send_rsc_command: Initiating action 
68: VM3_start_0 on SERVER1
tengine[7344]: 2008/06/29_21:41:18 info: te_pseudo_action: Pseudo action 8 
fired and confirmed
tengine[7344]: 2008/06/29_21:41:21 info: match_graph_event: Action VM2_start_0 
(66) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:21 info: send_rsc_command: Initiating action 6: 
VM2_monitor_10000 on SERVER1
tengine[7344]: 2008/06/29_21:41:21 info: match_graph_event: Action VM1_start_0 
(64) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:21 info: send_rsc_command: Initiating action 5: 
VM1_monitor_10000 on SERVER1
tengine[7344]: 2008/06/29_21:41:22 info: match_graph_event: Action 
VM2_monitor_10000 (6) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:23 info: match_graph_event: Action 
VM1_monitor_10000 (5) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:23 info: match_graph_event: Action VM3_start_0 
(68) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:23 info: send_rsc_command: Initiating action 7: 
VM3_monitor_10000 on SERVER1
tengine[7344]: 2008/06/29_21:41:24 info: match_graph_event: Action 
VM3_monitor_10000 (7) confirmed on SERVER1 (rc=0)
tengine[7344]: 2008/06/29_21:41:24 info: run_graph: Transition 26: 
(Complete=41, Pending=0, Fired=0, Skipped=0, Incomplete=0)
tengine[7344]: 2008/06/29_21:41:24 info: notify_crmd: Transition 26 status: 
te_complete - <null>
crmd[6698]: 2008/06/29_21:41:24 info: do_state_transition: State transition 
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE 
origin=route_message ]
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
