Now everything seems to work correctly. Thank you very much for your
help.

The problems were the "pingd-multiplier" being set too low (thanks
Andreas) and the wrong attribute name (thanks Dejan).

Kind regards,
Daniel Hubeli

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dejan
Muhamedagic
Sent: Wednesday, August 22, 2007 8:02 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Pingd problem

On Wed, Aug 22, 2007 at 05:37:01PM +0200, Hubeli Daniel wrote:
> Hi all,
> 
> I'm writing you because I'm having some problems with pingd.
> 
> My structure is:
> 
> -     2 nodes cluster
> -     OS: SLES 10 SP1
> -     Ha version: 2.0.8
> 
> I successfully configured an NFS group which works quite well. Now I'd
> like to monitor 2 IPs; if both nodes see both IPs I'd like to run the
> resource group on my normal preferred node (assigned with a score
> attribute), but if one node sees just 1 IP and the other node sees
> both IPs I'd like to switch the resource group to the node with better
> connectivity.
> 
> My current configuration doesn't seem to work, but I don't understand
> why (it looks correct to me).

You refer to the pingd attribute in the constraints by the name
"group_pingd". It should be "pingd". Alternatively, you can change the
attribute name in the pingd primitive definition in the CIB (I'm not
sure about that parameter's name; you can find it by checking the
pingd RA's meta-data).
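For example, the location constraint could be rewritten like this (an
untested sketch, keeping your existing ids; only the attribute
references change from "group_pingd" to "pingd"):

```xml
<constraints>
  <rsc_location id="pingd_nfs2_location" rsc="group_nfs2">
    <!-- static preference for node B, as before -->
    <rule id="prefered_place_group_nfs2" score="50">
      <expression attribute="#uname" operation="eq" value="ulxxcapb"/>
    </rule>
    <!-- add the node attribute set by pingd ("pingd", not the clone id)
         to the score: with multiplier=100, two reachable ping nodes
         contribute 200, one contributes 100 -->
    <rule id="pingd_nfs2_rule" score_attribute="pingd">
      <expression id="pingd_nfs2_conn_defined" attribute="pingd"
                  operation="defined"/>
    </rule>
  </rsc_location>
</constraints>
```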

In the logs I see references to some resources which are not in
the CIB: nfsp*, nfst*, oe10*, and perhaps more. How come?

> If someone has any hints it would be great ...

Try running ptest on the peinputs which are generated when the
connectivity changes. That should tell you what the cluster (actually,
the pengine) thinks it's doing. Also check the status section to see
whether the pingd attribute really gets updated correctly (cibadmin -Q).
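For example (the pe-warn file name here is taken from your log; this is
a rough sketch — if your ptest build does not read .bz2 files directly,
decompress them first):

```shell
# Replay a stored pengine input to see what decisions it produces
ptest -VV -x /var/lib/heartbeat/pengine/pe-warn-1355.bz2

# Query the live CIB; after a connectivity change the pingd node
# attribute in the status section should reflect the new value
cibadmin -Q | grep pingd
```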

> My configuration is:

Next time, please attach CIB/logs instead of pasting them in the
message.

Dejan

> #
> #
>  ha.cf
> #
> autojoin any
> crm true
> bcast eth3
> bcast eth1
> node ulxxcapb
> node ulxxcapa
> use_logd on
> respawn root /sbin/evmsd
> apiauth evms uid=hacluster,root
> apiauth ping gid=root uid=root
> ping 9.0.2.90 9.0.1.91
> 
> #
> # General config:
> #
> <crm_config>
>   <cluster_property_set id="cib-bootstrap-options">
>     <attributes>
>       <nvpair name="last-lrm-refresh"
> id="cib-bootstrap-options-last-lrm-refresh" value="1184073021"/>
>       <nvpair name="transition-idle-timeout"
> id="transition-idle-timeout"                value="60"/>
>       <nvpair name="symmetric-cluster"
> id="symmetric-cluster"                      value="true"/>
>       <nvpair name="no-quorum-policy"
> id="no-quorum-policy"                       value="stop"/>
>       <nvpair name="stonith-enabled"
> id="stonith-enabled"                        value="false"/>
>       <nvpair name="stonith-action"
> id="stonith-action"                         value="reboot"/>
>       <nvpair name="startup-fencing"
> id="startup-fencing"                        value="true"/>
>       <nvpair name="is-managed-default"
> id="is-managed-default"                     value="true"/>
>       <nvpair name="default-resource-stickiness"
> id="default-resource-stickiness"            value="100"/>
>       <nvpair name="default-resource-failure-stickiness"
> id="default-resource-failure-stickiness"    value="-200"/>
>       <nvpair name="stop-orphan-resources"
> id="stop-orphan-resources"                  value="true"/>
>       <nvpair name="stop-orphan-actions"
> id="stop-orphan-actions"                    value="true"/>
>     </attributes>
>   </cluster_property_set>
> </crm_config>
> 
> #
> # Node config:
> #
> <nodes>
>   <node id="4153c055-d562-46bb-8f33-41023f000ef9" uname="ulxxcapa"
> type="normal"/>
>   <node id="acc23203-d2c8-419e-9dfb-ce621b332225" uname="ulxxcapb"
> type="normal"/>
> </nodes>
> 
> #
> # Resource config (just a NFS Share):
> #
>         <group id="group_nfs2">
>           <primitive class="heartbeat" type="evms_failover"
> provider="heartbeat" id="nfs2_evms_failover">
>             <instance_attributes id="nfs2_evms_failover_attrs">
>               <attributes>
>                 <nvpair name="target_role"
> id="target_role_nfs2_evms_failover" value="started"/>
>                 <nvpair name="1"           id="1_nfs2_evms_failover"
> value="nfs2"/>
>               </attributes>
>             </instance_attributes>
>             <operations>
>               <op name="monitor" id="monitor_nfs2_evms_failover"
> timeout="60s"    interval="30s"/>
>               <op name="start"   id="start_nfs2_evms_failover"
> timeout="300s"/>
>               <op name="stop"    id="stop_nfs2_evms_failover"
> timeout="300s"/>
>             </operations>
>           </primitive>
>           <primitive class="ocf" type="Filesystem" provider="heartbeat"
> id="nfs2_filesystem">
>             <instance_attributes id="nfs2_filesystem_attrs">
>               <attributes>
>                 <nvpair name="target_role"
> id="target_role_nfs2_filesystem"  value="started"/>
>                 <nvpair name="device"      id="device_nfs2_filesystem"
> value="/dev/evms/nfs2/nfs2_lv"/>
>                 <nvpair name="directory"
> id="directory_nfs2_filesystem"    value="/mnt/nfs2"/>
>                 <nvpair name="fstype"      id="fstype_nfs2_filesystem"
> value="ext3"/>
>               </attributes>
>             </instance_attributes>
>             <operations>
>               <op name="monitor" id="monitor_nfs2_filesystem"
> timeout="60s" interval="30s"/>
>               <op name="start"   id="start_nfs2_filesystem"
> timeout="300s"/>
>               <op name="stop"    id="stop_nfs2_filesystem"
> timeout="300s"/>
>             </operations>
>           </primitive>
>           <primitive class="ocf" type="IPaddr2" provider="heartbeat"
> id="nfs2_ip_1">
>             <instance_attributes id="nfs2_ip_1_attrs">
>               <attributes>
>                 <nvpair name="target_role" id="target_role_nfs2_ip_1"
> value="started"/>
>                 <nvpair name="ip"          id="ip_nfs2_ip_1"
> value="9.0.1.92"/>
>                 <nvpair name="nic"         id="nic_nfs2_ip_1"
> value="eth0"/>
>               </attributes>
>             </instance_attributes>
>             <operations>
>               <op name="monitor" id="monitor_nfs2_ip_1" timeout="10s"
> interval="5s" />
>             </operations>
>           </primitive>
>           <primitive class="ocf" type="IPaddr2" provider="heartbeat"
> id="nfs2_ip_2">
>             <instance_attributes id="nfs2_ip_2_attrs">
>               <attributes>
>                 <nvpair name="target_role" id="target_role_nfs2_ip_2"
> value="started"/>
>                 <nvpair name="ip"          id="ip_nfs2_ip_2"
> value="9.0.2.92"/>
>                 <nvpair name="nic"         id="nic_nfs2_ip_2"
> value="eth2"/>
>               </attributes>
>             </instance_attributes>
>             <operations>
>               <op name="monitor" id="monitor_nfs2_ip_2" timeout="10s"
> interval="5s"/>
>             </operations>
>           </primitive>
>           <primitive id="nfs2_export" class="lsb" type="export_nfs2">
>             <operations>
>               <op name="monitor" id="monitor_export_nfs2" timeout="60s"
> interval="30s" on_fail="restart"/>
>               <op name="start"   id="start_export_nfs2"
> timeout="300s"/>
>               <op name="stop"    id="stop_export_nfs2"
> timeout="300s"/>
>             </operations>
>           </primitive>
>           <primitive id="nfs2_tsmc" class="lsb" type="dsmc_nfs2">
>             <operations>
>               <op name="monitor" id="monitor_tsmc_nfs2" timeout="60s"
> interval="30s" on_fail="restart"/>
>               <op name="start"   id="start_tsmc_nfs2"   timeout="300s"/>
>               <op name="stop"    id="stop_tsmc_nfs2"    timeout="300s"/>
>             </operations>
>           </primitive>
>         </group>
> 
> #
> # Pingd definition
> #
> <clone id="group_pingd">
> 
>   <instance_attributes id="group_pingd">
>     <attributes>
>       <nvpair id="clone_node_max" name="clone_node_max" value="1"/>
>     </attributes>
>   </instance_attributes>
> 
>   <primitive id="pingd-child" provider="heartbeat" class="ocf"
> type="pingd">
>     <instance_attributes id="pingd_inst_attr">
>       <attributes>
>          <nvpair id="pingd-dampen"     name="dampen"     value="2s"/>
>          <nvpair id="pingd-multiplier" name="multiplier" value="100"/>
>          <nvpair id="pingd-pidfile"    name="pidfile"
> value="/var/run/pingd.pid"/>
>          <nvpair id="pingd-user"       name="user"       value="root"/>
>       </attributes>
>     </instance_attributes>
>     <operations>
>       <op id="pingd-child-start"   name="start"   prereq="nothing"/>
>       <op id="pingd-child-monitor" name="monitor" interval="4s"
> timeout="8s" prereq="nothing"/>
>     </operations>
>   </primitive>
> 
> </clone>
> 
> 
> #
> # Resource restriction
> #
> <constraints>
>   <rsc_location id="pingd_nfs2_location" rsc="group_nfs2">
>     <rule id="prefered_place_group_nfs2" score="50">
>       <expression attribute="#uname" operation="eq" value="ulxxcapb"/>
>     </rule> 
>     <rule id="pingd_nfs2_rule" score_attribute="group_pingd">
>       <expression id="pingd_nfs2_conn_defined" attribute="group_pingd"
> operation="defined"/>
>     </rule>
>   </rsc_location>
> </constraints>
> 
> 
> If the resource group is in the default location (node B) and that
> node loses one ping node, I see the following messages in the log
> (but the resource group remains where it is):
> 
> heartbeat[3192]: 2007/08/22_17:33:40 WARN: node 9.0.2.90: is dead
> crmd[3240]: 2007/08/22_17:33:40 notice: crmd_ha_status_callback:
Status
> update: Node 9.0.2.90 now has status [dead]
> heartbeat[3192]: 2007/08/22_17:33:40 info: Link 9.0.2.90:9.0.2.90
dead.
> crmd[3240]: 2007/08/22_17:33:40 WARN: get_uuid: Could not calculate
UUID
> for 9.0.2.90
> crmd[3240]: 2007/08/22_17:33:40 info: crmd_ha_status_callback: Ping
node
> 9.0.2.90 is dead
> attrd[3239]: 2007/08/22_17:33:42 info: attrd_timer_callback: Sending
> flush op to all hosts for: pingd
> attrd[3239]: 2007/08/22_17:33:42 info: attrd_ha_callback: flush
message
> from ulxxcapb
> attrd[3239]: 2007/08/22_17:33:42 info: attrd_ha_callback: Sent update
> 13: pingd=100
> cib[3236]: 2007/08/22_17:33:42 info: cib_diff_notify: Update (client:
> 3239, call:13): 0.151.14643 -> 0.151.14644 (ok)
> tengine[3283]: 2007/08/22_17:33:42 info: te_update_diff: Processing
diff
> (cib_modify): 0.151.14643 -> 0.151.14644
> tengine[3283]: 2007/08/22_17:33:42 info: extract_event: Aborting on
> transient_attributes changes for acc23203-d2c8-419e-9dfb-ce621b332225
> tengine[3283]: 2007/08/22_17:33:42 info: update_abort_priority: Abort
> priority upgraded to 1000000
> tengine[3283]: 2007/08/22_17:33:42 info: te_update_diff: Aborting on
> transient_attributes deletions
> crmd[3240]: 2007/08/22_17:33:42 info: do_state_transition: ulxxcapb:
> State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC
> cause=C_IPC_MESSAGE origin=route_message ]
> crmd[3240]: 2007/08/22_17:33:42 info: do_state_transition: All 2
cluster
> nodes are eligible to run resources.
> cib[11297]: 2007/08/22_17:33:42 info: write_cib_contents: Wrote
version
> 0.151.14644 of the CIB to disk (digest:
> 2f4ba9ed708980450a3a2a4cc9af67a8)
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> process_pe_message: [generation] <cib admin_epoch="0"
have_quorum="true"
> ignore_dtd="false" num_peers="2" cib_feature_revision="1.3"
> generated="true" epoch="151" num_updates="14644" cib-last-written="Wed
> Aug 22 17:06:03 2007" ccm_transition="2"
> dc_uuid="acc23203-d2c8-419e-9dfb-ce621b332225"/>
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value '60s' for cluster option 'cluster-delay'
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value '20s' for cluster option 'default-action-timeout'
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value 'false' for cluster option 'remove-after-stop'
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value '-1' for cluster option 'pe-error-series-max'
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value '-1' for cluster option 'pe-warn-series-max'
> pengine[3284]: 2007/08/22_17:33:42 notice: cluster_option: Using
default
> value '-1' for cluster option 'pe-input-series-max'
> pengine[3284]: 2007/08/22_17:33:42 info: determine_online_status: Node
> ulxxcapb is online
> pengine[3284]: 2007/08/22_17:33:42 info: determine_online_status: Node
> ulxxcapa is online
> pengine[3284]: 2007/08/22_17:33:42 info: group_print: Resource Group:
> group_nfsp
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfsp_evms_failover   (heartbeat:evms_failover):      Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfsp_filesystem      (heartbeat::ocf:Filesystem):    Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfsp_ip_1
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfsp_ip_2
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfsp_export
> (lsb:export_nfsp):      Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: group_print: Resource Group:
> group_nfst
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfst_evms_failover   (heartbeat:evms_failover):      Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfst_filesystem      (heartbeat::ocf:Filesystem):    Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfst_ip_1
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfst_ip_2
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfst_export
> (lsb:export_nfst):      Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: group_print: Resource Group:
> group_oe10
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> oe10_evms_failover   (heartbeat:evms_failover):      Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> oe10_filesystem      (heartbeat::ocf:Filesystem):    Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     oe10_ip_1
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     oe10_ip_2
> (heartbeat::ocf:IPaddr2):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     oe10_db
> (heartbeat::ocf:oracle):        Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     oe10_lsnr
> (heartbeat::ocf:oralsnr):       Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     oe10_tsmc
> (lsb:dsmc_oe10):        Stopped 
> pengine[3284]: 2007/08/22_17:33:42 info: group_print: Resource Group:
> group_nfs2
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfs2_evms_failover   (heartbeat:evms_failover):      Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
> nfs2_filesystem      (heartbeat::ocf:Filesystem):    Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfs2_ip_1
> (heartbeat::ocf:IPaddr2):       Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfs2_ip_2
> (heartbeat::ocf:IPaddr2):       Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfs2_export
> (lsb:export_nfs2):      Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:     nfs2_tsmc
> (lsb:dsmc_nfs2):        Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: clone_print: Clone Set:
> group_pingd
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
pingd-child:0
> (heartbeat::ocf:pingd): Started ulxxcapa
> pengine[3284]: 2007/08/22_17:33:42 info: native_print:
pingd-child:1
> (heartbeat::ocf:pingd): Started ulxxcapb
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters target_role="started"
> 1="nfs2"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to nfs2_evms_failover_monitor_0 on ulxxcapb changed:
recorded
> d32fe35b8f6ec7db0825df29a3063746 vs. calculated (all)
> 358c4b605b51a03afd9530f9ee88896b
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:1_monitor_0 on ulxxcapb changed: recorded
> 2500f62f8cbe28359717874cda643d0f vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:1_start_0 on ulxxcapb changed: recorded
> e58195cc0db685dc80aa15525c617b8a vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:1_monitor_4000 on ulxxcapb changed: recorded
> e58195cc0db685dc80aa15525c617b8a vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:0_monitor_0 on ulxxcapa changed: recorded
> 2500f62f8cbe28359717874cda643d0f vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:0_start_0 on ulxxcapa changed: recorded
> e58195cc0db685dc80aa15525c617b8a vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: log_data_element:
> check_action_definition: params:all <parameters multiplier="100"
> dampen="2s" user="root" pidfile="/var/run/pingd.pid"/>
> pengine[3284]: 2007/08/22_17:33:42 WARN: check_action_definition:
> Parameters to pingd-child:0_monitor_4000 on ulxxcapa changed: recorded
> e58195cc0db685dc80aa15525c617b8a vs. calculated (all)
> 6ce8a133bf8292a7521af0d4610e60f2
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfsp_export and nfsp_ip_2
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfsp_ip_2 and nfsp_ip_1
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfsp_ip_1 and nfsp_filesystem
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfsp_filesystem and nfsp_evms_failover
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfsp_evms_failover cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfsp_filesystem cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfsp_ip_1 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfsp_ip_2 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfsp_export cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfst_export and nfst_ip_2
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfst_ip_2 and nfst_ip_1
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfst_ip_1 and nfst_filesystem
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfst_filesystem and nfst_evms_failover
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfst_evms_failover cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfst_filesystem cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfst_ip_1 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfst_ip_2 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> nfst_export cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_tsmc and oe10_lsnr
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_lsnr and oe10_db
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_db and oe10_ip_2
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_ip_2 and oe10_ip_1
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_ip_1 and oe10_filesystem
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from oe10_filesystem and oe10_evms_failover
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_evms_failover cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_filesystem cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_ip_1 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_ip_2 cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
oe10_db
> cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_lsnr cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 WARN: native_color: Resource
> oe10_tsmc cannot run anywhere
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfs2_tsmc and nfs2_export
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfs2_export and nfs2_ip_2
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfs2_ip_2 and nfs2_ip_1
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfs2_ip_1 and nfs2_filesystem
> pengine[3284]: 2007/08/22_17:33:42 info: native_color: Combine scores
> from nfs2_filesystem and nfs2_evms_failover
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_evms_failover      (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_filesystem (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_ip_1       (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_ip_2       (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_export     (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Leave
resource
> nfs2_tsmc       (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Restart
> resource pingd-child:0 (ulxxcapa)
> pengine[3284]: 2007/08/22_17:33:42 notice: RecurringOp: ulxxcapa
> pingd-child:0_monitor_4000
> pengine[3284]: 2007/08/22_17:33:42 notice: NoRoleChange: Restart
> resource pingd-child:1 (ulxxcapb)
> pengine[3284]: 2007/08/22_17:33:42 notice: RecurringOp: ulxxcapb
> pingd-child:1_monitor_4000
> crmd[3240]: 2007/08/22_17:33:42 info: do_state_transition: ulxxcapb:
> State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [
> input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]
> tengine[3283]: 2007/08/22_17:33:42 info: unpack_graph: Unpacked
> transition 17: 10 actions in 10 synapses
> tengine[3283]: 2007/08/22_17:33:42 info: te_pseudo_action: Pseudo
action
> 49 fired and confirmed
> tengine[3283]: 2007/08/22_17:33:42 info: send_rsc_command: Initiating
> action 45: pingd-child:0_stop_0 on ulxxcapa
> tengine[3283]: 2007/08/22_17:33:42 info: send_rsc_command: Initiating
> action 46: pingd-child:1_stop_0 on ulxxcapb
> crmd[3240]: 2007/08/22_17:33:42 info: do_lrm_rsc_op: Performing
> op=pingd-child:1_stop_0
key=46:17:8db77d5a-c345-4e9a-b711-97943abfd78e)
> crmd[3240]: 2007/08/22_17:33:42 WARN: process_lrm_event: LRM operation
> pingd-child:1_monitor_4000 (call=98, rc=-2) Cancelled 
> pengine[3284]: 2007/08/22_17:33:42 WARN: process_pe_message:
Transition
> 17: WARNINGs found during PE processing. PEngine Input stored in:
> /var/lib/heartbeat/pengine/pe-warn-1355.bz2
> pengine[3284]: 2007/08/22_17:33:42 info: process_pe_message:
> Configuration WARNINGs found during PE processing.  Please run
> "crm_verify -L" to identify issues.
> crmd[3240]: 2007/08/22_17:33:42 info: process_lrm_event: LRM operation
> pingd-child:1_stop_0 (call=111, rc=0) complete 
> cib[3236]: 2007/08/22_17:33:43 info: cib_diff_notify: Update (client:
> 3240, call:141): 0.151.14644 -> 0.151.14645 (ok)
> tengine[3283]: 2007/08/22_17:33:43 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14644 -> 0.151.14645
> tengine[3283]: 2007/08/22_17:33:43 info: match_graph_event: Action
> pingd-child:1_stop_0 (46) confirmed on
> acc23203-d2c8-419e-9dfb-ce621b332225
> cib[11303]: 2007/08/22_17:33:43 info: write_cib_contents: Wrote
version
> 0.151.14645 of the CIB to disk (digest:
> 03ecd35c34dbb001aa6b5734c025de04)
> cib[3236]: 2007/08/22_17:33:43 info: cib_diff_notify: Update (client:
> 16681, call:82): 0.151.14645 -> 0.151.14646 (ok)
> tengine[3283]: 2007/08/22_17:33:43 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14645 -> 0.151.14646
> tengine[3283]: 2007/08/22_17:33:43 info: match_graph_event: Action
> pingd-child:0_stop_0 (45) confirmed on
> 4153c055-d562-46bb-8f33-41023f000ef9
> tengine[3283]: 2007/08/22_17:33:43 info: te_pseudo_action: Pseudo
action
> 50 fired and confirmed
> tengine[3283]: 2007/08/22_17:33:43 info: te_pseudo_action: Pseudo
action
> 47 fired and confirmed
> tengine[3283]: 2007/08/22_17:33:43 info: send_rsc_command: Initiating
> action 16: pingd-child:0_start_0 on ulxxcapa
> tengine[3283]: 2007/08/22_17:33:43 info: send_rsc_command: Initiating
> action 14: pingd-child:1_start_0 on ulxxcapb
> crmd[3240]: 2007/08/22_17:33:43 info: do_lrm_rsc_op: Performing
> op=pingd-child:1_start_0
key=14:17:8db77d5a-c345-4e9a-b711-97943abfd78e)
> cib[11309]: 2007/08/22_17:33:43 info: write_cib_contents: Wrote
version
> 0.151.14646 of the CIB to disk (digest:
> 4577ba9efc08b59b7d15c4d0287f37c4)
> crmd[3240]: 2007/08/22_17:33:43 info: process_lrm_event: LRM operation
> pingd-child:1_start_0 (call=112, rc=0) complete 
> cib[3236]: 2007/08/22_17:33:43 info: cib_diff_notify: Update (client:
> 3240, call:142): 0.151.14646 -> 0.151.14647 (ok)
> tengine[3283]: 2007/08/22_17:33:43 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14646 -> 0.151.14647
> tengine[3283]: 2007/08/22_17:33:43 info: match_graph_event: Action
> pingd-child:1_start_0 (14) confirmed on
> acc23203-d2c8-419e-9dfb-ce621b332225
> tengine[3283]: 2007/08/22_17:33:43 info: send_rsc_command: Initiating
> action 7: pingd-child:1_monitor_4000 on ulxxcapb
> crmd[3240]: 2007/08/22_17:33:43 info: do_lrm_rsc_op: Performing
> op=pingd-child:1_monitor_4000
> key=7:17:8db77d5a-c345-4e9a-b711-97943abfd78e)
> crmd[3240]: 2007/08/22_17:33:43 info: process_lrm_event: LRM operation
> pingd-child:1_monitor_4000 (call=113, rc=0) complete 
> cib[3236]: 2007/08/22_17:33:44 info: cib_diff_notify: Update (client:
> 3240, call:143): 0.151.14647 -> 0.151.14648 (ok)
> tengine[3283]: 2007/08/22_17:33:44 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14647 -> 0.151.14648
> tengine[3283]: 2007/08/22_17:33:44 info: match_graph_event: Action
> pingd-child:1_monitor_4000 (7) confirmed on
> acc23203-d2c8-419e-9dfb-ce621b332225
> cib[11332]: 2007/08/22_17:33:44 info: write_cib_contents: Wrote
version
> 0.151.14647 of the CIB to disk (digest:
> 4ef8ac941bf8f010b1af432b52f5c800)
> cib[11337]: 2007/08/22_17:33:44 info: write_cib_contents: Wrote
version
> 0.151.14648 of the CIB to disk (digest:
> 0211132390c452995fffb4865a233d42)
> cib[3236]: 2007/08/22_17:33:44 info: cib_diff_notify: Update (client:
> 16681, call:83): 0.151.14648 -> 0.151.14649 (ok)
> tengine[3283]: 2007/08/22_17:33:44 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14648 -> 0.151.14649
> tengine[3283]: 2007/08/22_17:33:44 info: match_graph_event: Action
> pingd-child:0_start_0 (16) confirmed on
> 4153c055-d562-46bb-8f33-41023f000ef9
> tengine[3283]: 2007/08/22_17:33:44 info: send_rsc_command: Initiating
> action 8: pingd-child:0_monitor_4000 on ulxxcapa
> tengine[3283]: 2007/08/22_17:33:44 info: te_pseudo_action: Pseudo
action
> 48 fired and confirmed
> cib[11338]: 2007/08/22_17:33:44 info: write_cib_contents: Wrote
version
> 0.151.14649 of the CIB to disk (digest:
> fa532c5822e4a7847ef22b869f638faa)
> cib[3236]: 2007/08/22_17:33:45 info: cib_diff_notify: Update (client:
> 16681, call:84): 0.151.14649 -> 0.151.14650 (ok)
> crmd[3240]: 2007/08/22_17:33:45 info: do_state_transition: ulxxcapb:
> State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> cause=C_IPC_MESSAGE origin=route_message ]
> tengine[3283]: 2007/08/22_17:33:45 info: te_update_diff: Processing
diff
> (cib_update): 0.151.14649 -> 0.151.14650
> tengine[3283]: 2007/08/22_17:33:45 info: match_graph_event: Action
> pingd-child:0_monitor_4000 (8) confirmed on
> 4153c055-d562-46bb-8f33-41023f000ef9
> tengine[3283]: 2007/08/22_17:33:45 info: run_graph: Transition 17:
> (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0)
> tengine[3283]: 2007/08/22_17:33:45 info: notify_crmd: Transition 17
> status: te_complete - <null>
> cib[11339]: 2007/08/22_17:33:45 info: write_cib_contents: Wrote
version
> 0.151.14650 of the CIB to disk (digest:
> 6b1ee544423b64236748810e98e83ab6)
> heartbeat[3192]: 2007/08/22_17:33:49 WARN: 1 lost packet(s) for
> [ulxxcapa] [1841:1843]
> heartbeat[3192]: 2007/08/22_17:33:49 info: No pkts missing from
> ulxxcapa!
> ccm[3235]: 2007/08/22_17:33:56 info: client (pid=11467) removed from
ccm
> 
> 
> Kind regards,
> Daniel
> 
> 
> Diese Nachricht ist ausschliesslich für den Adressaten bestimmt und
> beinhaltet unter Umständen vertrauliche Mitteilungen. Da die
> Vertraulichkeit von e-Mail-Nachrichten nicht gewährleistet werden kann,
> übernehmen wir keine Haftung für die Gewährung der Vertraulichkeit und
> Unversehrtheit dieser Mitteilung. Bei irrtümlicher Zustellung bitten wir
> Sie um Benachrichtigung per e-Mail und um Löschung dieser Nachricht
> sowie eventueller Anhänge. Jegliche unberechtigte Verwendung oder
> Verbreitung dieser Informationen ist streng verboten.
> 
> This message is intended only for the named recipient and may contain
> confidential or privileged information. As the confidentiality of email
> communication cannot be guaranteed, we do not accept any responsibility
> for the confidentiality and the intactness of this message. If you have
> received it in error, please advise the sender by return e-mail and
> delete this message and any attachments. Any unauthorised use or
> dissemination of this information is strictly prohibited.
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems