Dear all,

I have the following environment set up:

===============================
OS: RHEL 4 Update 5
Heartbeat: 2.0.8
Node type: iptables + fwbuilder
===============================
Problem origin:
The environment was working fine until an unclean shutdown of one of the
machines (during a power-failure test) forced an fsck, and all data on that
node was lost.
Let's call the nodes fw1 and fw2.
fw1 disappeared, so to speak.
fw2 was fine and identically configured.
===============================
To solve the problem quickly, we made a tar.gz of the whole fw2 disk and
restored it onto the fw1 disk. At that point we of course had two fw2
machines. (Reinstalling everything from scratch was not an option: some of
the software was old, all the machines were hardened bastion hosts, etc.)
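
For the record, the clone step was essentially this (paths illustrative; we
booted fw1 from rescue media for the restore):

# on fw2: archive the root filesystem, preserving permissions
tar czpf /tmp/fw2-root.tar.gz --one-file-system -C / .
# on fw1, with the new root filesystem mounted on /mnt
tar xzpf fw2-root.tar.gz -C /mnt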
===============================
Next we reconfigured the IP address and node name on the fw1 copy back to
fw1 (verifying that uname -n then returned fw1).
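
Concretely, in the standard RHEL 4 locations:

vi /etc/sysconfig/network                      # HOSTNAME=fw1
vi /etc/sysconfig/network-scripts/ifcfg-eth0   # fw1's IPADDR
vi /etc/hosts                                  # correct fw1/fw2 entries
hostname fw1
uname -n                                       # now prints fw1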
===============================

Problem:
In case it had any effect, we also modified /etc/ha.d/ha.cf on the fw1 disk
so that the ucast directive pointed at fw2.
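
The relevant ha.cf line on fw1 now reads (192.168.45.243 being fw2, as the
logs below confirm):

ucast eth0 192.168.45.243
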
Starting heartbeat did not work at all, so we unplugged fw1 and then found
a problem with the UUIDs (too late).

On the fw1 disk, the UUID was different from the fw2 UUID, which seems
weird to us given that the disk was a straight copy.
(In version 2.0.8, crm_uuid does not offer the -w option, so we had to find
another way to fix it.)
To solve it, we modified the cib.xml file on fw1 with the new crm_uuid
result, using cibadmin (of course).
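
From memory, the incantation on fw1 was roughly as follows (OLD-UUID stands
for the stale value we replaced):

cibadmin -Q -o nodes        # inspect the current <nodes> section
cibadmin -D -o nodes -X '<node id="OLD-UUID"/>'
cibadmin -C -o nodes -X \
  '<node uname="fw1" type="normal" id="09d33017-9f05-4d1d-af31-6d9cca58ddd4"/>'
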
Then we tried deleting fw1 from the fw2 CIB with hb_deletehost, which did
not work (!), so we went on and used cibadmin to change the fw2 CIB to
reflect the new fw1 UUID correctly.
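
To double-check the result afterwards, on both nodes:

cibadmin -Q -o nodes | grep uname    # both <node> entries, with their ids
crm_uuid                             # the local uuid, for comparison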

At this point we had two identical cib.xml files, apart from the opening
<cib> element, as follows:
fw1 cib.xml first line:
 <cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="1"
cib_feature_revision="1.3" generated="true" epoch="116545"
num_updates="440620" cib-last-written="Thu Apr 24 14:11:50 2008"
ccm_transition="3" dc_uuid="9af9135b-4efa-4434-9db4-b739ebb6fcc5">

fw2 cib.xml first line:
<cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="1"
cib_feature_revision="1.3" generated="true" epoch="116546"
num_updates="440636" cib-last-written="Thu Apr 24 14:35:33 2008"
ccm_transition="1" dc_uuid="9af9135b-4efa-4434-9db4-b739ebb6fcc5">

Judging by dc_uuid, fw2 is the DC. (See the complete cib.xml at the end.)
We then plugged the network cables back in and started heartbeat
(/etc/init.d/heartbeat start) on fw1; heartbeat was still running on fw2.

This did not work at all. Thankfully, fw2 is still running correctly.

===============================
Things that puzzle us:

One odd thing is the hostcache file under /var/lib/heartbeat on the two
machines:

On fw2:
fw1     00000000-0000-0000-0000-000000000000    100
fw2     9af9135b-4efa-4434-9db4-b739ebb6fcc5    100

On fw1:
fw1     09d33017-9f05-4d1d-af31-6d9cca58ddd4    100
fw2     9af9135b-4efa-4434-9db4-b739ebb6fcc5    100
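
The all-zero UUID for fw1 on fw2 looks suspicious. One thing we are tempted
to try (assuming hostcache and hb_uuid under /var/lib/heartbeat are the
node identity files -- please correct us if that is wrong):

# on both nodes
/etc/init.d/heartbeat stop
rm /var/lib/heartbeat/hostcache   # let heartbeat rebuild the name/uuid map
# on fw1 only
rm /var/lib/heartbeat/hb_uuid     # force generation of a fresh uuid
/etc/init.d/heartbeat start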


On the other hand, crm_mon and hb_gui both work on fw2, but not at all on
fw1, where we only get a "not connected".
When running crm_verify on fw1, we get the following messages:

element cib: validity error : Element cib content does not follow the DTD,
expecting (configuration , status), got (configuration )
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: validate_with_dtd: CIB does not
validate against /usr/lib64/heartbeat/crm.dtd
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: main: CIB did not pass DTD
validation
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: get_node_score: Rule
prefered_HighestConnectivityNode: no score specified.  Assuming 0.
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: get_node_score: Rule
prefered_HighestConnectivityNode: no score specified.  Assuming 0.
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: get_node_score: Rule
prefered_HighestConnectivityNode: no score specified.  Assuming 0.
crm_verify[6390]: 2008/04/24_13:48:03 ERROR: get_node_score: Rule
prefered_HighestConnectivityNode: no score specified.  Assuming 0.
Errors found during check: config not valid
  -V may provide more details
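
The first error suggests that the CIB on the fw1 disk is missing its
<status> section entirely. A quick check, assuming the default CIB
location:

grep -c '<status' /var/lib/heartbeat/crm/cib.xml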


What else should we try?


To speed things up, we have copied the logs below; my apologies for the
verbosity of this message, but experience tells me it is better to include
all the available information.
Here is an extract of the errors from /var/log/ha-log on both nodes at the
moment heartbeat was started again on fw1:
===============================
node fw2 ha-log extract:
ccm[4387]: 2008/04/24_13:33:10 ERROR: llm_get_nodename: index(429419083) out
of range
Apr 24 13:33:09 fw2 heartbeat: [3560]: WARN: Late heartbeat: Node apa:
interval 3810 ms
heartbeat[3560]: 2008/04/24_13:33:10 ERROR: process_status_message: bad node
[fw1] in message
Apr 24 13:33:09 fw2 heartbeat: [3560]: WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 420 ms (> 50 ms)
(GSource: 0x6555f8)
heartbeat[3560]: 2008/04/24_13:33:10 ERROR: MSG: Dumping message with 12
fields
Apr 24 13:33:09 fw2 heartbeat: [3560]: WARN: Gmain_timeout_dispatch:
Dispatch function for send local status was delayed 3170 ms (> 210 ms)
before being called (GSource: 0x655728)
heartbeat[3560]: 2008/04/24_13:33:10 ERROR: MSG[0] : [t=status]
Apr 24 13:33:09 fw2 heartbeat: [3560]: info: Gmain_timeout_dispatch: started
at 430104127 should have started at 430103810
heartbeat[3560]: 2008/04/24_13:33:10 ERROR: MSG[1] : [st=active]
Apr 24 13:33:10 fw2 heartbeat: [3560]: WARN: Late heartbeat: Node fw2:
interval 3800 ms
heartbeat[3560]: 2008/04/24_13:33:10 ERROR: MSG[2] : [dt=bb8]
Apr 24 13:33:10 fw2 heartbeat: [3560]: WARN: Gmain_timeout_dispatch:
Dispatch function for send local status took too long to execute: 340 ms
(> 210 ms) (GSource: 0x655728)
Apr 24 13:33:10 fw2 ccm: [4387]: ERROR: llm_get_nodename: index(429419083)
out of range
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[3] : [protocol=1]
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: process_status_message: bad
node [fw1] in message
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[4] : [src=fw1]
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: MSG: Dumping message with 12
fields
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[5] : [(1)srcuuid=0x7277e8(36
27)]
ccm[4387]: 2008/04/24_13:33:11 ERROR: llm_get_nodename: index(429419083) out
of range
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: MSG[0] : [t=status]
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[6] : [seq=1fb5]
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: MSG[1] : [st=active]
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[7] : [hg=2]
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: MSG[2] : [dt=bb8]
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[8] : [ts=48106de1]
Apr 24 13:33:10 fw2 heartbeat: [3560]: ERROR: MSG[3] : [protocol=1]
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[9] : [ld=2.12 2.14 2.09 1/74
504]
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[4] : [src=fw1]
heartbeat[3560]: 2008/04/24_13:33:11 ERROR: MSG[10] : [ttl=5]
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[5] :
[(1)srcuuid=0x7277e8(36 27)]
Apr 24 13:33:11 fw2 ccm: [4387]: ERROR: llm_get_nodename: index(429419083)
out of range
heartbeat[3560]: 2008/04/24_13:33:12 ERROR: MSG[11] : [auth=1 46cdcc]
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[6] : [seq=1fb5]
heartbeat[3560]: 2008/04/24_13:33:12 WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 1760 ms (> 50 ms)
(GSource: 0x655138)
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[7] : [hg=2]
heartbeat[3560]: 2008/04/24_13:33:12 WARN: G_CH_dispatch_int: Dispatch
function for read child was delayed 1880 ms (> 100 ms) before being called
(GSource: 0x655398)
ccm[4387]: 2008/04/24_13:33:12 ERROR: llm_get_nodename: index(429419083) out
of range
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[8] : [ts=48106de1]
heartbeat[3560]: 2008/04/24_13:33:12 info: G_CH_dispatch_int: started at
430104360 should have started at 430104172
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[9] : [ld=2.12 2.14 2.09
1/74 504]
heartbeat[3560]: 2008/04/24_13:33:12 WARN: Late heartbeat: Node
192.168.45.227: interval 3440 ms
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[10] : [ttl=5]
heartbeat[3560]: 2008/04/24_13:33:12 WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 460 ms (> 50 ms)
(GSource: 0x655398)
Apr 24 13:33:11 fw2 heartbeat: [3560]: ERROR: MSG[11] : [auth=1 46cdcc]
heartbeat[3560]: 2008/04/24_13:33:12 WARN: G_CH_dispatch_int: Dispatch
function for read child was delayed 2460 ms (> 100 ms) before being called
(GSource: 0x6555f8)
Apr 24 13:33:12 fw2 heartbeat: [3560]: WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 1760 ms (> 50 ms)
(GSource: 0x655138)
heartbeat[3560]: 2008/04/24_13:33:13 info: G_CH_dispatch_int: started at
430104418 should have started at 430104172
Apr 24 13:33:12 fw2 heartbeat: [3560]: WARN: G_CH_dispatch_int: Dispatch
function for read child was delayed 1880 ms (> 100 ms) before being called
(GSource: 0x655398)
Apr 24 13:33:12 fw2 ccm: [4387]: ERROR: llm_get_nodename: index(429419083)
out of range
heartbeat[3560]: 2008/04/24_13:33:13 WARN: Late heartbeat: Node apa:
interval 3380 ms
Apr 24 13:33:12 fw2 heartbeat: [3560]: info: G_CH_dispatch_int: started at
430104360 should have started at 430104172
heartbeat[3560]: 2008/04/24_13:33:13 WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 350 ms (> 50 ms)
(GSource: 0x6555f8)
Apr 24 13:33:12 fw2 heartbeat: [3560]: WARN: Late heartbeat: Node
192.168.45.227: interval 3440 ms
ccm[4387]: 2008/04/24_13:33:13 ERROR: llm_get_nodename: index(429419083) out
of range
heartbeat[3560]: 2008/04/24_13:33:13 WARN: Gmain_timeout_dispatch: Dispatch
function for send local status was delayed 2760 ms (> 210 ms) before being
called (GSource: 0x655728)
Apr 24 13:33:12 fw2 heartbeat: [3560]: WARN: G_CH_dispatch_int: Dispatch
function for read child took too long to execute: 460 ms (> 50 ms)
(GSource: 0x655398)
heartbeat[3560]: 2008/04/24_13:33:13 info: Gmain_timeout_dispatch: started
at 430104466 should have started at 430104190
Apr 24 13:33:12 fw2 heartbeat: [3560]: WARN: G_CH_dispatch_int: Dispatch
function for read child was delayed 2460 ms (> 100 ms) before being called
(GSource: 0x6555f8)
heartbeat[3560]: 2008/04/24_13:33:13 WARN: Late heartbeat: Node fw2:
interval 3460 ms

===============================
node fw1 ha-log extract:

heartbeat[5979]: 2008/04/24_13:33:11 WARN: Logging daemon is disabled
--enabling logging daemon is recommended
heartbeat[5979]: 2008/04/24_13:33:11 WARN: Logging daemon is disabled
--enabling logging daemon is recommended
heartbeat[5979]: 2008/04/24_13:33:11 info: **************************
heartbeat[5979]: 2008/04/24_13:33:11 info: Configuration validated. Starting
heartbeat 2.0.8
heartbeat[5980]: 2008/04/24_13:33:11 info: heartbeat: version 2.0.8
heartbeat[5980]: 2008/04/24_13:33:11 info: Heartbeat generation: 9
heartbeat[5980]: 2008/04/24_13:33:11 info: G_main_add_TriggerHandler: Added
signal manual handler
heartbeat[5980]: 2008/04/24_13:33:11 info: G_main_add_TriggerHandler: Added
signal manual handler
heartbeat[5980]: 2008/04/24_13:33:11 info: Removing
/var/run/heartbeat/rsctmp failed, recreating.
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: Starting serial heartbeat
on tty /dev/ttyS0 (115200 baud)
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ucast: write socket
priority set to IPTOS_LOWDELAY on eth0
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ucast: bound send socket to
device: eth0
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ucast: bound receive socket
to device: eth0
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ucast: started on port 694
interface eth0 to 192.168.45.243
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ping heartbeat started.
heartbeat[5980]: 2008/04/24_13:33:11 info: glib: ping group heartbeat
started.
heartbeat[5980]: 2008/04/24_13:33:11 info: G_main_add_SignalHandler: Added
signal handler for signal 17
heartbeat[5980]: 2008/04/24_13:33:11 info: Local status now set to: 'up'
heartbeat[5980]: 2008/04/24_13:33:13 info: Link fw2:eth0 up.
heartbeat[5980]: 2008/04/24_13:33:13 info: Status update for node fw2:
status active
heartbeat[5980]: 2008/04/24_13:33:13 info: Link 192.168.45.227:192.168.45.227
up.
heartbeat[5980]: 2008/04/24_13:33:13 WARN: Late heartbeat: Node
192.168.45.227: interval 1200 ms
heartbeat[5980]: 2008/04/24_13:33:13 info: Status update for node
192.168.45.227: status ping
heartbeat[5980]: 2008/04/24_13:33:13 info: Link apa:apa up.
heartbeat[5980]: 2008/04/24_13:33:13 WARN: Late heartbeat: Node apa:
interval 1200 ms
heartbeat[5980]: 2008/04/24_13:33:13 info: Status update for node apa:
status ping
heartbeat[5980]: 2008/04/24_13:33:15 WARN: Late heartbeat: Node fw2:
interval 1500 ms
heartbeat[5980]: 2008/04/24_13:33:16 WARN: Late heartbeat: Node fw2:
interval 1310 ms
heartbeat[5980]: 2008/04/24_13:33:18 WARN: Late heartbeat: Node fw2:
interval 1380 ms
heartbeat[5980]: 2008/04/24_13:33:19 WARN: Late heartbeat: Node fw2:
interval 1440 ms
heartbeat[5980]: 2008/04/24_13:33:20 WARN: Late heartbeat: Node fw2:
interval 1360 ms
heartbeat[5980]: 2008/04/24_13:33:22 WARN: Late heartbeat: Node fw2:
interval 1450 ms
heartbeat[5980]: 2008/04/24_13:33:23 WARN: Late heartbeat: Node fw2:
interval 1580 ms
heartbeat[5980]: 2008/04/24_13:33:25 WARN: Late heartbeat: Node fw2:
interval 1180 ms
heartbeat[5980]: 2008/04/24_13:33:26 WARN: Late heartbeat: Node fw2:
interval 1320 ms
heartbeat[5980]: 2008/04/24_13:33:27 WARN: Late heartbeat: Node fw2:
interval 1500 ms
heartbeat[5980]: 2008/04/24_13:33:29 WARN: Late heartbeat: Node fw2:
interval 1500 ms
heartbeat[5980]: 2008/04/24_13:33:30 WARN: Late heartbeat: Node fw2:
interval 1520 ms
heartbeat[5980]: 2008/04/24_13:33:32 WARN: Late heartbeat: Node fw2:
interval 1400 ms
heartbeat[5980]: 2008/04/24_13:33:33 WARN: Late heartbeat: Node fw2:
interval 1610 ms
heartbeat[5980]: 2008/04/24_13:33:35 WARN: Late heartbeat: Node fw2:
interval 1160 ms
heartbeat[5980]: 2008/04/24_13:33:36 WARN: Late heartbeat: Node fw2:
interval 1350 ms
heartbeat[5980]: 2008/04/24_13:33:37 WARN: Late heartbeat: Node fw2:
interval 1490 ms
heartbeat[5980]: 2008/04/24_13:33:39 WARN: Late heartbeat: Node fw2:
interval 1520 ms
heartbeat[5980]: 2008/04/24_13:33:39 info: all clients are now paused

===============================
Complete cib.xml (identical on both nodes, except for the first line as
shown above):

 <cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="1"
cib_feature_revision="1.3" generated="true" epoch="116546"
num_updates="440636" cib-last-written="Thu Apr 24 14:11:50 2008"
ccm_transition="1" dc_uuid="9af9135b-4efa-4434-9db4-b739ebb6fcc5">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <attributes>
           <nvpair id="cib-bootstrap-options-symmetric-cluster"
name="symmetric-cluster" value="true"/>
           <nvpair id="cib-bootstrap-options-no_quorum-policy"
name="no_quorum-policy" value="stop"/>
           <nvpair id="cib-bootstrap-options-default-resource-stickiness"
name="default-resource-stickiness" value="0"/>
           <nvpair
id="cib-bootstrap-options-default-resource-failure-stickiness"
name="default-resource-failure-stickiness" value="0"/>
           <nvpair id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="false"/>
           <nvpair id="cib-bootstrap-options-stonith-action"
name="stonith-action" value="reboot"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-resources"
name="stop-orphan-resources" value="true"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-actions"
name="stop-orphan-actions" value="true"/>
           <nvpair id="cib-bootstrap-options-remove-after-stop"
name="remove-after-stop" value="false"/>
           <nvpair id="cib-bootstrap-options-short-resource-names"
name="short-resource-names" value="true"/>
           <nvpair id="cib-bootstrap-options-transition-idle-timeout"
name="transition-idle-timeout" value="5min"/>
           <nvpair id="cib-bootstrap-options-default-action-timeout"
name="default-action-timeout" value="5s"/>
           <nvpair id="cib-bootstrap-options-is-managed-default"
name="is-managed-default" value="true"/>
           <nvpair id="cib-bootstrap-options-last-lrm-refresh"
name="last-lrm-refresh" value="1183626802"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node uname="fw1" type="normal"
id="09d33017-9f05-4d1d-af31-6d9cca58ddd4">
         <instance_attributes
id="nodes-09d33017-9f05-4d1d-af31-6d9cca58ddd4">
           <attributes>
             <nvpair name="standby"
id="standby-09d33017-9f05-4d1d-af31-6d9cca58ddd4" value="off"/>
           </attributes>
         </instance_attributes>
       </node>
       <node uname="fw2" type="normal"
id="9af9135b-4efa-4434-9db4-b739ebb6fcc5">
         <instance_attributes
id="nodes-9af9135b-4efa-4434-9db4-b739ebb6fcc5">
           <attributes>
             <nvpair name="standby"
id="standby-9af9135b-4efa-4434-9db4-b739ebb6fcc5" value="off"/>
           </attributes>
         </instance_attributes>
       </node>
     </nodes>
     <resources>
       <group id="group_1" resource_stickiness="10">
         <instance_attributes id="group_1_instance_attrs">
           <attributes/>
         </instance_attributes>
         <primitive id="Other_Internal_gw_IP" class="ocf" type="IPaddr"
provider="heartbeat">
           <instance_attributes id="Other_Internal_gw_IP_instance_attrs">
             <attributes>
               <nvpair id="f598ab6b-e1e7-4b5d-8fae-5afbd1efa335" name="ip"
value="192.168.45.1"/>
               <nvpair id="030fbd69-5966-42f6-bd74-f09c3e7bcbdd" name="nic"
value="eth0"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="37874a9e-1fae-4adf-a9e6-713dca1da761" name="monitor"
interval="500ms" timeout="1500ms" start_delay="0" disabled="false"
role="Started"/>
          </operations>
         </primitive>
         <primitive id="Other_External_gw_IP" class="ocf" type="IPaddr"
provider="heartbeat">
           <instance_attributes id="Other_External_gw_IP_instance_attrs">
             <attributes>
               <nvpair id="3de2655b-fa2d-4f1d-a100-761b49c0f82d" name="ip"
value="192.168.26.100"/>
               <nvpair id="fb93bb64-ab51-472e-89bf-7f5bd1fd667e" name="nic"
value="eth1"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="4dec5f9f-0622-4845-9141-1ce33cf64c5b" name="monitor"
interval="500ms" timeout="1500ms"/>
           </operations>
         </primitive>
       </group>
       <clone id="NetworkConnectivity">
         <instance_attributes id="NetworkConnectivity_instance_attrs">
           <attributes>
             <nvpair id="NetworkConnectivity_clone_max" name="clone_max"
value="2"/>
             <nvpair id="NetworkConnectivity_clone_node_max"
name="clone_node_max" value="1"/>
           </attributes>
         </instance_attributes>
         <primitive class="ocf" type="pingd" provider="heartbeat"
id="VisibleHosts">
           <instance_attributes id="VisibleHosts_instance_attrs">
             <attributes>
               <nvpair id="9883f724-35e1-4c31-a8b5-2a9030a87fdb"
name="pidfile" value="/tmp/pingd.pid"/>
               <nvpair id="b103c599-c65f-469d-b018-f2918404f3c0"
name="multiplier" value="100"/>
               <nvpair name="user" id="b81f5d22-15ba-49f6-a4f4-d225650f0c7f"
value="root"/>
               <nvpair id="3da35649-24de-4edf-ae56-5b643fce0a00" name="name"
value="pingdscore"/>
               <nvpair id="933099a1-cfb9-4ddd-95c1-a2f68b71f72f"
name="dampen" value="1s"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </clone>
     </resources>
     <constraints>
       <rsc_location id="HighestConnectivityNode" rsc="group_1">
         <rule id="prefered_HighestConnectivityNode">
           <expression attribute="pingdscore"
id="0f3bb416-fbd1-4802-a8dd-aa762223d995" operation="defined"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
   <status>
     <node_state id="9af9135b-4efa-4434-9db4-b739ebb6fcc5" uname="fw2"
crmd="online" crm-debug-origin="do_update_resource" shutdown="0"
in_ccm="true" ha="active" join="member" expected="member">
       <lrm id="9af9135b-4efa-4434-9db4-b739ebb6fcc5">
         <lrm_resources>
           <lrm_resource id="Other_Internal_gw_IP" type="IPaddr" class="ocf"
provider="heartbeat">
             <lrm_rsc_op id="Other_Internal_gw_IP_monitor_0"
operation="monitor" crm-debug-origin="do_update_resource"
transition_key="3:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:7;3:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="2"
crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0"
op_digest="81900a01a4b9a7c8b86f8844bff6d848"/>
             <lrm_rsc_op id="Other_Internal_gw_IP_start_0" operation="start"
crm-debug-origin="do_update_resource"
transition_key="7:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:0;7:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="6"
crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0"
op_digest="81900a01a4b9a7c8b86f8844bff6d848"/>
             <lrm_rsc_op id="Other_Internal_gw_IP_monitor_500"
operation="monitor" crm-debug-origin="do_update_resource"
transition_key="5:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:0;5:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="7"
crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="500"
op_digest="81900a01a4b9a7c8b86f8844bff6d848"/>
           </lrm_resource>
           <lrm_resource id="Other_External_gw_IP" type="IPaddr" class="ocf"
provider="heartbeat">
             <lrm_rsc_op id="Other_External_gw_IP_monitor_0"
operation="monitor" crm-debug-origin="do_update_resource"
transition_key="4:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:7;4:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="3"
crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0"
op_digest="0ddcf55c0443f2b5b342b73940c9d3cd"/>
             <lrm_rsc_op id="Other_External_gw_IP_start_0" operation="start"
crm-debug-origin="do_update_resource"
transition_key="6:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:0;6:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="8"
crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0"
op_digest="0ddcf55c0443f2b5b342b73940c9d3cd"/>
             <lrm_rsc_op id="Other_External_gw_IP_monitor_500"
operation="monitor" crm-debug-origin="do_update_resource"
transition_key="7:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:0;7:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="10"
crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="500"
op_digest="0ddcf55c0443f2b5b342b73940c9d3cd"/>
           </lrm_resource>
           <lrm_resource id="VisibleHosts:0" type="pingd" class="ocf"
provider="heartbeat">
             <lrm_rsc_op id="VisibleHosts:0_monitor_0" operation="monitor"
crm-debug-origin="do_update_resource"
transition_key="5:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:7;5:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="4"
crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0"
op_digest="0f20d5a0c440403f7b8487fbbb0ea257"/>
             <lrm_rsc_op id="VisibleHosts:0_start_0" operation="start"
crm-debug-origin="do_update_resource"
transition_key="12:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:0;12:1:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="9"
crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0"
op_digest="0f20d5a0c440403f7b8487fbbb0ea257"/>
           </lrm_resource>
           <lrm_resource id="VisibleHosts:1" type="pingd" class="ocf"
provider="heartbeat">
             <lrm_rsc_op id="VisibleHosts:1_monitor_0" operation="monitor"
crm-debug-origin="do_update_resource"
transition_key="6:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f"
transition_magic="0:7;6:0:d6b38a4b-312b-4d16-88c7-758cc6d4356f" call_id="5"
crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0"
op_digest="0f20d5a0c440403f7b8487fbbb0ea257"/>
           </lrm_resource>
         </lrm_resources>
       </lrm>
       <transient_attributes id="9af9135b-4efa-4434-9db4-b739ebb6fcc5">
         <instance_attributes
id="status-9af9135b-4efa-4434-9db4-b739ebb6fcc5">
           <attributes>
             <nvpair
id="status-9af9135b-4efa-4434-9db4-b739ebb6fcc5-probe_complete"
name="probe_complete" value="true"/>
             <nvpair
id="status-9af9135b-4efa-4434-9db4-b739ebb6fcc5-pingdscore"
name="pingdscore" value="200"/>
           </attributes>
         </instance_attributes>
       </transient_attributes>
     </node_state>
   </status>
 </cib>