Hello! I have a problem running resources: Heartbeat starts launching them, but it stops at the third resource and writes the output below to the log.
What could the problem be?

crmd[5062]: 2008/05/14_19:17:04 info: do_lrm_rsc_op: Performing op=tunnel_vf1_start_0 key=5:2:d45a7c4e-c7ef-4c79-bc9f-a84287929745)
lrmd[5059]: 2008/05/14_19:17:05 info: rsc:tunnel_vf1: start
lrmd[5214]: 2008/05/14_19:17:05 WARN: For LSB init script, no additional parameters are needed.
pengine[5067]: 2008/05/14_19:17:05 info: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/heartbeat/pengine/pe-input-11.bz2
lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf1:start:stderr) SIOCDELRT: No existe el proceso

lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf1:start:stdout) Creamos vf1...

crmd[5062]: 2008/05/14_19:17:05 info: process_lrm_event: LRM operation tunnel_vf1_start_0 (call=16, rc=0) complete
tengine[5066]: 2008/05/14_19:17:05 info: match_graph_event: Action tunnel_vf1_start_0 (5) confirmed on debianquagga1 (rc=0)
tengine[5066]: 2008/05/14_19:17:05 info: send_rsc_command: Initiating action 6: tunnel_vf2_start_0 on debianquagga1
crmd[5062]: 2008/05/14_19:17:05 info: do_lrm_rsc_op: Performing op=tunnel_vf2_start_0 key=6:2:d45a7c4e-c7ef-4c79-bc9f-a84287929745)
lrmd[5059]: 2008/05/14_19:17:05 info: rsc:tunnel_vf2: start
lrmd[5236]: 2008/05/14_19:17:05 WARN: For LSB init script, no additional parameters are needed.
lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf2:start:stderr) SIOCDELRT: No existe el proceso

lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf2:start:stdout) Creamos vf2...

crmd[5062]: 2008/05/14_19:17:05 info: process_lrm_event: LRM operation tunnel_vf2_start_0 (call=17, rc=0) complete
tengine[5066]: 2008/05/14_19:17:05 info: match_graph_event: Action tunnel_vf2_start_0 (6) confirmed on debianquagga1 (rc=0)
tengine[5066]: 2008/05/14_19:17:05 info: send_rsc_command: Initiating action 7: tunnel_vf3_start_0 on debianquagga1
crmd[5062]: 2008/05/14_19:17:05 info: do_lrm_rsc_op: Performing op=tunnel_vf3_start_0 key=7:2:d45a7c4e-c7ef-4c79-bc9f-a84287929745)
lrmd[5059]: 2008/05/14_19:17:05 info: rsc:tunnel_vf3: start
lrmd[5270]: 2008/05/14_19:17:05 WARN: For LSB init script, no additional parameters are needed.
lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf3:start:stderr) SIOCDELRT: No existe el proceso

lrmd[5059]: 2008/05/14_19:17:05 info: RA output: (tunnel_vf3:start:stdout) Creamos vf3...

crmd[5062]: 2008/05/14_19:17:05 info: process_lrm_event: LRM operation tunnel_vf3_start_0 (call=18, rc=0) complete
crmd[5062]: 2008/05/14_19:17:05 info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_IPC_MESSAGE origin=route_message ]

tengine[5066]: 2008/05/14_19:17:05 info: match_graph_event: Action tunnel_vf3_start_0 (7) confirmed on debianquagga1 (rc=0)
tengine[5066]: 2008/05/14_19:17:05 notice: run_graph: ====================================================
tengine[5066]: 2008/05/14_19:17:05 WARN: run_graph: Transition 2: (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=16)
tengine[5066]: 2008/05/14_19:17:05 ERROR: te_graph_trigger: Transition failed: terminated
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Graph 2 (20 actions in 20 synapses): batch-limit=30 jobs, network-delay=60000ms
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 0 was confirmed (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 1 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 24]: Pending (id: FW_GROUP_running_0, type: pseduo, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 5]: Completed (id: tunnel_vf1_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 6]: Completed (id: tunnel_vf2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 7]: Completed (id: tunnel_vf3_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 8]: Pending (id: shorewall_HA_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 9]: Pending (id: IPaddr_mundor1_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 11]: Pending (id: IPaddr_mundor2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 13]: Pending (id: IPaddr_lan_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 15]: Pending (id: IPaddr_lan2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 17]: Pending (id: IPaddr_lan2_2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 19]: Pending (id: IPaddr_auna_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 21]: Pending (id: Quagga_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 23]: Completed (id: FW_GROUP_start_0, type: pseduo, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 2 was confirmed (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 3 was confirmed (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 4 was confirmed (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 5 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 8]: Pending (id: shorewall_HA_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 7]: Completed (id: tunnel_vf3_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 19]: Pending (id: IPaddr_auna_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 6 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 9]: Pending (id: IPaddr_mundor1_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 8]: Pending (id: shorewall_HA_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 21]: Pending (id: Quagga_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 7 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 10]: Pending (id: IPaddr_mundor1_monitor_5000, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 9]: Pending (id: IPaddr_mundor1_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 8 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 11]: Pending (id: IPaddr_mundor2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 9]: Pending (id: IPaddr_mundor1_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 9 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 12]: Pending (id: IPaddr_mundor2_monitor_5000, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 11]: Pending (id: IPaddr_mundor2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 10 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 13]: Pending (id: IPaddr_lan_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 11]: Pending (id: IPaddr_mundor2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 11 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 14]: Pending (id: IPaddr_lan_monitor_5000, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 13]: Pending (id: IPaddr_lan_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 12 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 15]: Pending (id: IPaddr_lan2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 13]: Pending (id: IPaddr_lan_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 13 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 16]: Pending (id: IPaddr_lan2_monitor_5000, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 15]: Pending (id: IPaddr_lan2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 14 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 17]: Pending (id: IPaddr_lan2_2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: * [Input 15]: Pending (id: IPaddr_lan2_start_0, loc: debianquagga1, priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_graph: Synapse 15 is pending (priority: 0)
tengine[5066]: 2008/05/14_19:17:05 WARN: print_elem: [Action 18]: Pending (id: IPaddr_lan2_2_monitor_5000, loc: debianquagga1, priority: 0)

The resources:
<resources>
      <group id="FW_GROUP">
        <primitive class="lsb" id="tunnel_vf1" provider="linux" type="tunnel_vf1" resource_stickiness="0">
          <operations>
            <op id="tunel_vf1_op1" name="start" timeout="5s"/>
            <op id="tunel_vf1_op2" name="stop" timeout="5s"/>
            <op id="tunel_vf1_op3" interval="10s" name="monitor" timeout="4s"/>
          </operations>
        </primitive>
        <primitive class="lsb" id="tunnel_vf2" provider="linux" type="tunnel_vf2" resource_stickiness="0">
          <operations>
            <op id="tunel_vf2_op1" name="start" timeout="5s"/>
            <op id="tunel_vf2_op2" name="stop" timeout="5s"/>
            <op id="tunel_vf2_op3" interval="10s" name="monitor" timeout="4s"/>
          </operations>
        </primitive>
        <primitive class="lsb" id="tunnel_vf3" provider="linux" type="tunnel_vf3" resource_stickiness="0">
          <operations>
            <op id="tunel_vf3_op1" name="start" timeout="5s"/>
            <op id="tunel_vf3_op2" name="stop" timeout="5s"/>
            <op id="tunel_vf3_op3" interval="10s" name="monitor" timeout="4s"/>
          </operations>
        </primitive>
        <primitive class="lsb" id="shorewall_HA" provider="linux" type="shorewall_HA" resource_stickiness="0">
          <operations>
            <op id="shorewall_op1" name="start" timeout="5s"/>
            <op id="shorewall_op2" name="stop" timeout="5s"/>
            <op id="shorewall_op3" interval="10s" name="monitor" timeout="4s"/>
          </operations>
        </primitive>

        <primitive class="ocf" id="IPaddr_mundor1" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="1" interval="5s" name="monitor" timeout="5s"/>
            <op id="2" name="start" timeout="5s"/>
            <op id="3" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="mundor_opts">
            <attributes>
              <nvpair id="ip1" name="ip" value="192.168.20.220"/>
              <nvpair id="ip2" name="netmask" value="24"/>
              <nvpair id="ip3" name="gw" value="192.168.20.254"/>
              <nvpair id="ip4" name="nic" value="mundor"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" id="IPaddr_mundor2" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="4" interval="5s" name="monitor" timeout="5s"/>
            <op id="5" name="start" timeout="5s"/>
            <op id="6" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="mundor_opts_lan">
            <attributes>
              <nvpair id="ip5" name="ip" value="91.117.252.10"/>
              <nvpair id="ip6" name="netmask" value="30"/>
              <nvpair id="ip8" name="nic" value="mundor"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" id="IPaddr_lan" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="7" interval="5s" name="monitor" timeout="5s"/>
            <op id="8" name="start" timeout="5s"/>
            <op id="9" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="lan_opts">
            <attributes>
              <nvpair id="ip9" name="ip" value="192.168.19.254"/>
              <nvpair id="ip10" name="netmask" value="24"/>
              <nvpair id="ip12" name="nic" value="lan"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" id="IPaddr_lan2" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="10" interval="5s" name="monitor" timeout="5s"/>
            <op id="11" name="start" timeout="5s"/>
            <op id="12" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="lan2_opts">
            <attributes>
              <nvpair id="ip13" name="ip" value="192.168.18.254"/>
              <nvpair id="ip14" name="netmask" value="24"/>
              <nvpair id="ip16" name="nic" value="lan2"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" id="IPaddr_lan2_2" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="13" interval="5s" name="monitor" timeout="5s"/>
            <op id="14" name="start" timeout="5s"/>
            <op id="15" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="lan2_opts2">
            <attributes>
              <nvpair id="ip17" name="ip" value="192.168.17.254"/>
              <nvpair id="ip18" name="netmask" value="24"/>
              <nvpair id="ip20" name="nic" value="lan2"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" id="IPaddr_auna" provider="heartbeat" type="IPaddr" resource_stickiness="0">
          <operations>
            <op id="16" interval="5s" name="monitor" timeout="5s"/>
            <op id="17" name="start" timeout="5s"/>
            <op id="18" name="stop" timeout="5s"/>
          </operations>
          <instance_attributes id="auna_opts">
            <attributes>
              <nvpair id="ip21" name="ip" value="192.168.21.220"/>
              <nvpair id="ip22" name="netmask" value="24"/>
              <nvpair id="ip24" name="nic" value="auna"/>
            </attributes>
          </instance_attributes>
        </primitive>
        <primitive class="lsb" id="Quagga" provider="linux" type="quagga" resource_stickiness="0">
          <operations>
            <op id="quagga_op1" name="start" timeout="30s"/>
            <op id="quagga_op2" interval="30s" name="monitor" timeout="2s"/>
            <op id="quagga_op3" name="stop" timeout="30s"/>
          </operations>
        </primitive>
      </group>
      <clone id="pingd">
        <instance_attributes id="pingd">
          <attributes>
            <nvpair id="pingd-clone_node_max" name="clone_node_max" value="1"/>
          </attributes>
        </instance_attributes>
        <primitive id="pingd-child" provider="heartbeat" class="ocf" type="pingd">
          <operations>
            <op id="pingd-child-monitor" name="monitor" interval="20s" timeout="40s" prereq="nothing"/>
            <op id="pingd-child-start" name="start" prereq="nothing"/>
          </operations>
          <instance_attributes id="pingd_inst_attr">
            <attributes>
              <nvpair id="pingd-dampen" name="dampen" value="5s"/>
              <nvpair id="pingd-multiplier" name="multiplier" value="100"/>
            </attributes>
          </instance_attributes>
        </primitive>
      </clone>
    </resources>
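For reference, group members start in the order they are listed (assuming the default ordered-group behaviour in Heartbeat 2.x), independent of any rsc_order constraints. A minimal sketch, using a trimmed copy of the fragment above (only the ids matter here), to list that implicit order:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <resources> fragment above; attributes other than id omitted.
cib_fragment = """
<resources>
  <group id="FW_GROUP">
    <primitive class="lsb" id="tunnel_vf1" type="tunnel_vf1"/>
    <primitive class="lsb" id="tunnel_vf2" type="tunnel_vf2"/>
    <primitive class="lsb" id="tunnel_vf3" type="tunnel_vf3"/>
    <primitive class="lsb" id="shorewall_HA" type="shorewall_HA"/>
    <primitive class="ocf" id="IPaddr_mundor1" type="IPaddr"/>
    <primitive class="ocf" id="IPaddr_mundor2" type="IPaddr"/>
    <primitive class="ocf" id="IPaddr_lan" type="IPaddr"/>
    <primitive class="ocf" id="IPaddr_lan2" type="IPaddr"/>
    <primitive class="ocf" id="IPaddr_lan2_2" type="IPaddr"/>
    <primitive class="ocf" id="IPaddr_auna" type="IPaddr"/>
    <primitive class="lsb" id="Quagga" type="quagga"/>
  </group>
</resources>
"""

def group_start_order(xml_text, group_id):
    """Return a group's implicit start order: its members in document order."""
    root = ET.fromstring(xml_text)
    group = root.find(f".//group[@id='{group_id}']")
    return [p.get("id") for p in group.findall("primitive")]

print(group_start_order(cib_fragment, "FW_GROUP"))
```

Note that shorewall_HA is listed before the IPaddr resources and Quagga is listed last, which matters once explicit ordering constraints are added on top.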

And the constraints; could the problem be here?

  <constraints>
      <rsc_order id="order_transition_1" from="tunnel_vf1" action="start" type="before" to="tunnel_vf2"/>
      <rsc_order id="order_transition_2" from="tunnel_vf2" action="start" type="before" to="tunnel_vf3"/>
      <rsc_order id="order_transition_3" from="tunnel_vf3" action="start" type="before" to="Quagga"/>
      <rsc_order id="order_transition_4" from="Quagga" action="start" type="before" to="IPaddr_mundor1"/>
      <rsc_order id="order_transition_5" from="IPaddr_mundor1" action="start" type="before" to="IPaddr_mundor2"/>
      <rsc_order id="order_transition_6" from="IPaddr_mundor2" action="start" type="before" to="IPaddr_lan"/>
      <rsc_order id="order_transition_7" from="IPaddr_lan" action="start" type="before" to="IPaddr_lan2"/>
      <rsc_order id="order_transition_8" from="IPaddr_lan2" action="start" type="before" to="IPaddr_lan2_2"/>
      <rsc_order id="order_transition_9" from="IPaddr_lan2_2" action="start" type="before" to="IPaddr_auna"/>
      <rsc_order id="order_transition_10" from="IPaddr_auna" action="start" type="before" to="shorewall_HA"/>
      <rsc_location id="my_resource:connected" rsc="FW_GROUP">
        <rule id="my_resource:prefer:fwlocatel" score="500">
          <expression id="my_resource:prefer:fwlocatel:expr" attribute="#uname" operation="eq" value="debianquagga1"/>
        </rule>
        <rule id="my_resource:connected:rule" score="-INFINITY" boolean_op="or">
          <expression id="my_resource:connected:expr:undefined" attribute="pingd" operation="not_defined"/>
          <expression id="my_resource:connected:expr:zero" attribute="pingd" operation="lte" value="0"/>
        </rule>
      </rsc_location>
    </constraints>
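Since group members are already ordered implicitly (each member starts after the previous one, assuming the default ordered-group behaviour), the rsc_order constraints above may conflict with that implicit order. A quick sanity check is to combine both edge sets and look for a cycle; a minimal sketch, with the group order and constraints transcribed by hand from the config above (this is only an illustration, not a cluster tool):

```python
# Implicit ordering inside FW_GROUP: members in the order they are listed.
group_order = ["tunnel_vf1", "tunnel_vf2", "tunnel_vf3", "shorewall_HA",
               "IPaddr_mundor1", "IPaddr_mundor2", "IPaddr_lan", "IPaddr_lan2",
               "IPaddr_lan2_2", "IPaddr_auna", "Quagga"]

# The rsc_order constraints above, as (starts-before, starts-after) pairs.
constraints = [
    ("tunnel_vf1", "tunnel_vf2"), ("tunnel_vf2", "tunnel_vf3"),
    ("tunnel_vf3", "Quagga"), ("Quagga", "IPaddr_mundor1"),
    ("IPaddr_mundor1", "IPaddr_mundor2"), ("IPaddr_mundor2", "IPaddr_lan"),
    ("IPaddr_lan", "IPaddr_lan2"), ("IPaddr_lan2", "IPaddr_lan2_2"),
    ("IPaddr_lan2_2", "IPaddr_auna"), ("IPaddr_auna", "shorewall_HA"),
]

def find_cycle(edges):
    """Depth-first search for a cycle in a directed graph; returns one cycle or None."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)

    def dfs(node, path, done):
        if node in path:                       # back-edge: cycle found
            return path[path.index(node):] + [node]
        if node in done:                       # already fully explored, no cycle via here
            return None
        done.add(node)
        for nxt in graph.get(node, []):
            found = dfs(nxt, path + [node], done)
            if found:
                return found
        return None

    done = set()
    for start in list(graph):
        found = dfs(start, [], done)
        if found:
            return found
    return None

edges = list(zip(group_order, group_order[1:])) + constraints
print(find_cycle(edges))
```

With these two edge sets combined, the group says shorewall_HA starts before the IPaddr resources and Quagga last, while the constraints say Quagga starts before IPaddr_mundor1 and IPaddr_auna before shorewall_HA; the sketch reports such a contradiction as a cycle, which would leave the remaining start actions permanently pending.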
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
