On Mon, Aug 20, 2007 at 03:29:10PM -0700, Todd Lyons wrote:
> Hey all!  I configured and maintain a heartbeat 1.2.3 cluster (load
> balancer failover) and didn't really experience any problems.
> 
> Now I'm trying to get familiar and competent with 2.x and I'm running
> into brick walls.  Reading the documentation, it seems to be very
> straightforward, but I can't get it to do what I want.  To date, I've
> only tried to configure it with the gui, but I'm happy to use the cli as
> well.
> 
> Goal: an internal server on old crappy hardware locks up on occasion.
> Another internal server is running a copy of the same services, can
> bring up the IP of the server that locked, and the services hobble along
> until someone can reboot the locked machine.  I'm effectively doing an
> IP address takeover, but the machine that I'm taking over the IP from is
> NOT running heartbeat.
> 
> My thought process was that heartbeat would run in a single node
> configuration and that I could define two resources:
> 1) pingd that would ping a secondary IP (192.168.100.6) on the machine that
> occasionally locks up
> 2) an ipaddr2 that does "if not pingd then ipaddr2 up" (192.168.100.9)
> 
> I can get the ipaddr2 resource to manually start and stop.  I have not
> been able to get the pingd process to actually send out any pings.  I
> have not been able to figure out how to tell ipaddr2 to only turn on if
> pingd fails either.

pingd is an RA which inserts an attribute into the CIB. That
attribute can then be referenced by rules (cf. score_attribute).
Though you typically find rules in the context of constraints,
they can also be used to choose a set of instance_attributes for
a resource/group. For example, you could set the target_role
attribute to "Stopped" depending on the value of the "pingd"
score_attribute. I didn't test this example, but it should work:

       <primitive class="ocf" id="IPaddr_10_16_1_201" provider="heartbeat"
           type="IPaddr">
         <operations>
           <op id="IPaddr_10_16_1_201_mon" interval="5s" name="monitor"
               timeout="5s"/>
         </operations>
         <instance_attributes id="stopped_on_pingd_100" score="100">
           <rule id="testpingd" score_attribute="pingd" score="100">
             <expression id="testpingd_exp" attribute="pingd"
                 operation="eq" value="100"/>
           </rule>
           <attributes>
             <nvpair id="target_role" name="target_role" value="Stopped"/>
           </attributes>
         </instance_attributes>
         <instance_attributes id="IPaddr_10_16_1_201_inst_attr" score="10">
           <attributes>
             <nvpair id="IPaddr_10_16_1_201_attr_0" name="ip"
                 value="10.16.1.201"/>
           </attributes>
         </instance_attributes>
       </primitive>
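
If you'd rather skip the GUI, you can feed such a snippet to the
live CIB with cibadmin. A rough sketch (ipaddr.xml is a
hypothetical file holding the primitive; the filename is your
choice):

```shell
# Add the primitive to the resources section of the running CIB.
cibadmin -C -o resources -x ipaddr.xml

# Query the resources section to verify what the CIB now contains.
cibadmin -Q -o resources
```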

See this page for pingd examples:

http://linux-ha.org/pingd (NB: no need for clones since you run a
single-node cluster)
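
For completeness, the pingd primitive for your case might look
roughly like this (untested sketch; the IDs, intervals, and the
dampen value are my own choices, and host_list follows your
stated goal of pinging 192.168.100.6):

```xml
<primitive id="pingd" class="ocf" provider="heartbeat" type="pingd">
  <instance_attributes id="pingd_inst_attr">
    <attributes>
      <!-- address of the flaky box; the attribute pingd sets in the
           CIB is named "pingd" by default -->
      <nvpair id="pingd_host_list" name="host_list" value="192.168.100.6"/>
      <!-- wait a bit before acting on connectivity changes,
           to avoid flapping -->
      <nvpair id="pingd_dampen" name="dampen" value="5s"/>
    </attributes>
  </instance_attributes>
  <operations>
    <op id="pingd_mon" name="monitor" interval="10s" timeout="10s"/>
  </operations>
</primitive>
```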

Of course, it would be much easier if you could run heartbeat on
both nodes.

> Here is the generated cib.xml.
> 
>  <cib generated="true" admin_epoch="0" have_quorum="true" num_peers="1"
>      cib_feature_revision="1.3" epoch="9" num_updates="342"
>      cib-last-written="Mon Aug 20 15:22:16 2007" ccm_transition="1"
>      dc_uuid="53e4b339-8b6d-43fe-bf86-c9aef9f4ca61">
>    <configuration>
>      <crm_config>
>        <cluster_property_set id="cib-bootstrap-options">
>          <attributes>
>            <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1187643852"/>
>          </attributes>
>        </cluster_property_set>
>      </crm_config>
>      <nodes>
>        <node id="53e4b339-8b6d-43fe-bf86-c9aef9f4ca61" uname="space.ivenue.net" type="normal"/>
>      </nodes>
>      <resources>
>        <primitive id="ldapip" class="ocf" type="IPaddr2" provider="heartbeat" is_managed="1">
>          <instance_attributes id="ldapip_instance_attrs">
>            <attributes>
>              <nvpair id="3a403587-6ef7-44eb-8e4f-e82d82c28067" name="ip" value="192.168.100.6"/>
>              <nvpair id="734742c4-92e5-4f59-b063-aeba8db1241e" name="nic" value="eth0:0"/>
>              <nvpair id="1dbaadaf-fc6b-47a9-abed-0029765860c3" name="cidr_netmask" value="24"/>
>            </attributes>
>          </instance_attributes>
>          <operations>
>            <op id="a8449e20-6ee3-4bf0-9d8a-cb1d035f79c2" name="start" interval="2s" timeout="60s"/>

I guess that you meant "monitor". Otherwise, you don't need this
op. And why would you want a repeating start operation?
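I.e., something like this instead (keeping your id; the
interval and timeout values here are just a guess):

```xml
<op id="a8449e20-6ee3-4bf0-9d8a-cb1d035f79c2" name="monitor" interval="10s" timeout="60s"/>
```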

>          </operations>
>        </primitive>
>        <primitive id="pingio" class="ocf" type="pingd" provider="heartbeat" is_managed="#default">
>          <instance_attributes id="pingio_instance_attrs">
>            <attributes>
>              <nvpair id="162649da-ae5d-4a84-a3ae-744780d95606" name="host_list" value="192.168.100.9"/>
>            </attributes>
>          </instance_attributes>
>          <operations>
>            <op id="328518fa-24f4-4ae2-ad90-c458e77f7b18" name="monitor" interval="2s" timeout="10s"/>
>          </operations>
>        </primitive>
>      </resources>
>      <constraints>
>        <rsc_colocation id="ioisup" from="pingio" to="ldapip" score="-INFINITY"/>
>      </constraints>
>    </configuration>
>  </cib>

You should also upgrade your heartbeat version to 2.1.2.

> * Note: My first attempt at this, I actually installed heartbeat on both
> machines and I created an ipaddr2 resource.  When I clicked the Play
> button, it started the resource on both machines.  I never could figure
> out how to make it only start on one.  I don't have a copy of that old
> cib.xml.
> 
> I tend to think that something very basic is going over my head.  1.2.x
> was a breeze to set up, 2.0.x is kicking my butt at the moment.
> 
> TIA!
> -- 
> Regards...            Todd
> Chris: grep 500 sendmail.mc 
> undefine(`FAIL_MAIL_OVER_500_MILES')dnl
> Chris: just in case ...
> Linux kernel 2.6.17-6mdv   load average: 0.12, 0.12, 0.04
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems