Hi Markus, 

        I ran into the same problem. I didn't find a better way than to
modify the MySQL monitoring script so that, on failure, it runs:

/usr/sbin/attrd_updater -n mysql_running -d 5s -v 0

And on success:

/usr/sbin/attrd_updater -n mysql_running -d 5s -v 1

(The attribute name has to be identical in both calls and in the
constraint below.)
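Gluing the two calls into the monitor action can be sketched like this (a minimal sketch; the helper names and the mysql init-script path are illustrative assumptions, only attrd_updater itself comes from heartbeat):

```shell
#!/bin/sh
# Sketch of the modified monitor wrapper; helper names are hypothetical.

ATTR_NAME="mysql_running"

# Map a monitor exit status (0 = success) to the attribute value
# that the rsc_location rule will compare against.
health_value() {
  if [ "$1" -eq 0 ]; then
    echo 1
  else
    echo 0
  fi
}

# Push the value into the cluster; -d 5s dampens short flaps before
# attrd writes the attribute into the status section of the CIB.
push_attr() {
  /usr/sbin/attrd_updater -n "$ATTR_NAME" -d 5s -v "$1"
}

# Typical use at the end of the monitor action (path is an assumption):
#   /etc/init.d/mysql status >/dev/null 2>&1
#   push_attr "$(health_value $?)"
```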

Then I run the monitor script as a clone:

       <clone id="mysql">
         <instance_attributes id="mysql">
           <attributes>
             <nvpair id="mysql-clone_node_max" name="clone_node_max"
value="1"/>
           </attributes>
         </instance_attributes>
         <primitive id="mysql-child" provider="heartbeat" class="ocf"
type="mysql">
           <operations>
             <op id="mysql-child-monitor" name="monitor" interval="20s"
timeout="40s" prereq="nothing">
               <instance_attributes id="mysql-child-monitor-attr">
               </instance_attributes>
             </op>
             <op id="mysql-child-start" name="start" prereq="nothing"/>
           </operations>
         </primitive>
       </clone>

And then added a constraint:

       <rsc_location rsc="group_1" id="cli-stop2-group_1">
         <rule score="-INFINITY" id="cli-stop2-rule-group_1">
           <expression operation="lte" value="0" id="cli-stop2-expr-group_1"
attribute="mysql_running"/>
         </rule>
       </rsc_location>

This will run the monitor on every node and set the score to -INFINITY for
the node where mysql fails.
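In other words, the rule's per-node effect is roughly this (a shell paraphrase of the semantics, not the actual policy-engine code):

```shell
#!/bin/sh
# Paraphrase of the rsc_location rule above: any node whose
# mysql_running attribute is <= 0 gets score -INFINITY, i.e.
# group_1 may not run there.

rule_matches() {
  # $1 = current value of the mysql_running node attribute
  [ "$1" -le 0 ]
}

node_score() {
  if rule_matches "$1"; then
    echo "-INFINITY"
  else
    echo "0"   # no opinion; other scores decide placement
  fi
}
```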

If mysql comes back online, though, "mysql_running" will be set back to
"1", but I don't think that alone triggers a recalculation of the
scores. I haven't figured out yet how to force one.


Hope this helps
-- 
Benjamin
TéliPhone inc.


--------------
Do not send an email to the address below or you will automatically be
blacklisted.
[EMAIL PROTECTED]

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf Of Markus W.
> Sent: May 10, 2007 10:11 AM
> To: [email protected]
> Subject: [Linux-HA] MySQL Master Master
> 
> Hi,
> 
> I want to run apache with mysql on a node. In case of a 
> failure apache should move to the next node while mysql is 
> running on each node (replication). I created a resource group with:
> 
>     <resources>
>       <group id="Res_Web">
>         <primitive id="Res_IP" class="ocf" type="IPaddr2" 
> provider="heartbeat">
>           <instance_attributes id="Res_IP_Attr">
>             <attributes>
>               <nvpair id="Res_IP_Attr_Value" name="ip" 
> value="xx.xx.xx.xx"/>
>               <nvpair id="Res_IP_target_role" name="target_role" 
> value="started"/>
>             </attributes>
>           </instance_attributes>
>           <operations>
>             <op id="Res_IP_Op_Start" name="start" timeout="5s"/>
>             <op id="Res_IP_Op_Stop" name="stop" timeout="5s"/>
>             <op id="Res_IP_Op_Monitor" interval="5s" name="monitor" 
> timeout="2s"/>
>           </operations>
>         </primitive>
>         <primitive id="Res_Apache" class="lsb" type="httpd" 
> provider="heartbeat">
>           <operations>
>             <op id="Res_Apache_Op_Start" name="start" timeout="5s"/>
>             <op id="Res_Apache_Op_Stop" name="stop" timeout="5s"/>
>             <op id="Res_Apache_Op_Monitor" interval="5s" 
> name="monitor" 
> timeout="2s"/>
>           </operations>
>           <instance_attributes id="Res_Apache_instance_attrs">
>             <attributes>
>               <nvpair id="Res_Apache_target_role" name="target_role" 
> value="started"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
>         <instance_attributes id="Res_Web_instance_attrs">
>           <attributes>
>             <nvpair id="Res_Web_target_role" name="target_role" 
> value="started"/>
>           </attributes>
>         </instance_attributes>
>       </group>
>     </resources>
> 
> Everything works fine, until I add the following resource to the
> above group:
> 
>         <primitive id="Res_MySQL" class="lsb" type="mysql-ha" 
> provider="heartbeat">
>           <operations>
>             <op id="Res_MySQL_Op_Monitor" interval="5s" 
> name="monitor" 
> timeout="2s"/>
>           </operations>
>           <instance_attributes id="Res_MySQL_instance_attrs">
>             <attributes>
>               <nvpair id="Res_MySQL_target_role" name="target_role" 
> value="started"/>
>             </attributes>
>           </instance_attributes>
>         </primitive>
> 
> with the lsb script "mysql-ha" as a dummy script:
> 
> case "$1" in
>  start)
>    status mysqld
>    ;;
>  stop)
>    exit 0
>    ;;
>  status)
>    status mysqld
>    ;;
>  *)
>    echo $"Usage: $0 {start|stop|status} (start|stop faked)"
>    exit 1
> esac
> 
> exit $?
> 
> Now if I stop mysql on the first node, apache moves to the
> second node. If I start mysql on the first node again and
> stop mysql on the second node, apache won't move back to the
> first node. Do I have to define some scores?
> 
> Best regards,
> 
> Markus
> 
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
