Yes I mean by hand.

MySQL can't be managed by heartbeat for various reasons.
The fact is that the mysql_orb agent is supposed to monitor MySQL; if MySQL fails, another group of resources is moved away.
This is working currently.

The problem is when MySQL comes back: since mysql_orb monitoring no longer occurs on the "failed" node, the status change isn't detected and the resources don't come back.

I know I can do it by hand using crm_resource, and that's my current workaround, but I'd like this mechanism to be automated.

Thanks for answering

Dejan Muhamedagic wrote:
Hi,

On Wed, Feb 20, 2008 at 11:32:52AM +0100, Franck Ganachaud wrote:
Ok, I found the problem: when mysql_orb fails, it restarts and stays in the FAILED state. I changed the monitor operation to on_fail="stop" and then it works.

But here is my problem now.
When the action is triggered because monitoring of mysql_orb fails, monitoring of this cloned resource stops. We restart mysqld

I guess that you mean restart by hand?

but the mysql_orb agent isn't monitored anymore and resources don't come back to this node. What do I have to change on the cloned mysql_orb resource to keep monitoring after a failure?

Clean the resource status:

crm_resource -C -r rsc

Actually, if you do this, then the cluster will try to start the
resource itself, so you could save yourself some typing.
Normally, you should let the cluster do start/stop and just take
care of whatever conditions caused a resource to fail. All you
have to do then is to signal the cluster that it should reprobe
the resource by using the above command.
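For the configuration posted further down, the cleanup would target the clone's primitive (the resource id is taken from that cib.xml fragment). This is only a sketch; the host option in particular may be named differently in your crm_resource version, so check `crm_resource --help` first:

```shell
# Clear the failed status of the mysql_orb clone instance so the
# cluster reprobes it and restarts the resource itself
# (resource id "mysql_orb1" comes from the posted cib.xml).
crm_resource -C -r mysql_orb1

# Some crm_resource versions also accept a host argument to limit
# the cleanup to the node where the failure happened (the node name
# "nodeA" and the -H option are assumptions, not from the thread):
crm_resource -C -r mysql_orb1 -H nodeA
```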

Thanks,

Dejan

<clone globally_unique="false" id="MySQL_ORB" interleave="false" is_managed="true" notify="false" ordered="false">
  <instance_attributes id="MySQL_ORB_inst_attr">
    <attributes>
      <nvpair id="MySQL_ORB_attr_0" name="clone_max" value="2"/>
      <nvpair id="MySQL_ORB_attr_1" name="clone_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive class="ocf" id="mysql_orb1" is_managed="true" provider="heartbeat" type="mysql_orb">
    <operations>
      <op id="mysql_orb_mon" interval="31s" name="monitor" on_fail="stop" timeout="30s"/>
    </operations>
  </primitive>
</clone>

Franck Ganachaud wrote:
Well, on a 2-node cluster using heartbeat 2.1.2, I have one cloned resource and one group running preferably on nodeA, but if the clone fails or is off, the group should migrate to nodeB.

When I stop the service, the cloned resource mysql_orb fails on nodeA, but I don't know what to do to make group_1 go to nodeB. I don't know what to put in the configuration file to make it work; the following doesn't work: <rsc_colocation from="group_1" id="web_if_mysql" score="INFINITY" to="MySQL_ORB"/>

You can find cib.xml attached and log messages.

Thanks,
Franck.

------------------------------------------------------------------------

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


--
GANACHAUD Franck
Consultant
Tel. : +33 (0)2 98 05 43 21
http://www.altran.com
--
Technopôle Brest Iroise
Site du Vernis - CS 23866
29238 Brest Cedex 3 - France
Tel. : +33 (0)2 98 05 43 21
Fax. : +33 (0)2 98 05 20 34
e-mail: [EMAIL PROTECTED]

