On Dec 11, 2007, at 9:10 AM, Dejan Muhamedagic wrote:

Hi,

On Mon, Dec 10, 2007 at 04:27:37PM +0100, Franck Ganachaud wrote:
Ok, here is where I am now.

I built a clone set and colocated the group1 with the clone.
But when I stop mysql, the clone's monitor operation (mysql_orb) reports that
the DB is down, yet group1 isn't relocated to server B.

Does anyone have a clue?

Can you post logs? If you have hb_report you can use that.

Here is a copy of the CIB:

<configuration>
  <crm_config>
    <cluster_property_set id="cib-bootstrap-options">
      <attributes>
        <nvpair id="cib-bootstrap-options-symmetric-cluster"
name="symmetric-cluster" value="true"/>
        <nvpair id="cib-bootstrap-options-no_quorum-policy"
name="no_quorum-policy" value="stop"/>
        <nvpair id="cib-bootstrap-options-default-resource-stickiness"
name="default-resource-stickiness" value="0"/>
        <nvpair
id="cib-bootstrap-options-default-resource-failure-stickiness"
name="default-resource-failure-stickiness" value="0"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-stonith-action"
name="stonith-action" value="reboot"/>
        <nvpair id="cib-bootstrap-options-stop-orphan-resources"
name="stop-orphan-resources" value="true"/>
        <nvpair id="cib-bootstrap-options-stop-orphan-actions"
name="stop-orphan-actions" value="true"/>
        <nvpair id="cib-bootstrap-options-remove-after-stop"
name="remove-after-stop" value="false"/>
        <nvpair id="cib-bootstrap-options-short-resource-names"
name="short-resource-names" value="true"/>
        <nvpair id="cib-bootstrap-options-transition-idle-timeout"
name="transition-idle-timeout" value="5min"/>
        <nvpair id="cib-bootstrap-options-default-action-timeout"
name="default-action-timeout" value="5s"/>
        <nvpair id="cib-bootstrap-options-is-managed-default"
name="is-managed-default" value="true"/>
      </attributes>
    </cluster_property_set>
  </crm_config>
  <nodes>
    <node id="0e8b2fa4-983b-4e56-a4a5-72dbb2aeaeec" type="normal"
uname="server_a">
      <instance_attributes
id="nodes-0e8b2fa4-983b-4e56-a4a5-72dbb2aeaeec">
        <attributes>
          <nvpair id="standby-0e8b2fa4-983b-4e56-a4a5-72dbb2aeaeec"
name="standby" value="off"/>
        </attributes>
      </instance_attributes>
    </node>
    <node id="5cdd04e8-035a-44cf-ab60-3065840109db" type="normal"
uname="server_b">
      <instance_attributes
id="nodes-5cdd04e8-035a-44cf-ab60-3065840109db">
        <attributes>
          <nvpair id="standby-5cdd04e8-035a-44cf-ab60-3065840109db"
name="standby" value="off"/>
        </attributes>
      </instance_attributes>
    </node>
    <node id="9e05d57a-ae9c-430d-a210-d03b9f37739e" type="normal"
uname="server_b"/>
    <node id="352f29b5-f0ed-4866-a839-71dbdbfd491d" type="normal"
uname="server_a"/>
  </nodes>

Nodes are listed twice.

  <resources>
    <clone id="MySQL_ORB" interleave="false" is_managed="true"
notify="false" ordered="false">
      <instance_attributes id="MySQL_ORB_inst_attr">
        <attributes>
          <nvpair id="MySQL_ORB_attr_0" name="clone_max" value="2"/>
<nvpair id="MySQL_ORB_attr_1" name="clone_node_max" value="1"/>
        </attributes>
      </instance_attributes>
      <primitive class="ocf" id="mysql_orb1" is_managed="false"
provider="heartbeat" type="mysql_orb">
        <operations>
          <op id="mysql_orb_mon" interval="30s" name="monitor"
on_fail="stop" timeout="30s"/>
        </operations>
      </primitive>
    </clone>
    <group id="group_1" restart_type="restart">
      <primitive class="ocf" id="IPaddr_Cluster" provider="heartbeat"
type="IPaddr">
        <operations>
          <op id="IPaddr_Cluster_mon" interval="5s" name="monitor"
timeout="5s"/>
        </operations>
        <instance_attributes id="IPaddr_Cluster_inst_attr">
          <attributes>
            <nvpair id="IPaddr_Cluster_attr_0" name="ip"
value="192.168.87.100"/>
          </attributes>
        </instance_attributes>
      </primitive>
     <primitive class="ocf" id="apache_2" provider="heartbeat"
type="apache">
        <operations>
          <op id="apache_2_mon" interval="30s" name="monitor"
timeout="30s"/>
        </operations>
        <instance_attributes id="apache_2_inst_attr">
          <attributes>
            <nvpair id="apache_2_attr_0" name="configfile"
value="/usr/local/apache/conf/httpd.conf"/>
          </attributes>
        </instance_attributes>
      </primitive>
    </group>
  </resources>
  <constraints>
    <rsc_location id="rsc_location_group_1" rsc="group_1">
      <rule id="prefered_location_group_1" score="100">
<expression attribute="#uname" id="prefered_location_group_1_expr"
operation="eq" value="server_a"/>
      </rule>
    </rsc_location>
    <rsc_colocation from="group_1" id="web_if_mysql" score="INFINITY"
to="mysql_orb1"/>

This constraint looks strange. I'm not sure how the cluster should
behave, because you have two clone instances and the web group is bound to
one of them.

That might still work.
The only problem I can think of is if the cluster chooses a node on which only the first clone instance is running, in which case the group wouldn't start.
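One way to avoid that ambiguity (a sketch only, not tested against this configuration) would be to point the colocation at the clone resource itself rather than at one child primitive, so the group can follow any healthy copy:

```xml
<!-- Hypothetical variant of the constraint above: reference the clone
     id MySQL_ORB instead of the child primitive mysql_orb1, so that
     group_1 is placed wherever a clone copy is running. -->
<rsc_colocation from="group_1" id="web_if_mysql" score="INFINITY"
    to="MySQL_ORB"/>
```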



Thanks,

Dejan

  </constraints>
</configuration>



Franck Ganachaud wrote:
Thanks for the tips both of you.

I'm going to work on that the next few days.

Andrew Beekhof wrote:

On Dec 5, 2007, at 1:44 PM, Dejan Muhamedagic wrote:

Hi,

On Wed, Dec 05, 2007 at 11:51:20AM +0100, Franck Ganachaud wrote:
Well, I don't want heartbeat to stop or start mysql.

You should be better off if you do. Otherwise, you'll probably
end up with an unmaintainable and complex configuration.

And if I colocate R1 with mysql, will moving R1 from server A to B
imply that mysql moves as well?

Not necessarily. Colocations don't have to be symmetrical.

Particularly not if it's a clone (i.e. a resource that has a copy running
on each node).

If you really object to having heartbeat manage mysql, use
is_managed=false for just the mysql resource.
With this setting, the cluster will never modify the state of your resource; it will only check its health and make decisions for R1 based on that.
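In the same CIB syntax as the configuration quoted above, the per-resource setting would look roughly like this (ids mirror Franck's config for illustration only):

```xml
<!-- mysql is monitored, but never started or stopped by the cluster,
     because is_managed="false" is set on this primitive only -->
<primitive class="ocf" id="mysql_orb1" is_managed="false"
    provider="heartbeat" type="mysql_orb">
  <operations>
    <op id="mysql_orb_mon" interval="30s" name="monitor" timeout="30s"/>
  </operations>
</primitive>
```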



Thanks,

Dejan

Franck.

Andrew Beekhof wrote:

On Dec 5, 2007, at 9:42 AM, Franck Ganachaud wrote:

I'll try to be more explicit.
I use the v2 configuration.

I have 2 servers, A and B, and a set (group) of resources, R1.
R1 runs by default on server A and jumps to server B in case of
trouble.
This is currently setup and working fine.

Now, I need to check that mysql is running on both server A and server B.

If mysql isn't OK on the server running R1, I need to stop
whateverdaemond and move R1 to the other server.
If mysql isn't OK on the server not running R1, I just need to stop
whateverdaemond.

I made an OCF agent that does the "stop whateverdaemond if mysql is
down" job.

Don't do that.
Write a proper agent for whateverdaemond (maybe you can use an
init script instead).

add a clone resource for mysql
add a clone resource for whateverdaemond
colocate whateverdaemond with mysql
colocate R1 with mysql
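In CIB v2 XML, the recipe above might look roughly like this. This is a hedged sketch: the resource ids, the "whateverd" agent name, and the scores are placeholders, not a tested configuration.

```xml
<!-- Two clones, one copy per node (clone_max=2, clone_node_max=1) -->
<clone id="mysql_clone">
  <instance_attributes id="mysql_clone_ia">
    <attributes>
      <nvpair id="mysql_clone_max" name="clone_max" value="2"/>
      <nvpair id="mysql_clone_node_max" name="clone_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <primitive class="ocf" id="mysql" provider="heartbeat" type="mysql"/>
</clone>
<clone id="whatever_clone">
  <instance_attributes id="whatever_clone_ia">
    <attributes>
      <nvpair id="whatever_clone_max" name="clone_max" value="2"/>
      <nvpair id="whatever_clone_node_max" name="clone_node_max" value="1"/>
    </attributes>
  </instance_attributes>
  <!-- "whateverd" stands in for a proper agent for whateverdaemond -->
  <primitive class="ocf" id="whateverd" provider="heartbeat"
      type="whateverd"/>
</clone>

<!-- whateverdaemond may only run where mysql is healthy,
     and R1 may only run where mysql is healthy -->
<rsc_colocation id="whateverd_with_mysql" from="whatever_clone"
    to="mysql_clone" score="INFINITY"/>
<rsc_colocation id="r1_with_mysql" from="R1"
    to="mysql_clone" score="INFINITY"/>
```

With this layout, a failed mysql monitor on one node should make that node ineligible for R1, so the group moves to the other node.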


This is where I get puzzled: how to translate the relation between the "if running R1" condition and the agent I wrote into heartbeat groups and
constraints.

Hope it's clearer.
Franck.

Dejan Muhamedagic wrote:
Hi,

On Tue, Dec 04, 2007 at 05:25:08PM +0100, Franck Ganachaud wrote:

Hi,

I have a 2-node cluster.
group1 is an active/passive set of resources.
group1 currently hops from one node to the other as intended
when one of the group's resources isn't available.

Now I have a second task for heartbeat.
On each node I have a mysql server; if it goes wrong, I must
shut down a service and migrate group1 (if it runs on this node)
to the other node.
And this is something I don't know how to do.

I was thinking about creating a group2 in an active/active
configuration that tests mysql and, if it goes wrong, just shuts down the service in group2's stop process. But I don't know how, in that case, to force group1 over to the other node if it's
running on the node where group2 failed.


Sorry, but I can't follow. Can you please rephrase?

Which configuration do you use: v1 or v2?

Thanks,

Dejan


Can anyone help me?
I hope it's clear.

Franck.
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems




--
GANACHAUD Franck
Consultant
Tel. : +33 (0)2 98 05 43 21
http://www.altran.com
--
Technopôle Brest Iroise
Site du Vernis - CS 23866
29238 Brest Cedex 3 - France
Tel. : +33 (0)2 98 05 43 21
Fax. : +33 (0)2 98 05 20 34
e-mail: [EMAIL PROTECTED]








