Andrew,
I really appreciate your help; following your advice I was able to carry
it out the way I wanted.
Thanks.
Regards,

Guillermo


On Tue, 09-12-2008 at 22:23 +0100, Andrew Beekhof wrote:
> On Tue, Dec 9, 2008 at 19:40, guillermo <[EMAIL PROTECTED]> wrote:
> > Andrew;
> >
> > Sorry for having been so unclear. Basically the cluster is working
> > correctly: the services start at the right moment, and the filesystem
> > is even mounted after drbd starts and promotes the right node to
> > master. That is:
> >
> > when the cluster starts:
> >
> > - the virtual IP is set.
> > - drbd starts.
> > - mysql starts.
> > - asterisk starts.
> > - the node designated as master is promoted.
> >
> > Then the filesystem is mounted on the node that was designated as
> > master.
> >
> > So far so good. If, for example, I then shut down the cluster,
> 
> using "/etc/init.d/heartbeat stop" or something else?
> 
> > which is
> > the primary, the services stop in order and cleanly, and then they
> > come up on the other node in order and cleanly as well. As I've
> > already said, everything is OK there. So what is my problem? If, for
> > example, a service fails, such as mysql, what happens is that only
> > some of the services in group_1 move to the other node. The
> > services that move to the other node are:
> > ipaddr
> > mysql
> > and asterisk;
> > the drbd and the filesystem stay on the original node. What I need
> > is that, upon the first failure of any of the services, all of them
> > move to the other node.
> 
> So when a member of the group fails, you want _everything_ to move to
> the other node.
> Is that correct?
> 
> If so, then you need to colocate the group with drbd.
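> 
> Something like this should work — an untested sketch, using the same
> rsc_colocation syntax already in your cib.xml (the id is just a
> suggestion):
> 
>   <rsc_colocation id="group_1_with_ms-r0" to="ms-r0" to_role="master"
>     from="group_1" score="INFINITY"/>
> 
> With score="INFINITY", group_1 is only allowed to run where the drbd
> master is, so when the group has to move, the master role must move
> with it (and the filesystem follows via your existing fs0_on_r0
> constraint).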
> 
> > In the constraints, under resource locations, I have declared the
> > pbx-2 node as preferred when the services start; in the resource
> > ordering I declared that drbd starts before the filesystem is
> > mounted on the pbx-2 node; and in the colocation constraint, that
> > the filesystem is mounted on the same node where the drbd master is.
> >
> > This is what crm_resource -L gives me back:
> >
> >  Master/Slave Set: ms-r0
> >    r0:0        (ocf::heartbeat:drbd)
> >    r0:1        (ocf::heartbeat:drbd)
> >  Resource Group: group_1
> >    IPaddr_192_168_123_205      (ocf::heartbeat:IPaddr)
> >    mysql_2     (ocf::heartbeat:mysql)
> >    apache2_2   (lsb:apache2)
> >    asterisk_3  (lsb:asterisk)
> >  fs0     (ocf::heartbeat:Filesystem)
> >
> >
> > I hope I have been clear; any information you can provide is
> > welcome.
> > Regards,
> >
> > Guillermo
> >
> >
> > On Tue, 09-12-2008 at 11:40 +0100, Andrew Beekhof wrote:
> >> I'm sorry, but I'm having trouble parsing this...
> >>
> >> On Fri, Dec 5, 2008 at 19:25, guillermo <[EMAIL PROTECTED]> wrote:
> >> > I have the following problem and I am not able to solve it. I have
> >> > configured Heartbeat v2 plus DRBD on two nodes; one of them is
> >> > configured to act as DRBD master, so it has preference over the
> >> > other node when it has to start,
> >>
> >> are you talking about rsc_location or rsc_order constraints here?
> >>
> >> > because the master mounts the
> >> > file system. Together with DRBD I have configured other services,
> >> > and these services run on the master node. However, the problem is
> >> > that I want all of the services to move to the other node upon the
> >> > first failure of any of the resources, as I have configured with
> >> > the scores. For example, if I kill the mysql process, group_1
> >> > moves to the other node, but the DRBD and the mounted file system
> >> > remain on the same node. What I need is that, upon the failure of
> >> > any of the resources, all the services move to the other node.
> >>
> >> you lost me here... but looking at your configuration, I think you
> >> need some rsc_colocation and rsc_order constraints so that the group
> >> runs on the same machine as the filesystem (and starts after it too).
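> >>
> >> For example — an untested sketch, following the same constraint
> >> syntax as the rest of your cib.xml (the ids are just suggestions):
> >>
> >>   <rsc_colocation id="group_1_on_fs0" to="fs0" from="group_1"
> >>     score="INFINITY"/>
> >>   <rsc_order id="fs0_before_group_1" from="fs0" action="start"
> >>     type="before" to="group_1" to_action="start"/>
> >>
> >> The colocation keeps group_1 on the node where fs0 is mounted, and
> >> the order makes the group start only after the filesystem is up.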
> >>
> >> > I am using Debian etch, Heartbeat 2.1.4 (stable) and DRBD 0.7
> >> >
> >> > This is what crm_resource -L gives me back:
> >> >
> >> > Master/Slave Set: ms-r0
> >> >    r0:0        (ocf::heartbeat:drbd)
> >> >    r0:1        (ocf::heartbeat:drbd)
> >> > Resource Group: group_1
> >> >    IPaddr_192_168_123_205      (ocf::heartbeat:IPaddr)
> >> >    mysql_2     (ocf::heartbeat:mysql)
> >> >    apache2_2   (lsb:apache2)
> >> >    asterisk_3  (lsb:asterisk)
> >> > fs0     (ocf::heartbeat:Filesystem)
> >> >
> >> >
> >> > This is my cib.xml file:
> >> >
> >> >  <cib generated="true" admin_epoch="0" have_quorum="true"
> >> > ignore_dtd="false" num_peers="2" cib_feature_revision="2.0"
> >> > crm_feature_set="2.0" epoch="52" num_updates="3" cib-last-written="Fri
> >> > Dec  5 15:15:13 2008" ccm_transition="2"
> >> > dc_uuid="813044e0-db95-40b2-9e48-67ec2e3e6584">
> >> >   <configuration>
> >> >     <crm_config>
> >> >       <cluster_property_set id="cib-bootstrap-options">
> >> >         <attributes>
> >> >           <nvpair id="cib-bootstrap-options-dc-version"
> >> > name="dc-version" value="2.1.4-node:
> >> > aa909246edb386137b986c5773344b98c6969999"/>
> >> >           <nvpair id="cib-bootstrap-options-symmetric-cluster"
> >> > name="symmetric-cluster" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-no-quorum-policy"
> >> > name="no-quorum-policy" value="stop"/>
> >> >           <nvpair
> >> > id="cib-bootstrap-options-default-resource-stickiness"
> >> > name="default-resource-stickiness" value="100"/>
> >> >           <nvpair
> >> > id="cib-bootstrap-options-default-resource-failure-stickiness"
> >> > name="default-resource-failure-stickiness" value="-400"/>
> >> >           <nvpair id="cib-bootstrap-options-stonith-enabled"
> >> > name="stonith-enabled" value="false"/>
> >> >           <nvpair id="cib-bootstrap-options-stonith-action"
> >> > name="stonith-action" value="reboot"/>
> >> >           <nvpair id="cib-bootstrap-options-startup-fencing"
> >> > name="startup-fencing" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-stop-orphan-resources"
> >> > name="stop-orphan-resources" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-stop-orphan-actions"
> >> > name="stop-orphan-actions" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-remove-after-stop"
> >> > name="remove-after-stop" value="false"/>
> >> >           <nvpair id="cib-bootstrap-options-short-resource-names"
> >> > name="short-resource-names" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-transition-idle-timeout"
> >> > name="transition-idle-timeout" value="5min"/>
> >> >           <nvpair id="cib-bootstrap-options-default-action-timeout"
> >> > name="default-action-timeout" value="20s"/>
> >> >           <nvpair id="cib-bootstrap-options-is-managed-default"
> >> > name="is-managed-default" value="true"/>
> >> >           <nvpair id="cib-bootstrap-options-cluster-delay"
> >> > name="cluster-delay" value="60s"/>
> >> >           <nvpair id="cib-bootstrap-options-pe-error-series-max"
> >> > name="pe-error-series-max" value="-1"/>
> >> >           <nvpair id="cib-bootstrap-options-pe-warn-series-max"
> >> > name="pe-warn-series-max" value="-1"/>
> >> >           <nvpair id="cib-bootstrap-options-pe-input-series-max"
> >> > name="pe-input-series-max" value="-1"/>
> >> >         </attributes>
> >> >       </cluster_property_set>
> >> >     </crm_config>
> >> >     <nodes>
> >> >       <node id="813044e0-db95-40b2-9e48-67ec2e3e6584" uname="pbx-2"
> >> > type="normal"/>
> >> >       <node id="91471b98-85a3-4f9c-a414-1f889143c8be" uname="pbx-1"
> >> > type="normal"/>
> >> >     </nodes>
> >> >     <resources>
> >> >       <master_slave id="ms-r0">
> >> >         <meta_attributes id="ma-ms-r0">
> >> >           <attributes>
> >> >             <nvpair id="ma-ms-r0-1" name="clone_max" value="2"/>
> >> >             <nvpair id="ma-ms-r0-2" name="clone_node_max" value="1"/>
> >> >             <nvpair id="ma-ms-r0-3" name="master_max" value="1"/>
> >> >             <nvpair id="ma-ms-r0-4" name="master_node_max" value="1"/>
> >> >             <nvpair id="ma-ms-r0-5" name="notify" value="yes"/>
> >> >             <nvpair id="ma-ms-r0-6" name="globally_unique"
> >> > value="false"/>
> >> >             <nvpair id="ma-ms-r0-7" name="target_role"
> >> > value="started"/>
> >> >           </attributes>
> >> >         </meta_attributes>
> >> >         <primitive id="r0" class="ocf" provider="heartbeat"
> >> > type="drbd">
> >> >           <instance_attributes id="ia-r0">
> >> >             <attributes>
> >> >               <nvpair id="ia-r0-1" name="drbd_resource" value="r0"/>
> >> >             </attributes>
> >> >           </instance_attributes>
> >> >           <operations>
> >> >             <op id="op-r0-1" name="monitor" interval="59s"
> >> > timeout="10s" role="Master"/>
> >> >             <op id="op-r0-2" name="monitor" interval="60s"
> >> > timeout="10s" role="Slave"/>
> >> >           </operations>
> >> >         </primitive>
> >> >       </master_slave>
> >> >       <group id="group_1">
> >> >         <primitive class="ocf" id="IPaddr_192_168_123_205"
> >> > provider="heartbeat" type="IPaddr">
> >> >           <operations>
> >> >             <op id="IPaddr_192_168_123_205_mon" interval="5s"
> >> > name="monitor" timeout="5s"/>
> >> >           </operations>
> >> >           <instance_attributes id="IPaddr_192_168_123_205_inst_attr">
> >> >             <attributes>
> >> >               <nvpair id="IPaddr_192_168_123_205_attr_0" name="ip"
> >> > value="192.168.123.205"/>
> >> >               <nvpair id="IPaddr_192_168_123_205_attr_1" name="netmask"
> >> > value="24"/>
> >> >               <nvpair id="IPaddr_192_168_123_205_attr_2" name="nic"
> >> > value="eth1"/>
> >> >             </attributes>
> >> >           </instance_attributes>
> >> >         </primitive>
> >> >         <primitive class="ocf" id="mysql_2" provider="heartbeat"
> >> > type="mysql">
> >> >           <operations>
> >> >             <op id="mysql_2_mon" interval="120s" name="monitor"
> >> > timeout="60s"/>
> >> >           </operations>
> >> >         </primitive>
> >> >         <primitive class="lsb" id="apache2_2" provider="heartbeat"
> >> > type="apache2">
> >> >           <operations>
> >> >             <op id="apache2_2_mon" interval="120s" name="monitor"
> >> > timeout="60s"/>
> >> >           </operations>
> >> >         </primitive>
> >> >         <primitive class="lsb" id="asterisk_3" provider="heartbeat"
> >> > type="asterisk">
> >> >           <operations>
> >> >             <op id="asterisk_3_mon" interval="120s" name="monitor"
> >> > timeout="60s"/>
> >> >           </operations>
> >> >         </primitive>
> >> >       </group>
> >> >       <primitive class="ocf" provider="heartbeat" type="Filesystem"
> >> > id="fs0">
> >> >         <instance_attributes id="ia-group_1">
> >> >           <attributes>
> >> >             <nvpair id="ia-fs0-1" name="fstype" value="ext3"/>
> >> >             <nvpair id="ia-fs0-2" name="directory"
> >> > value="/replicated"/>
> >> >             <nvpair id="ia-fs0-3" name="device" value="/dev/drbd0"/>
> >> >           </attributes>
> >> >         </instance_attributes>
> >> >       </primitive>
> >> >     </resources>
> >> >     <constraints>
> >> >       <rsc_location id="rsc_location_group_1" rsc="group_1">
> >> >         <rule id="prefered_location_group_1" score="100">
> >> >           <expression attribute="#uname"
> >> > id="prefered_location_group_1_expr" operation="eq" value="pbx-2"/>
> >> >         </rule>
> >> >       </rsc_location>
> >> >       <rsc_location id="r0_master_on_pbx-2" rsc="ms-r0">
> >> >         <rule id="r0_master_on_pbx-2_rule1" role="master" score="100">
> >> >           <expression id="r0_master_on_pbx-2_expression1"
> >> > attribute="#uname" operation="eq" value="pbx-2"/>
> >> >         </rule>
> >> >       </rsc_location>
> >> >       <rsc_order id="r0_before_fs0" to_action="start" to="fs0"
> >> > type="before" action="promote" from="ms-r0"/>
> >> >       <rsc_colocation id="fs0_on_r0" to="ms-r0" to_role="master"
> >> > from="fs0" score="100"/>
> >> >     </constraints>
> >> >   </configuration>
> >> >  </cib>
> >> >
> >> >
> >> >
> >> > I would appreciate it if anyone can help me solve this.
> >> >
> >> > Regards
> >> >
> >> > Guillermo
> >> >
> >> > _______________________________________________
> >> > Linux-HA mailing list
> >> > [email protected]
> >> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> >> > See also: http://linux-ha.org/ReportingProblems
> >> >
> >
