Attached is the CIB I am using. By adjusting the scores on the
drbd0_m_likes_* rules I can migrate the drbd master between nodes, and the
filesystem cleanly unmounts first and remounts on the new master afterwards.

What I also need is for the services to migrate in response to a
failure or other score change on the grp_www group. I've tried many
permutations and I can't figure this out. The best I've come up with is
an in-place failure of the rsc_www_fs resource after I manually unmount it
a few times. At worst, Bad Things Happen.

As best I can tell, grp_www won't move to the slave node no matter
what. Perhaps because of the -INFINITY scores in the colocation constraints?

What I need is to have the other node become master and then have
grp_www start on it. Essentially I need the master state of drbd-ms to
effectively be the first member of grp_www. I know that cannot be done
overtly, but how does one get that effect?

What's the incantation to get the master_slave resource to change master in
response to a failure or score change on a colocated service?
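
(To make the target concrete: the shape I imagine, if such an incantation
exists, would be a positive colocation on the Master role, something like
the following. The id is made up and I don't know that this works.)

```xml
<!-- hypothetical: tie grp_www to whichever node currently holds the
     Master role of drbd-ms, rather than only forbidding slave/stopped -->
<rsc_colocation id="col:www_with_drbd0_master" from="grp_www"
                to="drbd-ms" to_role="master" score="INFINITY"/>
```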

I am running Heartbeat 2.0.8 on CentOS 4.4 i386 under VMware.
drbd is v0.7 with the modified/fixed drbd OCF script I posted earlier.

Alastair Young
Director, Operations
Ludi labs
399 West El Camino Real
Mountain View, CA 94040
Email: [EMAIL PROTECTED]
Direct: 650-241-0068
Mobile: 925-784-0812

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alastair N.
Young
Sent: Monday, April 23, 2007 2:19 PM
To: General Linux-HA mailing list
Subject: RE: [Linux-HA] Cannot create group containing drbd using HB GUI

I'm also wrangling with this issue (getting the drbd OCF agent to work in
V2 and logically grouping the master role with the services that depend on it).

One thing I've run into so far is that there appear to be some bugs in
the drbd ocf script.

1) In do_cmd(), "local cmd_out" is declared on the line immediately before
the exit status is read from $?, so $? reflects the local declaration (which
always succeeds, at least on CentOS 4.4 32-bit) rather than the drbdadm
command. Moving the declaration to an earlier line makes the function return
the drbdadm command's real exit status; since that status is checked
elsewhere, failures are now passed back as intended.
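
A minimal standalone reproduction of the pitfall (the function names here
are made up for illustration, they are not in the RA):

```shell
#!/bin/sh
# `local` is itself a command that succeeds, so declaring a local between
# the command of interest and the `$?` check clobbers the exit status.

buggy_do_cmd() {
    false            # stand-in for a failing drbdadm call: exit status 1
    local cmd_out    # `local` succeeds, resetting $? to 0
    return $?        # returns 0 - the failure is silently lost
}

fixed_do_cmd() {
    local cmd_out    # declare first...
    false            # ...then run the command
    return $?        # returns 1, as intended
}

buggy_do_cmd; echo "buggy returns $?"
fixed_do_cmd; echo "fixed returns $?"
```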

2) There needs to be a wait loop after the module is loaded, the same as
in the /etc/init.d/drbd script distributed with drbd. I inserted this into
drbd_start() (UDEV_TIMEOUT is set to 10 in the script header):

            # make sure udev has time to create the device files
            for RESOURCE in `$DRBDADM sh-resources`; do
                for DEVICE in `$DRBDADM sh-dev $RESOURCE`; do
                    UDEV_TIMEOUT_LOCAL=$UDEV_TIMEOUT
                    while [ ! -e $DEVICE ] && [ $UDEV_TIMEOUT_LOCAL -gt 0 ]; do
                        sleep 1
                        UDEV_TIMEOUT_LOCAL=$(( $UDEV_TIMEOUT_LOCAL - 1 ))
                    done
                done
            done

It takes several seconds after the modload returns for the /dev/drbd0
device to appear - and nothing works until it does.

3) A similar timer is needed in drbd_promote, as drbdadm won't let you go
"Primary" until the other node is no longer "Primary". I found that
heartbeat was firing off the promote on "b" slightly before the demote on
"a" had completed, causing a failure.

I added this (REMOTE_DEMOTE_TIMEOUT is set to 10 in the script header):

 drbd_get_status
 DEMOTE_TIMEOUT_LOCAL=$REMOTE_DEMOTE_TIMEOUT
 while [ "x$DRBD_STATE_REMOTE" = "xPrimary" ] && [ $DEMOTE_TIMEOUT_LOCAL -gt 0 ]; do
    sleep 1
    DEMOTE_TIMEOUT_LOCAL=$(( $DEMOTE_TIMEOUT_LOCAL - 1 ))
    drbd_get_status
 done
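
Both additions are the same countdown pattern; if one were refactoring, a
shared helper could factor it out (a sketch of my own; wait_until is not a
function in the script):

```shell
#!/bin/sh
# wait_until TIMEOUT CMD [ARGS...]
# Retry CMD once per second until it succeeds or TIMEOUT seconds elapse.
# Returns 0 on success, 1 on timeout.
wait_until() {
    local timeout=$1
    shift
    while ! "$@"; do
        timeout=$(( timeout - 1 ))
        [ "$timeout" -gt 0 ] || return 1
        sleep 1
    done
    return 0
}

# e.g. in drbd_start:  wait_until "$UDEV_TIMEOUT" test -e "$DEVICE"
```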

With these changes I was able to get drbd to start, stop and migrate
cleanly when I tweaked the location scores.

Getting the services dependent on that disk to do the same is still an
open question :-)

My modified drbd ocf script is attached, use at your own risk.


Alastair Young
Director, Operations
Ludi labs
399 West El Camino Real
Mountain View, CA 94040
Email: [EMAIL PROTECTED]
Direct: 650-241-0068
Mobile: 925-784-0812
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Martin Fick
Sent: Thursday, April 19, 2007 1:13 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Cannot create group containing drbd using HB GUI

Hi Doug,

I personally could not get the DRBD OCF agent to work; I am
using drbd 0.7.x, what about you?  I never tried a
master/slave setup, though.  I created my own drbd OCF agent;
it is on my site along with the CIB scripts.

http://www.theficks.name/bin/lib/ocf/drbd

You can even use the drbd CIBs as a starting place if
you want:

http://www.theficks.name/bin/lib/heartbeat/drbd


I just updated them all (CIBs and OCF agents) if you
want to try them out.


-Martin



--- Doug Knight <[EMAIL PROTECTED]> wrote:

> I made the ID change indicated below (for the colocation constraints),
> and everything configured fine using cibadmin. Now, I started JUST the
> drbd master/slave resource, with the rsc_location rule setting the
> expression uname to one of the two nodes in the cluster. Both drbd
> processes come up and sync up the partition, but both are still in
> slave/secondary mode (i.e. the rsc_location rule did not cause a
> promotion). Am I missing something here? This is the rsc_location
> constraint:
> 
> <rsc_location id="locate_drbd" rsc="rsc_drbd_7788">
>         <rule id="rule_drbd_on_dk" role="master" score="100">
>                 <expression id="exp_drbd_on_dk" attribute="#uname" operation="eq" value="arc-dknightlx"/>
>         </rule>
> </rsc_location>
> 
> (By the way, the example from the Idioms/MasterConstraints web page does
> not have an ID specified in the expression tag, so I added one to mine.)
> Doug
> On Thu, 2007-04-19 at 13:04 -0400, Doug Knight
> wrote:
> 
> > ...
> > 
> > > > > >>>> For example:
> > > > > >>>> <rsc_location id="drbd1_loc_nodeA" rsc="drbd1">
> > > > > >>>>     <rule id="pref_drbd1_loc_nodeA" score="600">
> > > > > >>>>          <expression attribute="#uname" operation="eq" value="nodeA" id="pref_drbd1_loc_nodeA_attr"/>
> > > > > >>>>     </rule>
> > > > > >>>>     <rule id="pref_drbd1_loc_nodeB" score="800">
> > > > > >>>>          <expression attribute="#uname" operation="eq" value="nodeB" id="pref_drbd1_loc_nodeB_attr"/>
> > > > > >>>>     </rule>
> > > > > >>>> </rsc_location>
> > > > > >>>>
> > > > > >>>> In this case, nodeB will be primary for resource drbd1. Is that what you were looking for?
> > > > > >>> Not like this, not when using the drbd OCF Resource Agent as a master-slave one. In that case, you need to bind the rsc_location to the role=Master as well.
> > > > > >> I was missing this in the CIB idioms page.  I just added it.
> > > > > >>
> > > > > >>    http://linux-ha.org/CIB/Idioms
> > 
> > 
> > I tried setting up colocation constraints similar to those shown in
> > the example referenced in the URL above, and it complained about the
> > identical ids:
> > 
> > [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_slave.xml 
> > <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave" from="fs_mirror" score="-infinity"/>
> > 
> > [EMAIL PROTECTED] xml]# more rule_fs_on_drbd_stopped.xml 
> > <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="stopped" from="fs_mirror" score="-infinity"/>
> > 
> > [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_stopped.xml 
> > 
> > [EMAIL PROTECTED] xml]# cibadmin -o constraints -C -x rule_fs_on_drbd_slave.xml 
> > Call cib_create failed (-21): The object already exists
> >  <failed>
> >    <failed_update id="fs_on_drbd" object_type="rsc_colocation" operation="add" reason="The object already exists">
> >      <rsc_colocation id="fs_on_drbd" to="rsc_drbd_7788" to_role="slave" from="fs_mirror" score="-infinity"/>
> >    </failed_update>
> >  </failed>
> > 
> > I'm going to change the ids to be unique and try again, but wanted to
> > point this out since it is very similar to the example on the web page.
> > 
> > Doug
> > 
> > 
> > 
> > > > > >> 
> http://linux-ha.org/CIB/Idioms/MasterConstraints

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
 <cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="2" cib_feature_revision="1.3" ccm_transition="14" generated="true" dc_uuid="c70fef6c-d0a4-446e-8e6a-c7c33f83e982" epoch="26" num_updates="2839">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <attributes>
           <nvpair id="cib-bootstrap-options-transition-idle-timeout" name="transition-idle-timeout" value="60"/>
           <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="FALSE"/>
           <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="FALSE"/>
           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="100"/>
           <nvpair id="cib-bootstrap-options-is-managed-default" name="is-managed-default" value="TRUE"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="TRUE"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-actions" name="stop-orphan-actions" value="TRUE"/>
           <nvpair name="default-resource-failure-stickiness" id="cib-bootstrap-options-default-resource-failure-stickiness" value="-1"/>
           <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1177149698"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node uname="virtual2" type="normal" id="c70fef6c-d0a4-446e-8e6a-c7c33f83e982">
         <instance_attributes id="nodes-c70fef6c-d0a4-446e-8e6a-c7c33f83e982">
           <attributes>
             <nvpair name="standby" id="standby-c70fef6c-d0a4-446e-8e6a-c7c33f83e982" value="off"/>
           </attributes>
         </instance_attributes>
       </node>
       <node uname="virtual1" type="normal" id="a1801cee-a452-4228-8796-2dbf9e378830">
         <instance_attributes id="nodes-a1801cee-a452-4228-8796-2dbf9e378830">
           <attributes>
             <nvpair name="standby" id="standby-a1801cee-a452-4228-8796-2dbf9e378830" value="off"/>
           </attributes>
         </instance_attributes>
       </node>
     </nodes>
     <resources>
       <master_slave ordered="false" interleave="false" notify="false" id="drbd-ms">
         <instance_attributes id="50f06de7-c596-4b1b-bc59-612cbd62c995">
           <attributes>
             <nvpair name="clone_max" value="2" id="1bbe5ab4-2307-452a-a7c9-61ef4d29386e"/>
             <nvpair name="clone_node_max" value="1" id="71c24a98-d080-4b6a-b693-688239f93ad8"/>
             <nvpair name="master_max" value="1" id="4a25d134-75d1-4874-989d-fabf3de6c01a"/>
             <nvpair name="master_node_max" value="1" id="e0ab1cdb-60c3-476d-b676-2ecaf86bbf95"/>
           </attributes>
         </instance_attributes>
         <primitive class="ocf" type="drbd" provider="heartbeat" id="drbd0">
           <instance_attributes id="02ed5b33-5fb1-4863-8e4c-785373874613">
             <attributes>
               <nvpair name="drbd_resource" value="fileserver" id="df971262-eaad-410a-9d09-5524bfd96a4b"/>
             </attributes>
           </instance_attributes>
           <operations/>
         </primitive>
         <instance_attributes id="drbd-ms">
           <attributes/>
         </instance_attributes>
       </master_slave>
       <group id="grp_www">
         <primitive id="rsc_www_fs" class="ocf" type="Filesystem" provider="heartbeat" is_managed="#default">
           <instance_attributes id="ce6aaf7f-e9fc-489f-8756-42dce8955d97">
             <attributes>
               <nvpair name="device" value="/dev/drbd0" id="6f3e58ec-1eb4-40ed-b606-52a69b3f6c19"/>
               <nvpair name="directory" value="/service/servers/www.roost.com" id="8532bf93-3dee-4850-b3a6-f23c54333fc1"/>
               <nvpair name="fstype" value="ext3" id="6b85606c-00d1-42fa-8a21-74e155daac80"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <instance_attributes id="grp:www_instance_attrs">
           <attributes>
             <nvpair id="grp:www_target_role" name="target_role" value="started"/>
           </attributes>
         </instance_attributes>
       </group>
     </resources>
     <constraints>
       <rsc_location id="loc:drbd0_likes_virtual1" rsc="drbd-ms">
         <rule id="rule:drbd0_likes_virtual1" score="100">
           <expression attribute="#uname" operation="eq" value="virtual1" id="75c73907-21e9-4d11-960f-3daac27ef72d"/>
         </rule>
       </rsc_location>
       <rsc_location id="loc:drbd0_likes_virtual2" rsc="drbd-ms">
         <rule id="rule:drbd0_likes_virtual2" score="100">
           <expression attribute="#uname" operation="eq" value="virtual2" id="faea6153-de7c-4940-a0b0-1f6bb2fc9993"/>
         </rule>
       </rsc_location>
       <rsc_location rsc="drbd-ms" id="loc:drbd0_m_likes_virtual2">
         <rule role="master" id="rule:drbd0_m_likes_virtual2" score="100">
           <expression attribute="#uname" operation="eq" value="virtual2" id="f0f1d952-13a3-43e4-b875-06c65a555b3e"/>
         </rule>
       </rsc_location>
       <rsc_location rsc="drbd-ms" id="loc:drbd0_m_likes_virtual1">
         <rule role="master" id="rule:drbd0_m_likes_virtual1" score="200">
           <expression attribute="#uname" operation="eq" value="virtual1" id="f1b278ad-65b9-494b-9594-70f45c1eda0c"/>
         </rule>
       </rsc_location>
       <rsc_location id="loc:www_likes_virtual1" rsc="grp_www">
         <rule id="rule:www_likes_virtual1" score="100">
           <expression attribute="#uname" operation="eq" value="virtual1" id="5db49278-9f88-48f2-9b06-1a8843d6ef13"/>
         </rule>
       </rsc_location>
       <rsc_location id="loc:www_likes_virtual2" rsc="grp_www">
         <rule id="rule:www_likes_virtual2" score="100">
           <expression attribute="#uname" operation="eq" value="virtual2" id="5db49278-9f88-48f2-9b06-1a8843d6ef14"/>
         </rule>
       </rsc_location>
       <rsc_order id="drbd_before_fs" from="grp_www" action="start" to="drbd-ms" to_action="promote" type="after" symmetrical="true"/>
       <rsc_colocation id="col:drbd0_stop_www" from="grp_www" to="drbd-ms" to_role="stopped" score="-INFINITY"/>
       <rsc_colocation id="col:drbd0_slave_www" from="grp_www" to="drbd-ms" to_role="slave" score="-INFINITY"/>
     </constraints>
   </configuration>
   <status>
     <node_state uname="virtual2" crmd="online" shutdown="0" in_ccm="true" ha="active" id="c70fef6c-d0a4-446e-8e6a-c7c33f83e982" join="member" expected="member" crm-debug-origin="do_update_resource">
       <transient_attributes id="c70fef6c-d0a4-446e-8e6a-c7c33f83e982">
         <instance_attributes id="status-c70fef6c-d0a4-446e-8e6a-c7c33f83e982">
           <attributes>
             <nvpair id="status-c70fef6c-d0a4-446e-8e6a-c7c33f83e982-probe_complete" name="probe_complete" value="true"/>
           </attributes>
         </instance_attributes>
       </transient_attributes>
       <lrm id="c70fef6c-d0a4-446e-8e6a-c7c33f83e982">
         <lrm_resources>
           <lrm_resource id="drbd0:0" type="drbd" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="drbd0:0_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" transition_key="3:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" transition_magic="4:7;3:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" call_id="2" crm_feature_set="1.0.7" rc_code="7" op_status="4" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
           </lrm_resource>
           <lrm_resource id="drbd0:1" type="drbd" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="drbd0:1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" transition_key="4:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" transition_magic="4:7;4:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" call_id="3" crm_feature_set="1.0.7" rc_code="7" op_status="4" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:1_start_0" operation="start" crm-debug-origin="build_active_RAs" transition_key="8:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" transition_magic="0:0;8:221:8e55f5c9-e4b3-4430-9f07-9c70946f254e" call_id="4" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:1_promote_0" operation="promote" crm-debug-origin="do_update_resource" transition_key="6:8:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;6:8:8470120e-dc2a-430e-bbf0-61785645c014" call_id="10" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:1_demote_0" operation="demote" crm-debug-origin="do_update_resource" transition_key="9:9:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;9:9:8470120e-dc2a-430e-bbf0-61785645c014" call_id="13" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
           </lrm_resource>
           <lrm_resource id="rsc_www_fs" type="Filesystem" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="rsc_www_fs_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="3:6:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:7;3:6:8470120e-dc2a-430e-bbf0-61785645c014" call_id="9" crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
             <lrm_rsc_op id="rsc_www_fs_start_0" operation="start" crm-debug-origin="do_update_resource" transition_key="19:8:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;19:8:8470120e-dc2a-430e-bbf0-61785645c014" call_id="11" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
             <lrm_rsc_op id="rsc_www_fs_stop_0" operation="stop" crm-debug-origin="do_update_resource" transition_key="18:9:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;18:9:8470120e-dc2a-430e-bbf0-61785645c014" call_id="12" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
           </lrm_resource>
         </lrm_resources>
       </lrm>
     </node_state>
     <node_state uname="virtual1" crmd="online" in_ccm="true" ha="active" join="member" shutdown="0" expected="member" id="a1801cee-a452-4228-8796-2dbf9e378830" crm-debug-origin="do_update_resource">
       <lrm id="a1801cee-a452-4228-8796-2dbf9e378830">
         <lrm_resources>
           <lrm_resource id="drbd0:1" type="drbd" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="drbd0:1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="5:2:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:7;5:2:8470120e-dc2a-430e-bbf0-61785645c014" call_id="3" crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
           </lrm_resource>
           <lrm_resource id="drbd0:0" type="drbd" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="drbd0:0_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="4:2:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:7;4:2:8470120e-dc2a-430e-bbf0-61785645c014" call_id="2" crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:0_start_0" operation="start" crm-debug-origin="do_update_resource" transition_key="6:2:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;6:2:8470120e-dc2a-430e-bbf0-61785645c014" call_id="4" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:0_promote_0" operation="promote" crm-debug-origin="do_update_resource" transition_key="6:9:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;6:9:8470120e-dc2a-430e-bbf0-61785645c014" call_id="12" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
             <lrm_rsc_op id="drbd0:0_demote_0" operation="demote" crm-debug-origin="do_update_resource" transition_key="9:8:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;9:8:8470120e-dc2a-430e-bbf0-61785645c014" call_id="11" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="a5a7a25e4014af7e44041f01d572b027"/>
           </lrm_resource>
           <lrm_resource id="rsc_www_fs" type="Filesystem" class="ocf" provider="heartbeat">
             <lrm_rsc_op id="rsc_www_fs_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" transition_key="5:6:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:7;5:6:8470120e-dc2a-430e-bbf0-61785645c014" call_id="8" crm_feature_set="1.0.7" rc_code="7" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
             <lrm_rsc_op id="rsc_www_fs_start_0" operation="start" crm-debug-origin="do_update_resource" transition_key="19:9:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;19:9:8470120e-dc2a-430e-bbf0-61785645c014" call_id="13" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
             <lrm_rsc_op id="rsc_www_fs_stop_0" operation="stop" crm-debug-origin="do_update_resource" transition_key="18:8:8470120e-dc2a-430e-bbf0-61785645c014" transition_magic="0:0;18:8:8470120e-dc2a-430e-bbf0-61785645c014" call_id="10" crm_feature_set="1.0.7" rc_code="0" op_status="0" interval="0" op_digest="8ed4e0afdd84ee6305ec99d786ce59e2"/>
           </lrm_resource>
         </lrm_resources>
       </lrm>
       <transient_attributes id="a1801cee-a452-4228-8796-2dbf9e378830">
         <instance_attributes id="status-a1801cee-a452-4228-8796-2dbf9e378830">
           <attributes>
             <nvpair id="status-a1801cee-a452-4228-8796-2dbf9e378830-probe_complete" name="probe_complete" value="true"/>
           </attributes>
         </instance_attributes>
       </transient_attributes>
     </node_state>
   </status>
 </cib>
