On Monday 12 November 2007 16:13:45, Andrew Beekhof wrote:
> On Nov 12, 2007, at 2:29 PM, Cousin Marc wrote:
> > Hi,
> > I'm trying to figure out the actual behaviour of heartbeat (I'm using
> > 2.1.2+hg.11310.702e4f418ca8-2, as provided for debian sid)
> >
> > I'm sorry if it's a bit long and complicated, don't hesitate to ask
> > me to
> > clarify some parts ...
> >
> >
> > I've detailed the steps I'm following, as I've found several docs in
> > the wiki
> > contradicting each other.
> >
> > I'm trying for my tests to build a 3 node cluster (drbdtest1, 2 and
> > 3), with 2
> > nodes hosting a drbd device, and one resource group having to be
> > collocated
> > with the master drbd resource.
> >
> > First, I create a drbd resource ...
> >
> > No problem here. Without constraints, it's running anywhere.
> >
> > I'm adding a location constraint telling heartbeat not to run on
> > drbdtest3 ...
> >
> > cibadmin -o constraints -C -X '
> > <rsc_location id="place_r0" rsc="cloner0">
> >          <rule id="prefered_place_r0" score="-INFINITY">
> >            <expression attribute="#uname" id="colocr0_1"
> > operation="eq"
> > value="drbdtest3"/>
> >          </rule>
> > </rsc_location>'
>
> that will work, but you're better off with an rsc_colocation constraint
> (also set score=-INFINITY)
>
> > It's OK, now it's not running on the third node. It's activating a
> > master on
> > drbdtest1.
> >
> > Then I create my resource group ('applisr0'), with no constraints
> > for now.
> >
> > It's OK, but of course, with no constraints, it's not running where
> > it should.
> > In fact, it's running on the third node.
> >
> > Then I put a constraint telling it not to run on a stopped node.
> >
> > cibadmin -o constraints -C -X '<rsc_colocation
> > id="r0_et_applis_stopped"
> > from="applisr0" to="cloner0" to_role="stopped" score="-INFINITY"/>'
> >
> > It's not working. The group is still trying to start on the third
> > node.
>
> Instead of a double negative, just go with a positive:
>
> cibadmin -o constraints -C -X '<rsc_colocation id="r0_et_applis"
> from="applisr0" to="cloner0" score="INFINITY"/>'
>
> > I've tried a lot of other configurations. I think that 'stopped' is
> > not
> > working...
> >
> > The most obvious way to verify it is this :
> >
> > if I replace the previous constraint with :
> > cibadmin -o constraints -U -X '<rsc_colocation
> > id="r0_et_applis_stopped"
> > from="applisr0" to="cloner0" to_role="stopped" score="+INFINITY"/>'
> >
> > the resource isn't running anywhere. I think it means that every
> > node is
> > considered to be in the stopped role.
> >
> >
> > I can make it work this way :
> >
> > cibadmin -o constraints -C -X '<rsc_colocation
> > id="r0_et_applis_master"
> > from="applisr0" to="cloner0" to_role="master" score="+INFINITY"/>'
> >
> >
> > So, to sum it up, it seems to me that the stopped role is seen as
> > true on
> > every node...
> >
> > Am I right ?
>
> For clones, this may be correct.
> Most people find the regular way sufficient but there's no argument
> that the double negative should also work.
>
> If you'd like to log a bug in bugzilla I will make sure it gets fixed.


Okay, but I still don't get it completely:

<rsc_colocation id="r0_et_applis_stopped" from="applisr0" to="cloner0" 
to_role="stopped" score="-INFINITY"/>

And
<rsc_colocation id="r0_et_applis_master" from="applisr0" to="cloner0" 
to_role="master" score="+INFINITY"/>

aren't supposed to be exactly the same:

The first tells heartbeat "NEVER run applisr0 where cloner0 is stopped".
The second tells heartbeat "ALWAYS run applisr0 where cloner0 is master".

The case where cloner0 is a slave isn't covered by the first one...

Anyway, my example wasn't perfect, but I may well want to run something 
everywhere except where cloner0 is master or slave...
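For that last case, I'd expect something like the following to work, using the 
same from/to/to_role syntax as above (the ids are made up, and I haven't 
actually tested this): two negative colocation constraints, one per role:

cibadmin -o constraints -C -X '<rsc_colocation id="no_applis_with_master"
from="applisr0" to="cloner0" to_role="master" score="-INFINITY"/>'

cibadmin -o constraints -C -X '<rsc_colocation id="no_applis_with_slave"
from="applisr0" to="cloner0" to_role="slave" score="-INFINITY"/>'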

I created this example while trying to follow the wiki page, and it didn't 
work because of the 'stopped' rule (and some XML errors too, by the way...). 
If there is no point in using the 'double negative' syntax, maybe we should 
clean up the wiki?

I'll file the bug report as soon as possible.
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems