On 19 Nov 2013, at 3:21 am, Sean Lutner <s...@rentul.net> wrote:

> 
> On Nov 17, 2013, at 7:40 PM, Andrew Beekhof <and...@beekhof.net> wrote:
> 
>> 
>> On 15 Nov 2013, at 2:28 pm, Sean Lutner <s...@rentul.net> wrote:
>> 
>>>>>> Yes the varnish resources are in a group which is then cloned.
>>>>> 
>>>>> -EDONTDOTHAT
>>>>> 
>>>>> You can't refer to the things inside a clone.
>>>>> 1.1.8 will have just been ignoring those constraints.
>>>> 
>>>> So the implicit order and colocation constraints in a group and clone will 
>>>> take care of those?
>>>> 
>>>> Which means remove the constraints and retry the upgrade?
>> 
>> No, it means rewrite them to refer to the clone - whatever is the outermost 
>> container. 
> 
> I see, thanks. Did I miss that in the docs or is it undocumented/implied? If 
> I didn't miss it, it'd be nice if that were explicitly documented.

Assuming:

<clone id=X ...>
  <primitive id=Y .../>
</clone>

For most of pacemaker's existence, it simply wasn't possible because there was 
no resource named Y (the clone instances were actually called Y:0, Y:1, .. Y:N).
Then pacemaker was made smarter and, as a side effect, started being able to 
find something matching "Y".

Then I closed the loophole :)

So it was never legal.
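As an illustration (using the placeholder ids X and Y from above, plus a hypothetical resource "other" that should follow the clone), a constraint written against the inner primitive:

      <rsc_colocation id="col-other-Y" rsc="other" score="INFINITY" with-rsc="Y"/>

has to be rewritten against the outermost container instead:

      <rsc_colocation id="col-other-X" rsc="other" score="INFINITY" with-rsc="X"/>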

> 
> Is there a combination of constraints I can configure for a single IP 
> resource and a cloned group such that if there is a failure only the IP 
> resource will move?

These are implied simply because they're in a group:

      <rsc_order first="Varnish" id="order-Varnish-Varnishlog-mandatory" 
then="Varnishlog"/>
      <rsc_order first="Varnishlog" id="order-Varnishlog-Varnishncsa-mandatory" 
then="Varnishncsa"/>
      <rsc_colocation id="colocation-Varnishlog-Varnish-INFINITY" 
rsc="Varnishlog" score="INFINITY" with-rsc="Varnish"/>
      <rsc_colocation id="colocation-Varnishncsa-Varnishlog-INFINITY" 
rsc="Varnishncsa" score="INFINITY" with-rsc="Varnishlog"/>
This:

      <rsc_order first="ClusterEIP_54.215.143.166" 
id="order-ClusterEIP_54.215.143.166-Varnish-mandatory" then="Varnish"/>

can be replaced with:

      pcs constraint order start ClusterEIP_54.215.143.166 then 
EIP-AND-VARNISH-clone

and:

      <rsc_colocation id="colocation-Varnish-ClusterEIP_54.215.143.166-INFINITY" rsc="Varnish" score="INFINITY" with-rsc="ClusterEIP_54.215.143.166"/>

is the same as:

      pcs constraint colocation add EIP-AND-VARNISH-clone with ClusterEIP_54.215.143.166

but that makes no sense, because then the clone (which wants to run everywhere) 
could only run on the node the IP is on.
If that's what you want, then there is no point in having a clone.

Better to reverse the colocation to be:

      pcs constraint colocation add ClusterEIP_54.215.143.166 with EIP-AND-VARNISH-clone

and possibly the ordering too:

      pcs constraint order start EIP-AND-VARNISH-clone then ClusterEIP_54.215.143.166
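
Either way, a quick sanity check afterwards (with any reasonably recent pcs) is:

      pcs constraint

which should now only list constraints referencing ClusterEIP_54.215.143.166 and EIP-AND-VARNISH-clone, not the resources inside the clone.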


> Or in the case where I previously had the constraints applied to the 
> resources and not the clone, was that causing a problem?
> 
> Thanks again.
> 
> 
>> 
>>> 
>>> I was able to get the upgrade done. I also had to upgrade the libqb 
>>> package. I know that's been mentioned in other threads, but I think that 
>>> should either be a dependency of pacemaker or explicitly documented.
>> 
>> libqb is a dependency, just not a versioned one.
>> We should probably change that next time.
> 
> I would say it's a requirement that that be changed.

Well... pacemaker can build against older versions, but you need to run it with 
whatever version you built against.
Possibly the libqb versioning is being done incorrectly, preventing rpm from 
figuring this all out.
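
(For what it's worth, a versioned Requires in the pacemaker spec file would let 
rpm catch the mismatch at install time; something along these lines, with the 
version string here being only a placeholder:

      Requires: libqb >= 0.16.0
)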

> 
>> 
>>> 
>>> Second order of business is that failover is no longer working as expected. 
>>> Because the order and colocation constraints are gone, if one of the 
>>> varnish resources fails, the EIP resource does not move to the other node 
>>> like it used to.
>>> 
>>> Is there a way I can create or re-create that behavior?
>> 
>> See above :)
>> 
>>> 
>>> The resource group EIP-AND_VARNISH has the three varnish services and is 
>>> then cloned, so it runs on both nodes. If any of them fail I want the EIP 
>>> resource to move to the other node.
>>> 
>>> Any advice for doing this?
>>> 
>>> Thanks


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
