Hi Andrew

in fact, my real problem is simpler; here it is, described as a simplified 
example of my configuration:
(I'm working with pacemaker from RHEL 7.1 GA (1.1.12-22).)

Let's say for the example that we have, in a two-node configuration, these 5 
resources:
1 resource VGA (ocf:heartbeat:LVM)
1 resource fs-img-vm1 (ocf:heartbeat:Filesystem)
1 resource vm1 (ocf:heartbeat:VirtualDomain)
1 resource fs-img-vm2 (ocf:heartbeat:Filesystem)
1 resource vm2 (ocf:heartbeat:VirtualDomain)
knowing that both fs-img filesystems are on LVs in VGA

So I have to set these constraints:
preferred location VGA on node1
order fs-img-vm1 after VGA
order fs-img-vm2 after VGA
order vm1 after fs-img-vm1
order vm2 after fs-img-vm2
colocation fs-img-vm1 with VGA
colocation fs-img-vm2 with VGA
colocation vm1 with fs-img-vm1
colocation vm2 with fs-img-vm2
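Expressed in pcs syntax this would look roughly as follows (a sketch, not the exact commands from the cluster; the node1 preference score of 100 is an assumed value):

```shell
# Sketch of the constraints above in pcs syntax.
# NOTE: the node1 preference score (100) is an assumed value.
pcs constraint location VGA prefers node1=100
pcs constraint order start VGA then start fs-img-vm1
pcs constraint order start VGA then start fs-img-vm2
pcs constraint order start fs-img-vm1 then start vm1
pcs constraint order start fs-img-vm2 then start vm2
pcs constraint colocation add fs-img-vm1 with VGA INFINITY
pcs constraint colocation add fs-img-vm2 with VGA INFINITY
pcs constraint colocation add vm1 with fs-img-vm1 INFINITY
pcs constraint colocation add vm2 with fs-img-vm2 INFINITY
```

(Order constraints are symmetrical by default, so the matching stop ordering is implied.)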

But unfortunately I want vm2 (and fs-img-vm2) to never start on node2. Said 
another way: if VGA is on node1, all of the resources fs-img-vm1, fs-img-vm2, 
vm1 and vm2 must be started on node1; but if VGA for whatever reason has been 
migrated to node2, I want only VGA, fs-img-vm1 and vm1 started on node2, the 
other two, fs-img-vm2 and vm2, remaining "Stopped".

So I added the location constraint "fs-img-vm2 avoids node2" (hence +INF).
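In pcs syntax, that additional constraint is simply:

```shell
# Ban fs-img-vm2 from node2 ("avoids" maps to a -INFINITY score there).
pcs constraint location fs-img-vm2 avoids node2
```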

But testing this configuration, none of the 5 resources can ever start on node2.

In my understanding, given my choice of setting the colocations that way (fs 
with VGA, and vm with fs), I was thinking that pacemaker, when migrating to 
node2, could start VGA, fs-img-vm1 and vm1 on node2 and leave fs-img-vm2 and 
vm2 "Stopped".

Am I wrong?
And is there a way to get this behavior?
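For reference, the allocation scores pacemaker actually computed can be inspected on the live cluster; a minimal sketch:

```shell
# Show allocation scores (-s) from the live CIB (-L); this makes it
# visible whether the node2 ban on fs-img-vm2 is being propagated up
# through the colocation chain to VGA.
crm_simulate -sL
```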

Thanks a lot
Alain

________________________________________
De : [email protected] [[email protected]] 
de la part de Andrew Beekhof [[email protected]]
Envoyé : vendredi 28 août 2015 04:31
À : Please subscribe to [email protected] instead
Objet : Re: [Linux-HA] Antw: Question around resources constraints      
(pacemaker on RHE7.1)

> On 25 Aug 2015, at 6:18 pm, Ulrich Windl <[email protected]> 
> wrote:
>
>>>> "MOULLE, ALAIN" <[email protected]> wrote on 21.08.2015 at 15:27 in
> message
> <df84cff8a85ab546b2d53fff12267727022...@frauvj99ex5msx.ww931.my-it-solutions.net>:
>> Hi
>>
>> I can't find a way to configure constraints in pacemaker so that with these
>
>> resources:
>>
>> Res1
>> Res2
>> Res3
>> Res4
>> Res5
>>
>> with current colocation constraints :
>> Res2 with Res1
>> Res3 with Res2
>
> I think pacemaker still cannot do transitive location constraints;

?

> can you
> try
> R2 with R1
> R3 with R1
> (and related) instead?

No no no.
This has the opposite effect, a failure of R3 could easily result in /none/ of 
the resources moving.

Better to just change:

>> Res4 with Res1

to:
   Res4 with Res3

Newer versions might do better with the existing config too.
If not, attach a crm_report :)
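For what it's worth, that change could be applied with pcs along these lines (the constraint id below is hypothetical; `pcs constraint --full` shows the real ids):

```shell
pcs constraint --full                                  # list constraints with ids
pcs constraint remove colocation-Res4-Res1-INFINITY    # hypothetical id
pcs constraint colocation add Res4 with Res3 INFINITY
```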

>> Res5 with Res4
>>
>> and current order symmetrical constraints :
>> Res2 after Res1
>> Res3 after Res2
>>
>> Res4 after Res1
>> Res5 after Res4
>>
>> and migration-threshold=1 on all resources .
>>
>> What I want is that if I have a failure, for example on Res3, all the
>> 5 resources are migrated.
>>
>> Is there a solution ?
>>
>> For example , with an HA LVM configuration and VM resources :
>>
>> Res1=VGA
>> Res2=FS-img-VM1 (where the VM1 image and .xml are) (FS-img-VM1 is on a VGA LV)
>> Res3=VM1
>> Res4=FS-img-VM2 (where the VM2 image and .xml are) (FS-img-VM2 is on another
>> VGA LV)
>> Res5=VM2
>>
>> So the current constraints ensure that VGA is activated before the FS-img is
>> mounted and before the VM is started.
>>
>> But I want that if VM1 fails, it can migrate to another node (together
>> with the other 4 resources); yet with only the constraints above, the VGA
>> stalls the migration of the VM ...
>>
>> Is there any solution by constraints configuration ?
>> Note : we can't use group resources for now, because VMs are remote-nodes.
>>
>> Thanks
>> Alain Moullé
>> _______________________________________________
>> Linux-HA mailing list is closing down.
>> Please subscribe to [email protected] instead.
>> http://clusterlabs.org/mailman/listinfo/users
>> _______________________________________________
>> [email protected]
>> http://lists.linux-ha.org/mailman/listinfo/linux-ha
>
>
>

