Re: [ClusterLabs] Antw: [EXT] resource cloned group colocations

2023-03-02 Thread Gerald Vogt

On 02.03.23 14:30, Ulrich Windl wrote:

Gerald Vogt  schrieb am 02.03.2023 um 08:41 in Nachricht

<624d0b70-5983-4d21-6777-55be91688...@spamcop.net>:

Hi,

I am setting up a mail relay cluster whose main purpose is to maintain
the service IPs via IPaddr2 and move them between cluster nodes when
necessary.

The service IPs should only be active on nodes that are running all
necessary mail (systemd) services.

So I have set up a resource for each of those services, put them into a
group in the order they should start, and cloned the group since the
services are supposed to run on all nodes at all times.

Then I added an order constraint
start mail-services-clone then start mail1-ip
start mail-services-clone then start mail2-ip

and colocations to prefer running the IPs on different nodes, but only
with the clone running:

colocation add mail2-ip with mail1-ip -1000
colocation mail1-ip with mail-services-clone
colocation mail2-ip with mail-services-clone

as well as a location constraint to prefer running the first ip on the
first node and the second on the second

location mail1-ip prefers ha1=2000
location mail2-ip prefers ha2=2000

Now if I stop Pacemaker on one of those nodes, e.g. on node ha2, it's
fine: mail2-ip is moved immediately to ha3. Good.

However, if Pacemaker on ha2 starts up again, it will immediately remove
mail2-ip from ha3 and keep it offline while the services in the group are
starting on ha2. As the services unfortunately take some time to come
up, mail2-ip is offline for more than a minute.


That is because you wanted "mail2-ip prefers ha2=2000", so if the cluster _can_ 
run it there, then it will, even if it's running elsewhere.
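If the intent is instead for mail2-ip to stay on ha3 once it has failed
over, one option (not discussed in the thread; the score value is an
assumption, anything above the 2000 location preference works) is
resource stickiness, e.g.:

   crm_resource --resource mail2-ip --meta \
       --set-parameter resource-stickiness --parameter-value 3000

Note this also means the IP will never move back to ha2 on its own,
which may or may not be what is wanted.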

Maybe explain what you really want.


As I wrote before (and I have "fixed" my copy error above to use
consistent resource names now):


1. I want to run all required services on all running nodes at all times.

2. I want two service IPs, mail1-ip (ip1) and mail2-ip (ip2), running on
the cluster, but only on nodes where all required services are already
running (and not just starting).


3. Both IPs should be running on two different nodes if possible.

4. Preferably mail1-ip should be on node ha1 if ha1 is running with all 
required services.


5. Preferably mail2-ip should be on node ha2 if ha2 is running with all 
required services.


So most importantly: I want the IP resources mail1-ip and mail2-ip to be
active only on nodes which are already running all services. They should
only be moved to nodes on which all services are already running.
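One way to get exactly that behavior (a sketch, not something from the
thread: it assumes the ocf:pacemaker:attribute resource agent is
available, and the mail-ready name is invented here) is to make the last
member of the cloned group an attribute-setting resource, so a node
attribute is set only once every service on that node is up, and then
ban the IPs from nodes where the attribute is not set:

   crm configure primitive mail-ready ocf:pacemaker:attribute \
      params name=mail-ready active_value=1 inactive_value=0 \
      op monitor interval=10s
   # append mail-ready as the last resource of the cloned group, then:
   crm configure location mail1-ip-needs-ready mail1-ip \
      rule -inf: mail-ready ne 1
   crm configure location mail2-ip-needs-ready mail2-ip \
      rule -inf: mail-ready ne 1

With score -inf the IPs are banned from any node where mail-ready is not
1, i.e. where the group has not yet fully started. This would need
testing, in particular how quickly the attribute is cleared again when
the group stops on a node.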


Thanks,

Gerald
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] Antw: [EXT] resource cloned group colocations

2023-03-02 Thread Vladislav Bogdanov
On Thu, 2023-03-02 at 14:30 +0100, Ulrich Windl wrote:
> > > > Gerald Vogt  schrieb am 02.03.2023 um 08:41
> > > > in Nachricht
> <624d0b70-5983-4d21-6777-55be91688...@spamcop.net>:
> > Hi,
> > 
> > I am setting up a mail relay cluster whose main purpose is to
> > maintain the service IPs via IPaddr2 and move them between
> > cluster nodes when necessary.
> > 
> > The service IPs should only be active on nodes that are running
> > all necessary mail (systemd) services.
> > 
> > So I have set up a resource for each of those services, put them
> > into a group in the order they should start, and cloned the group
> > since they are supposed to run on all nodes at all times.
> > 
> > Then I added an order constraint
> >    start mail-services-clone then start mail1-ip
> >    start mail-services-clone then start mail2-ip
> > 
> > and colocations to prefer running the IPs on different nodes,
> > but only with the clone running:
> > 
> >    colocation add mail2-ip with mail1-ip -1000
> >    colocation ip1 with mail-services-clone
> >    colocation ip2 with mail-services-clone
> > 
> > as well as a location constraint to prefer running the first IP
> > on the first node and the second on the second
> > 
> >    location ip1 prefers ha1=2000
> >    location ip2 prefers ha2=2000
> > 
> > Now if I stop Pacemaker on one of those nodes, e.g. on node ha2,
> > it's fine: ip2 is moved immediately to ha3. Good.
> > 
> > However, if Pacemaker on ha2 starts up again, it will immediately
> > remove ip2 from ha3 and keep it offline while the services in the
> > group are starting on ha2. As the services unfortunately take some
> > time to come up, ip2 is offline for more than a minute.
> 
> That is because you wanted "ip2 prefers ha2=2000", so if the cluster
> _can_ run it there, then it will, even if it's running elsewhere.
> 

Pacemaker sometimes places actions in the transition in a suboptimal
order (from a human's point of view).
So instead of

start group on nodeB
stop vip on nodeA
start vip on nodeB

it runs

stop vip on nodeA
start group on nodeB
start vip on nodeB

So, if the start of the group takes a lot of time, then the VIP is not
available on any node during that start.
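Which order Pacemaker actually plans for a transition can be checked in
advance; crm_simulate ships with Pacemaker and replays the live CIB
read-only:

   # print the planned actions and the allocation scores
   crm_simulate --simulate --live-check --show-scores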

One more technique to minimize the time during which the VIP is stopped
would be to add resource migration support to IPaddr2.
That could help, but I'm not sure.
At least I know for sure that Pacemaker behaves differently with
migratable resources and MAY decide to use the first order I provided.
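For reference, the meta attribute controlling this is allow-migrate;
something like the following would only take effect with an agent that
implements the migrate_to/migrate_from actions (which would require
patching IPaddr2, so treat it as speculative):

   crm_resource --resource mail2-ip --meta \
      --set-parameter allow-migrate --parameter-value true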

> Maybe explain what you really want.
> 
> > 
> > It seems the colocations with the clone are already satisfied once
> > the clone group begins to start services and thus allow the IP to
> > be removed from the current node.
> > 
> > I was wondering: how can I define the colocation to be satisfied
> > only once all services in the clone have been started, and not
> > once the first service in the clone is starting?
> > 
> > Thanks,
> > 
> > Gerald
> > 
> > 


[ClusterLabs] Antw: [EXT] resource cloned group colocations

2023-03-02 Thread Ulrich Windl
>>> Gerald Vogt  schrieb am 02.03.2023 um 08:41 in Nachricht
<624d0b70-5983-4d21-6777-55be91688...@spamcop.net>:
> Hi,
> 
> I am setting up a mail relay cluster whose main purpose is to maintain
> the service IPs via IPaddr2 and move them between cluster nodes when
> necessary.
> 
> The service IPs should only be active on nodes that are running all
> necessary mail (systemd) services.
> 
> So I have set up a resource for each of those services, put them into a
> group in the order they should start, and cloned the group since they
> are supposed to run on all nodes at all times.
> 
> Then I added an order constraint
>start mail-services-clone then start mail1-ip
>start mail-services-clone then start mail2-ip
> 
> and colocations to prefer running the IPs on different nodes, but only
> with the clone running:
> 
>colocation add mail2-ip with mail1-ip -1000
>colocation ip1 with mail-services-clone
>colocation ip2 with mail-services-clone
> 
> as well as a location constraint to prefer running the first IP on the
> first node and the second on the second
> 
>location ip1 prefers ha1=2000
>location ip2 prefers ha2=2000
> 
> Now if I stop Pacemaker on one of those nodes, e.g. on node ha2, it's
> fine: ip2 is moved immediately to ha3. Good.
> 
> However, if Pacemaker on ha2 starts up again, it will immediately remove
> ip2 from ha3 and keep it offline while the services in the group are
> starting on ha2. As the services unfortunately take some time to come
> up, ip2 is offline for more than a minute.

That is because you wanted "ip2 prefers ha2=2000", so if the cluster _can_ run 
it there, then it will, even if it's running elsewhere.

Maybe explain what you really want.

> 
> It seems the colocations with the clone are already satisfied once the
> clone group begins to start services and thus allow the IP to be
> removed from the current node.
> 
> I was wondering: how can I define the colocation to be satisfied only
> once all services in the clone have been started, and not once the
> first service in the clone is starting?
> 
> Thanks,
> 
> Gerald
> 
> 