Re: [ClusterLabs] Start resource only if another resource is stopped

2022-08-19 Thread Miro Igov
OK, thank you for the advice. I learnt about the attribute resource; I did
not know about it before.
 
I have dropped the idea of NFS failover and am now switching an IP to the
node which has all of the required components.

Node List:
  * Online: [ intranet-test1 intranet-test2 nas-sync-test1 nas-sync-test2 ]

Full List of Resources:
  * admin-ip        (ocf::heartbeat:IPaddr2):    Started nas-sync-test2
  * stonith-sbd     (stonith:external/sbd):      Started nas-sync-test1
  * data_2          (ocf::heartbeat:Filesystem): Started intranet-test2
  * data_1          (ocf::heartbeat:Filesystem): Started intranet-test1
  * nfs_export_1    (ocf::heartbeat:exportfs):   Started nas-sync-test1
  * nfs_server_1    (systemd:nfs-server):        Started nas-sync-test1
  * nfs_export_2    (ocf::heartbeat:exportfs):   Started nas-sync-test2
  * nfs_server_2    (systemd:nfs-server):        Started nas-sync-test2
  * nginx_1         (systemd:nginx):             Started intranet-test1
  * nginx_2         (systemd:nginx):             Started intranet-test2
  * mysql_1         (systemd:mysql):             Started intranet-test1
  * mysql_2         (systemd:mysql):             Started intranet-test2
  * php_1           (systemd:php5.6-fpm):        Started intranet-test1
  * php_2           (systemd:php5.6-fpm):        Started intranet-test2
  * intranet-ip     (ocf::heartbeat:IPaddr2):    Started intranet-test2
  * nginx_1_active  (ocf::pacemaker:attribute):  Started intranet-test1
  * nginx_2_active  (ocf::pacemaker:attribute):  Started intranet-test2
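
For anyone following along, a minimal crm shell sketch of how attribute
resources and a floating IP like these can be defined; the values below are
placeholders, not the actual configuration of this cluster:

# ocf:pacemaker:attribute sets a node attribute to active_value while the
# resource is running and to inactive_value while it is stopped.
crm configure primitive nginx_1_active ocf:pacemaker:attribute \
    params name=nginx_1_active active_value=1 inactive_value=0
crm configure primitive nginx_2_active ocf:pacemaker:attribute \
    params name=nginx_2_active active_value=1 inactive_value=0

# Floating intranet IP (address and netmask are placeholders).
crm configure primitive intranet-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.100.10 cidr_netmask=24 \
    op monitor interval=10s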


intranet-ip is allocated to the node which has all of the data_x, php_x,
mysql_x and nginx_x resources running. data_x in turn requires nfs_export_x
and nfs_server_x to be running on the sync nodes.
All is working well now, thanks.
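
A rough crm shell sketch of constraints that could express this kind of
placement; the constraint names, the single nginx_active attribute and the
scores are illustrative assumptions, not the configuration actually used
here:

# Keep the floating IP off any node whose nginx_active node attribute
# (maintained by an ocf:pacemaker:attribute resource) is missing or not 1.
crm configure location intranet-ip-needs-nginx intranet-ip \
    rule -inf: not_defined nginx_active or nginx_active ne 1

# Bring up the NFS server and export on the sync node before mounting data_1.
crm configure order nfs_1_before_data_1 inf: nfs_server_1 nfs_export_1 data_1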

-----Original Message-----
From: Users  On Behalf Of Andrei Borzenkov
Sent: Thursday, August 18, 2022 21:26
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Start resource only if another resource is stopped

On 17.08.2022 16:58, Miro Igov wrote:
> As you guessed I am using crm res stop nfs_export_1.
> I tried the solution with the attribute and it does not work correctly.
> 

It does what you asked for originally, but you are shifting the goalposts
...

> When I stop nfs_export_1 it stops data_1 and data_1_active, then it starts
> data_2_failover - so far so good.
> 
> When I start nfs_export_1 it starts data_1, starts data_1_active and
> then stops data_2_failover as a result of order
> data_1_active_after_data_1 and location data_2_failover_if_data_1_inactive.
> 
> But stopping data_2_failover unmounts the mount, and the end result is
> that no NFS export is mounted at all:
> 

Nowhere before did you mention that you have two resources managing the same
mount point.

...
> Aug 17 15:24:52 intranet-test1 Filesystem(data_1)[16382]: INFO: 
> Running start for nas-sync-test1:/home/pharmya/NAS on 
> /data/synology/pharmya_office/NAS_Sync/NAS
> Aug 17 15:24:52 intranet-test1 Filesystem(data_1)[16382]: INFO: 
> Filesystem /data/synology/pharmya_office/NAS_Sync/NAS is already mounted.
...
> Aug 17 15:24:52 intranet-test1 Filesystem(data_2_failover)[16456]: INFO:
> Trying to unmount /data/synology/pharmya_office/NAS_Sync/NAS
> Aug 17 15:24:52 intranet-test1 systemd[1]:
> data-synology-pharmya_office-NAS_Sync-NAS.mount: Succeeded.

This configuration is wrong - period. The Filesystem agent's monitor action
checks for a mounted mountpoint, so Pacemaker cannot determine which
resource is started. You may get away with it because by default Pacemaker
does not run a recurring monitor for an inactive resource, but any probe
will give wrong results.

It is almost always wrong to have multiple independent pacemaker resources
managing the same underlying physical resource.

It looks like you are attempting to reimplement a highly available NFS
server on the client side. If you insist on this, the only solution I see
is a separate resource agent that monitors the state of the export/data
resources and sets an attribute accordingly. But effectively you will be
duplicating Pacemaker logic.
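
A very rough sketch of the kind of watcher agent this describes, assuming
the approach of publishing the observed state as a node attribute; the
attribute name, state file location and overall structure are assumptions,
and a real OCF agent would also need proper metadata and a validate-all
action:

#!/bin/sh
# Sketch only: observes whether the NFS data is mounted and publishes the
# result as a node attribute, instead of mounting/unmounting anything itself.
MOUNTPOINT="/data/synology/pharmya_office/NAS_Sync/NAS"
ATTR_NAME="nfs_data_available"                        # hypothetical name
STATE_FILE="${HA_RSCTMP:-/run/resource-agents}/nfs_data_watch.state"

publish_state() {
    # Record the observed state in a node attribute that location rules
    # elsewhere in the CIB can use.
    if mountpoint -q "$MOUNTPOINT"; then
        attrd_updater -n "$ATTR_NAME" -U 1
    else
        attrd_updater -n "$ATTR_NAME" -U 0
    fi
}

case "$1" in
    start)   touch "$STATE_FILE"; publish_state; exit 0 ;;   # 0 = OCF_SUCCESS
    stop)    rm -f "$STATE_FILE"; attrd_updater -n "$ATTR_NAME" -D; exit 0 ;;
    monitor) # The watcher counts as running while its state file exists;
             # its recurring monitor keeps the attribute up to date.
             if [ -e "$STATE_FILE" ]; then publish_state; exit 0; fi
             exit 7 ;;                                       # 7 = OCF_NOT_RUNNING
    meta-data) exit 0 ;;  # a real agent must print OCF metadata XML here
    *)       exit 3 ;;                                       # 3 = OCF_ERR_UNIMPLEMENTED
esac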



[ClusterLabs] Antw: [EXT] Re: Start resource only if another resource is stopped

2022-08-19 Thread Ulrich Windl
>>> Andrei Borzenkov  wrote on 18.08.2022 at 20:26 in message :
...
> It is almost always wrong to have multiple independent pacemaker
> resources managing the same underlying physical resource.

It's not the cluster bible that says "no man can have two masters" ;-),
but it applies to clusters quite nicely.

> 
> It looks like you are attempting to reimplement a highly available NFS
> server on the client side. If you insist on this, the only solution I see
> is a separate resource agent that monitors the state of the export/data
> resources and sets an attribute accordingly. But effectively you will be
> duplicating Pacemaker logic.

I never did it myself, but doesn't NFS allow specifying multiple sources,
i.e. redundancy (maybe just never on Linux)?
The only thing is: it was designed for read-only servers, as data
consistency becomes quite a challenge otherwise.







Re: [ClusterLabs] Start resource only if another resource is stopped

2022-08-19 Thread Klaus Wenninger
On Thu, Aug 18, 2022 at 8:26 PM Andrei Borzenkov  wrote:
> ...
> It looks like you are attempting to reimplement a highly available NFS
> server on the client side. If you insist on this, the only solution I see
> is a separate resource agent that monitors the state of the export/data
> resources and sets an attribute accordingly. But effectively you will be
> duplicating Pacemaker logic.

As Ulrich already pointed out earlier in this thread, this sounds a bit as
if the concept of promotable resources might be helpful here, so that at
least part of the logic is handled by Pacemaker. But as Andrei says, you'll
need a custom resource agent for that. Maybe it could be done in a generic
way so that the community might adopt it in the end. At least I'm not aware
that such a thing already exists, but ...
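
For reference, a minimal sketch of what a promotable clone looks like in
crm shell; nfs_data_watch stands in for the kind of custom,
promotion-capable agent discussed in this thread (it is not an existing
agent, and it would need to implement promote/demote actions):

# "nfs_data_watch" is a placeholder for a custom promotion-capable agent.
crm configure primitive nfs_data_watch ocf:custom:nfs_data_watch \
    op monitor interval=10s

# Run it as a promotable clone: one copy per node, at most one promoted.
crm configure clone nfs_data_watch-clone nfs_data_watch \
    meta promotable=true promoted-max=1 promoted-node-max=1

# Other resources can then follow the promoted instance, for example:
# (on older Pacemaker versions the role is called Master instead of Promoted)
crm configure colocation data_1-with-promoted inf: data_1 nfs_data_watch-clone:Promoted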

Klaus

