Thank you. I have considered the dual-master approach. With a single VIP controlling the connection point for NFS, I believe a dual-master configuration would work. But before I go that route, I want to make sure I can't get all my DRBD resources to promote on the same node through cluster configuration alone. I have yet to try the "set" construct, but the links I've read (courtesy of Jehan-Guillaume de Rorthais) make me optimistic.
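For reference, the set-based colocation constraint I plan to try would look something like the sketch below. It is untested, based on my reading of the pcs man page and using the clone names from my configuration further down; on newer Pacemaker/pcs releases "role=Master" may need to be "role=Promoted":

    # untested sketch: colocate the promoted (master) role of all three
    # promotable clones in a single colocation set
    pcs constraint colocation set drbdShare-clone drbdShareRead-clone \
        drbdShareWrite-clone role=Master setoptions score=INFINITY

If the asymmetry Andrei describes below also applies inside sets (he says it does), a member that cannot be promoted anywhere would still pull the dependent members down with it, which is exactly the behaviour I want to verify.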
Regards,
John

> I wonder:
> Is it possible to write some rule that sets the score to become master on
> a specific node higher than on another node?
> Maybe the solution is to run DRBD in dual-primary configuration ;-)
> So the "master" is always on the right node.
>
> Regards,
> Ulrich
>
>>>> "john tillman" <[email protected]> wrote on 15.03.2022 at 19:53 in
> message <[email protected]>:
>>> On 15.03.2022 19:35, john tillman wrote:
>>>> Hello,
>>>>
>>>> I'm trying to guarantee that all my cloned DRBD resources start on the
>>>> same node, and I can't figure out the syntax of the constraint to do it.
>>>>
>>>> I could nominate one of the DRBD resources as a "leader" and have all
>>>> the others follow it. But then, if something happens to that leader,
>>>> the others are left without a constraint.
>>>>
>>>
>>> Colocation is asymmetric. Resource B is colocated with resource A, so
>>> Pacemaker decides the placement of resource A first. If resource A cannot
>>> run anywhere (which is probably what you mean by "something happens to
>>> that leader"), resource B cannot run anywhere either. This is also true
>>> for resources inside a resource set.
>>>
>>> I do not think Pacemaker supports "always run these resources together,
>>> no matter how many of them can run".
>>>
>>
>> Huh, no way to get all the masters to start on the same node.
>> Interesting.
>>
>> The set construct has a boolean field "require-all". I'll try that
>> before I give up.
>>
>> Could I create a resource (some systemd service) that all the masters
>> are colocated with? It feels like a hack, but would it work?
>>
>> Thank you for the response.
>>
>> -John
>>
>>
>>>> I tried adding them to a group but got a syntax error from pcs saying
>>>> that I wasn't allowed to add cloned resources to a group.
>>>>
>>>> If anyone is interested, it started from this example:
>>>> https://edmondcck.medium.com/setup-a-highly-available-nfs-cluster-with-disk-encryption-using-luks-drbd-corosync-and-pacemaker-a96a5bdffcf8
>>>>
>>>> There's a DRBD partition that gets mounted onto a local directory. The
>>>> local directory is then mounted onto an exported directory (mount
>>>> --bind). Then the NFS service (Samba too) gets started, and finally
>>>> the VIP.
>>>>
>>>> Please note that while I have 3 DRBD resources currently, that number
>>>> may increase after the initial configuration is performed.
>>>>
>>>> I would just like to know a mechanism to make sure all the DRBD
>>>> resources are colocated. Any suggestions welcome.
>>>>
>>>> [root@nas00 ansible]# pcs resource
>>>>   * Clone Set: drbdShare-clone [drbdShare] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * Clone Set: drbdShareRead-clone [drbdShareRead] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * Clone Set: drbdShareWrite-clone [drbdShareWrite] (promotable):
>>>>     * Masters: [ nas00 ]
>>>>     * Slaves: [ nas01 ]
>>>>   * localShare (ocf::heartbeat:Filesystem): Started nas00
>>>>   * localShareRead (ocf::heartbeat:Filesystem): Started nas00
>>>>   * localShareWrite (ocf::heartbeat:Filesystem): Started nas00
>>>>   * nfsShare (ocf::heartbeat:Filesystem): Started nas00
>>>>   * nfsShareRead (ocf::heartbeat:Filesystem): Started nas00
>>>>   * nfsShareWrite (ocf::heartbeat:Filesystem): Started nas00
>>>>   * nfsService (systemd:nfs-server): Started nas00
>>>>   * smbService (systemd:smb): Started nas00
>>>>   * vipN (ocf::heartbeat:IPaddr2): Started nas00
>>>>
>>>> [root@nas00 ansible]# pcs constraint show --all
>>>> Location Constraints:
>>>> Ordering Constraints:
>>>>   promote drbdShare-clone then start localShare (kind:Mandatory)
>>>>   promote drbdShareRead-clone then start localShareRead (kind:Mandatory)
>>>>   promote drbdShareWrite-clone then start localShareWrite (kind:Mandatory)
>>>>   start localShare then start nfsShare (kind:Mandatory)
>>>>   start localShareRead then start nfsShareRead (kind:Mandatory)
>>>>   start localShareWrite then start nfsShareWrite (kind:Mandatory)
>>>>   start nfsShare then start nfsService (kind:Mandatory)
>>>>   start nfsShareRead then start nfsService (kind:Mandatory)
>>>>   start nfsShareWrite then start nfsService (kind:Mandatory)
>>>>   start nfsService then start smbService (kind:Mandatory)
>>>>   start nfsService then start vipN (kind:Mandatory)
>>>> Colocation Constraints:
>>>>   localShare with drbdShare-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   localShareRead with drbdShareRead-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   localShareWrite with drbdShareWrite-clone (score:INFINITY) (with-rsc-role:Master)
>>>>   nfsShare with localShare (score:INFINITY)
>>>>   nfsShareRead with localShareRead (score:INFINITY)
>>>>   nfsShareWrite with localShareWrite (score:INFINITY)
>>>>   nfsService with nfsShare (score:INFINITY)
>>>>   nfsService with nfsShareRead (score:INFINITY)
>>>>   nfsService with nfsShareWrite (score:INFINITY)
>>>>   smbService with nfsShare (score:INFINITY)
>>>>   smbService with nfsShareRead (score:INFINITY)
>>>>   smbService with nfsShareWrite (score:INFINITY)
>>>>   vipN with nfsService (score:INFINITY)
>>>> Ticket Constraints:
>>>>
>>>> Thank you for your time and attention.
>>>>
>>>> -John
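Following up on my own question above about colocating everything with a helper resource: what I had in mind is something like the untested sketch below. The resource name "drbdAnchor" is just a placeholder, and ocf:pacemaker:Dummy stands in for the systemd service I mentioned; on newer pcs the "master" keyword may be "Promoted":

    # untested sketch: a Dummy resource used purely as a placement anchor,
    # with each promotable clone's master role colocated with it
    pcs resource create drbdAnchor ocf:pacemaker:Dummy
    pcs constraint colocation add master drbdShare-clone with drbdAnchor INFINITY
    pcs constraint colocation add master drbdShareRead-clone with drbdAnchor INFINITY
    pcs constraint colocation add master drbdShareWrite-clone with drbdAnchor INFINITY

Given the asymmetry Andrei described, Pacemaker would place drbdAnchor first, and any master that cannot be promoted on that node would simply stay unpromoted, so this still would not give "run together no matter how many can run".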
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
