Hi Andrei, all,

So, what I want to achieve is: when both nodes are up, node1 preferentially holds the DRBD master role. If node1 fails, node2 should become master. If node1 then comes back online, it should become master again.
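(From your point about stickiness further down, I'm guessing that failing back implies resource-stickiness=0 on the clone, i.e. something like the untested guess below, using the names from my earlier mail:

pcs resource meta resourcedrbd0Clone resource-stickiness=0

though I appreciate that means an extra switchover every time node1 returns.)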

I also want to avoid node3 and node4 ever running drbd, since they don't have the disks.

Regarding the link below about promotion scores: what is the pcs command to achieve this? I'm not familiar with where the XML goes...
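Is it something like the following? (Just a guess from the pcs docs, untested, using the resource and node names from my earlier mail:)

pcs constraint location resourcedrbd0Clone rule role=master score=100 \#uname eq node1
pcs constraint location resourcedrbd0Clone rule role=master score=50 \#uname eq node2

Or do I need to dump the CIB with "pcs cluster cib > cib.xml", edit the XML by hand, and push it back with "pcs cluster cib-push cib.xml"?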



I notice that DRBD 9 has an auto-promote feature; perhaps that would help here, so that I could skip configuring DRBD in Pacemaker altogether? Is that how it is supposed to work, i.e. I can just concentrate on the overlying filesystem?
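In other words, something like this in the resource config (my guess only; disk0 is my resource, and I believe auto-promote is the default in DRBD 9 anyway, so correct me if I've got that wrong):

resource disk0 {
    options {
        auto-promote yes;
    }
    ...
}

and then Pacemaker would only manage the filesystem on top, with DRBD promoting whichever node opens the device for writing.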

Sorry that I'm being a bit slow about all this.

Thanks,
Alastair.

On Tue, 11 May 2021, Andrei Borzenkov wrote:


On 10.05.2021 20:36, Alastair Basden wrote:
Hi Andrei,

Thanks.  So, in summary, I need to:
pcs resource create resourcedrbd0 ocf:linbit:drbd drbd_resource=disk0 \
    op monitor interval=60s
pcs resource master resourcedrbd0Clone resourcedrbd0 master-max=1 \
    master-node-max=1 clone-max=2 clone-node-max=1 notify=true

pcs constraint location resourcedrbd0Clone prefers node1=100
pcs constraint location resourcedrbd0Clone prefers node2=50
pcs constraint location resourcedrbd0Clone avoids node3
pcs constraint location resourcedrbd0Clone avoids node4

Does this mean that it will prefer to run as master on node1, and slave
on node2?

No. I already told you so.

  If not, how can I achieve that?


The DRBD resource agent sets master scores based on disk state. If you
statically override this decision, you risk promoting a stale copy,
which means data loss (I do not know whether the agent allows it,
hopefully not; but then it will keep attempting to promote the wrong
copy and eventually fail). But if you insist, it is documented:

https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Explained/s-promotion-scores.html

Also, statically biasing a single node means the workload will be
relocated every time that node becomes available, which usually implies
additional downtime. That is something normally avoided (which is why
resource-stickiness exists).
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
