06.01.2019 8:16, Jason Pfingstmann wrote:
> I am new to corosync and pacemaker, having only used heartbeat in the
> past (which is barely even comparable, now that I'm in the middle of
> this). I'm working on a system for RDQM (IBM's MQ software,
> clustering solution) and it uses corosync with pacemaker. I set it
> up and had a 3 node cluster with resources available everywhere (any
> node could be made active). However, we need to set this up as 2
> nodes being available for the resources and 1 node to only
> function as a quorum device.
>
> To try to accomplish this, I first banned the services from one of
> the nodes (pcs resource ban <resource> <node>) and uninstalled those
> components from the node.
"pcs ban" is really intended for temporarily moving a resource off a
node; it even has a lifetime parameter so that its effect is cancelled
automatically. You should create a normal location constraint if you
want to permanently exclude a node from running a specific resource.

> Now when I do pcs status, I get Failed Actions:
> * <resource_monitor_0> on node1 'not installed'

This is the resource probe, which is run once when a node starts;
pacemaker uses it to find out which resources are currently active.

> Is there a "proper" way to remove a resource from a single node?

One possibility is to set the resource-discovery=never or
resource-discovery=exclusive option on the location constraint. Which
one to use depends on whether you run an opt-out or opt-in cluster
(i.e. whether by default every resource can run anywhere or nowhere).

It may also be possible to simply put the node in standby mode,
although I am not sure whether probes are still run in that case.

Finally, you can simply convert your cluster into a two-node cluster
:) and avoid all these issues. What is the point of keeping the third
node as a full cluster member if it will never run any resources?

> I did look up removing a resource from a specific node, but the only
> reference I could find was how to remove a resource from ALL nodes.
> Perhaps having the IBM tools create the cluster to begin with left me
> missing some fundamental knowledge I would have had if I did it from
> scratch, but for IBM to support our configuration, they require the
> use of their setup tools. They don't have any documentation on how
> to do a 2 node cluster with additional qdevice, so we're on our own
> for this part.

It sounds like you have already modified the configuration in an
incompatible way, thus possibly losing support.

> My apologies for not being able to copy/paste the text, but our
> system is air gapped and I have no way to do so.
>
> Thanks for any help with this.
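For reference, the options above could look roughly like the following.
Resource and node names (rdqm-resource, node3) are placeholders for
your own, and exact syntax may differ between pcs versions, so check
"pcs constraint location --help" on your system first:

```shell
# Permanent location constraint instead of a temporary ban:
pcs constraint location rdqm-resource avoids node3

# Or create the constraint with probes disabled on that node
# (resource-discovery=never), so the "not installed" probe
# failure does not reappear:
pcs constraint location add loc-rdqm-node3 rdqm-resource node3 \
    -INFINITY resource-discovery=never

# Alternatively, put the whole node in standby so it is never
# considered for any resource:
pcs node standby node3

# Or convert to a two-node cluster and use the third machine
# purely as a corosync quorum device:
pcs cluster node remove node3
pcs quorum device add model net host=node3 algorithm=ffsplit
```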
>
> Jason Pfingstmann
>
> _______________________________________________
> Users mailing list: [email protected]
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
