Thank you, Ken. Got it :)

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Ken Gaillot
Sent: 6 March 2018, 7:18
To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave resource has one VIP
On Sun, 2018-02-25 at 02:24 +0000, 范国腾 wrote:
> Hello,
>
> If all of the slave nodes crash, none of the slave VIPs will work.
> Is there any way to make all of the slave VIPs bind to the master
> node when there are no slave nodes left in the system? That way the
> client will not notice that the system has a problem.
>
> Thanks

Hi,

If you colocate all the slave IPs "with pgsql-ha" instead of "with slave pgsql-ha", then they can run on either master or slave nodes. Including the master IP in the anti-colocation set will keep them apart normally.

> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Tomas Jelinek
> Sent: 23 February 2018, 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource has one VIP
>
> On 23.2.2018 at 10:16, 范国腾 wrote:
> > Tomas,
> >
> > Thank you very much. I made the change according to your suggestion
> > and it works.
> >
> > One question: if there are many nodes (e.g. 10 slave nodes in
> > total), I would need to run "pcs constraint colocation add
> > pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is
> > there a simpler command to do this?
>
> I think a colocation set does the trick:
>
>   pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 \
>     pgsql-slave-ip3 setoptions score=-INFINITY
>
> You may specify as many resources as you need in this command.
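For the ten-slave example above, the single colocation set Tomas describes would replace the 45 pairwise anti-colocation constraints. A hedged sketch, untested and assuming the naming scheme from the thread (pgsql-slave-ip1 … pgsql-slave-ip10 are hypothetical names) and pcs 0.9-era syntax:

```shell
# One colocation set with score=-INFINITY keeps every listed VIP on a
# different node, replacing O(n^2) pairwise "-INFINITY" constraints:
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 \
    pgsql-slave-ip3 pgsql-slave-ip4 pgsql-slave-ip5 pgsql-slave-ip6 \
    pgsql-slave-ip7 pgsql-slave-ip8 pgsql-slave-ip9 pgsql-slave-ip10 \
    setoptions score=-INFINITY
```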
>
> Tomas
>
> > Master/Slave Set: pgsql-ha [pgsqld]
> >      Masters: [ node1 ]
> >      Slaves: [ node2 node3 ]
> > pgsql-master-ip        (ocf::heartbeat:IPaddr2):       Started node1
> > pgsql-slave-ip1        (ocf::heartbeat:IPaddr2):       Started node3
> > pgsql-slave-ip2        (ocf::heartbeat:IPaddr2):       Started node2
> >
> > Thanks
> > Steven
> >
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Tomas Jelinek
> > Sent: 23 February 2018, 17:02
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] How to configure to make each slave resource has one VIP
> >
> > On 23.2.2018 at 08:17, 范国腾 wrote:
> > > Hi,
> > >
> > > Our system manages a database (one master and multiple slaves).
> > > We originally used one VIP for all of the slave resources.
> > >
> > > Now I want to change the configuration so that each slave resource
> > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > group has 2 VIPs; the 2 VIPs are bound to node1 and node2 now;
> > > when node2 fails, its VIP should move to node3.
> > >
> > > I used the following commands to add the VIPs:
> > >
> > >   pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
> > >   pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
> > >
> > > But now the two VIPs are on the same node:
> > >
> > >   Master/Slave Set: pgsql-ha [pgsqld]
> > >        Masters: [ node1 ]
> > >        Slaves: [ node2 node3 ]
> > >   pgsql-master-ip      (ocf::heartbeat:IPaddr2):      Started node1
> > >   Resource Group: pgsql-slave-group
> > >        pgsql-slave-ip1 (ocf::heartbeat:IPaddr2):      Started node2
> > >        pgsql-slave-ip2 (ocf::heartbeat:IPaddr2):      Started node2
> > >
> > > Could anyone tell me how to configure it so that each slave node
> > > has a VIP?
> >
> > Resources in a group always run on the same node. You want the IP
> > resources to run on different nodes, so you cannot put them into a
> > group.
> > This will take the resources out of the group:
> >
> >   pcs resource ungroup pgsql-slave-group
> >
> > Then you can set colocation constraints for them:
> >
> >   pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> >   pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> >
> > You may also need to tell pacemaker not to put both IPs on the same node:
> >
> >   pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
> >
> > Regards,
> > Tomas
> >
> > > Thanks
> > >
> > > _______________________________________________
> > > Users mailing list: Users@clusterlabs.org
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > >
> > > Project Home: http://www.clusterlabs.org
> > > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > Bugs: http://bugs.clusterlabs.org

--
Ken Gaillot <kgail...@redhat.com>
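Putting the whole thread together, the final recipe might look like the following. This is a sketch under the thread's assumptions (resource names pgsql-ha, pgsql-master-ip, pgsql-slave-ip1/2; pcs 0.9-era syntax), not a tested configuration:

```shell
# 1. Ungroup the VIPs -- resources in a group always share one node:
pcs resource ungroup pgsql-slave-group

# 2. Tie each slave VIP to the pgsql-ha clone. Ken's variant uses
#    "with pgsql-ha" rather than "with slave pgsql-ha", so the VIPs
#    can move to the master node if every slave node is lost:
pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha
pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha

# 3. One anti-colocation set, master VIP included, keeps all VIPs on
#    different nodes under normal operation:
pcs constraint colocation set pgsql-master-ip pgsql-slave-ip1 \
    pgsql-slave-ip2 setoptions score=-INFINITY
```

With this in place, each VIP normally lands on its own node, and losing all slaves degrades gracefully: the slave VIPs converge on the master instead of going offline.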