Re: [ClusterLabs] How to set up fencing/stonith

2018-05-15 Thread Andrei Borzenkov
16.05.2018 06:52, Casey & Gina wrote:
> Hi, I'm trying to figure out how to get fencing/stonith going with
> pacemaker.
> 
> As far as I understand it, they are both part of the same thing -
> setting up stonith means setting up fencing.  If I'm mistaken on
> that, please let me know.
> 

They are often used interchangeably, although strictly speaking fencing
refers to making sure the victim node cannot access a (shared) resource,
while stonith refers to making sure the victim node is not running at all,
usually by powering it off externally. Fencing in this strict sense is more
limited, as there are also non-shared resources that still must be
arbitrated (an IP address is the best example).

> Specifically, I'm wanting to use the external/vcenter plugin.  I've
> got the required vCenter CLI software installed and tested with
> `gethosts`, `on`, `off`, etc. commands as per
> /usr/share/doc/cluster-glue/stonith/README.vcenter.  I'm struggling
> to understand how to now get it set up with pacemaker.
> 
> Both the aforementioned document as well as
> https://www.hastexo.com/resources/hints-and-kinks/fencing-vmware-virtualized-pacemaker-nodes/
> have instructions for crm, not pcs, and I'm not sure how exactly to
> translate one to the other.  What I've done before in this
> circumstance is to install crmsh, execute the crm-based command, then
> look at the resulting .xml and try to figure out a pcs command that
> creates an equivalent result.  Anyways, those two instructions give
> very different commands, and I don't really understand either.
> 
> Firstly, I'll start with the documentation file included on my
> system, as I'm assuming that should be the most authoritative.  It
> provides the following two commands as examples:
> 
> crm configure primitive vfencing stonith::external/vcenter params \
>   VI_SERVER="10.1.1.1" VI_CREDSTORE="/etc/vicredentials.xml" \
>   HOSTLIST="hostname1=vmname1;hostname2=vmname2" RESETPOWERON="0" \
>   op monitor interval="60s"
> 
> crm configure clone Fencing vfencing
> 
> Why is the second line there?  What does it do?  Is it necessary?
> Unfortunately the document doesn't give any explanation.
> 

My understanding is that this is legacy. Once upon a time a stonith
resource had to be started on a node to be usable there. Today a stonith
resource only provides monitoring, and stonithd will use the device even if
the pacemaker resource is not active. The only requirement is that the
resource is not prohibited from running on the node.
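
For the archives, a rough pcs equivalent of the crm example above might look
like the following. This is only a sketch: it assumes your pcs and
cluster-glue packages expose the external/vcenter plugin (check
"pcs stonith list" and "pcs stonith describe external/vcenter" first), and it
defines a single, uncloned stonith resource:

  pcs stonith create vfencing external/vcenter \
      VI_SERVER="10.1.1.1" VI_CREDSTORE="/etc/vicredentials.xml" \
      HOSTLIST="hostname1=vmname1;hostname2=vmname2" RESETPOWERON="0" \
      op monitor interval=60s
  pcs property set stonith-enabled=true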

> Secondly, looking at the web link above, it says to add a primitive
> for each node in the cluster, as well as a location.  This seems
> rather different than the above approach.  Which is more correct?
> 

A single primitive without an explicit constraint should actually be enough
with a more or less recent pacemaker. Of course, every node must fulfill the
requirements (like having vCLI installed); if there are reasons to avoid
doing that everywhere, you may restrict this resource to a subset of nodes.
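
For example, something like the following should keep the fencing resource
off a node that lacks the vCLI ("vfencing" and "node3" are only placeholders
for your resource and node names):

  pcs constraint location vfencing avoids node3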

> Lastly, searching the web for some documentation on how to do this
> with PCS, I came across
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicecreate-haar
> - which has yet another totally different way of doing things, by
> adding a "fencing device".  Attempting to fiddle around with
> fence_vmware command doesn't seem to get me anywhere - how is this
> related to the external/vcenter module?
> 

RH historically used the term "fencing" where heartbeat/pacemaker used
"stonith". As mentioned, they are in essence the same. Things may be
different in older RH versions, which used a different cluster stack; I am
not familiar with them.

> So I'm really confused about what I should do, and why there seems to
> be radically different ways presented, none of which I can easily
> grasp.  I assume these questions are the same regardless of which
> particular plugin is being used...
> 
> Is there some good documentation that explains this in better detail
> and can definitively tell me the best way of going about this,
> preferably with pcs?
> 
> Thank you,
> 



[ClusterLabs] How to set up fencing/stonith

2018-05-15 Thread Casey & Gina
Hi, I'm trying to figure out how to get fencing/stonith going with pacemaker.

As far as I understand it, they are both part of the same thing - setting up 
stonith means setting up fencing.  If I'm mistaken on that, please let me know.

Specifically, I'm wanting to use the external/vcenter plugin.  I've got the 
required vCenter CLI software installed and tested with `gethosts`, `on`, 
`off`, etc. commands as per /usr/share/doc/cluster-glue/stonith/README.vcenter. 
 I'm struggling to understand how to now get it set up with pacemaker.

Both the aforementioned document as well as 
https://www.hastexo.com/resources/hints-and-kinks/fencing-vmware-virtualized-pacemaker-nodes/
 have instructions for crm, not pcs, and I'm not sure how exactly to translate 
one to the other.  What I've done before in this circumstance is to install 
crmsh, execute the crm-based command, then look at the resulting .xml and try 
to figure out a pcs command that creates an equivalent result.  Anyways, those 
two instructions give very different commands, and I don't really understand 
either.

Firstly, I'll start with the documentation file included on my system, as I'm 
assuming that should be the most authoritative.  It provides the following two 
commands as examples:

crm configure primitive vfencing stonith::external/vcenter params \
  VI_SERVER="10.1.1.1" VI_CREDSTORE="/etc/vicredentials.xml" \
  HOSTLIST="hostname1=vmname1;hostname2=vmname2" RESETPOWERON="0" \
  op monitor interval="60s"

crm configure clone Fencing vfencing

Why is the second line there?  What does it do?  Is it necessary?  
Unfortunately the document doesn't give any explanation.

Secondly, looking at the web link above, it says to add a primitive for each 
node in the cluster, as well as a location.  This seems rather different than 
the above approach.  Which is more correct?

Lastly, searching the web for some documentation on how to do this with PCS, I 
came across 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicecreate-haar
 - which has yet another totally different way of doing things, by adding a 
"fencing device".  Attempting to fiddle around with fence_vmware command 
doesn't seem to get me anywhere - how is this related to the external/vcenter 
module?

So I'm really confused about what I should do, and why there seems to be 
radically different ways presented, none of which I can easily grasp.  I assume 
these questions are the same regardless of which particular plugin is being 
used...

Is there some good documentation that explains this in better detail and can 
definitively tell me the best way of going about this, preferably with pcs?

Thank you,
-- 
Casey


[ClusterLabs] Pacemaker 2.0.0-rc4 now available

2018-05-15 Thread Ken Gaillot
Source code for the fourth (and likely final) release candidate for
Pacemaker version 2.0.0 is now available at:

https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.0-rc4

This release restores the possibility of rolling (live) upgrades from
Pacemaker 1.1.11 or later, on top of corosync 2 or 3. (Rolling upgrades
were accidentally broken in rc3.) Other setups can be upgraded with the
cluster stopped.

If upgrading a cluster with bundle resources using the default run-
command and container images running an older Pacemaker, special care
must be taken. Before upgrading, the bundle resource should be modified
to use an explicit run-command of /usr/sbin/pacemaker_remoted.
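
For pcs users, one possible way to make that change is sketched below. The
exact bundle syntax depends on the pcs version, so verify it against
"pcs resource bundle update --help"; "my-bundle" is a placeholder for the
actual bundle id:

  pcs resource bundle update my-bundle container run-command=/usr/sbin/pacemaker_remoted

The end result in the CIB should be a run-command attribute on the bundle's
container element, for example <docker ... run-command="/usr/sbin/pacemaker_remoted"/>.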

For more details, including bug fixes, see the change log:

  https://github.com/ClusterLabs/pacemaker/blob/2.0/ChangeLog

and a special wiki page for the 2.0 release:

  https://wiki.clusterlabs.org/wiki/Pacemaker_2.0_Changes

I expect the final release next week, with no major changes from this
candidate.

Everyone is encouraged to download, compile and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all contributors of source code to this release,
including Gao,Yan, Hideo Yamauchi, Jan Pokorný, and Ken Gaillot.
-- 
Ken Gaillot 


Re: [ClusterLabs] pacemaker as data store

2018-05-15 Thread Ken Gaillot
On Tue, 2018-05-15 at 13:25 +0300, George Melikov wrote:
> Hello, 
> 
> Sorry for a (likely) dumb question,
> but is there a way to store and sync data via pacemaker/corosync?
> 
> Are there any way to store key/value properties or files?
> 
> I've found `pcs property set --force`, but it didn't survive cluster
> restart.

That's surprising; cluster properties (even unrecognized ones) should
persist. After setting one, try double-checking that it was written to
disk with pcs cluster cib | less. I would use some prefix (like the name
of your organization) for all property names, to make conflicts with real
properties less likely.
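
For example (the property name here is just an illustrative placeholder):

  pcs property set --force myorg_app_version=1.2.3
  pcs cluster cib | grep myorg_app_version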

Permanent node attributes are another possibility, though they record a
separate value for each node. The values of any node, however, can be
queried from any other node. That means you could just pick one node
and set all your name/value pairs using its name.
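
A minimal sketch of that approach, assuming a pcs version that provides the
"pcs node attribute" command (node name and attribute name are placeholders):

  pcs node attribute node1 myorg_primary_site=dc1
  pcs node attribute node1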

However, there's a reason not to use pacemaker for this purpose:
changes to cluster properties or node attributes will trigger a new
calculation of where resources should be. It won't cause any harm, but
it will add CPU and I/O load unnecessarily. Similarly, if your data set
is large, it will take longer to do such calculations, slowing down
recovery unnecessarily.

You could run etcd or some NoSQL database as a cluster resource, then
keep your data there.
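
For instance, a minimal sketch (assuming an etcd systemd unit is installed on
every node; ordering, colocation and fencing still need to be designed around
it):

  pcs resource create etcd systemd:etcd clone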

> 
> 
> Sincerely,
> George Melikov,
> Tel. 7-915-278-39-36
> Skype: georgemelikov
> 
> Best regards,
> George Melikov,
> m...@gmelikov.ru
> Mob:         +7 9152783936
> Skype:     georgemelikov
-- 
Ken Gaillot 


Re: [ClusterLabs] Frequent PAF log messages - Forbidding promotion on in state "startup"

2018-05-15 Thread Shobe, Casey
Thanks, I should have seen that.  I just assumed that everything was working 
fine because `pcs status` shows no errors.

This leads me to another question - is there a way to trigger a rebuild of a 
slave with pcs?  Or do I need to use `pcs cluster stop`, then manually do a new 
pg_basebackup, copy in the recovery.conf, and `pcs cluster start` for each 
standby node that needs to be rebuilt?

> On May 13, 2018, at 5:58 AM, Jehan-Guillaume de Rorthais  
> wrote:
> 
> 
> On Fri, 11 May 2018 16:25:18 +
> "Shobe, Casey"  wrote:
> 
>> I'm using PAF and my corosync log ends up filled with messages like this
>> (about 3 times per minute for each standby node):
>> 
>> pgsqlms(postgresql-10-main)[26822]: 2018/05/11_06:47:08  INFO: Forbidding
>> promotion on "d-gp2-dbp63-1" in state "startup"
>> pgsqlms(postgresql-10-main)[26822]: 2018/05/11_06:47:08  INFO: Forbidding
>> promotion on "d-gp2-dbp63-2" in state "startup"
>> 
>> What is the cause of this logging and does it indicate something is wrong
>> with my setup?
> 
> Yes, something is wrong with your setup. When a PostgreSQL standby is starting
> up, it tries to establish replication with the primary instance: this is the
> "startup" state. As soon as it is connected, it starts replicating and tries to
> catch up with the master's location; this is the "catchup" state. As soon as the
> standby is in sync with the master, it enters the "streaming" state.
> See the column "state" in the doc:
> https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-REPLICATION-VIEW
> 
> If you have a standby stuck in the "startup" state, that means it was able to
> connect to the master but is not replicating from it for some reason
> (a different/incompatible timeline it cannot catch up with?).
> 
> Look for errors in your PostgreSQL logs on the primary and the standby.
> 
> 
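
For reference, the replication state described above can be checked from the
primary with a query along these lines (the columns come from the standard
pg_stat_replication view):

  SELECT application_name, client_addr, state, sync_state
  FROM pg_stat_replication;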



[ClusterLabs] pacemaker as data store

2018-05-15 Thread George Melikov
Hello, 

Sorry for a (likely) dumb question,
but is there a way to store and sync data via pacemaker/corosync?

Is there any way to store key/value properties or files?

I've found `pcs property set --force`, but it didn't survive cluster restart.


Sincerely,
George Melikov,
Tel. 7-915-278-39-36
Skype: georgemelikov

Best regards,
George Melikov,
m...@gmelikov.ru
Mob:         +7 9152783936
Skype:     georgemelikov


Re: [ClusterLabs] Frequent PAF log messages - Forbidding promotion on in state "startup"

2018-05-15 Thread Jehan-Guillaume de Rorthais
On Mon, 14 May 2018 19:08:47 +
"Shobe, Casey"  wrote:

> > We do not trigger an error for such a scenario because it would require the
> > cluster to react... and there's really no way the cluster can solve such an
> > issue. So we just put a negative score, which is already strange enough to
> > be noticed in most situations.
> 
> Where is this negative score to be noticed?

I usually use "crm_mon -frnAo"

* f: show failcounts
* r: show all resources, even inactive ones
* n: group by node instead of resource
* A: show node attributes <- this one should show you the scores
* o: show operation history

Note that you can toggle these options interactively while crm_mon is already
running. Hit 'h' for help.

[...]
> > I advise you to put the recovery.conf.pcmk outside of the PGDATA and use the
> > resource parameter "recovery_template". It would save you the step of dealing
> > with the recovery.conf. But this is the simplest procedure, yes.
> 
> I do this (minus the .pcmk suffix) already, but was just being overly
> paranoid about avoiding a multi-master situation.  I guess there is no need
> for me to manually copy in the recovery.conf.

When cloning from the primary, there shouldn't be an existing "recovery.conf".
It may have a "recovery.done", but this is not a problem.

When cloning from a standby, I can understand you might want to be extra
paranoid and delete the recovery.conf file.

But in either case, on resource start, PAF will create the
"PGDATA/recovery.conf" file based on your template anyway. There is no need to
create it yourself.
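
As an illustration only (check the PAF documentation for the exact
requirements of your PAF/PostgreSQL versions; host and user are placeholders,
and application_name must match the node name), such a template typically
contains something like:

  standby_mode = on
  primary_conninfo = 'host=192.168.1.50 user=replication application_name=node1'
  recovery_target_timeline = 'latest'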

> > Should you keep the cluster up on this node for some other resources, you
> > could temporarily exclude your pgsql-ha from this node so the cluster stops
> > considering it for this particular node while you rebuild your standby.
> > Here is some inspiration:
> > https://clusterlabs.github.io/PAF/CentOS-7-admin-cookbook.html#forbidding-a-paf-resource-on-a-node
> >   
> 
> I was just reading that page before I saw this E-mail.  Another question I
> had, though, is how I could deploy a change to the PostgreSQL configuration
> that requires a restart of the service, with minimal service interruption.
> For the moment, I'm assuming I need to do a `pcs cluster stop; pcs cluster
> start` on each standby node, then the same on the master, which should cause
> a failover to one of the standby nodes.

According to the pcs manpage, you can restart a resource on one node using:

  pcs resource restart <resource id> <node>

> If I need to change max_connections, though, I'm really not sure what to do,
> since the standby nodes will refuse to replicate from a master with a
> different max_connections setting.

You are missing a subtle detail here: a standby will refuse to start if its
max_connections is lower than on the primary.

So you can change your max_connections (see the sketch below):

* to a higher value, starting with the standbys and then the primary
* to a lower value, starting with the primary and then the standbys
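
For instance, to raise it, you could run something like the following on each
standby first and on the primary last, restarting each instance afterwards
(the value is illustrative; max_connections only takes effect after a
restart):

  ALTER SYSTEM SET max_connections = 200;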

> On a related note, is there perhaps a pcs command that would issue a sighup
> to the master postgres process across all nodes, for when I change a
> configuration option that only requires a reload?

No. There are old discussions and a patch about such a feature in pacemaker,
but nothing ended up in core. See:
https://lists.clusterlabs.org/pipermail/pacemaker/2014-February/044686.html

Note that PAF uses a dummy function for the reload action anyway. But we could
easily add a "pg_ctl reload" to it if pcs (or crmsh) allowed triggering it
manually.

Here, again, you can rely on ansible, salt, an ssh command, etc. Either use
"pg_ctl -D <PGDATA> reload" or a simple query like "SELECT pg_reload_conf()".

> I was hoping optimistically that pcs+paf included more administrative
> functionality, since the systemctl commands such as reload can no longer be
> used.

It would be nice, I agree.

> Thank you for your assistance!

You are very welcome.



[ClusterLabs] Re: Re: How to change the "pcs constraint colocation set"

2018-05-15 Thread 范国腾
Sorry, my mistake. I should have used the second id. It is OK now. Thanks, Tomas.
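
(For the archive: the id to pass to "pcs constraint remove" is the colocation
constraint's own id, i.e. the second id printed for each set in the listing
below, for example

  pcs constraint remove pcs_rsc_colocation_set_pgsql-slave-ip2_pgsql-slave-ip3

and not the inner resource-set id.)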

-----Original Message-----
From: 范国腾 
Sent: 2018-05-15 16:19
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"

It could not find the id of the constraint set.

[root@node1 ~]# pcs constraint colocation --full
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY) 
(id:colocation-clvmd-clone-dlm-clone-INFINITY)
  pgsql-master-ip with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Master) (id:colocation-pgsql-master-ip-pgsql-ha-INFINITY)
  pgsql-slave-ip2 with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Slave) (id:colocation-pgsql-slave-ip2-pgsql-ha-INFINITY)
  pgsql-slave-ip3 with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Slave) (id:colocation-pgsql-slave-ip3-pgsql-ha-INFINITY)
  Resource Sets:
set pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip2) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2)
set pgsql-slave-ip2 pgsql-slave-ip3 
(id:pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2_pgsql-slave-ip3)
set pgsql-slave-ip2 pgsql-slave-ip3 
(id:pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3-1) setoptions score=-INFINITY 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2_pgsql-slave-ip3-1)
[root@node1 ~]# pcs constraint remove pcs_rsc_set_pgsql-slave-ip2
Error: Unable to find constraint - 'pcs_rsc_set_pgsql-slave-ip2'
[root@node1 ~]# pcs constraint remove 
pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3
Error: Unable to find constraint - 'pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3'
[root@node1 ~]#

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: 2018-05-15 16:12
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"

On 15.5.2018 10:02, 范国腾 wrote:
> Thank you, Tomas. I know how to remove a constraint "pcs constraint 
> colocation remove <source resource id> <target resource id>". Is there a 
> command to delete a constraint colocation set?

There is "pcs constraint remove <constraint id>". To get a constraint id, run 
"pcs constraint colocation --full" and find the constraint you want to remove.


> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: 2018-05-15 15:42
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] How to change the "pcs constraint colocation set"
> 
> On 15.5.2018 05:25, 范国腾 wrote:
>> Hi,
>>
>> We have two VIP resources and we use the following command to make them in 
>> different node.
>>
>> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 
>> setoptions score=-1000
>>
>> Now we add a new node into the cluster and we add a new VIP too. We want the 
>> constraint colocation set to change to be:
>> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
>> pgsql-slave-ip3 setoptions score=-1000
>>
>> How should we change the constraint set?
>>
>> Thanks
> 
> Hi,
> 
> pcs provides no commands for editing existing constraints. You can create a 
> new constraint and remove the old one. If you want to do it as a single 
> change from pacemaker's point of view, follow this procedure:
> 
> [root@node1:~]# pcs cluster cib cib1.xml
> [root@node1:~]# cp cib1.xml cib2.xml
> [root@node1:~]# pcs -f cib2.xml constraint list --full
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
>   Resource Sets:
>     set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
> Ticket Constraints:
> [root@node1:~]# pcs -f cib2.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
> [root@node1:~]# pcs -f cib2.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
> [root@node1:~]# pcs cluster cib-push cib2.xml diff-against=cib1.xml
> CIB updated
> 
> 
> Pcs older than 0.9.156 does not support the diff-against option, you can do 
> it like this:
> 
> [root@node1:~]# pcs cluster cib cib.xml
> [root@node1:~]# pcs -f cib.xml constraint list --full
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
>   Resource Sets:
>     set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
> Ticket Constraints:
> [root@node1:~]# pcs -f cib.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
> [root@node1:~]# pcs -f cib.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
> [root@node1:~]# pcs cluster cib-push cib.xml
> CIB updated
> 
> 
> Regards,
> Tomas

[ClusterLabs] Re: Re: How to change the "pcs constraint colocation set"

2018-05-15 Thread 范国腾
It could not find the id of the constraint set.

[root@node1 ~]# pcs constraint colocation --full
Colocation Constraints:
  clvmd-clone with dlm-clone (score:INFINITY) 
(id:colocation-clvmd-clone-dlm-clone-INFINITY)
  pgsql-master-ip with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Master) (id:colocation-pgsql-master-ip-pgsql-ha-INFINITY)
  pgsql-slave-ip2 with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Slave) (id:colocation-pgsql-slave-ip2-pgsql-ha-INFINITY)
  pgsql-slave-ip3 with pgsql-ha (score:INFINITY) (rsc-role:Started) 
(with-rsc-role:Slave) (id:colocation-pgsql-slave-ip3-pgsql-ha-INFINITY)
  Resource Sets:
set pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip2) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2)
set pgsql-slave-ip2 pgsql-slave-ip3 
(id:pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2_pgsql-slave-ip3)
set pgsql-slave-ip2 pgsql-slave-ip3 
(id:pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3-1) setoptions score=-INFINITY 
(id:pcs_rsc_colocation_set_pgsql-slave-ip2_pgsql-slave-ip3-1)
[root@node1 ~]# pcs constraint remove pcs_rsc_set_pgsql-slave-ip2
Error: Unable to find constraint - 'pcs_rsc_set_pgsql-slave-ip2'
[root@node1 ~]# pcs constraint remove 
pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3
Error: Unable to find constraint - 'pcs_rsc_set_pgsql-slave-ip2_pgsql-slave-ip3'
[root@node1 ~]#

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: 2018-05-15 16:12
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"

On 15.5.2018 10:02, 范国腾 wrote:
> Thank you, Tomas. I know how to remove a constraint "pcs constraint 
> colocation remove <source resource id> <target resource id>". Is there a 
> command to delete a constraint colocation set?

There is "pcs constraint remove <constraint id>". To get a constraint id, run 
"pcs constraint colocation --full" and find the constraint you want to remove.


> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: 2018-05-15 15:42
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] How to change the "pcs constraint colocation set"
> 
> On 15.5.2018 05:25, 范国腾 wrote:
>> Hi,
>>
>> We have two VIP resources and we use the following command to make them in 
>> different node.
>>
>> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 
>> setoptions score=-1000
>>
>> Now we add a new node into the cluster and we add a new VIP too. We want the 
>> constraint colocation set to change to be:
>> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
>> pgsql-slave-ip3 setoptions score=-1000
>>
>> How should we change the constraint set?
>>
>> Thanks
> 
> Hi,
> 
> pcs provides no commands for editing existing constraints. You can create a 
> new constraint and remove the old one. If you want to do it as a single 
> change from pacemaker's point of view, follow this procedure:
> 
> [root@node1:~]# pcs cluster cib cib1.xml
> [root@node1:~]# cp cib1.xml cib2.xml
> [root@node1:~]# pcs -f cib2.xml constraint list --full
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
>   Resource Sets:
>     set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
> Ticket Constraints:
> [root@node1:~]# pcs -f cib2.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
> [root@node1:~]# pcs -f cib2.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
> [root@node1:~]# pcs cluster cib-push cib2.xml diff-against=cib1.xml
> CIB updated
> 
> 
> Pcs older than 0.9.156 does not support the diff-against option, you can do 
> it like this:
> 
> [root@node1:~]# pcs cluster cib cib.xml
> [root@node1:~]# pcs -f cib.xml constraint list --full
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
>   Resource Sets:
>     set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
> Ticket Constraints:
> [root@node1:~]# pcs -f cib.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
> [root@node1:~]# pcs -f cib.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
> [root@node1:~]# pcs cluster cib-push cib.xml
> CIB updated
> pcs cluster cib-push cib.xml CIB updated
> 
> 
> Regards,
> Tomas

Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"

2018-05-15 Thread Tomas Jelinek

On 15.5.2018 10:02, 范国腾 wrote:

Thank you, Tomas. I know how to remove a constraint "pcs constraint colocation remove 
<source resource id> <target resource id>". Is there a command to delete a 
constraint colocation set?


There is "pcs constraint remove <constraint id>". To get a constraint 
id, run "pcs constraint colocation --full" and find the constraint you 
want to remove.





-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: 2018-05-15 15:42
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] How to change the "pcs constraint colocation set"

On 15.5.2018 05:25, 范国腾 wrote:

Hi,

We have two VIP resources and we use the following command to make them in 
different node.

pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
setoptions score=-1000

Now we add a new node into the cluster and we add a new VIP too. We want the 
constraint colocation set to change to be:
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
pgsql-slave-ip3 setoptions score=-1000
   
How should we change the constraint set?


Thanks


Hi,

pcs provides no commands for editing existing constraints. You can create a new 
constraint and remove the old one. If you want to do it as a single change from 
pacemaker's point of view, follow this procedure:

[root@node1:~]# pcs cluster cib cib1.xml
[root@node1:~]# cp cib1.xml cib2.xml
[root@node1:~]# pcs -f cib2.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
    set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
Ticket Constraints:
[root@node1:~]# pcs -f cib2.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib2.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
[root@node1:~]# pcs cluster cib-push cib2.xml diff-against=cib1.xml
CIB updated


Pcs older than 0.9.156 does not support the diff-against option, you can do it 
like this:

[root@node1:~]# pcs cluster cib cib.xml
[root@node1:~]# pcs -f cib.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
    set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
Ticket Constraints:
[root@node1:~]# pcs -f cib.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
[root@node1:~]# pcs cluster cib-push cib.xml
CIB updated


Regards,
Tomas




[ClusterLabs] Re: How to change the "pcs constraint colocation set"

2018-05-15 Thread 范国腾
Thank you, Tomas. I know how to remove a constraint "pcs constraint colocation 
remove <source resource id> <target resource id>". Is there a command to 
delete a constraint colocation set?

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: 2018-05-15 15:42
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] How to change the "pcs constraint colocation set"

On 15.5.2018 05:25, 范国腾 wrote:
> Hi,
> 
> We have two VIP resources and we use the following command to make them in 
> different node.
> 
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 
> setoptions score=-1000
> 
> Now we add a new node into the cluster and we add a new VIP too. We want the 
> constraint colocation set to change to be:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 
> pgsql-slave-ip3 setoptions score=-1000
>   
> How should we change the constraint set?
> 
> Thanks

Hi,

pcs provides no commands for editing existing constraints. You can create a new 
constraint and remove the old one. If you want to do it as a single change from 
pacemaker's point of view, follow this procedure:

[root@node1:~]# pcs cluster cib cib1.xml
[root@node1:~]# cp cib1.xml cib2.xml
[root@node1:~]# pcs -f cib2.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
    set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
Ticket Constraints:
[root@node1:~]# pcs -f cib2.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib2.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
[root@node1:~]# pcs cluster cib-push cib2.xml diff-against=cib1.xml
CIB updated


Pcs older than 0.9.156 does not support the diff-against option, you can do it 
like this:

[root@node1:~]# pcs cluster cib cib.xml
[root@node1:~]# pcs -f cib.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
    set pgsql-slave-ip1 pgsql-slave-ip2 (id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 (id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)
Ticket Constraints:
[root@node1:~]# pcs -f cib.xml constraint remove pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib.xml constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
[root@node1:~]# pcs cluster cib-push cib.xml
CIB updated


Regards,
Tomas


Re: [ClusterLabs] How to change the "pcs constraint colocation set"

2018-05-15 Thread Tomas Jelinek

On 15.5.2018 05:25, 范国腾 wrote:

Hi,

We have two VIP resources and we use the following command to make them in 
different node.

pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 setoptions 
score=-1000

Now we add a new node into the cluster and we add a new VIP too. We want the 
constraint colocation set to change to be:
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 
setoptions score=-1000
  
How should we change the constraint set?


Thanks


Hi,

pcs provides no commands for editing existing constraints. You can 
create a new constraint and remove the old one. If you want to do it as 
a single change from pacemaker's point of view, follow this procedure:


[root@node1:~]# pcs cluster cib cib1.xml
[root@node1:~]# cp cib1.xml cib2.xml
[root@node1:~]# pcs -f cib2.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
set pgsql-slave-ip1 pgsql-slave-ip2 
(id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)

Ticket Constraints:
[root@node1:~]# pcs -f cib2.xml constraint remove 
pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib2.xml constraint colocation set 
pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000

[root@node1:~]# pcs cluster cib-push cib2.xml diff-against=cib1.xml
CIB updated


Pcs older than 0.9.156 does not support the diff-against option, you can 
do it like this:


[root@node1:~]# pcs cluster cib cib.xml
[root@node1:~]# pcs -f cib.xml constraint list --full
Location Constraints:
Ordering Constraints:
Colocation Constraints:
  Resource Sets:
set pgsql-slave-ip1 pgsql-slave-ip2 
(id:pcs_rsc_set_pgsql-slave-ip1_pgsql-slave-ip2) setoptions score=-1000 
(id:pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2)

Ticket Constraints:
[root@node1:~]# pcs -f cib.xml constraint remove 
pcs_rsc_colocation_set_pgsql-slave-ip1_pgsql-slave-ip2
[root@node1:~]# pcs -f cib.xml constraint colocation set pgsql-slave-ip1 
pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000

[root@node1:~]# pcs cluster cib-push cib.xml
CIB updated


Regards,
Tomas