Hi,
Our system manages a database (one master and multiple slaves). Currently we use
one VIP for multiple slave resources.
Now I want to change the configuration so that each slave resource has a separate
VIP. For example, I have 3 slave nodes and my VIP group has 2 VIPs; the 2 VIPs
bind to node1
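One possible way to express that, as a rough sketch only (the resource names,
addresses, and netmask below are made-up placeholders, not from this thread):
one IPaddr2 resource per VIP, each pinned to its node with a location constraint:

  # one VIP resource per slave, each preferring its own node
  pcs resource create slave-vip1 ocf:heartbeat:IPaddr2 \
      ip=192.168.1.101 cidr_netmask=24 op monitor interval=10s
  pcs resource create slave-vip2 ocf:heartbeat:IPaddr2 \
      ip=192.168.1.102 cidr_netmask=24 op monitor interval=10s
  pcs constraint location slave-vip1 prefers node1
  pcs constraint location slave-vip2 prefers node2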
Hi all,
Thanks for your responses.
With your advice I was able to configure it. I still have to test its
operation. When it is possible to restart the vCenter, I will post the results.
Have a nice weekend!
On 22 February 2018 at 16:00, "Tomas Jelinek" wrote:
> Try
Hi,
I see that when I invoke
# pcs cluster setup --force --local --name
It reports "Removing all cluster configuration files..." and true to its word,
removes /etc/pacemaker/authkey.
My cluster configuration depends on nodes running pacemaker_remote and so I
depend on the authkey to
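A possible stopgap, as an untested sketch (the cluster and node names below are
placeholders), is to save the key before running setup and put it back afterwards:

  cp -a /etc/pacemaker/authkey /root/authkey.backup
  pcs cluster setup --force --local --name mycluster node1 node2
  mkdir -p /etc/pacemaker
  cp -a /root/authkey.backup /etc/pacemaker/authkey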
Try this:
pcs resource meta vmware_soap failure-timeout=
Tomas
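For example, with an assumed placeholder value of 60s (not from the original
message):

  pcs resource meta vmware_soap failure-timeout=60s

failure-timeout is a resource meta-attribute, which is why it is set with
'pcs resource meta' rather than as a stonith resource option.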
On 22 Feb 2018 at 14:55, j...@disroot.org wrote:
Hi,
I am trying to configure the failure-timeout for stonith, but I can only do it
for the other resources.
When I try to enable it for stonith, I get this error: "Error:
On 02/22/2018 02:55 PM, j...@disroot.org wrote:
> Hi,
>
> I am trying to configure the failure-timeout for stonith, but I can only do
> it for the other resources.
> When I try to enable it for stonith, I get this error: "Error: resource
> option(s): 'failure-timeout', are not recognized for
Hi,
I am trying to configure the failure-timeout for stonith, but I can only do it
for the other resources.
When I try to enable it for stonith, I get this error: "Error: resource
option(s): 'failure-timeout', are not recognized for resource type:
'stonith::fence_vmware_soap'".
Thanks.
On 22 February 2018
On Thu, Feb 22, 2018 at 2:40 PM, wrote:
> Thanks for the responses.
>
> So, if I understand correctly, this is the right behaviour and it does not
> affect the stonith mechanism.
>
> If I remember correctly, the fault status persists for hours until I fix it
> manually.
> Is there
Thanks for the responses.
So, if I understand correctly, this is the right behaviour and it does not
affect the stonith mechanism.
If I remember correctly, the fault status persists for hours until I fix it
manually.
Is there any way to modify the expiry time so it cleans itself?
On 22 February 2018
Stonith resource state should have no impact on actual stonith
operation. It only reflects whether the monitor was successful or not and
serves as a warning to the administrator that something may be wrong. It
should automatically clear itself after the failure-timeout has expired.
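To clear it by hand instead of waiting, or to set the expiry, something like
this should work (60s below is only an assumed example value):

  pcs resource cleanup vmware_soap                   # clear the failed state now
  pcs resource meta vmware_soap failure-timeout=60s  # let it expire on its own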
On Thu, Feb 22, 2018 at 1:58
Hi,
On Thu, Feb 22, 2018 at 11:58 AM, wrote:
>
> Hi,
>
> I have a 2 node pacemaker cluster configured with the fence agent
> vmware_soap.
> Everything works fine until the vCenter is restarted. After that, stonith
> fails and stops.
>
This is expected as we run 'monitor'
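The monitor operation on the stonith resource can be inspected and, if needed,
tuned (120s below is only an assumed example value):

  pcs stonith show vmware_soap
  pcs stonith update vmware_soap op monitor interval=120s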
On 22/02/18 16:26 +0530, Dileep V Nair wrote:
Thanks for the response. I tried that, but I think it does not take care
of the HADR setup.
You'll have to do the setup yourself. The agent is just
running/monitoring the software in a Pacemaker cluster.
Regards,
Dileep V Nair
E-mail:
Hi,
I have a 2 node pacemaker cluster configured with the fence agent vmware_soap.
Everything works fine until the vCenter is restarted. After that, stonith fails
and stops.
[root@node1 ~]# pcs status
Cluster name: psqltest
Stack: corosync
Current DC: node2 (version 1.1.16-12.el7_4.7-94ff4df) -
Thanks for the response. I tried that, but I think it does not take care
of the HADR setup.
Regards,
On 21/02/18 21:41 +0530, Dileep V Nair wrote:
Hi,
I am trying to configure Pacemaker to automate a Sybase HADR setup.
Is anyone aware of a Resource Agent which I can use for this?
There's a Sybase ASE agent available at:
https://github.com/ClusterLabs/resource-agents/pull/