On 19 Apr 2015, at 11:37 pm, Andrei Borzenkov arvidj...@gmail.com wrote:
On Sun, 19 Apr 2015 14:23:27 +0200, Andreas Kurz andreas.k...@gmail.com wrote:
On 2015-04-17 12:36, Thomas Manninger wrote:
Hi list,
I have a pacemaker/corosync2 setup with 4 nodes, with stonith configured over the IPMI interface.
My problem is that sometimes the wrong node is stonithed.
For example: I have 4 servers: node1, node2, node3, node4.
I start a hardware reset on node node1, but node1 and node3 will be
For information: I am using pacemaker 1.1.12 on Debian Wheezy.
On Mon, 3 Nov 2014 07:07:41 +
Alex Samad - Yieldbroker alex.sa...@yieldbroker.com wrote:
{snip}
What I am hearing is that it's not available. Is it possible to hook
a custom script to that event? I can write my own restart
Sure you can write your own external stonith script.
On 04/11/14 02:45 PM, Alex Samad - Yieldbroker wrote:
{snip}
Any pointers to a framework somewhere?
I do not think there is any formal stonith agent developer's guide;
take any existing agent like external/ipmi and modify it to suit your
needs.
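For the archives: the cluster-glue "external" plugin contract is small enough to sketch here. The script receives the operation as $1 (plus the target host as $2 for on/off/reset), gets its parameters as environment variables, and is installed under /usr/lib/stonith/plugins/external/. A minimal, untested skeleton; my-power-control is a placeholder for your own command:

#!/bin/sh
# minimal external STONITH plugin skeleton
case "$1" in
gethosts)
    echo $hostlist              # the 'hostlist' parameter arrives as an env var
    exit 0 ;;
reset|off|on)
    /usr/local/bin/my-power-control "$1" "$2"    # $2 = target node
    exit $? ;;
status)
    exit 0 ;;                   # is the fencing device itself reachable?
getconfignames)
    echo "hostlist"
    exit 0 ;;
getinfo-devid|getinfo-devname)
    echo "Custom external STONITH device"
    exit 0 ;;
getinfo-devdescr|getinfo-devurl)
    echo "Skeleton for a site-specific fencing script"
    exit 0 ;;
getinfo-xml)
    echo '<parameters><parameter name="hostlist"><content type="string"/></parameter></parameters>'
    exit 0 ;;
*)
    exit 1 ;;
esac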
Does fenced have any handlers? I notice it
On 5 Nov 2014, at 9:39 am, Alex Samad - Yieldbroker
alex.sa...@yieldbroker.com wrote:
I read this to mean that demorp2 killed this node:
Nov 4 23:21:37 demorp1 corosync[23415]: cman killed by node 2 because we were killed by cman_tool or other application
Nov 4 23:21:37 demorp1
In cman's cluster.conf, you configure the fence device 'fence_pcmk', as you
have. That is a dummy/hook fence agent that simply passes fence requests up
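The sentence is cut off in the archive; Digimer is describing the stock cman+pacemaker wiring, roughly the following cluster.conf shape. Node names are borrowed from the logs earlier in this thread; treat the details as assumptions:

<clusternodes>
  <clusternode name="demorp1" nodeid="1">
    <fence>
      <method name="pcmk-redirect">
        <!-- fence_pcmk only forwards the request to pacemaker's stonith -->
        <device name="pcmk" port="demorp1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="demorp2" nodeid="2">
    <fence>
      <method name="pcmk-redirect">
        <device name="pcmk" port="demorp2"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice name="pcmk" agent="fence_pcmk"/>
</fencedevices>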
Hi
2 node cluster, running under vmware
Centos 6.5
pacemaker-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cluster-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-cli-1.1.10-14.el6_5.3.x86_64
pacemaker-1.1.10-14.el6_5.3.x86_64
this is what I have in /etc/cluster/cluster.conf
<fencedevices>
<fencedevice
pcs property set stonith-enabled=true
pcs property set no-quorum-policy=ignore
Best regards
Andreas
On 9 Jul 2014, at 8:53 pm, Dvorak Andreas andreas.dvo...@baaderbank.de wrote:
Dear all,
unfortunately stonith does not work on my pacemaker cluster. If I do ifdown
on the two cluster interconnect interfaces of server sv2827, server sv2828
wants to fence server sv2827, but the messages log says: error:
remote_op_done: Operation reboot of sv2827-p1 by
Hi,
In the case of a two-node cluster (Active/Active or Active/Passive), if
split brain happens, which node will STONITH the other?
--
KHALED MOHAMMED ATTEYA
System Engineer
On 17/03/14 11:52 PM, khaled atteya wrote:
{snip}
The fast node is the short answer.
The long answer is that you can give one node a priority over the other
by
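The rest of the answer is cut off; the usual implementation of that advice is a static delay on the fence device of the node you want to survive. A sketch in crm syntax, with the agent, addresses and credentials all assumed:

primitive fence-node1 stonith:fence_ipmilan \
    params pcmk_host_list=node1 ipaddr=10.0.0.1 login=admin passwd=secret delay=15
primitive fence-node2 stonith:fence_ipmilan \
    params pcmk_host_list=node2 ipaddr=10.0.0.2 login=admin passwd=secret

The device that fences node1 carries delay=15, so in a fence race node2 must wait 15 seconds before shooting node1, while node1 can shoot node2 immediately; node1 wins.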
On 22 Jan 2014, at 12:18 am, Robert Lindgren robert.lindg...@gmail.com wrote:
Hi,
I'm trying to get rid of some stonith info logging but I fail :(
Turn off debug and, for everything else, edit the C source code
The log-lines are like this in syslog:
Jan 21 13:24:15 wolf1 stonith-ng:
Hi,
I'm trying to get rid of some stonith info logging but I fail :(
The log lines look like this in syslog:
Jan 21 13:24:15 wolf1 stonith-ng: [6349]: info: stonith_command: Processed
st_execute from lrmd: rc=-1
Jan 21 13:24:15 wolf1 external/ipmi[11606]: [11616]: debug: ipmitool
output: Chassis
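As for the debug half of Andrew's answer: if corosync is doing the logging, the knob is in its logging section. A sketch, assuming corosync.conf is in play on this cluster; the info-level stonith-ng lines above are the part that cannot be filtered without patching the source:

logging {
    debug: off            # silences debug-level chatter such as the ipmitool output
    to_syslog: yes
    syslog_facility: daemon
}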
Can you trace the resource?
crm resource trace ...
Maybe, if you can do that, you'll get more info.
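For the archives, the crmsh invocation looks like this (the resource name is made up):

crm resource trace st-wolf1           # trace every operation
crm resource trace st-wolf1 monitor   # or just the monitor op
crm resource untrace st-wolf1         # stop tracing again

Trace output typically lands under /var/lib/heartbeat/trace_ra/, though the location, and whether stonith-class resources honour it, varies by version.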
On 2013-11-20T11:20:45, Michael Schwartzkopff m...@sys4.de wrote:
I removed the pacemaker 1.1.9 installation from the openSUSE build
server and installed 1.1.10 from the RHEL-HA repository. Now
everything is working as expected.
Besides some kernel panics that are not related to the
Hi,
I installed pacemaker on a RHEL 6.4 machine. Now crm tells me that there is no
stonith ra class, only lsb, ocf and service.
What did I miss? Thanks for any valuable comments.
--
Kind regards,
Michael Schwartzkopff
--
[*] sys4 AG
http://sys4.de, +49 (89) 30 90 46 64, +49
On 19 Nov 2013, at 1:23 am, Michael Schwartzkopff m...@sys4.de wrote:
{snip}
did you install the fencing-agents
On 9 Nov 2013, at 1:55 am, s.oreilly s.orei...@linnovations.co.uk wrote:
Hi Chrissie, thanks, I did try that and it didn't work, but then neither has
adding the location constraints, so maybe (and this is very possible) I am
doing something else wrong!!
Quite probably. But we can't say for
I am trying to configure stonith on a 2 node cluster.
Using fence_vmware_soap, and it works manually.
I configure stonith as below:
pcs stonith create test-stonith1 params ipaddr=vcenterserver login=login
passwd=passwd ssl=1 port=hostname1 action=reboot
pcs stonith create test-stonith2 params
fence of host1 needs to be running on host2 and fence of host2 needs to be
running on host1
That's what I thought. How do I specify which node to run them on?
Many thanks
Sean O'Reilly
With a location constraint. If you need info about constraints, you can look at
the clusterlabs docs.
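A sketch of that suggestion in pcs syntax, reusing the resource and node names from the original post (note Christine's reply below that stonith can run the device anywhere):

pcs constraint location test-stonith1 avoids hostname1
pcs constraint location test-stonith2 avoids hostname2

That is, each fence device is kept off the node it is meant to kill, so it can still run when that node is the one being fenced.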
On 08/11/13 14:20, emmanuel segura wrote:
{snip}
You don't need to do that. Stonith is intelligent enough to know how to
fence a node regardless of where the device is supposedly running from.
Try it ;-)
Personally I use fence_xvm. IIRC, it's the supported equivalent of fence_virsh.
On 24 Oct 2013, at 6:38 pm, Beo Banks beo.ba...@googlemail.com wrote:
hi,
i have enabled the debug option and i use the IP instead of the hostname
primitive stonith-zarafa02 stonith:fence_virsh \
params
Hi,
I want to test the fail-over capabilities of my cluster.
I ran pkill -9 corosync on the 2nd node and saw on the 1st node that it wants to
stonith node2, but it gives up after too many failures to fence the node.
Via the command line it works without any problems:
fence_virsh -a host2 -l root -x
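When the manual call works but the cluster call fails, diffing the two parameter sets usually finds it. A sketch of a crm primitive equivalent to that command line; the domain name, key path and monitor interval are assumptions:

primitive stonith-zarafa02 stonith:fence_virsh \
    params ipaddr=host2 login=root secure=1 \
        identity_file=/root/.ssh/id_rsa port=zarafa02 \
    op monitor interval=60s

secure=1 is the parameter form of the -x flag (use ssh), and port is the libvirt domain to kill, which need not match the cluster node name.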
hi,
thanks for the answer.
pacemaker/corosync is running on both nodes:
chkconfig | grep corosync
corosync 0:Aus 1:Aus 2:Ein 3:Ein 4:Ein 5:Ein 6:Aus
chkconfig | grep pacemaker
pacemaker 0:Aus 1:Aus 2:Ein 3:Ein 4:Ein 5:Ein 6:Aus
(Aus = off, Ein = on)
@ssh key
no, i created the
Hi Lars,
ouch, I thought I had to write all the versions and of course forgot at the
end :-( sorry about that
I'm using pacemaker-1.1.8 (RHEL6), fence-agents-4.0.3 (I compiled this myself
in order to use the new netio stonith plugin), corosync-1.4.1, kernel-3.10.11 (in
case this is important)
On 2013-10-18T11:26:52, Nikola Ciprich nikola.cipr...@linuxbox.cz wrote:
{snip}
Unless 1.1.8-rhel has some fixes
Hi guys,
thanks a lot for the tip, fencing_topology seems to be exactly what I
need! However, there seems to be a problem, and I'm not sure whether
it's me, pacemaker or the stonith agent.
I've set up 4 stonith primitives, as per the document:
primitive stonith-vbox3-1-off stonith:fence_netio \
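The configuration is cut off here; for context, the documented two-PDU pattern the thread is following looks roughly like this in crm syntax. Every fence_netio parameter below is an assumption; the point is the topology line, which puts all four devices in one level so they must all succeed: both outlets off, then both back on:

primitive stonith-vbox3-1-off stonith:fence_netio \
    params ipaddr=pdu1 login=admin passwd=secret port=3 action=off
primitive stonith-vbox3-2-off stonith:fence_netio \
    params ipaddr=pdu2 login=admin passwd=secret port=3 action=off
primitive stonith-vbox3-1-on stonith:fence_netio \
    params ipaddr=pdu1 login=admin passwd=secret port=3 action=on
primitive stonith-vbox3-2-on stonith:fence_netio \
    params ipaddr=pdu2 login=admin passwd=secret port=3 action=on
fencing_topology vbox3: stonith-vbox3-1-off,stonith-vbox3-2-off,stonith-vbox3-1-on,stonith-vbox3-2-on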
Hi Dejan,
and thanks for your reply!
Good luck with that.
ouch, that doesn't sound too encouraging :)
Which version of crmsh do you run? If it's 1.2.6, please open a
bug report. If not, please upgrade :)
great, I hadn't even noticed 1.2.6 was out..
however, the problem persists:
Hello,
I'm playing with a netio 230-CS PDU, and it works pretty well as a
fencing device for my test pacemaker cluster. However, I'd like
to use two such units plugged into different power sources and
use them as fencing units for servers with redundant power supplies
(each connected to one of the
On 2013-10-03T12:22:27, David Vossel dvos...@redhat.com wrote:
Is there some way to tell that a node needs to be fenced using two fencing
devices? Or will I need to create my own fencing plugin allowing
use of two fencing devices simultaneously?
Not simultaneously (not sure if that is actually a
echo 0 > /sys/class/net/virbr0/bridge/multicast_snooping
That results in multicast packets being broadcast to all bridge ports. I
prefer to have an IGMP querier turned on on a central switch.
I had the impression that the OP used virtual machines on a local
(virtual and private) network. I
I'll do some experiments to see if I can get Corosync more reliable. I'm using
Corosync v1 as part of cman-corosync-pacemaker, with RRP: one ring on a switch
and the other on a crossover cable between the two hosts (although
technically each port is still part of a vSwitch, since it's a
Hello All,
I have some 2-node active-passive clusters that occasionally lose Corosync
connectivity. The connectivity is fixed with a reboot. They don't have shared
storage, so stonith doesn't have to happen for another node to take control of
the resource. Also, they are VMs, so I can't use a
On 2013-05-15T22:55:43, Andreas Kurz andr...@hastexo.com wrote:
start-delay is an option of the monitor operation ... in fact it means:
don't trust that the start was successful; wait some more time for the
initial monitor.
It can be used on start here though to avoid exactly this situation; and
it
Using Pacemaker 1.1.8 on EL6.4 with the pacemaker plugin, I'm finding
strange behavior with stonith_admin -B node2. It seems to shut the
node down but not start it back up, and ends up reporting a timer
expired:
# stonith_admin -B node2
Command failed: Timer expired
The pacemaker log for the
Hi!
I have a 2-node cluster: a simple test setup with an
ocf:heartbeat:IPaddr2 resource, using Xen VMs and stonith:external/xen0.
Please see the complete config below.
Basically everything works fine, except in the case of broken corosync
communication between the nodes (simulated by
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist=pace1 dom0=xentest1 \
op start start-delay=15s interval=0
Try;
primitive st-pace1 stonith:external/xen0 \
params hostlist=pace1 dom0=xentest1 delay=15 \
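The suggestion is cut off here; to spell out the difference under discussion, a sketch (whether external/xen0 honours a delay parameter is an assumption, check with stonith -t external/xen0 -n):

primitive st-pace1 stonith:external/xen0 \
    params hostlist=pace1 dom0=xentest1 delay=15 \
    op monitor interval=60s

Here delay is a plugin parameter: fencing OF pace1 is postponed, giving pace1 priority if both nodes shoot at once. By contrast, start-delay in the original config is an operation attribute: it only postpones that operation after start and does not stagger fencing itself.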
On 13-03-25 03:50 PM, Jacek Konieczny wrote:
The first node to notice that the other is unreachable will fence (kill)
the other, making sure it is the only one operating on the shared data.
Right. But with typical two-node clusters ignoring no-quorum, because
quorum is being ignored, as soon
Hello,
I am a newbie with pacemaker (and, generally, with HA clusters). I have
configured a two-node cluster. Both nodes are virtual machines (VMware
ESX) and use shared storage (provided by a SAN, although access to the
SAN is through the ESX infrastructure and the VMs see it as a SCSI disk). I
I have a production cluster using two VMs on an ESX cluster; for stonith I'm
using sbd, and everything works fine.
On Mon, 25 Mar 2013 13:54:22 +0100
My problem is how to avoid a split-brain situation with this
configuration without configuring a 3rd node. I have read about
quorum disks, the external/sbd stonith plugin and other references, but
I'm confused by all of this.
For example, [1]
On Mon, 25 Mar 2013 20:01:28 +0100
Angel L. Mateo ama...@um.es wrote:
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}
Corosync will then manage quorum for the two-node cluster and Pacemaker
I'm using corosync 1.1, which is the one provided with
Hi! I must be doing something stupidly wrong... every time I add a new
node to my live cluster, the first thing the cluster decides to do is
STONITH the node, despite any precautions I take (other than
flat-out disabling STONITH during the reconfiguration). Is this
normal? I'm currently
Ah, very good - thank you so much!!
hi,
I've tried the latest fence_legacy, but only getstate works, not the reset
one ..
https://raw.github.com/ClusterLabs/pacemaker/master/fencing/fence_legacy
I've also tested the complete path:
/vmfs/volumes/4c528367-19e3e2c9-9871-0021288ea4ad/sqlnode-02/sqlnode-02.vmx
again, works for
hi,
On 13.11.2012 at 12:45, Dejan Muhamedagic deja...@fastmail.fm wrote:
stonith -t external/vmware -p esxserver -T reset sqlnode02
it takes more time than I thought ...
primitive sqlnode-stonith stonith:external/vmware \
op monitor interval=1s timeout=5s \
params
hi,
On 15.11.2012 at 17:02, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hmm, perhaps another case of this bug:
http://marc.info/?l=linux-ha-dev&m=135298603602508&w=2
nope, these lines don't exist:
http://hg.linux-ha.org/dev/file/e106561da999/lib/plugins/stonith/external/vmware
I should
On 15.11.2012 at 18:49, Dejan Muhamedagic deja...@fastmail.fm wrote:
http://hg.linux-ha.org/dev/file/e106561da999/lib/plugins/stonith/external/vmware
The lines quoted are from /usr/sbin/fence_legacy, not
external/vmware. The bug affects all stonith configurations
which have '=' in the
hi,
On 13.11.2012 at 12:45, Dejan Muhamedagic deja...@fastmail.fm wrote:
You have one configure too many.
arrrghh
thanks ... too many trees
cu denny
hi,
the external/vmware STONITH works if I try it from the shell:
stonith -t external/vmware -p esxserver -T reset sqlnode02
but I don't get the syntax for crm itself:
crm(live)configure# configure primitive sqlnode-stonith stonith:external/vmware
params host_map=sqlnode01;sqlnode02
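Dejan's fix above is the whole story: inside the crm configure shell, drop the leading 'configure'. A corrected sketch, with quotes added so the shell does not treat the semicolon as a command separator (the full host_map value is cut off in the archive):

crm(live)configure# primitive sqlnode-stonith stonith:external/vmware \
    params host_map="sqlnode01;sqlnode02"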