Hello,
I have a small problem with an active-active cluster running DRBD and GFS2
on Ubuntu Trusty. The problem is that after a couple of days of running, when
testing the fencing, it looks like the node that does the fencing gets the
info to do the fencing but never actually does it! Running the
On 10/09/15 05:46 AM, Robert Lindgren wrote:
[...]
25.11.2014 23:41, David Vossel wrote:
Hi!
Is the subject (fencing of remote nodes) implemented?
Trying `echo c > /proc/sysrq-trigger` on remote nodes, and no fencing occurs.
Yes, fencing remote-nodes works. Are you certain your fencing devices can handle
fencing the remote-node? Fencing a
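For reference, the crash test mentioned throughout this thread works like this (run as root on the node to be sacrificed; a healthy cluster should fence it within its configured fencing timeout):

```shell
# Make sure the sysrq interface is enabled, then trigger an
# immediate kernel panic on this node.
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger

# Afterwards, from a surviving node, check the fencing history
# (node name is a placeholder):
stonith_admin --history node1
```

This is destructive by design; only run it on a test node.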
Andrew Beekhof and...@beekhof.net writes:
This was fixed a few months ago:
+ David Vossel (9 months ago) 054fedf: Fix: stonith_api_time_helper now
returns when the most recent fencing operation completed (origin/pr/444)
+ Andrew Beekhof (9 months ago) d9921e5: Fix: Fencing: Pass the
On 25/11/14 19:55, Daniel Dehennin wrote:
Christine Caulfield ccaul...@redhat.com writes:
It seems to me that fencing is failing for some reason, though I can't
tell from the logs exactly why, so you might have to investigate your
setup for IPMI to see just what is happening (I'm no IPMI
Hi!
Is the subject (fencing of remote nodes) implemented?
Trying `echo c > /proc/sysrq-trigger` on remote nodes, and no fencing occurs.
Best,
Vladislav
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home:
On 25/11/14 10:45, Daniel Dehennin wrote:
Daniel Dehennin daniel.dehen...@baby-gnu.org writes:
I'm using Ubuntu 14.04:
- corosync 2.3.3-1ubuntu1
- pacemaker 1.1.10+git20130802-1ubuntu2.1
I thought everything was integrated in such a configuration.
Here is some more information:
- the
Christine Caulfield ccaul...@redhat.com writes:
[...]
Thanks for looking, but
Hello,
In my pacemaker/corosync cluster it looks like I have some issues with
fencing ACK on DLM/cLVM.
When a node is fenced, dlm/cLVM are not aware of the fencing result, and
LVM commands hang unless I run “dlm_tool fence_ack ID_OF_THE_NODE”
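The manual acknowledgement step described above can be sketched as follows (the node ID is hypothetical; `dlm_tool status` shows the real ones):

```shell
# Show DLM's view of the lockspace members; a node stuck waiting
# for a fencing result shows up here.
dlm_tool status

# Manually confirm to DLM that the node has been fenced, which
# unblocks the hanging cLVM/LVM commands.
dlm_tool fence_ack 1084811078
```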
Here are some log around the fencing of nebula1:
Am Montag, 24. November 2014, 15:14:26 schrieb Daniel Dehennin:
[...]
Michael Schwartzkopff m...@sys4.de writes:
Yes. You have to tell all the underlying infrastructure to use the fencing of
pacemaker. I assume that you are working on a RH clone.
See:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/ch08s02s03.html
Sorry,
Andrei Borzenkov arvidj...@gmail.com writes:
[...]
Now I have one issue: when the bare-metal host on which the VM is
running dies, the VM is lost and cannot be fenced.
Is there a way to make Pacemaker ACK the fencing of the VM running on a
host when the host is fenced itself?
Yes, you
I think the suggestion was to put shooting the host in the fencing path of a
VM. This way if you can't get the host to fence the VM (as the host is already
dead) you just check if the host was fenced.
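In Pacemaker that fallback path can be expressed as a fencing topology: try the VM's own fencing device first, then the device that shoots the host. A sketch in crm shell, with hypothetical resource and node names:

```shell
# Level 1: fence the VM directly; level 2: if that fails because the
# host is already dead, fence (or verify fencing of) the host itself.
crm configure fencing_topology \
    vm-node1: Stonith-VM-node1 Stonith-Host-of-vm-node1
```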
Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
Andrei Borzenkov arvidj...@gmail.com
On Mon, 10 Nov 2014 10:07:18 +0100
Tomasz Kontusz tomasz.kont...@gmail.com writes:
[...]
Hello,
As I finally managed to integrate my VM into corosync, my dlm/clvm/GFS2
are running on it.
Now I have one issue: when the bare-metal host on which the VM is
running dies, the VM is lost and cannot be fenced.
Is there a way to make Pacemaker ACK the fencing of the VM running on a
host
On Fri, 07 Nov 2014 17:46:40 +0100
Daniel Dehennin daniel.dehen...@baby-gnu.org writes:
[...]
Hi,
any recommendation/documentation for a reliable fencing implementation on
a multi-node cluster (4 or 6 nodes across 2 sites)?
I am thinking of implementing multiple node-fencing devices for each host to
stonith the remaining nodes on the other site.
Thank you!
Philipp
Andrew Beekhof and...@beekhof.net writes:
[...]
Is the ipaddr for each device really the same? If so, why not use a
single 'resource'?
No, sorry, the IP addr was not the same.
Also, 1.1.7 wasn't as smart as 1.1.12 when it came to deciding which fencing
device to use.
Likely you'll get
On 7 Oct 2014, at 6:28 pm, Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
Andrew Beekhof and...@beekhof.net writes:
Maybe not, the collocation should be sufficient, but even without the
orders, fencing of unclean VMs is tried with other stonith devices.
Which other devices? The config you sent through didn't have any
others.
Sorry, I sent it to linux-cluster
Andrew Beekhof and...@beekhof.net writes:
It may be due to these two “order” constraints:
#+begin_src
order ONE-Frontend-after-its-Stonith inf: Stonith-ONE-Frontend ONE-Frontend
order Quorum-Node-after-its-Stonith inf: Stonith-Quorum-Node Quorum-Node
#+end_src
Probably. Any particular reason for them to
On 6 Oct 2014, at 8:14 pm, Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
[...]
On 3 Oct 2014, at 3:22 am, Daniel Dehennin daniel.dehen...@baby-gnu.org wrote:
emmanuel segura emi2f...@gmail.com writes:
for guest fencing you can use something like this:
http://www.daemonzone.net/e/3/; rather than running a full cluster stack in
your guest, you can try to use
Hello,
I'm setting up a 3-node OpenNebula[1] cluster on Debian Wheezy using a
SAN for shared storage and KVM as the hypervisor.
The OpenNebula frontend is a VM for HA[2].
I had some quorum issues when the node running the frontend died, as the
two other nodes lose quorum, so I added a pure quorum
for guest fencing you can use something like this:
http://www.daemonzone.net/e/3/; rather than running a full cluster stack in
your guest, you can try to use pacemaker-remote for your virtual guests
2014-10-02 18:41 GMT+02:00 Daniel Dehennin daniel.dehen...@baby-gnu.org:
Hello,
I'm setting up a 3
emmanuel segura emi2f...@gmail.com writes:
[...]
I think it could be done for the pure quorum node, but my
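A guest-fencing device of the kind described above could look like this (a sketch using the fence_xvm agent from fence-virt; the key path is the agent's usual default, but treat the names as assumptions):

```shell
# fence_virtd must be running on each KVM host, and the key file must
# be shared between hosts and guests for the multicast request to work.
crm configure primitive Stonith-VMs stonith:fence_xvm \
    params key_file=/etc/cluster/fence_xvm.key \
    op monitor interval=60s
```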
On 14 Mar 2014, at 1:18 am, Karl Rößmann k.roessm...@fkf.mpg.de wrote:
Hi,
I changed the running resource via
crm / configure / edit / commit. It seemed to work.
I stopped the resource and changed some details.
Whenever I commit again I get this warning:
warning: do_log: FSA: Input I_ELECTION_DC from do_election_check()
received in state S_INTEGRATION
see
On 2014-03-12T15:17:13, Karl Rößmann k.roessm...@fkf.mpg.de wrote:
Hi,
we have a two-node HA cluster using SUSE SLES 11 HA Extension SP3,
at the latest release level.
A resource (Xen) was manually stopped; the shutdown_timeout is 120s,
but after 60s the node was fenced and shut down by the other
Hi.
primitive fkflmw ocf:heartbeat:Xen \
    meta target-role=Started is-managed=true allow-migrate=true \
    op monitor interval=10 timeout=30 \
    op migrate_from interval=0 timeout=600 \
    op migrate_to interval=0 timeout=600 \
    params xmfile=/etc/xen/vm/fkflmw
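A plausible explanation for the 60s fence (a guess, not confirmed in the thread): the primitive above defines no explicit stop timeout, so the cluster's default action timeout applies to stop, and a stop that overruns it counts as a failed stop, which escalates to fencing. A sketch of the same primitive with an explicit stop timeout added:

```shell
# Give the Xen guest more time to shut down cleanly than the
# cluster-wide default operation timeout allows; a failed stop is
# what triggers fencing.
crm configure primitive fkflmw ocf:heartbeat:Xen \
    meta target-role=Started is-managed=true allow-migrate=true \
    op monitor interval=10 timeout=30 \
    op stop interval=0 timeout=180 \
    params xmfile=/etc/xen/vm/fkflmw
```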
On 2014-03-12T16:16:54, Karl Rößmann k.roessm...@fkf.mpg.de wrote:
[...]
Hi All,
I was trying to configure a two-node cluster with the following services:
1) Configured the VIP for both the nodes.
2) Configured a service in the form of ocf:heartbeat:app,
where app is my script for start/stop/monitor of my binary application.
Things are working well and till
On 02/02/14 12:54 AM, Parveen Jain wrote:
[...]
- Original Message -
From: Michael Schwartzkopff m...@sys4.de
To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
Sent: Monday, December 2, 2013 4:38:12 AM
Subject: Re: [Pacemaker] Fencing: Where?
On Monday, 2 December 2013, 13:15:23, Nikita Staroverov wrote:
Hi,
as far as I understood, RH is going to do all infrastructure in the cman layer
and use pacemaker only for resource management. With this setup, fencing will
also be a job of fenced from the cman package.
This design has its advantages: if decisions are taken at a low level, all
parts of
No. I use it now. But setting up fencing in two places
- cman - links to pacemaker
- pacemaker
It is not very nice to configure fencing in two places.
Is there any way that pacemaker can tell cman about its fencing decisions?
It happens because cman does fencing through fenced. It is also
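The documented way around the duplication (see the Clusters from Scratch link quoted earlier in the thread) is the reverse direction: point cman's fenced at Pacemaker using the fence_pcmk agent, so fencing is configured once. A sketch of the cluster.conf fragment, with a hypothetical node name:

```xml
<!-- cluster.conf: have fenced hand fencing requests to Pacemaker -->
<clusternode name="node1" nodeid="1">
  <fence>
    <method name="pcmk-redirect">
      <device name="pcmk" port="node1"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice name="pcmk" agent="fence_pcmk"/>
</fencedevices>
```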
On 2013-02-20T10:43:57, Andrew Beekhof and...@beekhof.net wrote:
Some fence agents can have 'reboot' implemented without an OFF/ON procedure
(direct use of the 'reboot' command); for these, multiple plugs should be
forbidden.
Could we not fall back to “off, off, on, on”?
Some of those devices, however
On Thu, Feb 7, 2013 at 8:22 PM, Marek Grac mg...@redhat.com wrote:
On 02/07/2013 07:07 AM, Andrew Beekhof wrote:
This is right, fence agents (all of them) can be used only with one port
value.
Would this be a good feature to add?
Particularly for people using reboot as the action.
On 07/02/2013 07:12, Andrew Beekhof wrote:
On Wed, Feb 6, 2013 at 4:42 AM, Thibaut Pouzet
thibaut.pou...@lyra-network.com wrote:
On 05/02/2013 16:57, Marek Grac wrote:
Hi,
On 02/05/2013 03:24 AM, Andrew Beekhof wrote:
I cleared the IPMI configuration and kept only the two WTI fencing
On Wed, Feb 6, 2013 at 2:57 AM, Marek Grac mg...@redhat.com wrote:
[...]
On Wed, Feb 6, 2013 at 4:42 AM, Thibaut Pouzet
thibaut.pou...@lyra-network.com wrote:
[...]
Hi,
On 02/05/2013 03:24 AM, Andrew Beekhof wrote:
I cleared the IPMI configuration and kept only the two WTI fencing
Primitives in my configuration to make it as simple as possible :
primitive wti_fence01 stonith:fence_wti \
params ipaddr=192.168.0.7 action=reboot verbose=true
On 05/02/2013 16:57, Marek Grac wrote:
[...]
On Fri, Feb 1, 2013 at 10:08 PM, Thibaut Pouzet
thibaut.pou...@lyra-network.com wrote:
Hi,
[...]
Hi,
I am running some tests in order to implement fencing with two methods,
and I got stuck on the WTI configuration, while the IPMI configuration
was pretty straightforward.
I have an installation with two nodes on CentOS 6.3 running pacemaker
1.1.7 + corosync 1.4.1. Both servers support
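A sketch of the kind of WTI setup being described (IP address, plug numbers, and node names are hypothetical): one fence_wti primitive per node, each mapped to the plug powering the *other* node, and kept off the node it is meant to kill:

```shell
crm configure primitive wti_fence01 stonith:fence_wti \
    params ipaddr=192.168.0.7 port=1 pcmk_host_list=node01 action=reboot
crm configure primitive wti_fence02 stonith:fence_wti \
    params ipaddr=192.168.0.7 port=2 pcmk_host_list=node02 action=reboot
# A fencing device should not run on the node it fences.
crm configure location l-wti01 wti_fence01 -inf: node01
crm configure location l-wti02 wti_fence02 -inf: node02
```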
Hi all,
I'm running several pacemaker clusters in KVM virtual machines (everything
based on Debian 6), and now it's time to configure fencing...
I've found that I have to use fence-virt for that task
(http://www.clusterlabs.org/wiki/Guest_Fencing), but it seems that it only
will work in
On Tue, Oct 2, 2012 at 3:56 PM, Masopust, Christian
christian.masop...@siemens.com wrote:
[...]
On Tue, Oct 2, 2012 at 12:02 AM, Dejan Muhamedagic deja...@fastmail.fm wrote:
Hi,
On Mon, Oct 01, 2012 at 11:30:19AM +, James Harper wrote:
I am trying to configure stonith. ipmilan was a complete disaster as it
kept passing port 623 as the hostname,
Interesting,
so now I'm trying
Hi Michael,
thanks a lot for that link! Will give it a try!!
br,
christian
-Original Message-
From: Michael Schwartzkopff [mailto:mi...@clusterbau.com]
Sent: Tuesday, 02 October 2012 08:22
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] Fencing
On 2012-10-02 07:56, Masopust, Christian wrote:
[...]
I am trying to configure stonith. ipmilan was a complete disaster, as it kept
passing port 623 as the hostname, so now I'm trying external/ipmi.
The stonith resource is running as expected, but I still have
stonith-enabled=false in the cluster configuration. To test before I turn
stonith on
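Even with stonith-enabled=false, the device can be exercised by hand before going live (a sketch; node names are hypothetical):

```shell
# Which stonith devices does the cluster know about?
stonith_admin --list-registered

# Which of them claim they can fence node2?
stonith_admin --list node2

# Actually fence the peer through the stonith stack.
stonith_admin --reboot node2
```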
On Monday 01 October 2012 11:30:19 James Harper wrote:
[...]
Hi,
On Mon, Oct 01, 2012 at 11:30:19AM +, James Harper wrote:
I am trying to configure stonith. ipmilan was a complete disaster as it kept
passing port 623 as the hostname,
Interesting,
so now I'm trying external/ipmi.
but anyway external/ipmi is preferred and better supported.
The
[...]
but it seems that it only will work in case my
Hi Lars,
thank you very much for the deep explanation.
Regards,
Alberto
On 09/10/2012 03:42 PM, Lars Marowsky-Bree wrote:
On 2012-09-10T14:40:43, Alberto Menichetti albmeniche...@tai.it wrote:
Sorry, maybe I'm missing something, but suppose this scenario (also
remember that, being a 2-node
On 09/09/2012 09:53 PM, Lars Marowsky-Bree wrote:
On 2012-09-09T13:30:36, Alberto Menichetti albmeniche...@tai.it wrote:
I've successfully configured and tested the stonith plugin
external/vcenter; but this plugin introduces a single point of
failure in my cluster infrastructure because it
On 2012-09-10T10:45:30, Alberto Menichetti albmeniche...@tai.it wrote:
thank you for the quick response.
Maybe SPOF is not the best definition, but when the vcenter is
unavailable the safety of my data is not guaranteed.
The safety remains guaranteed; the availability of your service wouldn't
On 09/10/2012 12:15 PM, Lars Marowsky-Bree wrote:
[...]
On 2012-09-10T14:40:43, Alberto Menichetti albmeniche...@tai.it wrote:
Sorry, maybe I'm missing something, but suppose this scenario (also
remember that, being a 2-node cluster, I had to set
no-quorum-policy=ignore):
1. the virtual center is unavailable
2. an event occurs that partitions the
Hi all,
I'm setting up a two-node pacemaker cluster (SLES-HA Extension) on
vmware vsphere 5.
I've successfully configured and tested the stonith plugin
external/vcenter; but this plugin introduces a single point of failure
in my cluster infrastructure because it depends on the availability of
On 2012-09-09T13:30:36, Alberto Menichetti albmeniche...@tai.it wrote:
[...]
Hi Alberto,
I think you should set up multiple external/vcenter devices if your problem is
performing stonith when vcenter is down and not usable.
Please refer to the next email and patch.
* http://www.gossamer-threads.com/lists/linuxha/dev/78702
Best Regards,
Hideo Yamauchi.
--- On Sun, 2012/9/9,
Dear Mailinglist,
I'm struggling with fencing: I have three Dell PowerEdge 2850 with DRAC
4/i running CentOS 6.3. Is it possible to use the DRAC as fencing
device? The fence_drac resource seems to be the appropriate choice,
but I can't figure out how to use it in pacemaker since crm ra info
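To get past the `crm ra info` hurdle, the agent's metadata can usually be inspected from either side (a sketch; fence_drac ships with the fence-agents package, and most agents in it support the standard metadata action):

```shell
# Parameter list as the cluster shell sees it:
crm ra info stonith:fence_drac

# Or ask the agent itself for its metadata:
fence_drac -o metadata
```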
- Original Message -
From: Uwe Ritzschke uwe.ritzschk...@cms.hu-berlin.de
To: pacemaker@oss.clusterlabs.org
Sent: Monday, August 6, 2012 8:39:51 AM
Subject: [Pacemaker] Fencing with DRAC 4
[...]
Hi,
Scenario: two physical virtualisation hosts run various KVM-based
virtual machines, managed by Libvirt. Two VMs, one on each host, form a
Pacemaker cluster, say for a simple database server, using DRBD and a
virtual/cluster IP address. Using Ubuntu 10.04 and Pacemaker 1.1.6, with
Corosync
On Tue, Nov 29, 2011 at 6:55 AM, Andreas Ntaflos
d...@pseudoterminal.org wrote:
[...]
28.11.2011 22:55, Andreas Ntaflos wrote:
[...]
On 22/09/2011 12:23, Dejan Muhamedagic wrote:
Hi,
On Thu, Sep 22, 2011 at 10:56:57AM +0200, Fiorenza Meini wrote:
Hi there,
I found this:
http://code.google.com/p/fence-xenserver/wiki/Installation
I installed it on my test cluster system, and from the command line it works
properly, but I have
Hi there,
I found this:
http://code.google.com/p/fence-xenserver/wiki/Installation
I installed it on my test cluster system, and from the command line it works
properly, but I have a problem while configuring it as a primitive with the
crm command.
Has anyone tried it with success?
Thanks and regards
Hi,
On Wed, Jun 15, 2011 at 08:54:36PM -0400, Shravan Mishra wrote:
[...]
Hi all,
We are using iDRAC6 IPMI as the stonith device on our 2-node cluster.
When one node's power cords are yanked out, both main and backup, the
secondary node does not take over as primary, meaning the fencing
operation didn't happen successfully from secondary to the abruptly
Hi,
On Fri, Mar 25, 2011 at 10:58:51AM +0100, Andrew Beekhof wrote:
On Mon, Mar 21, 2011 at 4:06 PM, Pavel Levshin pa...@levshin.spb.ru wrote:
[...]
Hi.
Today we had a network outage. Quite a few problems suddenly arose in
our setup, including crashed corosync, the known notify bug in the DRBD RA, and
some problem with a VirtualDomain RA timeout on stop.
But the fencing behaviour was particularly strange.
Initially, one node (wapgw1-1) was partitioned
On 2011-01-12T22:52:14, Bart Coninckx bart.conin...@telenet.be wrote:
Jan 12 22:20:34 xen2 pengine: [6633]: WARN: unpack_rsc_op: Processing failed
op intranet1_stop_0 on xen1: unknown exec error (-2)
My monitors are set to restart a resource. What makes the PE decide to fence
the node in
On Thursday 13 January 2011 09:51:16 Lars Marowsky-Bree wrote:
[...]
On 2011-01-13T11:08:49, Bart Coninckx bart.conin...@telenet.be wrote:
thx for your answer.
So do I get this straight:
- resource undergoes monitor operation
- monitor reports failure
- a restart of the resource is issued (stop and start)
- stop fails
- PE decides to fence the node because
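That chain of events maps onto per-operation failure policies: a monitor failure triggers a restart, and it is the *failed stop* that escalates to fencing. A sketch showing where each step is configurable (resource parameters are hypothetical):

```shell
# monitor failure -> restart (stop + start); a failed stop is what
# escalates to fencing, so the stop timeout is the critical knob.
crm configure primitive intranet1 ocf:heartbeat:Xen \
    params xmfile=/etc/xen/vm/intranet1 \
    op monitor interval=10 timeout=30 on-fail=restart \
    op stop interval=0 timeout=120 on-fail=fence
```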
On Thursday 13 January 2011 11:13:42 Lars Marowsky-Bree wrote:
[...]
On Thursday 13 January 2011 11:58:03 Lars Marowsky-Bree wrote:
On 2011-01-13T11:48:41, Bart Coninckx bart.conin...@telenet.be wrote:
I notice that you work Novell, this is a SLES11SP1 installation so if the
resource agent for Xen is faulty I guess you know about it?
Yes, I think I'd know
On 2011-01-13 13:16, Bart Coninckx wrote:
[...]
Bart Coninckx wrote:
By the way, things seem better when I change the monitor timeout to 30
seconds instead of 10 seconds. Very strange though, because the resource
agent basically does an “xm list --long” while monitoring, which takes less
than half a second in a console.
I think sometimes
On 2011-01-13T09:30:48, Michael Smith msm...@cbnco.com wrote:
the resource agent basically does a xm list --long while
monitoring, which takes less than half a second in a console.
I think sometimes xend hangs for a while. 30 seconds should be good.
There's a pending fix for this, which
Hi,
I get a lot of fencing on my two node cluster with these messages:
Jan 12 22:20:34 xen2 pengine: [6633]: info: get_failcount: intranet1 has
failed INFINITY times on xen1
Jan 12 22:20:34 xen2 pengine: [6633]: info: get_failcount: intranet1 has
failed INFINITY times on xen1
Jan 12 22:20:34
Hi,
I have a working fencing setup with heartbeat-1:
somehost1:~# grep ^stonith /etc/ha.d/ha.cf
stonith_host * cyclades 172.21.101.79 root 10
So that's the cyclades stonith plugin: the IP address of the Cyclades AlterPath,
the login name for SSH login, and the serial port
Hi,
On Mon, Mar 08, 2010 at 12:00:44PM +0800, Martin Aspeli wrote:
[...]
On Sun, Mar 7, 2010 at 9:00 PM, Martin Aspeli optilude+li...@gmail.com wrote:
Hi,
[...]
Don't forget that to make it reliable you have to backup up by
Hi,
Is this still current? Can anyone point me to any documentation or
examples of configuring iDRAC 6 Enterprise for STONITH, if indeed
it's possible?
It should be possible, but I can't say. Perhaps you can try with
drac5. If that won't do, then somebody has to write a stonith
plugin.
Hi,
On Mon, Mar 08, 2010 at 06:08:37PM +0100, Sander van Vugt wrote:
[...]
Hi,
We have a two-node cluster of Dell servers. They have an iDRAC 6
Enterprise each. The cluster is also backed up by a UPS with a diesel
generator.
I realise on-board devices like the DRAC are not ideal for fencing, but
it's probably the best we're going to be able to do. However, I've