David Coulson da...@davidcoulson.net wrote:
Your configuration seems to have too many moving parts, and since you
are making routing changes when a node becomes primary, it is difficult
to ensure it will actually work based on the monitoring you are doing
while it is passive.
--On Monday, February 13, 2012 11:21:14 AM +0200 Karlis Kisis
karlis.ki...@gmail.com wrote:
In most cluster tutorials, for simplicity, iptables is turned off.
The funny thing is that iptables is exactly what I want to configure in
an HA cluster (as redundant firewalls).
I debated about answering this
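For reference, a common way to put the firewall itself under cluster
control is to clone an init-script resource so the rules load on every
node; a minimal crm sketch, assuming an LSB-compliant
/etc/init.d/iptables script (resource names are hypothetical):

primitive fw lsb:iptables \
  op monitor interval="30s"
# run the firewall on every node instead of failing it over
clone fw-clone fw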
--On Thursday, August 04, 2011 03:21:58 PM +0200 Sebastian Kaps
sebastian.k...@imail.de wrote:
On Thu, 4 Aug 2011 08:31:07 +0200, Tegtmeier.Martin wrote:
in my case it is always the slower ring that fails (the 100Mbit
network). Does rrp_mode passive expect both rings to have the same
speed?
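For reference, rrp_mode lives in the totem section of corosync.conf,
with one interface block per ring; a minimal sketch, all addresses
hypothetical:

totem {
  version: 2
  rrp_mode: passive
  # ring 0: the fast network
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
    mcastaddr: 226.94.1.1
    mcastport: 5405
  }
  # ring 1: the slower 100Mbit network
  interface {
    ringnumber: 1
    bindnetaddr: 192.168.1.0
    mcastaddr: 226.94.1.2
    mcastport: 5407
  }
}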
--On Friday, August 05, 2011 03:57:53 PM +0200 Albéric de Pertat
alberic.deper...@adelux.fr wrote:
I have two nodes with ips 10.0.9.11/24 and 10.0.9.12/24 routed
via 10.0.9.254. I have declared an IPaddr resource for
10.0.10.10/24. So far so good.
primitive ip ocf:heartbeat:IPaddr \
  params ip="10.0.10.10" cidr_netmask="24"
--On Wednesday, July 20, 2011 09:19:33 AM + pskrap psk...@hotmail.com
wrote:
I have a cluster where some of the resources cannot run on the same
node. All resources must be running to provide a functioning service.
This means that a certain number of nodes needs to be up before it makes
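For the cannot-run-together requirement itself, the usual tool is a
negative colocation constraint; a minimal crm sketch with hypothetical
resource names:

# -inf: the two resources may never be placed on the same node
colocation res1-apart-from-res2 -inf: res1 res2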
It may be a bit late given that you've just created your own script,
but you can grab the check-cluster (and maybe check-drbd) scripts from
the gno-cluster-tools package at ftp://ftp.gno.org/pub/tools/cluster-tools
If I have a cluster that is not otherwise monitored, I run those
every three hours
I have both two-node clusters that use drbd with local disk (no SAN)
and a three-node cluster where one node is always in standby and
is just used for quorum.
IMO, if you are using drbd-on-local-disk and have proper fencing
support, the three-node case in its current incarnation is more
trouble than it is worth.
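For reference, the two-node drbd setup normally pairs fencing with a
relaxed quorum policy, since two nodes cannot hold majority quorum
after a failure; a minimal crm sketch:

property stonith-enabled="true" \
  no-quorum-policy="ignore"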
--On Monday, April 11, 2011 02:14:44 PM +0200 Patric Falinder
patric.falin...@omg.nu wrote:
The problem I have is that when I need to migrate/move the resources to
the other node, or unmove them, I get this error message and mysqld
won't start/move properly:
[snip]
primitive mysqld lsb:mysql
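For reference, move/unmove are implemented as a location constraint
that the shell adds and later removes for you; a minimal sketch, the
target node name hypothetical:

# adds a constraint preferring node2; the resource stays pinned there
crm resource migrate mysqld node2
# removes that constraint again ("unmove"), letting the cluster decide
crm resource unmigrate mysqld

A constraint left behind by an earlier move is a common reason a
resource then refuses to start where you expect.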
I ran into similar behavior with an earlier version of glusterfs
on raw disk (not DRBD). In that case it was a bug in gluster:
although the nodes were supposed to be operating in a mirror
configuration, the one remaining node would refuse to service
requests after the other node was
--On Tuesday, March 08, 2011 11:24:50 AM + bikrish amatya
bikris...@hotmail.com wrote:
[snip]
My question is: is it possible to start just res2 without also starting
the other services that follow it in the group (res3, res4, res5),
which have been stopped?
A group implies ordering, so no.
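For reference, a group is shorthand for pairwise colocation and
ordering; to start one member independently you would replace the
group with explicit constraints and make the ordering advisory. A
sketch using the poster's resource names:

colocation res3-with-res2 inf: res3 res2
# advisory (score 0) ordering lets res2 start while res3 stays stopped
order res2-then-res3 0: res2 res3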
Johannes Freygner han...@freygner.at wrote:
*) Yes, and I found the wrong setting:
Excellent.
But if I pull the power cable without a regular shutdown,
the powerless node gets status UNCLEAN (offline) and the
resources remain stopped.
I would contend that is correct behavior: until the node can be fenced,
the cluster cannot tell a dead peer from an unreachable one, so it must
not restart the resources elsewhere.
--On Monday, January 03, 2011 09:14:29 PM +0100 han...@freygner.at wrote:
As I have tested, it's not a problem with the shutdown order. On a
regular shutdown everything works fine; the issue only appears when I
pull the power cable.
So just before pulling the power cable, the running node reports
itself as online
Johannes Freygner han...@freygner.at wrote:
Could somebody give me an idea of the best stonith solution for a drbd
cluster, to avoid split brain if the network between the nodes is lost?
I have already tried stonith with iLO, but if the power cable is
removed from the node
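For reference, later Pacemaker releases can chain fencing devices as
levels, so a PDU (which keeps its power when the node chassis loses
it) can back up iLO; a crmsh sketch with hypothetical device ids:

# try the iLO device first; fall back to the PDU if that fails
fencing_topology st-ilo st-apc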
Johannes Freygner han...@freygner.at wrote:
You mean it will work fine with corosync? I am using heartbeat instead.
I suspect that it's a similar situation with heartbeat. The problem is
pacemaker losing communication before the node cleanly disconnects.
The behavior I saw on my own
--On Tuesday, November 23, 2010 10:21:04 AM +0100 Marko Potocnik
marko.potoc...@gmail.com wrote:
I'm using ftp just for testing. I want the service to run on both nodes
and only the IP to move if the service fails.
I don't want to stop/start the service when a node fails.
You might be able to use the
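For reference, the usual pattern for that is a cloned service plus a
floating IP colocated with a running instance; a minimal crm sketch,
all names and the address hypothetical:

primitive ftpd lsb:vsftpd \
  op monitor interval="30s"
# run the daemon on every node
clone ftpd-clone ftpd
primitive vip ocf:heartbeat:IPaddr2 \
  params ip="192.168.0.100" cidr_netmask="24"
# keep the address wherever a healthy ftpd instance is running
colocation vip-with-ftpd inf: vip ftpd-clone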
--On Wednesday, October 27, 2010 09:47:14 AM +0200 Pavlos Parissis
pavlos.paris...@gmail.com wrote:
I have an APC AP9606 PDU and I am trying to find a stonith agent which
works with that PDU.
I know that this is an old thread, but I'll reply anyway.
I have one cluster that uses an old APC
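For reference, the AP9606 is APC's SNMP management card, so the
cluster-glue apcmastersnmp plugin may be the right fit; a minimal crm
sketch, address and community string hypothetical:

primitive st-apc stonith:apcmastersnmp \
  params ipaddr="10.0.0.5" port="161" community="private" \
  op monitor interval="3600s"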
--On Monday, November 15, 2010 08:40:45 AM -0700 Rick Cone
rc...@securepaymentsystems.com wrote:
In production I am planning to have two separate AP7900 units, each
plugged into a different APC UPS, to achieve that. I would then have
the single node name on each, for each of the two power supplies on
Rick Cone rc...@securepaymentsystems.com wrote:
Perhaps I'll just use one outlet with the node name, feeding a power
splitter to the two redundant power supplies, to reduce the chances of
problems.
IMO, if you're going to use a chassis with redundant power supplies,
you're better off with a system
Andrew Beekhof and...@beekhof.net wrote:
Do any other children start up?
None.
Where is the mgmtd binary installed to?
/usr/lib64/heartbeat/mgmtd
I was following the instructions for a new installation of corosync
and wanted to make use of hb_gui, so after an installation via yum per
the docs, I built Pacemaker-Python-GUI-pacemaker-mgmt-2.0.0 from
source.
Starting corosync works normally without mgmtd in the picture, but as
soon as