Hi
I am trying to use heartbeat 2.1.3 with the 1.0.3 resource agents to move an
iSCSI (LIO) target between two machines. The two machines are named sedona and
toltec. Sedona is an HP DL180g5 and toltec is a whitebox with an ASUS P7P55D
motherboard. Each machine has three NICs (management,
On Thu, May 06, 2010 at 08:50:21AM +0200, Michael Schwartzkopff wrote:
On Thursday, 6 May 2010 01:22:13, Dimitri Maziuk wrote:
Cameron Smith wrote:
Yeah!
Since I already have that in place for http and mysql I just wanted to
know if there was anything unique I need to do for
On Friday, 7 May 2010 17:06:44, Lars Ellenberg wrote:
On Thu, May 06, 2010 at 08:50:21AM +0200, Michael Schwartzkopff wrote:
On Thursday, 6 May 2010 01:22:13, Dimitri Maziuk wrote:
Cameron Smith wrote:
Yeah!
Since I already have that in place for http and mysql I just wanted
Michael Schwartzkopff wrote:
The only reason to do a postfix cluster is to deliver locally queued mail
after
a failover.
Ah! That's what I didn't think of.
In theory you could restart postfix w/ different config files: send
only on the passive node and full setup on the active.
Dima
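The restart-with-different-configs idea could be wrapped in a small helper that the heartbeat resource script calls on takeover and release. A minimal sketch; the config file names and paths are hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: pick a Postfix main.cf for the current cluster role.
# "active"  = node holding the cluster IP (full setup),
# "passive" = standby node (send-only configuration).
choose_postfix_config() {
    case "$1" in
        active)  echo "/etc/postfix/main-active.cf" ;;
        passive) echo "/etc/postfix/main-passive.cf" ;;
        *)       echo "usage: choose_postfix_config active|passive" >&2
                 return 1 ;;
    esac
}

# A resource script would then install the chosen file and reload, e.g.:
#   cp "$(choose_postfix_config active)" /etc/postfix/main.cf
#   postfix reload
```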
On Thursday, 6 May 2010 01:22:13, Dimitri Maziuk wrote:
Cameron Smith wrote:
Yeah!
Since I already have that in place for http and mysql I just wanted to
know if there was anything unique I need to do for postfix config for
when it is running on primary (managed by heartbeat) and how
On Tue, May 4, 2010 at 9:30 PM, Florian Haas florian.h...@linbit.com wrote:
On 05/04/2010 07:30 PM, Lars Marowsky-Bree wrote:
On 2010-04-25T11:39:10, Smaïne Kahlouch smain...@free.fr wrote:
Do we have to move from Heartbeat to OpenAIS? Now or in the future?
What are the differences between
On 05/06/2010 08:59 AM, Andrew Beekhof wrote:
About the only time I start heartbeat is for a few days before a release.
And even then only for 1.0 releases, 1.1 is only tested against corosync.
Probably true, though the amount of testing LINBIT does on both
messaging layers is quite
On Thu, May 6, 2010 at 9:29 AM, Florian Haas florian.h...@linbit.com wrote:
On 05/06/2010 08:59 AM, Andrew Beekhof wrote:
About the only time I start heartbeat is for a few days before a release.
And even then only for 1.0 releases, 1.1 is only tested against corosync.
Probably true, though
Cameron Smith wrote:
Yeah!
Since I already have that in place for http and mysql I just wanted to know
if there was anything unique I need to do for postfix config for when it is
running on primary (managed by heartbeat) and how do I handle the sending of
system emails on the secondary
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to also manage Postfix with Heartbeat.
What things should I keep in mind in the configuration of Postfix so that
mail services are tied to the IP managed by Heartbeat rather than the IPs
of each
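For tying mail services to the cluster-managed address, the usual knobs are Postfix's inet_interfaces and smtp_bind_address. A sketch, assuming a hypothetical cluster IP of 192.168.100.100:

```
# /etc/postfix/main.cf (fragment) -- addresses are examples
inet_interfaces = 127.0.0.1, 192.168.100.100   # listen on loopback + cluster IP
smtp_bind_address = 192.168.100.100            # outbound mail leaves via the cluster IP
queue_directory = /var/spool/postfix           # place on the DRBD device so the
                                               # mail queue fails over with the service
```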
On Tue, 4 May 2010, Cameron Smith wrote:
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to also manage Postfix with Heartbeat.
What things should I keep in mind in the configuration of Postfix so that
mail services are tied to the IP
On 2010-04-25T11:39:10, Smaïne Kahlouch smain...@free.fr wrote:
Do we have to move from Heartbeat to OpenAIS? Now or in the future?
What are the differences between these two projects?
Will the heartbeat project continue or will it be replaced by
OpenAIS/Corosync?
In addition to what Florian
On Tuesday, 4 May 2010 17:45:34, Cameron Smith wrote:
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to also manage Postfix with Heartbeat.
What things should I keep in mind in the configuration of Postfix so that
mail services are tied
On Tue, May 4, 2010 at 10:17 AM, David Lang
david.l...@digitalinsight.com wrote:
On Tue, 4 May 2010, Cameron Smith wrote:
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to also manage Postfix with Heartbeat.
What things should I keep
On Tuesday, 4 May 2010 20:19:02, Cameron Smith wrote:
On Tue, May 4, 2010 at 10:17 AM, David Lang
david.l...@digitalinsight.com wrote:
On Tue, 4 May 2010, Cameron Smith wrote:
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to
On Tue, May 4, 2010 at 11:26 AM, Michael Schwartzkopff mi...@multinet.de wrote:
On Tuesday, 4 May 2010 20:19:02, Cameron Smith wrote:
On Tue, May 4, 2010 at 10:17 AM, David Lang
david.l...@digitalinsight.com wrote:
On Tue, 4 May 2010, Cameron Smith wrote:
I am currently using
On 05/04/2010 07:30 PM, Lars Marowsky-Bree wrote:
On 2010-04-25T11:39:10, Smaïne Kahlouch smain...@free.fr wrote:
Do we have to move from Heartbeat to OpenAIS? Now or in the future?
What are the differences between these two projects?
Will the heartbeat project continue or will it be replaced
On Tue, 4 May 2010, Cameron Smith wrote:
On Tue, May 4, 2010 at 10:17 AM, David Lang
david.l...@digitalinsight.com wrote:
On Tue, 4 May 2010, Cameron Smith wrote:
I am currently using Heartbeat to manage http, mysql and a DRBD device
between two nodes.
I want to also manage Postfix with
Hi everyone,
I know the question has already been asked.
see http://lists.linux-ha.org/pipermail/linux-ha/2009-March/036725.html
The answer is, however, not clear to me and I would like to have other
opinions.
Do we have to move from Heartbeat to OpenAIS? Now or in the future?
What are the
On 04/25/2010 11:39 AM, Smaïne Kahlouch wrote:
Hi everyone,
I know the question has already been asked.
see http://lists.linux-ha.org/pipermail/linux-ha/2009-March/036725.html
The answer is, however, not clear to me and I would like to have other
opinions.
Do we have to move from
On Wed, 2010-04-14 at 16:24 -0700, Stephen Punak wrote:
Heartbeat appears to start just fine on all nodes, but none of them see each
other.
Any chance there is a firewall blocking the heartbeat packets? You'd
still see them with wireshark, but they would be blocked from getting to
the
I'm having a very strange problem trying to get a cluster running.
I have a cluster of three nodes each running in their own VirtualBox Fedora 11
guest,
within a Fedora 11 host.
Heartbeat appears to start just fine on all nodes, but none of them see each
other.
I have run Wireshark on all
Hi all,
I have done everything, but failover works only when I use service heartbeat stop;
when I unplug the Ethernet cable from node 1 it doesn't work. What's wrong? Do I
have to add something in ha.cf?
Thanks in advance
--
朱良晓 - Liang Xiao Zhu
Business Development Asia
Hi,
as already mentioned x times, we don't have any crystal ball ;-)
- Version, configuration, logs ...???
Nikita Michalko
On Monday, 15 March 2010 10:01, Liang Xiao Zhu wrote:
Hi all,
I have done everything, but failover works only when I use service heartbeat
stop; when I unplug the
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-operation-defaults.html#s-operation-timeouts
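The linked page covers per-operation timeout defaults; as a sketch for the slow-starting mysql case described below, a longer start timeout can be set on the resource (values are examples only, resource parameters omitted):

```
# crm shell (Pacemaker 1.0) -- example values
crm configure primitive mysql ocf:heartbeat:mysql \
    op start timeout=240s \
    op monitor interval=30s timeout=60s
```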
On Tue, Mar 9, 2010 at 4:35 PM, Carlos Eduardo Chiriboga Calderon
cchirib...@palosanto.com wrote:
Hi everybody,
I have a serious problem with my cluster: Sometimes, the
Hi everybody,
I have a serious problem with my cluster: sometimes the load average
on my server is very high, and when the cluster tries to start the
mysql resource, I get the following lines in the log:
Mar 9 01:11:02 nodo1 lrmd: [1935]: info: rsc:mysql: start
Mar 9 01:12:01 nodo1
Hi,
I'm trying to upgrade from Heartbeat 2.1.4 (CRM-enabled) on CentOS 5
(using the epel packages) to the latest heartbeat 3.0.2 using packages
from http://www.clusterlabs.org/rpm/ but am running into a "Digest
comparison failed:" error when bringing up the new version.
I'm following the user's
On Fri, February 19, 2010 00:05, Brian Witt wrote:
Hi,
I'm trying to upgrade from Heartbeat 2.1.4 (CRM-enabled) on CentOS 5
(using the epel packages) to the latest heartbeat 3.0.2 using packages
from http://www.clusterlabs.org/rpm/ but am running into a "Digest
comparison failed:" error
Hi,
we had our HA server installation externally audited for potential
security problems. The report recommends that no processes but dedicated
server process should listen on external interfaces reachable through
the internet.
Well, this is quite a common security measure. However I was not
On Mon, Jan 25, 2010 at 08:56:52PM -0500, Carlos Eduardo Chiriboga Calderon
wrote:
Hi all,
I'm trying to configure a heartbeat cluster with 2 nodes and a drbd
filesystem (the cib.xml is attached to this message).
The problem is that, when I try to move the resource from a node to
On Sat, Jan 23, 2010 at 10:24 AM, Ilo Lorusso sneak...@gmail.com wrote:
Hi,
I'm busy setting up an Active/Passive cluster using the versions of
software listed below. Now I have a few questions; hopefully the answers
will solve the problems I'm experiencing.
heartbeat-3.0.1-1.el5
Hi,
I'm busy setting up an Active/Passive cluster using the versions of
software listed below. Now I have a few questions; hopefully the answers
will solve the problems I'm experiencing.
heartbeat-3.0.1-1.el5
pacemaker-1.0.7-2.el5
corosync-1.2.0-1.el5
First off I get heartbeat up and running,
hi,all
Can anyone tell me where Heartbeat for openSUSE 10.2 is?
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
On Thursday, 21 January 2010 10:30:20, pqy_java_web wrote:
hi,all
Can anyone tell me where Heartbeat for openSUSE 10.2 is?
no heartbeat any more. See www.clusterlabs.org
--
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Address: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 -
Hi,
I have two linux machines, machine A and machine B. I currently have
heartbeat running between them successfully. I recently created a VLAN
interface with VCONFIG on A, eth0.10.
My aim is to set up heartbeat so that if eth0.10 on A goes down, eth0.10
on B is created with the same IP. What
Hi,
Can anyone point me to the source code of the location of GUI client? I
downloaded the pre-compiled GUI client for Redhat Enterprise server from
opensuse site but I got memory fault message when I launch it.
# hb_gui
Memory fault
Thanks in advance.
Ryan
Hi everyone,
as with the previous RCs, we're currently trailing a few days behind the
Heartbeat 3.0.2 release originally planned for this week. We've had no
functional changes in the Heartbeat code base, but some in Glue and
Agents (and of course Andrew released Pacemaker 1.0.7), and we want to
Hi,
On Wed, Jan 20, 2010 at 10:26:17AM -0500, Ruiyuan Jiang wrote:
Hi,
Can anyone point me to the source code of the location of GUI
client? I downloaded the pre-compiled GUI client for Redhat
Enterprise server from opensuse site but I got memory fault
message when I launch it.
#
...@lists.linux-ha.org] On Behalf Of Dejan Muhamedagic
Sent: Wednesday, January 20, 2010 11:13 AM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Heartbeat GUI Client
Hi,
On Wed, Jan 20, 2010 at 10:26:17AM -0500, Ruiyuan Jiang wrote:
Hi,
Can anyone point me to the source code
On Fri, Jan 15, 2010 at 01:22:45AM -0500, David Sickmiller wrote:
I don't have autojoin in my ha.cf, and I believe it defaults to
autojoin none, so that wouldn't explain why heartbeat keeps
waiting
after all nodes have joined.
True. That should be fixed. Can you please open a
Of Dejan
Muhamedagic
Sent: Tuesday, January 12, 2010 3:51 AM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] heartbeat waits for initdead even after all
nodes have joined
Hi,
On Mon, Jan 11, 2010 at 03:21:05PM -0500, David Sickmiller wrote:
Hi,
I was hoping
On Thu, Jan 14, 2010 at 01:57:31PM +0100, Dejan Muhamedagic wrote:
Hi,
On Wed, Jan 13, 2010 at 07:55:28PM -0500, David Sickmiller wrote:
I don't have autojoin in my ha.cf, and I believe it defaults to
autojoin none, so that wouldn't explain why heartbeat keeps waiting
after all nodes
I don't have autojoin in my ha.cf, and I believe it defaults to
autojoin none, so that wouldn't explain why heartbeat keeps
waiting
after all nodes have joined.
True. That should be fixed. Can you please open a bugzilla for
this issue,
Thanks for your help! I've filed this as Bug 2311
: Tuesday, January 12, 2010 3:51 AM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] heartbeat waits for initdead even after all
nodes have joined
Hi,
On Mon, Jan 11, 2010 at 03:21:05PM -0500, David Sickmiller wrote:
Hi,
I was hoping to configure my 2-node cluster to start as soon
Hi,
On Mon, Jan 11, 2010 at 03:21:05PM -0500, David Sickmiller wrote:
Hi,
I was hoping to configure my 2-node cluster to start as soon as both
nodes were present but wait up to 15 minutes if the other node was
missing upon system startup. In my case, a delay of several minutes is
Hi,
I was hoping to configure my 2-node cluster to start as soon as both
nodes were present but wait up to 15 minutes if the other node was
missing upon system startup. In my case, a delay of several minutes is
better than a split-brain scenario. The Linux-HA documentation says
The initdead
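In V1-style ha.cf the startup wait is controlled by initdead; a sketch matching the 15-minute figure above (note the thread later reports heartbeat waiting the full initdead even after all nodes have joined, filed as Bug 2311):

```
# /etc/ha.d/ha.cf (fragment)
keepalive 2      # heartbeat interval in seconds
deadtime 30      # declare a peer dead after 30s in normal operation
initdead 900     # at boot, allow up to 15 minutes for the other node to appear
```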
Hi all,
I need to get information about how to configure a lotus domino
cluster with two nodes using heartbeat and drbd, I was searching in
google but I haven't found anything :(
Who can give me any reference or directive to do it?
Best regards,
Carlos Chiriboga Calderon
--- On Fri, 1/8/10, Carlos Eduardo Chiriboga Calderon
cchirib...@palosanto.com wrote:
From: Carlos Eduardo Chiriboga Calderon cchirib...@palosanto.com
Subject: [Linux-HA] Heartbeat and Lotus Domino?
To: General Linux-HA mailing list linux-ha@lists.linux-ha.org
Date: Friday, January 8, 2010
...@palosanto.com
Subject: [Linux-HA] Heartbeat and Lotus Domino?
To: General Linux-HA mailing list linux-ha@lists.linux-ha.org
Date: Friday, January 8, 2010, 1:21 PM
Hi all,
I need to get information about how to configure a lotus
domino
cluster with two nodes using heartbeat and drbd, I
Hi Folks,
I wrote a simple client using the Heartbeat client API and built it on
top of heartbeat 3.0.
My client forms a simple private multi-node cluster and supports
autojoin.
--%--segment of my ha.cf--
autojoin any
apiauth myclient uid=root
respawn root
Hi,
On Tue, Jan 05, 2010 at 06:50:38PM +0800, Javen Wu wrote:
Hi Folks,
I wrote a simple client using the Heartbeat client API and built it on
top of heartbeat 3.0.
My client forms a simple private multi-node cluster and supports
autojoin.
--%--segment of my
The system rebooted automatically after I set crm to yes :(
%===
Jan 05 19:08:52 na40-58 crmd: [6104]: info: crmd_init: Starting crmd
Jan 05 19:08:52 na40-58 crmd: [6104]: info: G_main_add_SignalHandler: Added
signal handler for signal 17
Jan 05 19:08:52 na40-58 heartbeat: [6091]: WARN:
Hi,
On Tue, Jan 05, 2010 at 07:15:24PM +0800, Javen Wu wrote:
The system rebooted automatically after I set crm to yes :(
Use crm respawn instead of crm yes then.
%===
Jan 05 19:08:52 na40-58 crmd: [6104]: info: crmd_init: Starting crmd
Jan 05 19:08:52 na40-58 crmd: [6104]: info:
Hi Dejan,
Actually I don't need CCM; I tried to write my private CCM, just leveraging
Heartbeat's messaging channel and heartbeat function.
I want my own private membership management. Is that possible? I think the heartbeat
layer supports multiple nodes, which means heartbeat can sendclustermsg() to
Hi,
On Tue, Jan 05, 2010 at 07:35:26PM +0800, Javen Wu wrote:
Hi Dejan,
Actually I don't need CCM; I tried to write my private CCM, just leveraging
Heartbeat's messaging channel and heartbeat function.
I see.
I want my own private membership management. Is that possible?
Definitely.
I think
On Tue, Jan 5, 2010 at 12:35 PM, Javen Wu wu.ja...@gmail.com wrote:
Hi Dejan,
Actually I don't need CCM; I tried to write my private CCM
Ok, I'm officially scared.
What are you trying to achieve here?
just leveraging
Heartbeat's messaging channel and heartbeat function.
I want my own private
I have configured heartbeat with one virtual IP resource for each node and
configured a few LVM partitions for each node as per the requirement.
Tested the failover of the IP and LVM partitions; it is working fine.
Now I want to configure NFS Server resource on nodeA and proper mount points -
Kamran Hanif wrote:
Please add the following in both of your nodes before the ping line and then
try again.
respawn hacluster /usr/lib/heartbeat/ipfail
I hope it should work.
this works, if I add a second interface on each node only for ipfail and
one interface connection goes down.
But in
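Pulling the pieces of this thread together, an ipfail setup needs at least one ping node in addition to the respawn line; a minimal sketch (the gateway address 10.10.10.1 is taken from the test configuration quoted elsewhere in this thread):

```
# /etc/ha.d/ha.cf (fragment)
ping 10.10.10.1                              # pseudo cluster member to ping
respawn hacluster /usr/lib/heartbeat/ipfail  # fail over when connectivity is lost
auto_failback on
```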
If yes, which options should I use?
Pingd
# cat /etc/ha.d/haresources
node1 IPaddr2::192.168.100.100 drbddisk::r1
Filesystem::/dev/drbd0::/drbd::ext3::defaults
That's V1 style; you should consider moving to V2.
V2 with XML config files is a little bit ugly, but I will try it.
Thank
On Thu, Dec 10, 2009 at 2:21 PM, thomas polnik linux...@polnik.de wrote:
If yes, which options should I use?
Pingd
# cat /etc/ha.d/haresources
node1 IPaddr2::192.168.100.100 drbddisk::r1
Filesystem::/dev/drbd0::/drbd::ext3::defaults
That's V1 style; you should consider moving to V2.
V2
Hi,
can heartbeat detect a network failure? If yes, which options should I use?
my test configuration:
- 2 Servers (ubuntu) with heartbeat and drbd.
node1: primary node
node2: secondary node
gateway for both nodes: 10.10.10.1 (pseudo member for this cluster)
drbd: configuration as
can heartbeat detect a network failure?
Yes
If yes, which options should I use?
Pingd
# cat /etc/ha.d/haresources
node1 IPaddr2::192.168.100.100 drbddisk::r1
Filesystem::/dev/drbd0::/drbd::ext3::defaults
That's V1 style; you should consider moving to V2.
Regards
Muhammad Sharfuddin
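For the pingd answer above, a sketch of what the pieces look like; the attribute name and scores are the usual defaults, but treat the exact flags as an assumption to verify against your version:

```
# /etc/ha.d/ha.cf (fragment)
ping 10.10.10.1
respawn root /usr/lib/heartbeat/pingd -m 100 -d 5s

# CRM (V2) location rule: keep the resource off nodes with no connectivity
location ip-on-connected cluster_ip \
    rule -inf: not_defined pingd or pingd lte 0
```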
-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of
alex handle
Sent: November 27, 2009 5:39 AM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Heartbeat + DRBD + NFSv4 automatic failover problem
I have setup a basic two node cluster for apache failover.
Ha.cf
use_logd yes
logfacility daemon
node hb1
node hb2
keepalive 3
warntime 5
deadtime 10
auto_failback on
ucast eth0 172.16.101.239
ucast eth0 172.16.101.136
crm yes
Then the haresources file which I put the
, 2009 3:20 PM
To: linux-ha
Subject: [Linux-HA] Heartbeat and Apache
I have setup a basic two node cluster for apache failover.
Ha.cf
use_logd yes
logfacility daemon
node hb1
node hb2
keepalive 3
warntime 5
deadtime 10
auto_failback on
ucast eth0 172.16.101.239
ucast eth0
Hello everyone,
Standing in for Lars Ellenberg and as part of our duties as maintainers
of the Heartbeat code base, here is our schedule for the upcoming
Heartbeat 3.0.2 release.
[Please note: 3.0.2 will be the first official 3.0 release. We chose
3.0.2 to avoid versioning conflicts with
On Thu, Nov 19, 2009 at 12:33:09PM +0100, Tomasz Chmielewski wrote:
When one PostgreSQL server fails, the setup will still work fine. When
the failed PostgreSQL instance is back, the data should be first
synchronized from the running PostgreSQL server to a server which was
failed a while ago.
On Fri, Nov 20, 2009 at 5:47 PM, Jason Maur jm...@dawsoncollege.qc.ca wrote:
I have used the exact same setup and I didn't find a solution to the
problem, and there was also
a bug with NFSv4 locking
https://bugzilla.redhat.com/show_bug.cgi?id=524520 so I switched to
NFSv3
and now the failover
Actually,
The heartbeat API will allow you to get notified when you lose a
single link or it recovers - not just the whole node.
Quoting Dejan Muhamedagic deja...@fastmail.fm:
Hi,
On Thu, Nov 19, 2009 at 12:33:09PM +0100, Tomasz Chmielewski wrote:
Dejan Muhamedagic wrote:
When one
On Thu, Nov 19, 2009 at 6:53 PM, Jason Maur jm...@dawsoncollege.qc.ca wrote:
Not sure if this is a problem per se, but here's my situation:
I have a cluster set up with CentOS + Heartbeat v1 + DRBD + NFSv4. When I
failover from one node to the other (by stopping the heartbeat service on
I have used the exact same setup and I didn't find a solution to the
problem, and there was also
a bug with NFSv4 locking
https://bugzilla.redhat.com/show_bug.cgi?id=524520 so I switched to
NFSv3,
and now the failover time is about 4 seconds :)
Thanks for the reply Alex,
I switched over to
Dejan Muhamedagic wrote:
When one PostgreSQL server fails, the setup will still work fine. When
the failed PostgreSQL instance is back, the data should be first
synchronized from the running PostgreSQL server to a server which was
failed a while ago.
It is best if such a script could be
Hi,
On Thu, Nov 19, 2009 at 12:33:09PM +0100, Tomasz Chmielewski wrote:
Dejan Muhamedagic wrote:
When one PostgreSQL server fails, the setup will still work fine. When
the failed PostgreSQL instance is back, the data should be first
synchronized from the running PostgreSQL server to a
Not sure if this is a problem per se, but here's my situation:
I have a cluster set up with CentOS + Heartbeat v1 + DRBD + NFSv4. When I
failover from one node to the other (by stopping the heartbeat service on the
primary node), I get these messages in /var/log/messages after starting the
Hi,
On Tue, 17 Nov 2009, Casey Allen Shobe wrote:
Nov 17 12:43:45 radha heartbeat: [28049]: WARN: No STONITH device
configured.
Do not use it without STONITH. I assume you have STONITH enabled for the cluster
in crm_config but no device configured.
And update to the latest version.
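The two crm_config states mentioned above look roughly like this; the IPMI device and its parameters are purely illustrative:

```
# Test setups only: disable STONITH explicitly
crm configure property stonith-enabled=false

# Preferred: configure an actual device, e.g. IPMI (example parameters)
crm configure primitive st-node1 stonith:external/ipmi \
    params hostname=nodo1 ipaddr=192.168.1.50 userid=admin passwd=secret
```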
On Wed, Nov 18, 2009 at 6:45 AM, Rolf Schmidt rolf.schm...@novell.comwrote:
Do not use it without STONITH. I assume you have STONITH enabled for the
cluster in crm_config but no device configured.
I don't intentionally have any stonith configured - I don't have any
hardware to enable this. I
Hi,
On Sun, Nov 15, 2009 at 09:09:53PM +0100, Tomasz Chmielewski wrote:
I have two nodes, node_1 and node_2.
node_2 was down, but is now up.
How can I execute a custom script on node_1 when it detects that node_2
is back?
That's not possible. What would you want to do with that script?
On 11/15/2009 09:09 PM, Tomasz Chmielewski wrote:
I have two nodes, node_1 and node_2.
node_2 was down, but is now up.
How can I execute a custom script on node_1 when it detects that node_2
is back?
This is a little off the heartbeat list I guess, but we use Nagios to
monitor our
Dejan Muhamedagic wrote:
Hi,
On Sun, Nov 15, 2009 at 09:09:53PM +0100, Tomasz Chmielewski wrote:
I have two nodes, node_1 and node_2.
node_2 was down, but is now up.
How can I execute a custom script on node_1 when it detects that node_2
is back?
That's not possible. What would you
Tomasz Chmielewski wrote:
Dejan Muhamedagic wrote:
Hi,
On Sun, Nov 15, 2009 at 09:09:53PM +0100, Tomasz Chmielewski wrote:
I have two nodes, node_1 and node_2.
node_2 was down, but is now up.
How can I execute a custom script on node_1 when it detects that node_2
is back?
That's not
Hi,
On Mon, Nov 16, 2009 at 02:39:52PM +0100, Dominik Klein wrote:
Tomasz Chmielewski wrote:
Dejan Muhamedagic wrote:
Hi,
On Sun, Nov 15, 2009 at 09:09:53PM +0100, Tomasz Chmielewski wrote:
I have two nodes, node_1 and node_2.
node_2 was down, but is now up.
How can I execute
Hello,
About 2 years ago I implemented heartbeat 2.1.3 with crm (now
pacemaker). Now, I find myself having to start a new project which
needs HA capability and the monitoring of a few services in the
cluster. I began digging into the state of things now and became
confused fairly quickly. The
Hi,
I'm having some trouble setting up a new cluster system with DRBD.
I'm using CentOS 5.4, heartbeat-2.1.3-3.el5.centos and drbd83-8.3.2-6.el5_3.
This is my config:
<resources>
  <master_slave id="ms_drbd_mail">
    <meta_attributes id="ms_drbd_mail-meta_attributes">
      <attributes>
        <nvpair
Hi List,
We are running heartbeat 2.1.3-3 on CentOS 5.3 using old-style heartbeat 1.x
configs. We are running mysql 5.0.77 on top of DRBD. What we have seen is
that when mysqld does a crash recovery, heartbeat thinks the service is
failing to start and ends up switching to the other node. The
On Wednesday, 4 November 2009 16:24:41, Testuser SST wrote:
Hi,
I'm having some trouble setting up a new cluster system with DRBD.
I'm using CentOS 5.4, heartbeat-2.1.3-3.el5.centos and
drbd83-8.3.2-6.el5_3. This is my config:
<resources>
  <master_slave id="ms_drbd_mail">
Paolo Pisati wrote:
Dear guys,
I have a small problem with an NFS/DRBD/heartbeat cluster: basically the
secondary node (which was previously promoted
to primary) is unable to release the resources (IP/DRBD) gracefully
when the primary comes up again, and reboots.
I know there's resource
On Oct 21, 2009, at 5:44 AM, Paolo Pisati wrote:
Paolo Pisati wrote:
Dear guys,
I have a small problem with an NFS/DRBD/heartbeat cluster: basically
the
secondary node (which was previously promoted
to primary) is unable to release the
resources (IP/DRBD) gracefully when the primary comes up again, and reboots.
Alex Dean wrote:
This is mentioned in the HaNFS tutorial. See #3 in the 'Hints' section.
http://www.linux-ha.org/HaNFS
thanks, didn't know about that document.
On Sun, Oct 18, 2009 at 3:20 PM, Meir Chanan meir.cha...@gmail.com wrote:
So my question is - Does Linux HA R2-like support such configuration of
mysql master-master (Active-Active) cluster with floating VIP
(Active-Passive) ?
I believe so.
1. How do I configure it ?
Try:
Dear guys,
I have a small problem with an NFS/DRBD/heartbeat cluster: basically the
secondary node (which was previously promoted
to primary) is unable to release the resources (IP/DRBD) gracefully
when the primary comes up again, and reboots.
I know there's resource stickiness with pacemaker, but
Hi,
We have mysql-5.1.32 master-master replication cluster with floating VIP,
which determines which is the actual master that gets the traffic.
To manage the floating VIP we use Linux-HA R1-style, in Active-Passive mode.
MySQL is not managed by HA, since R1-style HA does not support
second repeat :-)
It works now.
As you've said, it was a firewall problem, where the UDP ports below
1024 were blocked for all interfaces, so eth2 was also affected.
I've learnt today that ngrep reads the network data before it may be
filtered by iptables.
Best regards
Peter
Lars Ellenberg
On Sun, Sep 27, 2009 at 07:07:16PM +0200, Peter P GMX wrote:
The strange thing is: we can see in the ngrep logs below that both
machines receive communication on UDP port 694.
That was the first thing I cross-checked, as this is not the first machine we
have set up successfully.
And if the firewall lets
The strange thing is: we can see in the ngrep logs below that both
machines receive communication on UDP port 694.
That was the first thing I cross-checked, as this is not the first machine we
have set up successfully.
And if the firewall lets through the messages on port 694, as we can see
on both machines,
Some more info:
I compared the open ports to another HA installation which works and
found a difference.
The working HA cluster has another port open for heartbeat:
Proto Recv-Q Send-Q Local Address Foreign Address
State User Inode PID/Program name
raw
This kind of problem is nearly /always/ a firewall problem.
People tell me they don't have one. I repeat my advice. This repeats 2
or 3 times and eventually they find the problematic firewall - and
either open port 694 on it, or shut the firewall off. Then the problem
goes away. I cannot count
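The debugging loop described above usually comes down to three commands; run them on both nodes (the interface name and the decision to open the port are assumptions for your environment):

```
# Do heartbeat packets (UDP 694 by default) arrive on the wire at all?
tcpdump -ni eth0 udp port 694

# Is a firewall rule counting drops for that port?
iptables -L INPUT -n -v | grep 694

# If so, permit the heartbeat port (or disable the firewall for a test):
iptables -I INPUT -p udp --dport 694 -j ACCEPT
```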
Hello,
This is the first cluster I have set up with heartbeat version 2.1.3.
Some setups with older versions worked fine.
However, I have a problem with a 2-node cluster under Ubuntu Server 8.04.3:
Machine fs2 comes up fine, starts 2 shared IPs and mysql.
Then heartbeat on Machine fs3 is started. It
On Thursday, 24 September 2009 20:14:22, Peter P GMX wrote:
Hello,
This is the first cluster I have set up with heartbeat version 2.1.3.
Some setups with older versions worked fine.
Do yourself a favor and do not use that old version any more. It's buggy.
However I have a problem with a 2
Hello Michael,
I updated to 2.99-3 (pacemaker-heartbeat package for Ubuntu Hardy) and
still have the same behaviour. FS3 still ignores that FS2 is up and vice
versa.
(Btw: for auth method CRC it complained that a shared secret is not
valid, so I shouldn't use one).
On each server I can
ping
Hello
I've been trying to get this working for two days now, but ldirectord
somehow does not work. I had no problem with it on older Heartbeat 2. I hope
you can give me a hint.
My setup:
- CentOS 5.3
- HA packages from
http://download.opensuse.org/repositories/server:/ha-clustering/RHEL_
$releasever/: