Hi guys,
I have a simple 2 node cluster with a VIP running on RHEL 5.3 on s390.
Nothing else configured yet.
When I start up the cluster, all is well. The VIP starts up on the home
node and crm_mon shows the resource and nodes as online. No errors in
the logs.
If I issue service heartbeat
is to reboot the node.
thanks
Marian Marinov wrote:
On Saturday 20 March 2010 03:56:27 mike wrote:
Hi guys,
I have a simple 2 node cluster with a VIP running on RHEL 5.3 on s390.
Nothing else configured yet.
When I start up the cluster, all is well. The VIP starts up on the home
node and crm_mon
</constraints>
</configuration>
</cib>
Marian Marinov wrote:
Can you please give us your crm configuration ?
Marian
On Sunday 21 March 2010 23:30:46 mike wrote:
Thank you Marian. I removed the file as you suggested but unfortunately
it has made no difference. The ip address is simply
</rsc_location>
</constraints>
<rsc_defaults/>
<op_defaults/>
</configuration>
</cib>
Marian Marinov wrote:
Hello mike,
Your problem is pretty simple: you simply haven't configured an IPaddr
resource within Pacemaker.
Please look here for IPaddr:
http://www.linux-ha.org/doc/re-ra
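For a quick start, a minimal primitive looks something like this in the
crm shell (only a sketch - adjust the resource name, address, netmask
and interface to your setup):

crm configure primitive vip ocf:heartbeat:IPaddr2 \
        params ip=192.168.1.100 cidr_netmask=24 nic=eth0 \
        op monitor interval=30s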
I created a simple 2 node cluster, one resource with one VIP. After some
fiddling (thanks to Marian) I was able to get the cluster running smoothly.
I repeated the same process on another 2 node cluster. VIP, working
flawlessly.
Tonight I noticed these odd messages (lots of them) in cluster
Then I realized it is normal because all the clusters were broadcasting
on the same UDP port. I changed the ports and now the messages stay
within their own clusters.
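For anyone hitting the same thing: the port is set with the udpport
directive in ha.cf, and it has to appear before the bcast/ucast line it
applies to. E.g. (the port number is just an example - pick a distinct
one per cluster):

udpport 695
bcast eth0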
mike wrote:
I created a simple 2 node cluster, one resource with one VIP. After some
fiddling (thanks to Marian) I was able
We've got LinuxHA set up with a single IP takeover of a VIP. Very simple
set up and it works fine. This is set up between two Linux servers on an
s390x LPAR. Each guest has 3 ethernet devices and associated networks.
I've been asked to ensure that all heartbeat communication occurs over
network
Hello all,
I've got a simple 2 node cluster and it runs RHEL 5.3 on s390. We've got
a unique situation in that we are testing a homegrown stonith command,
if you want to call it that. Our VM guy has come up with a set of commands
that we use to do specific things on Linux guests. One of those
Hi All,
I'm looking for a good document that can take someone very new to
LinuxHA (me) and clearly explain how stonith is implemented and how to
configure it. Most of what I've found so far is very cryptic and frankly
does not explain how to configure stonith in a working cluster. For my
So here's the situation:
Node A (primary node) heartbeat up and running a VIP and mysqld
Node B (secondary node) up and running but heartbeat stopped
I start heartbeat on Node B and expect it to come up quickly, which it
does. I noticed in the logs on Node A that the cluster runs mysql start.
Why
<nvpair id="status-db80324b-c9de-4995-a66a-eedf93abb42c-probe_complete"
        name="probe_complete" value="true"/>
</attributes>
</instance_attributes>
</transient_attributes>
</node_state>
</status>
</cib>
Florian Haas wrote:
Mike,
the information given reduces us to guesswork
DBSUAT1A.intranet.mydomain.com lrmd: [3297]: info: RA
output: (mysqld_2:monitor:stderr) Usage: /etc/init.d/mysqld
{start|stop|report|restart}
mike wrote:
Thanks for the reply Florian.
I installed from tar ball so am a little unsure of the releases but
looking at the READMEs I see this
heartbeat-3.0.2
processing.
Please run crm_verify -L to identify issues.
Any ideas?
Dejan Muhamedagic wrote:
Hi,
On Tue, Mar 30, 2010 at 10:24:59AM -0300, mike wrote:
Also noticed another oddity. I killed mysql on the primary node fully
expecting it to either trigger a failover or a restart of mysql
-ha-boun...@lists.linux-ha.org] On Behalf Of mike
Sent: 30 March 2010 15:42
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Why does mysld start run again?
Thank you Dejan,
I tried changing the script so that instead of requiring a report it
now takes status. Specifically I changed
how to do this.
Thank you
Florian Haas wrote:
ocf:heartbeat:mysql
I really need to change the examples in the DRBD User's Guide to no
longer include any references to LSB agents.
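For reference, an OCF-based mysql primitive looks roughly like this in
the crm shell (a sketch; the binary and config paths are common defaults
and may differ on your distribution):

crm configure primitive mysqld ocf:heartbeat:mysql \
        params binary=/usr/bin/mysqld_safe config=/etc/my.cnf \
        op start timeout=120s \
        op monitor interval=30s timeout=30s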
Cheers,
Florian
On 2010-03-30 17:52, mike wrote:
Thanks Darren. I'm not sure what you mean by the Mysql RA
After 3 or 4 runs with different errors, I was able to install a few
things that the ConfigureMe script required. I'm now down to the last
error (I think)
./ConfigureMe make fails with this error:
mgmt_crm.c:1307: warning: passing argument 9 of 'delete_attr' makes
integer from pointer without a cast
I think I must be missing something somewhere. I have configured an
Apache and VIP failover EXACTLY as per the instructions on this page:
http://www.clusterlabs.org/wiki/Example_configurations#Failover_IP_.2B_One_service
Specifically I did this:
Failover IP + One service
Here we assume
Thank you Jakob,
I did as you suggested (good idea btw) and what I saw was that LinuxHA
continually tried to restart it on the primary node. Is there a setting
that says "after X failed attempts to restart, fail over"?
Jakob Curdes wrote:
mike schrieb:
I think I must
, at least not right now. I want to simulate a case
where httpd will not start. Right now, all that appears to happen is the
cluster keeps trying to start httpd on the primary node. I'm obviously
missing something, because the way it is set up now is certainly not
highly available.
mike wrote
Hello all,
We had a simple 2 node MySQL cluster - nothing special. One instance
that worked perfectly. We recently added 3 instances and now we're
having some issues. The problem is that Heartbeat issues a MySQL Status
immediately after the MySQL Start ... and of course the MySQL Status will
Florian Haas wrote:
On 2010-05-03 09:24, Andrew Beekhof wrote:
On Thu, Apr 29, 2010 at 7:37 PM, mike mgbut...@nbnet.nb.ca wrote:
Hello all,
We had a simple 2 node MySQL cluster - nothing special. One instance
that worked perfectly. We recently added 3 instances and now we're
Hi guys,
I wonder if someone might be able to point me to a good cibadmin guide.
Maybe it's something someone wrote on their own; I really am not picky
here. I would like to get my hands on a decent doc that I could read and
get to know how to do a few things a little better.
Thanks
Hello All,
I've set up ldirectord so that I am able to start it from the command
line like so:
service ldirectord start
The ldirectord.cf file is in place and ipvsadm shows the connections as
I would expect them to. Incorporating this into heartbeat should be a
joke but I'm running into a
the scores automatically. My only question is, do I only see
this message if the scores have been rolled back and what is responsible
for firing this thing off in the first place?
Thanks
Mike
servers I issue the command from continually hitting the same server.
Certainly the results in the slapd log files are anything BUT round robin.
Can anyone help me out here with either my understanding of the round
robin set up or tell me what to change in my config file above?
Thanks guys
Mike
I assume Andrew means 15 minutes * 60 = 900 seconds * 1000 = 900000
milliseconds
Vadym Chepkov wrote:
On May 19, 2010, at 2:51 AM, Andrew Beekhof wrote:
which is what my DBA was looking for. He wants mysql to fail over if
there are 3 successive failures of MySQL but only if those
Andrew Beekhof wrote:
which is what my DBA was looking for. He wants mysql to fail over if
there are 3 successive failures of MySQL but only if those successive
failures occur within 15 minutes.
You want migration-threshold=3 and failure-timeout=900000 (15 * 60 * 1000 milliseconds).
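In the crm shell those land on the resource as meta attributes, along
these lines (a sketch; the resource definition is abbreviated, and
depending on the Pacemaker version the timeout may be given as plain
seconds, e.g. 900, instead):

crm configure primitive mysqld ocf:heartbeat:mysql \
        meta migration-threshold=3 failure-timeout=900s \
        op monitor interval=30s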
Thanks Andrew,
and correctly
land on the proper server, i.e. the backup node. So I think this kinda
rules out some weird ARP table issue on a switch somewhere.
Help me out here guys, where do I look?
Mike
asking here is, is the 15
minute failure-timeout a rolling thing that gets reset or is it a one
shot deal, i.e. once ignored the first time always ignored from that
point on?
Thank you Andrew
Mike
Andrew Beekhof wrote:
On Wed, May 19, 2010 at 5:22 PM, mike mgbut...@nbnet.nb.ca wrote:
Andrew
What did I miss? Must have been something.
mike wrote:
So, to see if I understand correctly, a couple of scenarios:
Assume a failure-timeout of 15 minutes
1. lets assume I have 2 failures within 5 minutes and then no failure
for 20 minutes afterwards. After that 20 minutes I have a failure
Gianluca Cecchi wrote:
On Thu, May 20, 2010 at 2:45 PM, mike mgbut...@nbnet.nb.ca wrote:
ok, I actually went ahead and did a test on my cluster. The results did
not occur as I would have expected.
I failed ldirectord twice on the main node. I waited 20 minutes and saw
this entry
I'm at my wits' end here, folks, and I'm looking for some help.
Currently when the cluster is up and running on lvsuat1a, requests for
ldap come through the VIP and then get passed out to each real server
in a fairly round robin format. However, when I fail the cluster over I
am seeing some odd
Anyone ever see an issue where ldirectord would not pass requests to 2
backend real servers on a certain port (in my case 8080) but if you
change that to port 22, it works flawlessly?
It's really strange that it would work on one port but not another. Any
hints?
Pushkar Pradhan wrote:
From: linux-ha-boun...@lists.linux-ha.org on behalf of mike
Sent: Fri 5/28/2010 10:01 AM
To: General Linux-HA mailing list
Subject: [Linux-HA] odd issues with LinuxHA/ldirector
Anyone ever see an issue where ldirector would
Pushkar Pradhan wrote:
From: linux-ha-boun...@lists.linux-ha.org on behalf of mike
Sent: Fri 5/28/2010 12:08 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] odd issues with LinuxHA/ldirector
Pushkar Pradhan wrote
So I've got ldirectord up and running just fine and providing ldap high
availability to 2 backend real servers on port 389.
Here is the output of netstat on both real servers:
tcp        0      0 0.0.0.0:389        0.0.0.0:*        LISTEN
tcp        0      0 :::389
Nikita Michalko wrote:
Hi mike,
this no longer seems to be an HA problem, but:
On Monday, 31 May 2010 01:29, mike wrote:
So I've got ldirectord up and running just fine and providing ldap high
availability to 2 backend real servers on port 389.
Here is the output of netstat
mike wrote:
Nikita Michalko wrote:
Hi mike,
this no longer seems to be an HA problem, but:
On Monday, 31 May 2010 01:29, mike wrote:
So I've got ldirectord up and running just fine and providing ldap high
availability to 2 backend real servers on port 389.
Here
We've got an application serving up JBoss on port 8080. I'm using LVS to
load balance and have incorporated ldirectord into LinuxHA to provide a
highly available LVS-tunnel cluster. We found that by starting up Jboss
on the real servers and listening on the RIP that for some reason, LVS
cannot
see what is
running and what isn't.
Mike
Got a simple 2 node active/standby cluster with stonith disabled in my
cib. This morning I logged on to the server to see /var filled up over
the weekend. Looking at the ha-log file I can see why. It was polluted
with stonith messages. Specifically, messages that read Can't initiate
connection
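For anyone comparing notes: stonith is disabled in the cib with the crm
shell like so:

crm configure property stonith-enabled=false

If the log spam continues with that set, it's worth checking whether a
stonith directive is still present in ha.cf or a stonith resource is
still defined.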
Hi All,
I have a very simple Apache cluster with 3 VIPs. I've seen a few
failovers that have me stumped. The logs certainly indicate a problem
but they don't fully tell the story. I'll give a summary of the logs
here - anyone got any ideas why a failover occurred?
Jul 15 16:00:09
On 10-07-16 09:24 AM, mike wrote:
Hi All,
I have a very simple Apache cluster with 3 VIPs. I've seen a few
failovers that have me stumped. The logs certainly indicate a problem
but they don't fully tell the story. I'll give a summary of the logs
here - anyone got any ideas why a failover
In my ha logs I have entries that appear several times a night. Now I
know from a previous post that these are indicative of resource
contention. The clusters seeing these messages are on a zVM
LPAR so they share CPU, memory and so on. Previously when we saw these
errors,
Hello all,
I've implemented a LVS cluster using ldirectord and LinuxHA. Here is a
snippet from my ldirectord.cf file:
virtual=172.28.185.54:8080
        protocol=tcp
        scheduler=wrr
        checktype=connect
        checkport=8080
        #service=ldap
        real=172.28.185.57:8080 ipip
server somehow took ownership of the VIP
and as a result was grabbing all requests. A reboot resolved it. Can you
tell me why this may have happened? What could be wrong on my backend
servers that they would grab the VIP like this?
Any help would be appreciated greatly.
Mike
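For the archive: with LVS the real servers also carry the VIP (on tunl0
for ipip, on lo for DR), and if ARP for that address isn't suppressed a
real server will answer ARP for the VIP and grab all the traffic - which
matches the symptom above. The usual real-server setup is roughly this
(a sketch; substitute your VIP, details vary by kernel):

ip addr add 172.28.185.54/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2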
On Thu, 2010-09-09
On 10-10-06 07:09 PM, AR wrote:
Hi, First let me say thank you to those of you that support the project.
It appears that there are orphan processes running. How do I get rid of
these?
# crm_verify -LVV
crm_verify[31892]: 2010/10/06_14:55:10 WARN: process_orphan_resource:
Nothing known
the address all is
working well.
On Wed, 2010-10-06 at 20:45 -0300, mike wrote:
On 10-10-06 07:09 PM, AR wrote:
Hi, First let me say thank you to those of you that support the project.
It appears that there are orphan processes running. How do I get rid
Hi all,
I've started building a simple 2 node http cluster. I've built several
clusters so this should be a joke. I got the first node fired up and
noticed these entries over and over again in my logs.
Oct 13 21:18:46 Firethorn crmd: [2403]: ERROR: te_connect_stonith:
Sign-in failed:
Fedora 13 on i686 btw.
On 10-10-13 09:26 PM, mike wrote:
Hi all,
I've started building a simple 2 node http cluster. I've built several
clusters so this should be a joke. I got the first node fired up and
noticed these entries over and over again in my logs.
Oct 13 21:18:46 Firethorn crmd
Hi guys,
Can you tell me what you would recommend for the following settings in
the ha.cf file:
Here are my settings.
# Thresholds (in seconds)
keepalive 1
warntime 6
deadtime 10
initdead 15
Are these reasonable?
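For comparison, the values most shipped examples use are considerably
more forgiving (a sketch - tune to how loaded your machines are):

keepalive 2     # seconds between heartbeat packets
warntime 10     # log a warning about late heartbeats after this long
deadtime 30     # declare the peer dead after this long
initdead 120    # used at first startup; should be at least twice deadtime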
On 10-11-02 11:52 AM, Dejan Muhamedagic wrote:
Hi,
On Tue, Nov 02, 2010 at 11:13:49AM -0300, mike wrote:
Hi guys,
Can you tell me what you would recommend for the following settings in
the ha.cf file:
Here are my settings.
# Thresholds (in seconds)
keepalive
Looking for a more experienced person who can explain this issue we had
last night.
Our backups kicked in during the night at 1AM. At 1:01AM, our mysql
cluster had issues. Specifically I can see in crm_mon where the cluster
has it as failed due to an unknown exec error. Looking at the
On 10-11-04 12:38 PM, Dejan Muhamedagic wrote:
Hi,
On Thu, Nov 04, 2010 at 11:06:48AM -0300, mike wrote:
Looking for a more experienced person who can explain this issue we had
last night.
Our backups kicked in during the night at 1AM. At 1:01AM, our mysql
cluster had issues
Hi all,
I'm running a simple MySQL cluster on a very heavily loaded LPAR and
experiencing some outages due to late heartbeat packets, Gmain timeouts
and so on. I'd like to adjust these settings:
# Thresholds (in seconds)
keepalive 1
warntime 6
On 10-11-05 06:40 PM, Pavlos Parissis wrote:
On 5 November 2010 20:32, mikemgbut...@nbnet.nb.ca wrote:
Hi all,
I'm running a simple MySQL cluster on a very heavily loaded LPAR and
experiencing some outages due to late heartbeat packets, Gmain timeouts
and so on.
Before we look
On 11-01-12 10:28 PM, Cody Zhang wrote:
Hi,All
Can anybody help me? I found an error when running heartbeat-2.1.3-3.
My configuration example:
*ha.cf:*
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth0
ucast eth0 192.168.0.60
Hello All,
I've successfully set up a load balancing cluster using ldirectord and
LinuxHA. ldirectord.cf contains several stanzas for load balancing
several backend services. All seems to work as it should with the
exception of one minor detail.
I have one application on a backend server that
On 11-04-22 06:25 AM, SEILLIER Mathieu wrote:
Hi all,
First, I'm French, so sorry in advance for my English...
I have to use Heartbeat for High Availability between 2 Tomcat 5.5 servers
under Linux RedHat 5.3. The first server is active, the other one is passive.
The master is called
On 11-05-12 03:53 PM, gilmarli...@agrovale.com.br wrote:
Hello! I'm using heartbeat version 3.0.3-2 on Debian squeeze with a
dedicated gigabit ethernet interface for the heartbeat. But even this
generates the following message: WARN: Gmain_timeout_dispatch: Dispatch
function for send local
On 11-05-18 09:31 AM, Randy Katz wrote:
Hi, does anyone on this list know why there are UDP requests on port 691
of the HA nodes? I turned on firewalling and my crm_mon would not show
both nodes' status until I allowed UDP port 691 to flow through, please
advise,
Regards,
Randy
On 11-05-19 04:41 PM, Ariel wrote:
I'm starting a new LVS-DR setup with ldirectord but am unable to get it
working. I started with trying to set up a single director server with a
single real server.
My ldirectord.cf:
--
checktimeout=8
checkinterval=5
autoreload=yes
Hi guys,
I've always been a bit confused when it comes to what really is
pacemaker. Now, I've installed a few clusters and in my ha.cf file I
enabled crm. Am I correct in understanding that crm *is* pacemaker? The
reason I ask is that I've read some documentation from this site:
connections are
not flowing through to the backend web server.
As usual - thanks for all replies and suggestions.
- Mike
        protocol=tcp
        scheduler=lc
        checktype=connect
        checkport=80
        #negotiatetimeout=10
        real=192.168.2.16:80 gate
        #real=172.28.185.38:389 ipip
        #service=ldap
        protocol=tcp
        checktimeout=10
        checkinterval=10
Now it works!
On 11-07-17 10:35 AM, mike wrote
in the cib.xml:
<nvpair id="nvpair.id17897906"
        name="default-resource-failure-stickiness" value="50"/>
Hope this helps.
-mike
On 11-07-22 02:55 PM, Hai Tao wrote:
Does HA monitor its resources? If I manually disable the floating IP (for
example, ifdown eth0:0), will HA be able to detect
depending on the Linux variant you are using) and then start apache with
service apache start, for instance.
-mike
On 11-07-22 05:58 PM, Hai Tao wrote:
How can I disable HA without stopping the resources then?
I'd like to disable HA by stopping heartbeat, but once I do that a failover
a failover. I set it with
this entry in the cib.xml:
<nvpair id="nvpair.id17897906"
        name="default-resource-failure-stickiness" value="50"/>
Hope this helps.
-mike
On 11-07-22 02:55 PM, Hai Tao wrote:
Does HA monitor its resources? If I manually disable the floating IP (for
example, ifdown
heartbeat on
the
new box and copy the file over (or update the value in the file) on the new
box.
David Lang
On Fri, 29 Jul 2011, mike wrote:
Date: Fri, 29 Jul 2011 16:06:25 -0300
From: mikemgbut...@nbnet.nb.ca
Reply-To: General Linux-HA mailing listlinux-ha@lists.linux-ha.org
To: General
Are the cib.xml and cib.xml.sig IDENTICAL on both nodes?
On 11-08-01 07:49 PM, Hai Tao wrote:
Also, I found that the heartbeat messages node 2 gets are weird:
[node2 ha.d]# cat /dev/ttyS0
0 0.00 1/162 16586
ttl=3
auth= 3872bbb8a107925fcdd6ea4e3716d8
ts=4e372cb5
ld=0.00 0.00 0.0086
ttl=3
The cib.xml* files should be in the /var/lib/heartbeat/crm directory on
both nodes.
On 11-08-02 04:38 PM, Hai Tao wrote:
there are no such files.
Thanks.
Hai Tao
Date: Tue, 2 Aug 2011 09:30:47 -0300
From: mgbut...@nbnet.nb.ca
To: linux-ha@lists.linux-ha.org
Subject: Re: [Linux-HA]
Permission problem perhaps? I'm not really sure what you're doing, but
the fact that you have users configuring the cluster (why do you do this,
btw?) may point to a permission issue.
-mgb
On 11-08-03 06:57 PM, Rahul Kanna wrote:
Hi,
Our system setup:
Heartbeat 3.0.3
DRBD (to manage file
I've set up a few LVS clusters in our current environment and now I've
been asked to do the following.
Install HA and LVS on server 1 and server 2
The VIP will point to 2 servers - server 3 and server 1, i.e. back to
itself. Now don't ask why they want to do this, sometimes I just give up
does HA suddenly make it impossible to
connect to the backend servers from the primary node?
Another problem I have is that I cannot telnet to the VIP on port 8080
no matter what node it is running on. I think if I can resolve the
problem above, this one will go away too.
Thanks
-mike
/25 brd 172.28.89.127 scope global eth0.101
inet 172.28.191.155/25 brd 172.28.191.127 scope global eth0.101 -- VIP
On 11-09-14 06:37 PM, mike wrote:
Hope someone here can give me some pointers. I've set up a ldirectord
LinuxHA cluster.
When I start it up it's a simple set up. I have one VIP
I have this set up with ldirectord on HA. Do the directors and the real
backend servers have to be on the same subnet?
Thanks Michael - I think this is the root of my problem with my previous
post. I was using DR on different subnets. Guess I'll switch to tunnel.
On 11-09-15 08:11 AM, Michael Schwartzkopff wrote:
I have this set up with ldirectord on HA. Do the directors and the real
backend servers have to be
I have an LVS HA set up. All is installed and the VIP comes up and
ldirectord comes up as well.
However, the VIP I have been asked to assign in the cluster will reside
on a different subnet than the real IP that currently resides on the
device. My gut says this won't work.
Here is the device
I've got HA set up on 2 nodes. Very simple setup. One VIP and ldirector.
The issue I am having here is bringing up the VIP on a VLAN device. For
example, here is an ifconfig output before HA is started
9: eth0.101@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether
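If the address should land on the VLAN device, IPaddr2 can be told
explicitly which interface to use, along these lines (a sketch - the IP
and mask are examples, adjust to your setup):

crm configure primitive vip ocf:heartbeat:IPaddr2 \
        params ip=172.28.191.155 cidr_netmask=25 nic=eth0.101 \
        op monitor interval=30s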
On 11-09-22 10:41 AM, Nick Khamis wrote:
Hello Everyone,
We have almost set up a working prototype of what will be our
production cluster. A few simple questions I have are:
i) We begin the installation by creating hacluster:haclient. How bad
is it to proceed with the installation as user
On 11-09-22 11:45 AM, Nick Khamis wrote:
Hello Mike,
Thank you so much for your response.
You do not need to install the cluster stack on the real/backend servers,
just on the nodes that are actually part of the cluster.
This is the part that I am trying to make sure I absolutely
understand
On 11-09-22 02:45 PM, Nick Khamis wrote:
Got it! On my way
Thanks Mike!
Nick.
On Thu, Sep 22, 2011 at 1:05 PM, mikemgbut...@nbnet.nb.ca wrote:
On 11-09-22 11:45 AM, Nick Khamis wrote:
Hello Mike,
Thank you so much for your response.
You do not need to install the cluster stack on the real
response was "guys, start each instance
listening on a different port - problem solved." They're not too happy
with this solution so I'm here asking - is there any way possible to
load balance to several jboss instances running on the same backend
servers and on the same port?
Thanks
-mike
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirectord pointing to 2 load
balanced real servers. We had jboss on the backend listening to the
Real IP on port 8080. Initially, we could not get the backend to reply -
we kept
On 11-09-24 02:43 PM, Vladislav Bogdanov wrote:
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I set up an HA cluster with ldirectord pointing to 2 load
balanced real servers. We had jboss on the backend listening
On 11-09-25 05:13 AM, Vladislav Bogdanov wrote:
25.09.2011 11:09, Vladislav Bogdanov wrote:
25.09.2011 02:29, mike wrote:
On 11-09-24 02:43 PM, Vladislav Bogdanov wrote:
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote:
23.09.2011 21:15, mike wrote:
Last year I
with Cluster
Glue - is that installed?
-mike
On 11-09-25 08:04 AM, mike wrote:
On 11-09-25 05:13 AM, Vladislav Bogdanov wrote:
25.09.2011 11:09, Vladislav Bogdanov wrote:
25.09.2011 02:29, mike wrote:
On 11-09-24 02:43 PM, Vladislav Bogdanov wrote:
24.09.2011 16:21, mike wrote:
On 11-09-24 05:02 AM, Vladislav Bogdanov wrote
Running ldirectord on HA with a couple of LVS-DR backend instances set
up and working. Customer wants to have another VIP in the cluster
pointing to another load balanced tomcat pair. Only thing is it's going
to have to be NAT. Any problems with NAT and DR co-existing on the same
director?
On 11-09-30 05:06 PM, Nick Khamis wrote:
Can't locate Socket6.pm
Socket6 is a perl module - install it and you should be fine
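On Debian it's libsocket6-perl; on RHEL-type systems perl-Socket6 (from
EPEL); failing a package, CPAN works too. E.g. (package names from
memory - double-check for your release):

apt-get install libsocket6-perl
# or
yum install perl-Socket6
# or
perl -MCPAN -e 'install Socket6'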
-mike
On 11-09-30 08:02 PM, Nick Khamis wrote:
Hey Mike!
Thank you so much for your response. For those that may bump into this
problem, in Debian squeeze I had to
install:
apt-get install libio-socket-inet6-perl
apt-get install libwww-perl
And everything is ok now...
Nick.
On Fri, Sep
On 11-10-26 07:20 PM, Alessandra Giovanardi wrote:
Hi,
I have a cluster based on Heartbeat v2 with two nodes (Debian):
gicdrupal01
gicdrupal02
with one RG active on gicdrupal02 (gicdrupal01 is in standby)
with these package releases:
ii heartbeat
2.1.3-6lenny4 Subsystem for
On 11-10-31 07:56 AM, Robinson, Eric wrote:
I can't get a cluster up on RHEL6. First I tried pacemaker+corosync, but
corosync complains...
Could not get the ring status, the error is: 6
...and I cannot connect to the cluster.
So then I tried pacemaker+heartbeat, only to learn that
the correct value for use later in
the script. From cron, the same script errs out with the message above.
Thanks for any suggestions.
-mike
On 11-11-08 12:07 AM, Tim Serong wrote:
On 11/08/2011 12:24 PM, mike wrote:
So I'm putting together a quick little perl script to monitor HA by
running a few crm commands.
When I run the script from the command line as root, it works perfectly.
However, when I put it in root's crontab I get
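A common culprit when a script works interactively but fails from cron:
cron runs with a minimal environment, so crm and friends are not on the
PATH. Setting PATH at the top of the crontab usually fixes it, e.g. (the
script path here is made up - use your own):

PATH=/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * /root/bin/check_ha.pl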
Got a bit of an odd issue and I'm hoping someone can help me figure this
out.
The set up is not ideal but here's the basic flow:
Request goes to VIP1 on LVS on Blade A. It then routes it to one of 2
load balanced pairs on Blade A or B (works perfectly). That request then
goes to one of 2 load
Thanks Nick:
Here's the config that is the issue right now:
#SERVER1.vip.intranet.mydomain.com
virtual=172.28.191.194:8080
        protocol=tcp
        scheduler=lc
        checktype=connect
        checkport=8080
        #negotiatetimeout=10
        real=172.28.191.170:8080 masq
        real=172.28.191.171:8080 masq
On Thu, Nov 10, 2011 at 03:14:44PM -0400, mike wrote:
On 11-11-09 09:33 PM, Simon Horman wrote:
On Wed, Nov 09, 2011 at 12:18:22PM -0400, mike wrote:
Thanks Nick:
Here's the config that is the issue right now:
#SERVER1.vip.intranet.mydomain.com
virtual=172.28.191.194:8080
On 12-01-23 04:59 AM, Niclas Müller wrote:
Hey,
I've built a cluster with one pacemaker resource named ClusterIP. The
failover itself is very fast and is ok. My problem is that services
like apache or mysql take at least 10-15 seconds to respond after IP
takeover. Is there a chance to
Hi all,
I've got a working LVS HA cluster with several load balanced
applications running successfully. The HA cluster is a combination of
PROD and Test (something I don't agree with, but there you have it). Last week
we added an LVS Tunnel test pair to the Cluster. Here's what it looks like.