I think I must be missing something somewhere. I have configured an
Apache and VIP failover EXACTLY as per the instructions on this page:
http://www.clusterlabs.org/wiki/Example_configurations#Failover_IP_.2B_One_service
Specifically I did this:
Failover IP + One service
Here we assume
Thank you Jakob,
I did as you suggested (good idea, btw) and what I saw was that Linux-HA
continually tried to restart it on the primary node. Is there a setting
that says: after X failed restart attempts, fail over?
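For what it's worth, in Pacemaker-based clusters this is exactly what the `migration-threshold` meta attribute does. A sketch, assuming the crm shell is available; the resource name and timings are assumptions:

```shell
# After 3 failed recoveries the resource is moved off the node;
# the failcount is forgotten after 10 minutes.
crm configure primitive mysqld ocf:heartbeat:mysql \
    meta migration-threshold=3 failure-timeout=600s \
    op monitor interval=30s timeout=30s
```

`crm_mon --failcounts` shows the per-node failure counts that drive this behaviour.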
Jakob Curdes wrote:
mike wrote:
I think I must
After 3 or 4 runs with different errors, I was able to install a few
things that the ConfigureMe script required. I'm now down to the last
error (I think).
./ConfigureMe make fails with this error:
mgmt_crm.c:1307: warning: passing argument 9 of 'delete_attr' makes
integer from pointer without a cast
<nvpair id="status-db80324b-c9de-4995-a66a-eedf93abb42c-probe_complete"
        name="probe_complete" value="true"/>
</attributes>
</instance_attributes>
</transient_attributes>
</node_state>
</status>
</cib>
Florian Haas wrote:
Mike,
the information given reduces us to guesswork
DBSUAT1A.intranet.mydomain.com lrmd: [3297]: info: RA
output: (mysqld_2:monitor:stderr) Usage: /etc/init.d/mysqld
{start|stop|report|restart}
mike wrote:
Thanks for the reply Florian.
I installed from tar ball so am a little unsure of the releases but
looking at the READMEs I see this
heartbeat-3.0.2
processing.
Please run crm_verify -L to identify issues.
Any ideas?
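As the log message itself suggests, `crm_verify` checks the CIB for configuration problems. A quick way to run it against the live cluster:

```shell
# -L checks the live CIB; -V can be repeated for more verbosity
crm_verify -L -V
```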
Dejan Muhamedagic wrote:
Hi,
On Tue, Mar 30, 2010 at 10:24:59AM -0300, mike wrote:
Also noticed another oddity. I killed mysql on the primary node fully
expecting it to either trigger a failover or a restart of mysql
-ha-boun...@lists.linux-ha.org] On Behalf Of mike
Sent: 30 March 2010 15:42
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Why does mysld start run again?
Thank you Dejan,
I tried changing the script so that instead of requiring "report" it
now takes "status". Specifically I changed
how to do this.
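A minimal sketch of that kind of change, written here as a helper function (the action names come from the usage line quoted earlier; how it gets wired into the real init script, and the `.real` path, are assumptions):

```shell
# map_lsb_action: translate the LSB-required "status" action onto this
# init script's nonstandard "report" action; pass everything else through.
map_lsb_action() {
    case "$1" in
        status) echo report ;;   # Heartbeat's monitor op probes with "status"
        *)      echo "$1" ;;
    esac
}
# usage inside a wrapper script (path assumed):
#   /etc/init.d/mysqld.real "$(map_lsb_action "$1")"
```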
Thank you
Florian Haas wrote:
ocf:heartbeat:mysql
I really need to change the examples in the DRBD User's Guide to no
longer include any references to LSB agents.
Cheers,
Florian
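For reference, a hedged sketch of what swapping the LSB script for the OCF agent looks like in the crm shell; the binary, config and datadir parameter values below are assumptions, not taken from the thread:

```shell
crm configure primitive mysqld ocf:heartbeat:mysql \
    params binary=/usr/bin/mysqld_safe \
           config=/etc/my.cnf datadir=/var/lib/mysql \
    op monitor interval=30s timeout=30s
```

Unlike the distribution init script, the OCF agent implements a real monitor action, which is what the cluster needs for failure detection.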
On 2010-03-30 17:52, mike wrote:
Thanks Darren. I'm not sure what you mean by the Mysql RA
So here's the situation:
Node A (primary node) heartbeat up and running a VIP and mysqld
Node B (secondary node) up and running but heartbeat stopped
I start heartbeat on Node B and expect it to come up quickly, which it
does. I noticed in the logs on Node A that the cluster runs mysql start.
Why
Hello all,
I've got a simple 2 node cluster and it runs RHEL 5.3 on s390. We've got
a unique situation in that we are testing a homegrown stonith command if
you want to call it that. Our VM guy has come up with a set of commands
that we use to do specific things on Linux guests. One of those
Hi All,
I'm looking for a good document that can take someone very new to
Linux-HA (me) and clearly explain how stonith is implemented and how to
configure it. Most of what I've found so far is very cryptic and frankly
does not explain how to configure stonith in a working cluster. For my
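Until a better document turns up, here is a hedged sketch of the usual shape of a stonith configuration in the crm shell. `external/ssh` is a testing-only plugin and the node names are assumptions:

```shell
# For lab testing only -- external/ssh cannot fence a truly dead node,
# since it needs the victim to still answer ssh.
crm configure primitive st-ssh stonith:external/ssh \
    params hostlist="node1 node2"
crm configure property stonith-enabled=true
```

Production setups replace `external/ssh` with a plugin that talks to real power or management hardware.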
We've got LinuxHA set up with a single IP takeover of a VIP. Very simple
set up and it works fine. This is set up between two Linux Servers on a
s390x LPAR. Each guest has 3 ethernet devices and associated networks.
I've been asked to ensure that all heartbeat communication occurs over
network
I created a simple 2 node cluster, one resource with one VIP. After some
fiddling (thanks to Marian) I was able to get the cluster running smoothly.
I repeated the same process on another 2 node cluster. VIP, working
flawlessly.
Tonight I noticed these odd messages (lots of them) in cluster
Suddenly I realized it is normal, because all the clusters were
broadcasting on the same UDP port. I changed the ports and the messages
now stay within their own clusters.
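In ha.cf terms, the fix amounts to giving each cluster its own port (the port values below are assumptions; 694 is Heartbeat's default):

```shell
# ha.cf on cluster 1
udpport 694
# ha.cf on cluster 2
udpport 695
```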
mike wrote:
I created a simple 2 node cluster, one resource with one VIP. After some
fiddling (thanks to Marian) I was able
</rsc_location>
</constraints>
<rsc_defaults/>
<op_defaults/>
</configuration>
</cib>
Marian Marinov wrote:
Hello mike,
Your problem is pretty simple: you simply haven't configured an IPaddr
resource within Pacemaker.
Please look here for IPaddr:
http://www.linux-ha.org/doc/re-ra
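A hedged example of such a resource in the crm shell; the address, netmask and monitor interval are assumptions:

```shell
crm configure primitive vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=30s
```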
is to reboot the node.
thanks
Marian Marinov wrote:
On Saturday 20 March 2010 03:56:27 mike wrote:
Hi guys,
I have a simple 2 node cluster with a VIP running on RHEL 5.3 on s390.
Nothing else configured yet.
When I start up the cluster, all is well. The VIP starts up on the home
node and crm_mon
</constraints>
</configuration>
</cib>
Marian Marinov wrote:
Can you please give us your crm configuration ?
Marian
On Sunday 21 March 2010 23:30:46 mike wrote:
Thank you Marian. I removed the file as you suggested, but unfortunately
it has made no difference. The IP address is simply
Hi guys,
I have a simple 2 node cluster with a VIP running on RHEL 5.3 on s390.
Nothing else configured yet.
When I start up the cluster, all is well. The VIP starts up on the home
node and crm_mon shows the resource and nodes as on line. No errors in
the logs.
If I issue service heartbeat
?
Thank You,
Mike Sweetser
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
Depending on your kernel and nfsd, there'll be some /proc/fs/nfsd/unlock_*
files. You can use them to make nfsd drop file locks held on exported
filesystems.
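On kernels that ship them, the interface is just a write to procfs; the IP address and export path below are assumptions:

```shell
# release locks held via a given (virtual) server address
echo 10.0.0.50 > /proc/fs/nfsd/unlock_ip
# or release locks on one exported filesystem
echo /export/data > /proc/fs/nfsd/unlock_filesystem
```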
Unfortunately, these don't seem to exist in RHEL5. Anything else I can try?
Mike Sweetser
mount
and kill only those, or is there another way to reset things without
totally restarting NFS? We don't want to lose existing mounts other
than the resource we're actually migrating.
Thanks,
Mike Sweetser
We just had one of our two Heartbeat servers - in this case, the primary server
- do an emergency reboot earlier tonight, and I'm confused as to why. Here's
the sanitized version of the logs from right before the reboot occurred. Can
somebody tell me what happened and what I can do to make
---
if [ $HA_SYSLOGMSGFMT -o $HA_LOGFACILITY ]; then
awk '{print $1,$2,$3}'
else
127c127
#fi
---
fi
Thank You,
Mike Sweetser
]; then
# awk '{print $1,$2,$3}'
#else
---
if [ $HA_SYSLOGMSGFMT -o $HA_LOGFACILITY ]; then
awk '{print $1,$2,$3}'
else
127c127
#fi
---
fi
Thank You,
Mike Sweetser
crashreport.tar.gz
regarding
resources, or does it keep everything steady while it reloads the
configuration? I want to do this without having to migrate resources
between servers.
Thanks,
Mike Sweetser
not sure if that's possible at all.
We are indeed using the CIB configuration. What will happen when we set
is_managed to off? Will the resource keep running as is without
Heartbeat interfering with it until we turn is_managed back on? Will
this continue if we restart Heartbeat itself?
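For the record, a hedged sketch of toggling it from the command line; the resource name is an assumption, and the exact option spelling of `crm_resource` varies between releases, so check `crm_resource --help` first:

```shell
# stop the cluster from acting on the resource (it keeps running as-is)
crm_resource --resource mysql --meta --set-parameter is_managed --parameter-value false
# hand control back later
crm_resource --resource mysql --meta --set-parameter is_managed --parameter-value true
```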
Mike
editing the XML and then kill-HUPing Heartbeat somehow? Do I
need to muddle through using crm_attribute or cibadmin?
Thanks!
Mike Sweetser
I am using Red Hat 5.2 and I created a few new OCF files for radius and
syslog-ng.
Is there any way to make those show up in the Heartbeat GUI without
restarting heartbeat on both nodes?
I tried a service heartbeat reload and that did not seem to add them in.
Thanks
mike
I have a simple 3 server HA setup (at the moment) that just won't stay
up.
The scenario is:
1. YUM install heartbeat
2. Create SIMPLE ha.cf file that looks like this on the 3
blades.
crm on
auto_failback off
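Fleshed out, a minimal ha.cf of that shape might look like the sketch below; the node names, interface and timings are assumptions:

```shell
logfacility local0
keepalive 2
deadtime 30
bcast eth0
node blade1 blade2 blade3
crm on
auto_failback off
```

Note that with `crm on` the v1 `auto_failback` directive is effectively superseded by resource stickiness settings in the CIB.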
I have a 2 node, active/passive DRBD/HA setup using Redhat ES5, fully
patched.
One of my clients is running a process that needs to create a .lck file.
According to everything I've read, I should be able to setup the
/etc/hosts.allow file with the IP addresses of my nfs clients, and lockd
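A hedged sketch of what that file might contain; the daemon names follow the usual tcp_wrappers conventions for NFS, and the client IPs are assumptions:

```shell
# /etc/hosts.allow
lockd:  10.0.0.10 10.0.0.11
statd:  10.0.0.10 10.0.0.11
mountd: 10.0.0.10 10.0.0.11
```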
I've been searching all day and can't find an RPM for libnet for RedHat
EL5.
All the RPMs that I find have "rf" in the name and seem to just dump
man/doc entries and some include files. Am I seriously missing
something here?
And while I'm at it, trying to deploy the HB libraries it seems like
Is this really the easiest way to just set up simple monitoring of a
running service? For one, smb and winbind don't have OCF resource
agents so I'd have to write entire agents just for the services in
question. Is there any template agent for this usage?
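There is no formal template, but the stock Dummy agent (usually under /usr/lib/ocf/resource.d/heartbeat/Dummy) is the conventional starting point. The skeleton below shows the general shape; the smb specifics are assumptions:

```shell
#!/bin/sh
# Skeleton OCF agent sketch. Standard OCF exit codes:
#   0 = success, 1 = generic error, 3 = unimplemented, 7 = not running
case "$1" in
    start)     /etc/init.d/smb start ;;
    stop)      /etc/init.d/smb stop ;;
    monitor)   pidof smbd >/dev/null 2>&1 || exit 7 ;;
    meta-data) echo '<resource-agent name="smb"/>' ;;  # real agents emit full XML here
    *)         exit 3 ;;
esac
exit 0
```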
Mike Sweetser
-Original Message
like to simply
monitor them and make sure they keep running - I want the services to
run on all of the nodes, not just one of them. It won't handle failover
or clustering, just monitoring and restarting them on the failed node in
case of problems.
How do I do this in Heartbeat v2?
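One way to get that run-everywhere monitoring is clone resources. A sketch, assuming the crm shell is available and the services have LSB init scripts with a working status action:

```shell
# each clone runs one copy of the resource on every node;
# the monitor op restarts the local copy if it fails
crm configure primitive smb lsb:smb op monitor interval=30s
crm configure clone smb-clone smb
crm configure primitive winbind lsb:winbind op monitor interval=30s
crm configure clone winbind-clone winbind
```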
Thanks!
Mike
-server and Redhat ES 4
To: General Linux-HA mailing list linux-ha@lists.linux-ha.org
Message-ID:
[EMAIL PROTECTED]
Content-Type: text/plain; charset=ISO-8859-1
On Feb 4, 2008 6:08 PM, Mike Toler [EMAIL PROTECTED]
wrote:
Just a quick RPM question.
Most of the HA/DRBD sites state
Just a quick RPM question.
Most of the HA/DRBD sites state that I have to have nfs-kernel-server
installed on my system. I can't find any reference to this for RedHat.
Is it named differently for a RedHat installation?
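For the record: on RHEL the kernel NFS server ships in the nfs-utils package and runs as the "nfs" service; the Debian/Ubuntu package name nfs-kernel-server has no direct RHEL counterpart.

```shell
yum install nfs-utils
chkconfig nfs on
service nfs start
```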
Michael Toler
System Test Engineer
Prodea Systems, Inc.
214-278-1834
I'm finally able to run my DRBD/HA NFS server on a V1 setup without
serious issue. My failovers work correctly and NFS service takes only a
minor interruption when a server is lost. The only thing I'm still
having problems using V1 with is SNMP.
Now, as an exercise in masochism, I'm trying to
I don't know if I'm an idiot, have failed to compile the load correctly,
or just don't have the secret handshake down correctly, but either way,
I am unable to query any statistics from Linux HA using snmp.
I've read the README file in the snmp-subagent directory in the
source, and I *THINK* I've