Sent: Thursday, October 11, 2012 2:41 AM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] Stickiness confusion
On Thu, Oct 11, 2012 at 1:04 AM, Yount, William D
yount.will...@menloworldwide.com wrote:
I don't know if you are aware of the Linux Cluster Management Console (LCMC).
I have
I have stickiness working on my two node setup. I tried finding a difference in
our setups but couldn't find any. I have attached my crm configure show
Aside from the stickiness level, I believe you can set a preferred location for
that resource. Your preferred location settings might be
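For reference, both knobs together look something like this in crm shell (the resource and node names here are placeholders, not taken from your config):

    crm configure rsc_defaults resource-stickiness=100
    crm configure location prefer-node1 p_resource 50: node1

With the stickiness (100) higher than the location score (50), the resource prefers node1 while everything is healthy, but after a failover it stays where it is instead of migrating back.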
I have a two node cluster. I have setup the following resources: ip address
(primitive), file system (primitive), nfs (primitive), nfslock (primitive) and
drbd (master/slave). I tried doing master/slave on all resources but they
weren't migrating over correctly from one server to the other. With
Subject: Re: [Linux-HA] IP Clone
On 8/20/12 6:54 PM, Yount, William D wrote:
No, no complaining. Just glad to get a definitive answer on it. Active/Active
made me think something that I guess isn't true. No worries. Honestly, thanks
for the reply. Without you, I would have kept trying and trying
No ideas?
-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Yount, William D
Sent: Thursday, August 16, 2012 9:09 PM
To: linux-ha@lists.linux-ha.org
Subject: [Linux-HA] IP Clone
I have two servers. I am using pacemaker
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Dimitri Maziuk
Sent: Monday, August 20, 2012 5:50 PM
To: linux-ha@lists.linux-ha.org
Subject: Re: [Linux-HA] IP Clone
On 08/20/2012 05:01 PM, Yount, William D wrote:
I am trying to set up an Active/Active cluster. I have an Active
I have two servers. I am using pacemaker/cman(corosync). I am trying to share
an IP address between them. I would like the IP address to run on both servers
at the same time. However, my testing has shown that the IP address stays
locked onto Server2. If I put Server2 in standby, then the IP
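For what it's worth, the Clusters from Scratch way to run one address on both nodes at once is a globally-unique clone of IPaddr2, which splits traffic between the nodes with CLUSTERIP iptables rules. A sketch in crm shell (the address and resource names are placeholders):

    primitive ClusterIP ocf:heartbeat:IPaddr2 \
            params ip="192.168.122.120" cidr_netmask="32" clusterip_hash="sourceip" \
            op monitor interval="30s"
    clone CloneIP ClusterIP \
            meta globally-unique="true" clone-max="2" clone-node-max="2"

Without globally-unique="true" the clone instances are interchangeable and the address will only ever be active on one node, which matches the "locked onto Server2" behavior.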
I am following the Clusters from Scratch guide to set up a cluster on two
CentOS 6.3 boxes. I am at the part where corosync is started and working
correctly on both nodes. When I try to start pacemaker on either node, it keeps
failing. Here is the output from strace:
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Yount, William D
Sent: Wednesday, August 08, 2012 2:52 AM
To: linux-ha@lists.linux-ha.org
Subject: [Linux-HA] Can't start pacemaker
I am following the Clusters from Scratch guide to set up a cluster on two
Never mind. I had to add this to /etc/corosync/service.d/pcmk:
service {
        name: pacemaker
        ver: 1
}
-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Yount, William D
Sent: Wednesday, August 08, 2012 5
I am using pacemaker and corosync. For some reason I keep getting this error in
my messages log:
ERROR: Cannot chdir to [/var/lib/heartbeat/cores/root]: No such file or
directory
Should I not worry about that, since I am using corosync and not heartbeat?
William
On Behalf Of Yount, William D
Sent: Friday, August 03, 2012 2:18 AM
To: linux-ha@lists.linux-ha.org
Subject: [Linux-HA] Heartbeat Error
I am using pacemaker and corosync. For some reason I keep getting this error in
my messages log:
ERROR: Cannot chdir to [/var/lib/heartbeat/cores/root]: No such file or
directory
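If that missing directory is all the agents are complaining about, one low-risk workaround is simply to create it. A sketch only, on the assumption that nothing else from heartbeat is needed; the CORES_DIR variable is a hypothetical stand-in so the commands can be tried anywhere, and on a real node the path would be /var/lib/heartbeat/cores/root:

```shell
# Create the heartbeat-era core-dump directory the agents expect.
# CORES_DIR is hypothetical; on a real node use
# /var/lib/heartbeat/cores/root (creating it likely requires root).
CORES_DIR="${CORES_DIR:-/tmp/heartbeat-cores/root}"
mkdir -p "$CORES_DIR"
echo "created: $CORES_DIR"
```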
NFS resources switch during high use?
I'll gladly take a look at your cib.xml file if you wish.
Regards,
Bruno
On 01/08/2012 11:22, Yount, William D wrote:
I was wondering if someone could look over my cib.xml file and see if
I need to change anything. I am attempting to create
Attached is my cib.xml file.
I have a two node DRBD cluster setup in Active/Active. For whatever reason, it
seems all my resources are attached to Node2. What I mean by that is that
although the resources show that they are collocated, whenever I turn Node2
off or unplug a cable from Node2,
So I have been giving some thought to my fencing agent, as it seems a proper
fencing solution is integral to any cluster. I only have access to basic
Optiplex 990 and 960 desktops, which is what I have built my cloud out of. I
have been using the fence_pcmk agent but that doesn't seem to be a
I was wondering if someone could look over my cib.xml file and see if I
need to change anything. I am attempting to create an Active/Active cluster
offering up a DRBD volume for NFS share. Everything works fine as it is, but I
would like someone more knowledgeable to look it over at your
/2012 10:21 AM, Yount, William D wrote:
I have two servers: KNTCLFS001 and KNTCLFS002 I have a drbd partition
named nfs, on each server. They are mirrored. The mirroring works perfectly.
What I want is to serve this drbd partition up and have it so that if one
server goes down, the drbd
I have two servers: KNTCLFS001 and KNTCLFS002
I have a drbd partition named nfs, on each server. They are mirrored. The
mirroring works perfectly.
What I want is to serve this drbd partition up and have it so that if one
server goes down, the drbd partition is still available on the other
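A common shape for this in crm syntax is a master/slave DRBD resource with the filesystem and NFS server grouped on top, tied together by a colocation and an order constraint. Everything below is an illustrative sketch; the resource names, mount point, and filesystem type are assumptions, and only drbd_resource="nfs" comes from your description:

    primitive p_drbd_nfs ocf:linbit:drbd \
            params drbd_resource="nfs" \
            op monitor interval="15s"
    ms ms_drbd_nfs p_drbd_nfs \
            meta master-max="1" clone-max="2" notify="true"
    primitive p_fs_nfs ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/srv/nfs" fstype="ext4"
    primitive p_nfsserver lsb:nfs \
            op monitor interval="30s"
    group g_nfs p_fs_nfs p_nfsserver
    colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
    order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start

The colocation keeps the filesystem and NFS on whichever node holds the DRBD Master, and the order makes sure DRBD is promoted before anything tries to mount it.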
I am not sure which group to post this to. The corosync group has pointed me
to this group in the past, so I am starting here.
I have set up DRBD cluster for replicating a drbd volume across two servers.
Each server has its own IP address (10.89.99.31 and 10.89.99.32). I also have a
I have two servers, 10.89.99.31(KNTCLFS001) and 10.89.99.32(KNTCLFS002). I am
trying to use 10.89.99.30 to float between them in an Active/Active cluster.
I have several services which should be running simultaneously on both servers.
I am trying to set up an Active/Active cluster. I am using
I have two servers which are both Dell 990s. Each server has two 1 TB hard
drives configured in RAID0. I have installed CentOS on both, and they have the
same partition sizes. I am using /dev/KNTCLFS00X/Storage as a drbd partition
and attaching it to /dev/drbd0. DRBD syncing appears to be working
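On the DRBD side, a resource definition for that layout might look like the sketch below. The resource name "nfs" and the replication addresses appear earlier in these threads; the port (7789) and everything else is an assumption:

    # /etc/drbd.d/nfs.res -- illustrative sketch only
    resource nfs {
            device    /dev/drbd0;
            meta-disk internal;
            on KNTCLFS001 {
                    disk    /dev/KNTCLFS001/Storage;
                    address 10.89.99.31:7789;
            }
            on KNTCLFS002 {
                    disk    /dev/KNTCLFS002/Storage;
                    address 10.89.99.32:7789;
            }
    }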
For a while now, I have been trying to get pacemaker/heartbeat set up on two
servers I have. Each server has a 1.7 TB LV that is replicating across servers
using DRBD. The replication works fine.
I was able to cobble together the attached document which I believe applies to
my situation. I have