Folks,
I am building a high availability web hosting platform which will include a
pair of web servers with an OCFS2 shared filesystem and a MySQL database server
with a backup (using a DRBD-based filesystem instead of MySQL replication).
Does this sound like one cluster or two (one for the web servers and one for the database)?
Everything (Apache+MySQL) on the master (primary), plus a slave (secondary)
ready to take over in case of master failure, seems to be a good recipe.
But I'm not sure that a good stateful resource agent for MySQL is
available, like the very good one from Takatoshi Matsuo for Postgres.
2012/12/4 Art
Emmanuel,
I don't understand. This seems to be working very well for me:
primitive p_mysqld lsb:mysql
order o_mysqld inf: cl_fs_share p_mysqld
rsc_defaults resource-stickiness=100
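Since cl_fs_share is presumably a clone (OCFS2 mounts on both nodes), the order constraint alone works; if the filesystem were a plain single-node primitive you would also want a colocation so p_mysqld follows the mount. A sketch using the names above:

colocation col_mysqld_fs inf: p_mysqld cl_fs_share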
cl_fs_share is an OCFS2 filesystem but, since I never have p_mysqld
running on more than one node, it could just as well be a plain
(non-cluster) filesystem.
I have two machines with corosync+pacemaker, and I want to run mysql +
websphere. How can I define it so that mysql starts on node1 and websphere
starts on node2?
thanks.
I use drbd for data sync.
alonerhu via foxmail
___
Linux-HA mailing list
On 12/03/2012 10:26 PM, Shuge Lee wrote:
Hi all:
Please share the user guide in PDF format (or in Markdown):
http://www.linux-ha.org/doc/users-guide/_preface.html
thanks.
If you are starting a new project, you should not use Heartbeat. It is
no longer developed and has been replaced by Pacemaker (with Corosync).
-Original Message-
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of
Vladislav Bogdanov
Sent: Saturday, December 01, 2012 10:40 PM
To: General Linux-HA mailing list
Subject: Re: [Linux-HA] master/slave drbd resource STILL will not failover
On 11/29/2012 10:14 PM, Robinson, Eric wrote:
Bump... does anyone have some insight on this? Google is not
turning up anything useful.
Our newest cluster will not fail over master/slave drbd resources.
It works fine manually using drbdadm from the command line.
--
Eric Robinson
Director of Information Technology
Physician Select Management, LLC
775-885-2211 x 111
Yep, you can go with that, but as I see it you end up with a ``sleeping
slave``: with DRBD in Master/Slave mode you can't mount the FS on the
slave, so MySQL only wakes up after a transition, when `something bad
happens`.
I prefer the solution of a master MySQL with a hot spare on the other server.
On 2012-12-04T20:38:54, Fabian Herschel fabian.hersc...@arcor.de wrote:
I am not sure if that will really help you - but in my cluster (ok,
older pacemaker version) I have the following to define a master/slave
resource:
primitive rsc_sap_HA0_ASCS00 ocf:heartbeat:SAPInstance \
operations
I think I found out how to configure it. I changed eth0 to xenbr0, because I
modified the network interface (ifconfig):
# cat ha.cf
logfacility daemon
keepalive 2
deadtime 10
warntime 5
initdead 120
udpport 694
ucast xenbr0 192.168.188.8
auto_failback on
node cloud10
node cloud8
use_logd yes
crm respawn
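The peer node needs a matching ha.cf; with ucast, each side lists the other node's address. The line below is a sketch (the actual peer IP is not given in the thread):

# ha.cf on the other node - identical except for the ucast target
ucast xenbr0 <IP-of-the-other-node>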
This setup might do the trick:
primitive srv-mysql lsb:mysql \
op monitor interval=120 \
op start interval=0 timeout=60 on-fail=restart \
op stop interval=0 timeout=60s on-fail=ignore
primitive srv-websphere lsb:websphere \
op monitor interval=120 \
op start interval=0 timeout=60
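The primitives alone won't pin mysql to node1 and websphere to node2; location constraints do that. A sketch, assuming your nodes are actually named node1 and node2 (a finite score like 100 still lets either resource fail over to the surviving node):

location loc-mysql srv-mysql 100: node1
location loc-websphere srv-websphere 100: node2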
If the promote of DRBD on one node cannot be done, this might be because
the demote on the other node cannot be achieved.
Do you mount a FS? If so, force it: umount -fl /mountpoint
Double check (cat /proc/drbd) that the DRBD resource is really secondary on
the demoted node.
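To check the role without grepping by eye, you can parse the ro: field of /proc/drbd. The sample line below is illustrative output for a minor-number-0 resource, not taken from the poster's cluster; on a real node pipe `cat /proc/drbd` into the awk command instead:

```shell
# Print the local DRBD role from a /proc/drbd-style status line.
# The local role is the part of the ro: field left of the slash.
sample=' 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----'
echo "$sample" | awk -F'ro:' '/ro:/ {split($2, a, "/"); print "local role: " a[1]}'
# on the demoted node this should read Secondary
```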
Maybe you could play