Raghavendra,
This is exactly what I was missing, thanks! Based on the information you gave
I was able to find and adapt the script from
http://xrsa.net/2015/04/25/ctdb-glusterfs-nfs-event-monitor-script/ to work
with CentOS 6.7, and killing the NFS process now results in CTDB failover in
about 20 seconds.
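The gist of the approach, for anyone else who needs it: CTDB runs the
scripts under /etc/ctdb/events.d/ on each "monitor" event, and a non-zero
exit marks the node unhealthy so its public IPs migrate to a healthy node.
A minimal sketch of the idea (the filename and the RPC check are
illustrative, not the exact contents of the linked script):

    #!/bin/sh
    # /etc/ctdb/events.d/60.glusternfs (name illustrative)
    # On each "monitor" event, verify the Gluster NFS server is still
    # registered with the local portmapper; a non-zero exit tells CTDB
    # to mark this node unhealthy and move its public IPs away.
    case "$1" in
        monitor)
            # Gluster's built-in NFS server speaks NFSv3
            if ! rpcinfo -t localhost nfs 3 >/dev/null 2>&1; then
                echo "gluster NFS not responding, failing monitor event"
                exit 1
            fi
            ;;
    esac
    exit 0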
Soumya,
CTDB failover works great if the server crashes or the NIC is pulled, but I
don't believe there's anything in the CTDB setup that would cause it to realize
there is a problem if only the glusterfs process responsible for serving NFS is
killed but network connectivity with the other CTDB nodes remains intact.
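To see what I mean, assuming Gluster's built-in NFS server (the pkill
pattern below matches the --volfile-id gluster/nfs process and may vary by
version):

    # Kill only the glusterfs process serving NFS, leaving the node and
    # its network untouched, then check whether CTDB noticed.
    pkill -f 'volfile-id gluster/nfs'
    sleep 30
    ctdb status    # without extra monitoring the node still shows OK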
Kris,
You can achieve what you want with Corosync and Pacemaker: Corosync provides
the cluster heartbeat/messaging layer, and Pacemaker is the cluster resource
manager.
You can create a Pacemaker cluster using the hosts used for the Gluster
cluster, then configure a virtual IP resource and Gluster monitoring
resources cloned to match the count of the nodes in the cluster.
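A rough sketch of that layout with pcs (the host names, the IP, and the
gluster_nfs_check agent are placeholders; there is no stock OCF agent for
Gluster's NFS server, so the monitoring agent would have to be written):

    # Build the Pacemaker cluster from the existing Gluster hosts.
    pcs cluster setup --name glusterha gl-node1 gl-node2
    pcs cluster start --all

    # Floating IP the NFS clients mount through.
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 op monitor interval=10s

    # Hypothetical agent that checks the glusterfs NFS process; cloning
    # it runs one monitor instance on every node.
    pcs resource create gluster_nfs ocf:local:gluster_nfs_check \
        op monitor interval=10s
    pcs resource clone gluster_nfs

    # Keep the VIP on a node whose NFS monitor is passing.
    pcs constraint colocation add nfs_vip with gluster_nfs-clone INFINITY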
Hi all,
We're getting ready to roll out Gluster using standard NFS from the clients,
and CTDB and RRDNS to help facilitate HA. I thought we were good to go, but
recently had an issue where there wasn't enough memory on one of the gluster
nodes in a test cluster, and the OOM killer took out the glusterfs process
serving NFS.
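For anyone unfamiliar, the CTDB side of a setup like this boils down to two
small config files (all addresses and the interface name below are
placeholders):

    # /etc/ctdb/nodes -- internal address of every CTDB node
    10.0.0.1
    10.0.0.2

    # /etc/ctdb/public_addresses -- floating IPs that RRDNS hands out to
    # clients; CTDB moves these between healthy nodes on failover
    192.168.1.101/24 eth0
    192.168.1.102/24 eth0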