Re: [ClusterLabs] GFS and cLVM fencing requirements with DLM

2016-03-15 Thread Digimer
On 15/03/16 02:12 PM, Ferenc Wágner wrote:
> Hi,
>
> I'm referring here to an ancient LKML thread introducing DLM. In
> http://article.gmane.org/gmane.linux.kernel/299788 David Teigland
> states:
>
> GFS requires that a failed node be fenced prior to gfs being told to
> begin recovery
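The requirement quoted above (fence first, then let GFS recover) is why DLM/GFS2 clusters must not run with fencing disabled. A minimal sketch of the usual Pacemaker setup on RHEL/CentOS 7, assuming pcs is used; the resource name `dlm` is an example:

```shell
# DLM and GFS2 depend on working fencing: stonith must stay enabled
# so a failed node is fenced before lock/journal recovery begins.
pcs property set stonith-enabled=true

# DLM is typically run as a cloned resource on every node, with
# on-fail=fence so a stuck lock manager escalates to fencing.
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence \
    clone interleave=true ordered=true
```

With `stonith-enabled=false`, DLM recovery would proceed without the fencing guarantee Teigland describes, risking filesystem corruption.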

Re: [ClusterLabs] HA static route

2016-03-15 Thread S0ke
So I switched the resource to use the following: When I do a failover from the master to the slave, it seems to add a route of 0.0.0.0/0 but still fails to start the p_src_eth0DEF resource with the same error.
> stderr: RTNETLINK answers: File exists
> stderr: ERROR: p_src_eth0DEF Failed to
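For context, "RTNETLINK answers: File exists" is the kernel rejecting `ip route add` because an identical route is already in the table, e.g. left behind from before the failover. A small sketch of the behaviour, assuming iproute2; the gateway and device are example values:

```shell
# First add succeeds; the kernel installs the default route.
ip route add default via 192.0.2.1 dev eth0

# A second identical add fails with "RTNETLINK answers: File exists",
# because the route is already present.
ip route add default via 192.0.2.1 dev eth0

# "replace" is idempotent: it adds the route if missing and
# overwrites it if present, so repeated runs never fail this way.
ip route replace default via 192.0.2.1 dev eth0
```

So a resource agent that unconditionally runs `ip route add` will fail exactly this way when the route survives the failover.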

[ClusterLabs] GFS and cLVM fencing requirements with DLM

2016-03-15 Thread Ferenc Wágner
Hi, I'm referring here to an ancient LKML thread introducing DLM. In http://article.gmane.org/gmane.linux.kernel/299788 David Teigland states: GFS requires that a failed node be fenced prior to gfs being told to begin recovery for that node, which sounds very plausible, as according to

Re: [ClusterLabs] Problems with pcs/corosync/pacemaker/drbd/vip/nfs

2016-03-15 Thread Todd Hebert
Thanks for the suggestion. I'll take a look at how this differs. :)

-Original Message-
I'm not very familiar with NFS in a cluster, but there is an ocf:heartbeat:nfsserver resource agent in the resource-agents package. OCF agents are generally preferable to lsb/systemd because they
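To illustrate the suggestion above: a minimal sketch of creating the `ocf:heartbeat:nfsserver` agent with pcs. The resource name and shared-info directory are example values; the directory should live on the replicated storage so NFS client state follows the failover:

```shell
# ocf:heartbeat:nfsserver (from the resource-agents package) gives
# Pacemaker real start/stop/monitor semantics, unlike a plain
# lsb/systemd wrapper around the nfs-server service.
pcs resource create nfs-server ocf:heartbeat:nfsserver \
    nfs_shared_infodir=/srv/nfs/nfsinfo \
    op monitor interval=30s
```

Ordering and colocation constraints would then tie this resource to the DRBD master and the floating IP.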

Re: [ClusterLabs] Problems with pcs/corosync/pacemaker/drbd/vip/nfs

2016-03-15 Thread Ken Gaillot
On 03/14/2016 12:47 PM, Todd Hebert wrote:
> Hello,
>
> I'm working on setting up a test-system that can handle NFS failover.
>
> The base is CentOS 7.
> I'm using ZVOL block devices out of ZFS to back DRBD replicated volumes.
>
> I have four DRBD resources (r0, r1, r2, r3, which are /dev/drbd1
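For a setup like the one described, each DRBD resource is normally wrapped in a Pacemaker master/slave (multi-state) resource. A sketch for one of the four resources, assuming the RHEL/CentOS 7 pcs syntax; `r0` is taken from the post, the other names are examples:

```shell
# ocf:linbit:drbd manages one DRBD resource; Pacemaker promotes
# exactly one node to Primary (master-max=1) across the two nodes.
pcs resource create drbd_r0 ocf:linbit:drbd \
    drbd_resource=r0 op monitor interval=30s

# notify=true is required so the agent sees peer promote/demote events.
pcs resource master drbd_r0_clone drbd_r0 \
    master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 notify=true
```

The filesystem, VIP, and NFS resources would then be colocated with and ordered after the DRBD master.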

[ClusterLabs] fence_scsi no such device

2016-03-15 Thread marvin
Hi, I'm trying to get fence_scsi working, but I get a "no such device" error. It's a two-node cluster with nodes called "node01" and "node03". The OS is RHEL 7.2. Here is some relevant info:

# pcs status
Cluster name: testrhel7cluster
Last updated: Tue Mar 15 15:05:40 2016 Last
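A common cause of "no such device" with fence_scsi is a device that does not support SCSI-3 persistent reservations, or a wrong device path. A sketch of how one might check and configure it, using the node names from the post; the device path and stonith resource name are example values:

```shell
# fence_scsi works via SCSI-3 persistent reservations, so first verify
# the shared device actually supports them (sg3_utils package):
sg_persist --in --read-keys --device=/dev/sdb

# Then create the stonith resource. meta provides=unfencing lets
# Pacemaker re-register a node's key when it rejoins the cluster.
pcs stonith create scsi-fence fence_scsi \
    pcmk_host_list="node01 node03" \
    devices=/dev/sdb \
    meta provides=unfencing
```

If `sg_persist` itself reports an error on the device, fence_scsi cannot use it, regardless of the cluster configuration.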