Re: [ClusterLabs] HA static route

2016-03-14 Thread S0ke
So even if the default gateway is set in /etc/sysconfig/network-scripts/ifcfg-eth*, that could cause it?

 Original Message 
Subject: Re: [ClusterLabs] HA static route
Local Time: March 14, 2016 9:52 PM
UTC Time: March 15, 2016 2:52 AM
From: denni...@conversis.de
To: s...@protonmail
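Likely yes. If the same default route is also configured statically in the interface scripts, the boot scripts install it first and the cluster's later `ip route add` fails with `RTNETLINK answers: File exists`. A minimal sketch of the conflicting setup, with illustrative addresses (not taken from the thread):

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-eth0 (example values)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1   # installs a static default route at boot time,
                      # which then collides with a cluster-managed
                      # ocf:heartbeat:Route resource on start
```

Removing the `GATEWAY=` line (or moving the route entirely under cluster control) avoids the collision.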

Re: [ClusterLabs] HA static route

2016-03-14 Thread Dennis Jacobfeuerborn
On 15.03.2016 02:25, S0ke wrote:
> Trying to do HA for a static route. The resource is fine on HA1. But when I
> try to failover to HA2 it does not seem to add the route.
>
> Operation start for p_src_eth0DEF (ocf:heartbeat:Route) returned 1
>> stderr: RTNETLINK answers: File exists
>> stderr: ER

[ClusterLabs] HA static route

2016-03-14 Thread S0ke
Trying to do HA for a static route. The resource is fine on HA1. But when I try to failover to HA2 it does not seem to add the route.

Operation start for p_src_eth0DEF (ocf:heartbeat:Route) returned 1
> stderr: RTNETLINK answers: File exists
> stderr: ERROR: p_src_eth0DEF Failed to add network ro
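For context, a resource like the failing one could be defined roughly as below. This is a sketch only: the destination, device, and gateway values are hypothetical, since the thread does not show the actual configuration. The Route agent runs `ip route add` on start, which is exactly what returns `File exists` when the node already has that route from its boot scripts.

```shell
# Hypothetical recreation of the failing resource (example values)
pcs resource create p_src_eth0DEF ocf:heartbeat:Route \
    destination="default" device="eth0" gateway="192.168.1.1" \
    op monitor interval=10s
```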

Re: [ClusterLabs] Problems with pcs/corosync/pacemaker/drbd/vip/nfs

2016-03-14 Thread Digimer
On 14/03/16 01:47 PM, Todd Hebert wrote:
> Hello,
>
> I'm working on setting up a test-system that can handle NFS failover.
>
> The base is CentOS 7.
> I'm using ZVOL block devices out of ZFS to back DRBD replicated volumes.
>
> I have four DRBD resources (r0, r1, r2, r3, which are /dev/drbd1 dr

[ClusterLabs] Problems with pcs/corosync/pacemaker/drbd/vip/nfs

2016-03-14 Thread Todd Hebert
Hello,

I'm working on setting up a test-system that can handle NFS failover.

The base is CentOS 7. I'm using ZVOL block devices out of ZFS to back DRBD replicated volumes.

I have four DRBD resources (r0, r1, r2, r3, which are /dev/drbd1 drbd2 drbd3 and drbd4 respectively). These all have XFS fi
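One DRBD-backed filesystem in such a stack is typically wired up along the lines of the sketch below. All names and the pcs syntax shown are assumptions (the thread's actual configuration is truncated away); it illustrates the usual shape only: a master/slave DRBD resource, a Filesystem resource on top, and colocation/ordering so the filesystem follows the DRBD master.

```shell
# Sketch for one of the four resources (r0 on /dev/drbd1); names hypothetical
pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs resource master ms_drbd_r0 drbd_r0 \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create fs_r0 ocf:heartbeat:Filesystem \
    device=/dev/drbd1 directory=/exports/r0 fstype=xfs
pcs constraint colocation add fs_r0 with ms_drbd_r0 INFINITY with-rsc-role=Master
pcs constraint order promote ms_drbd_r0 then start fs_r0
```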

Re: [ClusterLabs] Cluster failover failure with Unresolved dependency

2016-03-14 Thread Ken Gaillot
On 03/10/2016 09:49 AM, Lorand Kelemen wrote:
> Dear List,
>
> After the creation and testing of a simple 2 node active-passive
> drbd+postfix cluster nearly everything works flawlessly (standby, failure
> of a filesystem resource + failover, splitbrain + manual recovery) however
> when delibarate

Re: [ClusterLabs] Antw: Antw: notice: throttle_handle_load: High CPU load detected

2016-03-14 Thread Ken Gaillot
On 02/29/2016 07:00 AM, Kostiantyn Ponomarenko wrote:
> I am back to this question =)
>
> I am still trying to understand the impact of "High CPU load detected"
> messages in the log.
> Looking in the code I figured out that setting "load-threshold" parameter
> to something higher than 100% solves
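If `load-threshold` is indeed the knob in play, it is a cluster-wide option and could be raised as sketched below. The 200% value is only an example taken from the discussion's "higher than 100%" observation, not a recommendation:

```shell
# Hypothetical example: raise the CPU load threshold that triggers
# Pacemaker's "High CPU load detected" throttling messages
pcs property set load-threshold="200%"

# Inspect the current value
pcs property list --all | grep load-threshold
```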

Re: [ClusterLabs] ClusterIP location constraint reappears after reboot

2016-03-14 Thread Ken Gaillot
On 02/22/2016 05:23 PM, Jeremy Matthews wrote:
> Thanks for the quick response again, and pardon for the delay in responding.
> A colleague of mine and I have been trying some different things today.
>
> But from the reboot on Friday, further below are the logs from corosync.log
> from the time

Re: [ClusterLabs] documentation on STONITH with remote nodes?

2016-03-14 Thread Adam Spiers
Ken Gaillot wrote:
> On 03/12/2016 05:07 AM, Adam Spiers wrote:
> > Is there any documentation on how STONITH works on remote nodes?  I
> > couldn't find any on clusterlabs.org, and it's conspicuously missing
> > from:
> >
> > http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_R

Re: [ClusterLabs] documentation on STONITH with remote nodes?

2016-03-14 Thread Ken Gaillot
On 03/12/2016 05:07 AM, Adam Spiers wrote:
> Is there any documentation on how STONITH works on remote nodes?  I
> couldn't find any on clusterlabs.org, and it's conspicuously missing
> from:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Remote/
>
> I'm guessing the ans
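The short version, as generally understood for Pacemaker Remote: remote nodes do not run fencing agents themselves; a full cluster node executes the fence action on a remote node's behalf, so the remote node just needs to appear in some fence device's host list. A hedged sketch with hypothetical device, address, and host names:

```shell
# Sketch: a fence device covering a remote node (all values are examples).
# Any full cluster node can run this device to fence "remote1".
pcs stonith create fence_remote1 fence_ipmilan \
    ipaddr=10.0.0.21 login=admin passwd=secret \
    pcmk_host_list="remote1"
```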

Re: [ClusterLabs] Security with Corosync

2016-03-14 Thread Jan Friesse
Nikhil Utane wrote:
> Follow-up question. I noticed that secauth was turned off in my
> corosync.conf file. I enabled it on all 3 nodes and restarted the cluster.
> Everything was working fine. However I just noticed that I had forgotten to
> copy the authkey to one of the nodes. It is present on 2 no
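For reference, the usual procedure is to generate the key once and distribute the same file to every node before enabling `secauth`; with a missing or mismatched authkey, a node's traffic cannot be authenticated by its peers. A sketch with placeholder hostnames:

```shell
# Sketch: generate the corosync authkey on one node and copy it to the
# others before enabling secauth (hostnames are placeholders)
corosync-keygen                    # writes /etc/corosync/authkey
for host in node2 node3; do
    scp -p /etc/corosync/authkey root@"$host":/etc/corosync/authkey
done
```

The matching corosync.conf fragment would then carry `secauth: on` in the `totem` section on all nodes.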