I can’t answer all of this because I’m still working out how to do fencing, but 
I’ve been setting up a Pacemaker cluster in Amazon Web Services across two 
separate availability zones. Naturally, this means that I have to bridge 
subnets, so I’ve battled through a good bit of this already.

Imagine that you have a cluster node in each of two IP subnets: 10.100.0.0/24 
and 10.200.0.0/24. This configuration prevents you from doing two things:


1. You can’t use multicast for Corosync communication (but see the note below).
2. You can’t pick an IP in either subnet as the cluster VIP.
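For the first problem, Corosync can run over unicast UDP instead of multicast. As a rough sketch (assuming pcs 0.9 syntax; the cluster name is made up), the transport can be chosen at cluster setup time:

pcs cluster setup --name awscluster clusternode1 clusternode2 --transport udpu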

For the second problem, the way that I got around it was to pick an arbitrary 
subnet that exists _outside_ of all configured subnets in my environment: 
10.0.0.0/24. I then created routes from each of my cluster node subnets to 
VIPs on this subnet (I’m trying to make my cluster Active/Active, so I want 
two):

Destination        Target
10.0.0.100/32      network interface for cluster node 1
10.0.0.101/32      network interface for cluster node 2
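In AWS, those routes can be created with the CLI; here’s a sketch using the 
same placeholder route table and interface IDs as the script further down 
(eni-22222222 is likewise made up):

aws ec2 create-route --route-table-id rtb-99999999 \
    --destination-cidr-block 10.0.0.100/32 --network-interface-id eni-11111111
aws ec2 create-route --route-table-id rtb-99999999 \
    --destination-cidr-block 10.0.0.101/32 --network-interface-id eni-22222222

You’ll also need to disable the source/destination check on the nodes’ 
instances (or their network interfaces) so that they’ll accept traffic 
addressed to the VIPs.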

I then set up Pacemaker cluster resources for the VIPs:

pcs resource create cluster_vip1 ocf:heartbeat:IPaddr2 ip=10.0.0.100 \
    cidr_netmask=32 nic=eth0 op monitor interval=15s
pcs resource create cluster_vip2 ocf:heartbeat:IPaddr2 ip=10.0.0.101 \
    cidr_netmask=32 nic=eth0 op monitor interval=15s

The voodoo in this is that you specify the device name of the network 
interface that you’re mapping to (nic=eth0) rather than just the IP address. 
Otherwise, IPaddr2 will throw an error about how 10.0.0.100 isn’t an address 
that exists on any subnet on the cluster nodes. Then you need to make sure 
that the right VIP is running on the right cluster node, which will probably 
mean moving resources around:

pcs resource move cluster_vip1 clusternode1
pcs resource move cluster_vip2 clusternode2
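A note on the moves (as far as I understand it; check against your pcs 
version): pcs resource move works by adding a location constraint behind the 
scenes, so once each VIP is where you want it, you can drop the constraint 
and let resource stickiness hold things in place:

pcs resource clear cluster_vip1
pcs resource clear cluster_vip2

(On older pcs, remove the cli-prefer-* location constraints with pcs 
constraint remove instead.)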

At this point, if everything is working properly, you should be able to ping 
the VIPs (assuming no firewall rules are in the way) as long as they are 
associated with the appropriate nodes and the routes are correct. You’ll find 
that if you move a VIP to another node without updating the routing table, 
the pings will no longer work. Success! Well, almost…
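A quick sanity check from a host in either node subnet (10.0.0.100 is the 
first VIP from above; run the second command on the node that should be 
holding it):

ping -c 3 10.0.0.100
ip addr show dev eth0 | grep 10.0.0.100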

I know logically that to make this work, I need to sort out a fencing method 
that detects node failure, and then a fencing script that updates the routing 
table as appropriate to get the traffic moving to the failover node in the 
event of a…well…event… ☺ Something like this:

#!/bin/bash
# Point the VIP route at the surviving node’s interface, then move the resource.
sudo -u hacluster aws ec2 replace-route --route-table-id rtb-99999999 \
    --destination-cidr-block 10.0.0.100/32 --network-interface-id eni-11111111
pcs resource move cluster_vip1 clusternode2

Obviously, this would only work in an AWS deployment, and, like I said, I still 
haven’t figured out how to detect an outage to make this failover occur. 
Hopefully, though, this should get you pointed in the right direction.
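One avenue that might help with the detection piece (untested on my end, so 
treat it as a sketch): Pacemaker 1.1.15 added alert agents, which the cluster 
invokes on events such as node failures, passing details in environment 
variables like CRM_alert_kind and CRM_alert_node. A script along the lines of 
the one above could be registered that way (the path is hypothetical):

pcs alert create path=/usr/local/bin/vip-route-failover.sh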

--

[ jR ]
  M: +1 (703) 628-2621
  @: [email protected]

  there is no path to greatness; greatness is the path

From: "bhargav M.P" <[email protected]>
Reply-To: Cluster Labs - All topics related to open-source clustering welcomed 
<[email protected]>
Date: Tuesday, August 9, 2016 at 2:40 PM
To: "[email protected]" <[email protected]>
Subject: [ClusterLabs] Can Pacemaker monitor geographical separated servers

Hi All,
I have a deployment with two Linux servers that are geographically separated 
and sit in different subnets. I want the servers to work in Active/Standby 
mode, and I would like to use Pacemaker/Corosync to perform a switchover when 
the active fails.
My requirement:
I would like to have a single virtual IP address for accessing the Linux 
servers, and only the active node must hold the VIP (virtual IP address).

Can Pacemaker transfer the virtual IP address to the new active node when the 
current active fails? If so, how can the same virtual IP address remain 
accessible from the client, since it has now moved to a different subnet?
If the above use case cannot be supported by Pacemaker, what are the possible 
options I need to look at?

Thank you so much for the help,
Bhargav



