Cloning IPaddr2 resources uses the iptables CLUSTERIP target. A good first 
step is to watch the traffic with tcpdump and see whether either box even 
receives the ICMP echo-request (from a ping), then work out whether it 
doesn't respond properly, doesn't receive it at all, or something else is 
going on.
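A minimal sketch of those checks (the VIP 192.168.1.100 and interface bond0 are taken from the config quoted below; both commands need root, so they are only echoed here rather than executed):

```shell
#!/bin/sh
# Values from the poster's configuration
VIP=192.168.1.100
NIC=bond0

# 1. On each node, watch whether the echo-request arrives at all:
echo tcpdump -ni "$NIC" icmp and host "$VIP"

# 2. Verify the CLUSTERIP rule that a cloned IPaddr2 installs is present:
echo iptables -nL INPUT
```

If the CLUSTERIP rule is missing from the INPUT chain, or only one node sees the echo-request, that narrows it to the iptables side rather than Pacemaker.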

I'd say it's more of an iptables/networking issue than a Pacemaker problem at 
this point. That said, you didn't explain why you want a shared VIP in the 
first place, or what the application is, so it may cause more problems than 
it's worth (e.g. if your app is running but broken on one box, the VIP will 
still route users to it).
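If the VIP really must only be served where the application is healthy, a colocation constraint helps. A sketch in crm shell syntax, using the `cl_vip` and `ora` clone names from the crm_mon output below (the constraint id is made up):

```
# Hypothetical: keep each VIP clone instance on a node where the
# application clone is running, so a node whose app instance has
# failed stops serving the VIP there.
colocation col_vip_with_ora inf: cl_vip ora
```

That only mitigates the "broken app still receives traffic" case; it doesn't address the reachability problem itself.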



On May 14, 2012, at 9:45 AM, Paul Damken wrote:

> Jake Smith <jsmith@...> writes:
> 
>> 
>> 
>> clone-node-max="2" should only be one. How about the output from crm_mon -fr1
>> and ip a s on each node?
>> 
>> Jake
>> 
>> ----- Reply message -----
>> From: "Paul Damken" <zen.suite <at> gmail.com>
>> To: <pacemaker <at> oss.clusterlabs.org>
>> Subject: [Pacemaker] VIP on Active/Active cluster
>> Date: Sat, May 12, 2012 2:49 pm
>> 
>> 
>> 
>> 
> 
> Jake, thanks. Here is the whole info. Same behavior: the VIP is neither 
> pingable nor reachable.
> 
> Do you think a shared VIP should work on SLES 11 SP1 HAE?
> I cannot get this VIP to work.
> 
> Resources: 
> 
> primitive ip_vip ocf:heartbeat:IPaddr2 \
>        params ip="192.168.1.100" nic="bond0" cidr_netmask="22" \
>        broadcast="192.168.1.255" clusterip_hash="sourceip-sourceport" iflabel="VIP1" \
>        op start interval="0" timeout="20" \
>        op stop interval="0" timeout="20" \
>        op monitor interval="10" timeout="20" start-delay="0"
> 
> clone cl_vip ip_vip \
>        meta interleave="true" globally-unique="true" clone-max="2" \
>        clone-node-max="1" target-role="Started" is-managed="true"
> 
> crm_mon:
> 
> ============
> Last updated: Mon May 14 08:27:50 2012
> Stack: openais
> Current DC: hanode1 - partition with quorum
> Version: 1.1.5-5bd2b9154d7d9f86d7f56fe0a74072a5a6590c60
> 2 Nodes configured, 2 expected votes
> 37 Resources configured.
> ============
> 
> Online: [ hanode2 hanode1 ]
> 
> Full list of resources:
> 
> cluster_mon     (ocf::pacemaker:ClusterMon):    Started hanode1
> Clone Set: HASI [HASI_grp]
>     Started: [ hanode2 hanode1 ]
> hanode1-stonith  (stonith:external/ipmi-operator):       Started hanode2
> hanode2-stonith  (stonith:external/ipmi-operator):       Started hanode1
> vghanode1        (ocf::heartbeat:LVM):   Started hanode1
> vghanode2        (ocf::heartbeat:LVM):   Started hanode2
> Clone Set: ora [ora_grp]
>     Started: [ hanode2 hanode1 ]
> Clone Set: cl_vip [ip_vip] (unique)
>     ip_vip:0   (ocf::heartbeat:IPaddr2):       Started hanode2
>     ip_vip:1   (ocf::heartbeat:IPaddr2):       Started hanode1
> 
> 
> 
> hanode1:~ # ip a s
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
>    inet 127.0.0.2/8 brd 127.255.255.255 scope host secondary lo
>    inet6 ::1/128 scope host
>       valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
>    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
>    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
> 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>    link/ether 9c:8e:99:24:72:a0 brd ff:ff:ff:ff:ff:ff
>    inet 192.168.1.58/22 brd 192.168.1.255 scope global bond0
>    inet 192.168.1.100/22 brd 192.168.1.255 scope global secondary bond0:VIP1
>    inet6 fe80::9e8e:99ff:fe24:72a0/64 scope link
>       valid_lft forever preferred_lft forever
> 
> -----------------------
> 
> 


_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
