Yep, done that, works really well. I thought my config was good, but now I'm having trouble bringing the node back once it has failed. I'm unable to get it out of the on-fail (standby) state. Anyone know how?
Node test-01.sl.local (ea6257d7-d639-434f-8581-e5c7a831325a): standby (on-fail)

online test-01.sl.local doesn't work.

2009/11/5 Malte Geierhos <[email protected]>:
> You might want to set
>
>     net.ipv4.ip_nonlocal_bind = 1
>
> so haproxy can already listen on vip_1 and vip_2
>
> kind regards,
> Malte Geierhos
>
>> Apologies, all I needed was the on-fail setting and the clone function.
>> Now running a much simpler config!
>>
>> node $id="8d5816b1-a3d0-4fb8-b741-a090c2afb8b1" test-02.sl.local
>> node $id="ea6257d7-d639-434f-8581-e5c7a831325a" test-01.sl.local
>> primitive haproxy lsb:haproxy \
>>     op monitor interval="15s" on-fail="standby"
>> primitive haproxy_vip_1 ocf:heartbeat:IPaddr \
>>     params ip="192.168.0.111" nic="eth1" \
>>     op monitor interval="10s"
>> primitive haproxy_vip_2 ocf:heartbeat:IPaddr \
>>     params ip="192.168.0.112" nic="eth1" \
>>     op monitor interval="10s"
>> clone cl-haproxy haproxy
>> property $id="cib-bootstrap-options" \
>>     dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
>>     cluster-infrastructure="Heartbeat" \
>>     stonith-enabled="false"
>>
>> 2009/11/5 Matt <[email protected]>:
>>
>>> Hi,
>>>
>>> I've got pacemaker running with heartbeat in a two-node cluster. I'd
>>> like the haproxy service running on both servers, with a VIP on each
>>> server, but I'd like to fail over the VIP if the local haproxy fails
>>> to start or dies. I guess the problem is that haproxy shouldn't be a
>>> primitive, as it's the same service on each server and I want it
>>> running on both. The crm config below doesn't work. I'm guessing the
>>> answer is going to be really easy?
>>>
>>> node $id="8d5816b1-a3d0-4fb8-b741-a090c2afb8b1" test-02.sl.local
>>> node $id="ea6257d7-d639-434f-8581-e5c7a831325a" test-01.sl.local
>>> primitive haproxy-1 lsb:haproxy \
>>>     op monitor interval="10s"
>>> primitive haproxy-2 lsb:haproxy \
>>>     op monitor interval="10s"
>>> primitive haproxy_vip_1 ocf:heartbeat:IPaddr \
>>>     params ip="192.168.0.111" nic="eth1" \
>>>     op monitor interval="10s"
>>> primitive haproxy_vip_2 ocf:heartbeat:IPaddr \
>>>     params ip="192.168.0.112" nic="eth1" \
>>>     op monitor interval="10s"
>>> group haproxy_group_1 haproxy_vip_1 haproxy-1
>>> group haproxy_group_2 haproxy_vip_2 haproxy-2
>>> location test-01_ha-01 haproxy_group_1 \
>>>     rule $id="test-01_ha-01_rule-1" 100: #uname eq test-01.sl.local
>>> location test-01_ha-02 haproxy_group_2 \
>>>     rule $id="test-01_ha-02_rule-2" 90: #uname eq test-02.sl.local
>>> location test-02_ha-01 haproxy_group_1 \
>>>     rule $id="test-02_ha-01_rule-2" 90: #uname eq test-01.sl.local
>>> location test-02_ha-02 haproxy_group_2 \
>>>     rule $id="test-02_ha-02_rule-1" 100: #uname eq test-02.sl.local
>>> property $id="cib-bootstrap-options" \
>>>     dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
>>>     cluster-infrastructure="Heartbeat" \
>>>     stonith-enabled="false"
>>>

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
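To answer the question at the top of the thread: with on-fail="standby", the node is put into standby when the monitor fails, and just typing "online" at the crm prompt is not enough if the recorded failure is still in the CIB. A sketch of the usual sequence, assuming the Pacemaker 1.0 crm shell and the node/resource names from this thread (untested on this particular cluster):

```shell
# Clear the recorded monitor failure for the clone, so the policy
# engine forgets why it put the node into standby:
crm resource cleanup cl-haproxy

# Then take the node out of standby:
crm node online test-01.sl.local

# Roughly equivalent low-level command: delete the standby attribute
# for that node directly.
crm_standby -D -N test-01.sl.local
```

Without the cleanup step the failure count stays in place, which is typically why the node snaps straight back into standby.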
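On Malte's sysctl suggestion: the setting takes effect immediately via sysctl, but needs an entry in /etc/sysctl.conf to survive a reboot. A minimal sketch (to be run as root on both nodes):

```shell
# Allow processes to bind() to addresses not currently configured on
# this host, so haproxy can listen on vip_1 and vip_2 before the
# cluster moves them here:
sysctl -w net.ipv4.ip_nonlocal_bind=1

# Persist the setting across reboots:
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
```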
