Obviously, if you want the haproxy resource to run on both nodes, build a clone:
clone cl-haproxy haproxy meta clone-max="2" clone-node-max="1"
and bind it to the constraints you already wrote.
Also, better to give your haproxy resource a name like res-hap or similar; that
makes it easier to dig through the message logs later.
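
Something like this, for example (just a sketch; the names res-hap / cl-haproxy and the constraint ids are illustrative, adapt them to your setup). With a clone, each VIP is colocated with a running copy of haproxy and ordered after it, instead of forcing everything onto one node:

```
primitive res-hap lsb:haproxy \
        op monitor interval="30s"
clone cl-haproxy res-hap \
        meta clone-max="2" clone-node-max="1"
colocation vip1-with-haproxy inf: vip1 cl-haproxy
colocation vip2-with-haproxy inf: vip2 cl-haproxy
order haproxy-before-vip1 inf: cl-haproxy vip1
order haproxy-before-vip2 inf: cl-haproxy vip2
```

Note the colocations point at the clone (cl-haproxy), not the primitive; that way each VIP only requires a local running haproxy instance and the two VIPs can still live on different nodes.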

ps: where are those Servidor 3 & Servidor 4 coming from?!


2012/9/27 Viviana Cuellar Rivera <[email protected]>

> Hi all,
> I'm trying to setup HAProxy with pacemaker, the scenario is as follows:
>
>                          |-Server 1
> vip1---balancer 1--------|
>                          |-Server 2
>
>
>                          |-Servidor 3
> vip2---balancer 2--------|
>                          |-Servidor 4
>
> My configuration is:
> lvs1:~#crm configure edit
> node lvs1
> node lvs2
> primitive haproxy lsb:haproxy \
> op monitor interval="30s" \
> meta is-managed="true" target-role="Started"
> primitive vip1 ocf:heartbeat:IPaddr2 \
> params ip="10.200.2.231" cidr_netmask="255.255.255.0" nic="eth0" \
> op monitor interval="40s" timeout="20s" \
> meta target-role="Started"
> primitive vip2 ocf:heartbeat:IPaddr2 \
> params ip="10.200.2.224" cidr_netmask="255.255.255.0" nic="eth0" \
> op monitor interval="40s" timeout="20s" \
> meta target-role="Started"
> location vip1_pref_1 vip1 100: lvs1
> location vip1_pref_2 vip1 50: lvs2
> location vip2_pref_1 vip2 100: lvs2
> location vip2_pref_2 vip2 50: lvs1
> colocation haproxy-with-failover inf: haproxy vip1 vip2
> order haproxy-after-failover-ip inf: ( vip1 vip2 ) haproxy
> property $id="cib-bootstrap-options" \
> dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> cluster-infrastructure="openais" \
> expected-quorum-votes="3" \
> stonith-enabled="false" \
> no-quorum-policy="ignore"
>
> root@lvs1:~# crm status
> ============
> Last updated: Wed Sep 26 15:37:22 2012
> Last change: Wed Sep 26 15:37:20 2012 via cibadmin on lvs1
> Stack: openais
> Current DC: lvs2 - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 3 Nodes configured, 3 expected votes
> 3 Resources configured.
> ============
>
> Online: [ lvs1 lvs2 ]
>
>  vip1 (ocf::heartbeat:IPaddr2): Started lvs1
>  vip2 (ocf::heartbeat:IPaddr2): Started lvs2
>  haproxy (lsb:haproxy): Started lvs1
>
> But haproxy is not being monitored on both nodes; I don't know what I'm
> doing wrong :(
>
> I apologize for my English ;)
>
> Thanks!
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
>