Hello Mathieu,

On 11/17/2011 07:22 PM, SEILLIER Mathieu wrote:
> Hi all,
> 
> I have to use Heartbeat with Pacemaker for high availability between 2 Tomcat 
> 5.5 servers under Red Hat Linux 5.4.
> The first server is active, the other one is passive. The master is called 
> servappli01, with IP address 186.20.100.81; the slave is called servappli02, 
> with IP address 186.20.100.82.
> I configured a virtual IP, 186.20.100.83. Tomcat is not launched when a 
> server boots; it is Heartbeat that starts Tomcat once it is running.
> All seems to be OK, each server sees the other as active, and the crm_mon 
> command shows this below:
> 
> ============
> Last updated: Thu Nov 17 19:03:34 2011
> Stack: Heartbeat
> Current DC: servappli01 (bf8e9a46-8691-4838-82d9-942a13aeedca) - partition 
> with quorum
> Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
> 2 Nodes configured, 2 expected votes
> 2 Resources configured.
> ============
> 
> Online: [ servappli01 servappli02 ]
> 
>  Clone Set: ClusterIPClone (unique)
>      ClusterIP:0        (ocf::heartbeat:IPaddr2):       Started servappli01
>      ClusterIP:1        (ocf::heartbeat:IPaddr2):       Started servappli02

You did not configure just a simple VIP but a cluster IP, which acts
like a simple static load balancer ... man iptables ... search for CLUSTERIP.

If this was not your intention, simply don't clone it.

If you want a cluster IP you have to choose the correct meta attributes:

clone ClusterIPClone ClusterIP \
        meta globally-unique="true" clone-node-max="2" interleave="true"
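
As a quick way to see what actually got configured: with a globally-unique
clone, IPaddr2 does not add a plain secondary address but installs a
CLUSTERIP iptables rule. A sketch for inspecting it (exact output depends
on your kernel and iptables version):

```shell
# The cloned VIP shows up as a CLUSTERIP target in the INPUT chain,
# including the hash mode (e.g. sourceip) and this node's local node number.
iptables -L INPUT -n

# The CLUSTERIP target also exposes its state under /proc,
# one file per clustered address.
cat /proc/net/ipt_CLUSTERIP/186.20.100.83
```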

>  Clone Set: TomcatClone (unique)
>      Tomcat:0   (ocf::heartbeat:tomcat):        Started servappli01
>      Tomcat:1   (ocf::heartbeat:tomcat):        Started servappli02
> 
> 
> The 2 Tomcat servers are identical, and the same webapps are deployed on each 
> server in order to be able to access the webapps on the other server if one 
> is down.
> By default, requests from clients are processed by the first server because 
> it's the master.
> My problem is that when I crash Tomcat on the first server, requests from 
> clients are not redirected to the second server. For a while, requests are 
> not processed; then Heartbeat restarts Tomcat itself and requests are 
> processed again by the first server.
> Requests are never forwarded to the second Tomcat if the first is down.

The default behavior on monitoring errors is a local restart. If you always
test from the same IP, I would expect your requests to fail while Tomcat
is not running on the node you are redirected to ... so if you choose
the clusterip_hash "sourceip-sourceport", your chance of being redirected
should be 50/50 ... if you want a "real" load balancer you might want to
integrate a service like ldirectord with real-server checks to remove a
non-working service from the load balancing.
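
As a rough sketch of that approach (the virtual address is yours; the
port 8080 and everything else here are assumptions, not a drop-in config),
an ldirectord.cf that drops a real server from the pool while its Tomcat
does not accept connections could look like:

```
# /etc/ha.d/ldirectord.cf -- minimal sketch only
checktimeout=10
checkinterval=15

# VIP on the front, the two Tomcats as real servers;
# checktype=connect removes a real server on failed TCP connects.
virtual=186.20.100.83:8080
        real=186.20.100.81:8080 masq
        real=186.20.100.82:8080 masq
        scheduler=rr
        protocol=tcp
        checktype=connect
```

With checktype=negotiate you could additionally poll your status URL
instead of a bare TCP connect.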

... ifconfig does not show addresses added with the ip command; use
"ip addr show" or define a label to see your VIP ...
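
For example (the label name "vip" is an arbitrary choice; iflabel is the
IPaddr2 parameter for attaching an interface label):

```shell
# Addresses added by IPaddr2 via the ip command are invisible to ifconfig,
# but ip lists them as secondary addresses:
ip addr show eth0

# Alternatively, give the VIP a label so it also appears in legacy
# ifconfig output as eth0:vip.
crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 \
        params ip=186.20.100.83 cidr_netmask=24 iflabel=vip
```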

Regards,
Andreas

-- 
Need help with Pacemaker?
http://www.hastexo.com/now

> 
> Here is my configuration :
> 
> ha.cf file (the same on each server) :
> 
> crm             respawn
> logfacility     local0
> logfile         /var/log/ha-log
> debugfile       /var/log/ha-debug
> warntime        10
> deadtime        20
> initdead        120
> keepalive       2
> autojoin        none
> node            servappli01
> node            servappli02
> ucast           eth0 186.20.100.81 # ignored by node1 (owner of ip)
> ucast           eth0 186.20.100.82 # ignored by node2 (owner of ip)
> 
> cib.xml file (the same on each server) :
> 
> <?xml version="1.0" ?>
> <cib admin_epoch="0" crm_feature_set="3.0.1" 
> dc-uuid="bf8e9a46-8691-4838-82d9-942a13aeedca" epoch="127" have-quorum="1" 
> num_updates="51" validate-with="pacemaker-1.0">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" 
> value="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87"/>
>         <nvpair id="cib-bootstrap-options-cluster-infrastructure" 
> name="cluster-infrastructure" value="Heartbeat"/>
>         <nvpair id="cib-bootstrap-options-expected-quorum-votes" 
> name="expected-quorum-votes" value="2"/>
>         <nvpair id="cib-bootstrap-options-no-quorum-policy" 
> name="no-quorum-policy" value="ignore"/>
>         <nvpair id="cib-bootstrap-options-stonith-enabled" 
> name="stonith-enabled" value="false"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="489a0305-862a-4280-bce5-6defa329df3f" type="normal" 
> uname="servappli01"/>
>       <node id="bf8e9a46-8691-4838-82d9-942a13aeedca" type="normal" 
> uname="servappli02"/>
>     </nodes>
>     <resources>
>       <clone id="TomcatClone">
>         <meta_attributes id="TomcatClone-meta_attributes">
>           <nvpair id="TomcatClone-meta_attributes-globally-unique" 
> name="globally-unique" value="true"/>
>         </meta_attributes>
>         <primitive class="ocf" id="Tomcat" provider="heartbeat" type="tomcat">
>           <instance_attributes id="Tomcat-instance_attributes">
>             <nvpair id="Tomcat-instance_attributes-tomcat_name" 
> name="tomcat_name" value="TomcatSBNG"/>
>             <nvpair id="Tomcat-instance_attributes-tomcat_user" 
> name="tomcat_user" value="tomcat"/>
>             <nvpair id="Tomcat-instance_attributes-statusurl" 
> name="statusurl" value="http://127.0.0.1:8080/mas-security/Security"/>
>             <nvpair id="Tomcat-instance_attributes-java_home" 
> name="java_home" value="/usr/java/default"/>
>             <nvpair id="Tomcat-instance_attributes-catalina_home" 
> name="catalina_home" value="/usr/share/tomcat55"/>
>           </instance_attributes>
>           <operations>
>             <op id="Tomcat-monitor-30s" interval="30s" name="monitor" 
> timeout="60s"/>
>             <op id="Tomcat-start-0" interval="0" name="start" timeout="120s"/>
>             <op id="Tomcat-stop-0" interval="0" name="stop" timeout="120s"/>
>           </operations>
>         </primitive>
>       </clone>
>       <clone id="ClusterIPClone">
>         <meta_attributes id="ClusterIPClone-meta_attributes">
>           <nvpair id="ClusterIPClone-meta_attributes-globally-unique" 
> name="globally-unique" value="true"/>
>         </meta_attributes>
>         <primitive class="ocf" id="ClusterIP" provider="heartbeat" 
> type="IPaddr2">
>           <instance_attributes id="ClusterIP-instance_attributes">
>             <nvpair id="ClusterIP-instance_attributes-ip" name="ip" 
> value="186.20.100.83"/>
>             <nvpair id="ClusterIP-instance_attributes-cidr_netmask" 
> name="cidr_netmask" value="24"/>
>             <nvpair id="ClusterIP-instance_attributes-clusterip_hash" 
> name="clusterip_hash" value="sourceip"/>
>           </instance_attributes>
>           <operations>
>             <op id="ClusterIP-monitor-30s" interval="30s" name="monitor"/>
>           </operations>
>         </primitive>
>       </clone>
>     </resources>
>     <constraints>
>       <rsc_colocation id="ip-with-tomcat" rsc="ClusterIPClone" 
> score="INFINITY" with-rsc="TomcatClone"/>
>       <rsc_order first="TomcatClone" id="ip-after-tomcat" score="INFINITY" 
> then="ClusterIPClone"/>
>     </constraints>
>     <rsc_defaults>
>       <meta_attributes id="rsc-options">
>         <nvpair id="rsc-options-resource-stickiness" 
> name="resource-stickiness" value="100"/>
>       </meta_attributes>
>     </rsc_defaults>
>   </configuration>
> </cib>
> 
> Another strange thing is that the VIP never appears in the output of the 
> ifconfig command...
> Can somebody help me please?
> I guess there is something wrong but I don't know what!
> Thanx
> 
> Mathieu
> _______________________________________________
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems


