The clustering configurations in both files are identical. That won't work
in a dynamic LB scenario: the LB and the Axis2 nodes need different
clustering configurations.

Please try replacing the clustering sections in the respective axis2.xml
files with the attached ones.

If you want to understand how to configure clustering, please see:

1. http://wso2.org/library/articles/introduction-wso2-carbon-clustering
2. http://wso2.org/library/articles/wso2-carbon-cluster-configuration-language
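
The key difference is this pairing (a minimal sketch taken from the attached
sections; it is also the mismatch to look for in the quoted checklist below).
The LB's axis2.xml defines a group management agent for the application domain:

    <groupManagement enable="true">
        <applicationDomain name="apache.axis2.application.domain"
                           description="Axis2 group"
                           agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
    </groupManagement>

while each Axis2 node's axis2.xml joins that same domain:

    <parameter name="domain">apache.axis2.application.domain</parameter>

Both sides must also use the same membershipScheme ("multicast" in the
attached sections).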


On Wed, Jul 13, 2011 at 1:30 AM, TrevorC <[email protected]> wrote:

>
> Here are my XML files for the Synapse and Axis2 servers, respectively.
> Contributions are highly appreciated.
>
> Afkham Azeez wrote:
> >
> > "No application members available" suggests that the Axis2 nodes were
> > not able to join the load balancer's (LB) cluster. This indicates one of
> > the following:
> >
> > 1. GroupManagement has not been enabled in the LB's axis2.xml
> > 2. A group management agent has not been defined in the LB's axis2.xml
> > for the domain defined in the Axis2 nodes' axis2.xml
> > 3. If you have enabled multicast based membership discovery, multicasting
> > is not working on your network, so membership discovery fails
> > 4. In the LB you have enabled WKA based membership discovery, while in the
> > Axis2 nodes you have enabled multicast based discovery (or vice versa)
> > 5. Members are advertising invalid hosts/ports because of an invalid
> > clustering config
> >
> >
> > Send me your axis2.xml files and synapse.xml file, and I will see what is
> > wrong.
> >
> >
> http://old.nabble.com/file/p32048735/axis2.xml axis2.xml
>
>


-- 
Afkham Azeez
Director of Architecture; WSO2, Inc.; http://wso2.com
Member; Apache Software Foundation; http://www.apache.org/
email: [email protected]  cell: +94 77 3320919
blog: http://blog.afkham.org
twitter: http://twitter.com/afkham_azeez
linked-in: http://lk.linkedin.com/in/afkhamazeez
Lean . Enterprise . Middleware
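
[Attachment 1 of 2 — clustering section for the Axis2 nodes' axis2.xml;
identified by its domain (apache.axis2.application.domain) and the absence
of a groupManagement section]
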
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

        <!--
           This parameter indicates whether the cluster has to be automatically initialized
           when the AxisConfiguration is built. If set to "true", the initialization will not be
           done at that stage, and some other party will have to explicitly initialize the cluster.
        -->
        <parameter name="AvoidInitiation">true</parameter>

        <!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"

           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based membership discovery. Membership is discovered with
                    the help of one or more nodes running at a Well-Known Address. New members
                    joining a cluster will first connect to a well-known node, register with it,
                    and get the membership list from it. When new members join, one of the
                    well-known nodes will notify the others in the group. When a member leaves
                    the cluster or is deemed to have left it, this will be detected by the Group
                    Membership Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">multicast</parameter>
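
        <!--
           A hypothetical sketch, not part of the attached configuration: if the
           membershipScheme above is changed to "wka", the well-known members must
           also be listed here, e.g.

           <members>
               <member>
                   <hostName>127.0.0.1</hostName>
                   <port>4000</port>
               </member>
           </members>
        -->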

        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">apache.axis2.application.domain</parameter>

        <!--
           When a Web service request is received and processed, should we update the states of
           all members in the cluster before the response is sent to the client? If the value of
           this parameter is set to "true", the response to the client will be sent only after
           all the members have been updated. Obviously, this can be time consuming. In some
           cases this overhead may not be acceptable, in which case the value of this parameter
           should be set to "false"
        -->
        <parameter name="synchronizeAll">false</parameter>

        <!--
          The maximum number of times we need to retry to send a message to a particular node
          before giving up and considering that node to be faulty
        -->
        <parameter name="maxRetries">10</parameter>

        <!-- The multicast address to be used -->
        <parameter name="mcastAddress">228.0.0.4</parameter>

        <!-- The multicast port to be used -->
        <parameter name="mcastPort">45564</parameter>

        <!-- The frequency of sending membership multicast messages (in ms) -->
        <parameter name="mcastFrequency">500</parameter>

        <!-- If a member does not respond within this time interval (in ms), the member will be
         deemed to have left the group
         -->
        <parameter name="memberDropTime">3000</parameter>

        <!--
           The IP address of the network interface to which multicasting should be bound.
           Multicasting will be done using this interface.
        -->
        <parameter name="mcastBindAddress">127.0.0.1</parameter>

        <!-- The host name or IP address of this member -->
        <parameter name="localMemberHost">127.0.0.1</parameter>

        <!--
        The TCP port used by this member. This is the port through which other nodes will
        contact this member
         -->
        <parameter name="localMemberPort">4010</parameter>

        <!--
        Preserve message ordering. This will be done according to sender order.
        -->
        <parameter name="preserveMessageOrder">true</parameter>

        <!--
        Maintain at-most-once message processing semantics
        -->
        <parameter name="atmostOnceMessageSemantics">true</parameter>
 
        <!--
           This interface is responsible for handling state replication. The property changes in
           the Axis2 context hierarchy on this node are propagated to all other nodes in the
           cluster.

           The "excludes" patterns can be used to specify the prefixes (e.g. local_*) or
           suffixes (e.g. *_local) of the properties to be excluded from replication. The pattern
           "*" indicates that all properties in a particular context should not be replicated.

           The "enable" attribute indicates whether context replication has been enabled
        -->
        <stateManager class="org.apache.axis2.clustering.state.DefaultStateManager"
                      enable="false">
            <replication>
                <defaults>
                    <exclude name="local_*"/>
                    <exclude name="LOCAL_*"/>
                </defaults>
                <context class="org.apache.axis2.context.ConfigurationContext">
                    <exclude name="local_*"/>
                    <exclude name="UseAsyncOperations"/>
                    <exclude name="SequencePropertyBeanMap"/>
                </context>
                <context class="org.apache.axis2.context.ServiceGroupContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
                <context class="org.apache.axis2.context.ServiceContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
            </replication>
        </stateManager>
    </clustering>
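
[Attachment 2 of 2 — clustering section for the LB's axis2.xml; identified
by its domain (wso2.carbon.lb.domain) and its groupManagement section, which
manages the Axis2 application domain]
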
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

        <!--
           This parameter indicates whether the cluster has to be automatically initialized
           when the AxisConfiguration is built. If set to "true", the initialization will not be
           done at that stage, and some other party will have to explicitly initialize the cluster.
        -->
        <parameter name="AvoidInitiation">true</parameter>

        <!--
           The membership scheme used in this setup. The only values supported at the moment are
           "multicast" and "wka"

           1. multicast - membership is automatically discovered using multicasting
           2. wka - Well-Known Address based membership discovery. Membership is discovered with
                    the help of one or more nodes running at a Well-Known Address. New members
                    joining a cluster will first connect to a well-known node, register with it,
                    and get the membership list from it. When new members join, one of the
                    well-known nodes will notify the others in the group. When a member leaves
                    the cluster or is deemed to have left it, this will be detected by the Group
                    Membership Service (GMS) using a TCP ping mechanism.
        -->
        <parameter name="membershipScheme">multicast</parameter>

        <!--
         The clustering domain/group. Nodes in the same group will belong to the same multicast
         domain. There will not be interference between nodes in different groups.
        -->
        <parameter name="domain">wso2.carbon.lb.domain</parameter>

        <!--
           When a Web service request is received and processed, should we update the states of
           all members in the cluster before the response is sent to the client? If the value of
           this parameter is set to "true", the response to the client will be sent only after
           all the members have been updated. Obviously, this can be time consuming. In some
           cases this overhead may not be acceptable, in which case the value of this parameter
           should be set to "false"
        -->
        <parameter name="synchronizeAll">false</parameter>

        <!--
          The maximum number of times we need to retry to send a message to a particular node
          before giving up and considering that node to be faulty
        -->
        <parameter name="maxRetries">10</parameter>

        <!-- The multicast address to be used -->
        <parameter name="mcastAddress">228.0.0.4</parameter>

        <!-- The multicast port to be used -->
        <parameter name="mcastPort">45564</parameter>

        <!-- The frequency of sending membership multicast messages (in ms) -->
        <parameter name="mcastFrequency">500</parameter>

        <!-- If a member does not respond within this time interval (in ms), the member will be
         deemed to have left the group
         -->
        <parameter name="memberDropTime">3000</parameter>

        <!--
           The IP address of the network interface to which multicasting should be bound.
           Multicasting will be done using this interface.
        -->
        <parameter name="mcastBindAddress">127.0.0.1</parameter>

        <!-- The host name or IP address of this member -->
        <parameter name="localMemberHost">127.0.0.1</parameter>

        <!--
        The TCP port used by this member. This is the port through which other nodes will
        contact this member
         -->
        <parameter name="localMemberPort">4000</parameter>

        <!--
        Preserve message ordering. This will be done according to sender order.
        -->
        <parameter name="preserveMessageOrder">true</parameter>

        <!--
        Maintain at-most-once message processing semantics
        -->
        <parameter name="atmostOnceMessageSemantics">true</parameter>
        
       <!--
        Enable the groupManagement entry if you need to run this node as a cluster manager.
        Multiple application domains with different GroupManagementAgent implementations
        can be defined in this section.
        -->
        <groupManagement enable="true">
            <applicationDomain name="apache.axis2.application.domain"
                               description="Axis2 group"
                               agent="org.apache.axis2.clustering.management.DefaultGroupManagementAgent"/>
        </groupManagement>
 
        <!--
           This interface is responsible for handling state replication. The property changes in
           the Axis2 context hierarchy on this node are propagated to all other nodes in the
           cluster.

           The "excludes" patterns can be used to specify the prefixes (e.g. local_*) or
           suffixes (e.g. *_local) of the properties to be excluded from replication. The pattern
           "*" indicates that all properties in a particular context should not be replicated.

           The "enable" attribute indicates whether context replication has been enabled
        -->
        <stateManager class="org.apache.axis2.clustering.state.DefaultStateManager"
                      enable="false">
            <replication>
                <defaults>
                    <exclude name="local_*"/>
                    <exclude name="LOCAL_*"/>
                </defaults>
                <context class="org.apache.axis2.context.ConfigurationContext">
                    <exclude name="local_*"/>
                    <exclude name="UseAsyncOperations"/>
                    <exclude name="SequencePropertyBeanMap"/>
                </context>
                <context class="org.apache.axis2.context.ServiceGroupContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
                <context class="org.apache.axis2.context.ServiceContext">
                    <exclude name="local_*"/>
                    <exclude name="my.sandesha.*"/>
                </context>
            </replication>
        </stateManager>
    </clustering>
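
A note on the attached sections: both bind multicasting and the local member
to 127.0.0.1, which only works when all nodes run on the same machine. For a
multi-host cluster, mcastBindAddress and localMemberHost would need to be the
addresses of real network interfaces.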
