http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/clusters.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/clusters.xml b/docs/user-manual/en/clusters.xml
new file mode 100644
index 0000000..25e79b2
--- /dev/null
+++ b/docs/user-manual/en/clusters.xml
@@ -0,0 +1,998 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+        <!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+        %BOOK_ENTITIES;
+        ]>
+<chapter id="clusters">
+    <title>Clusters</title>
+    <section>
+        <title>Clusters Overview</title>
+        <para>HornetQ clusters allow groups of HornetQ servers to be connected
+            together in order to share message-processing load. Each active node in the cluster is
+            an active HornetQ server which manages its own messages and handles its own
+            connections.</para>
+        <note id="clustered-deprecation">
+            <para>The <emphasis>clustered</emphasis> parameter is deprecated 
and no longer needed for
+            setting up a cluster. If your configuration contains this 
parameter it will be ignored and
+            a message with the ID <literal>HQ221038</literal> will be 
logged.</para>
+        </note>
+        <para>The cluster is formed by each node declaring <emphasis>cluster connections</emphasis>
+            to other nodes in the core configuration file <literal
+                    >hornetq-configuration.xml</literal>. When a node forms a cluster connection to
+            another node, internally it creates a <emphasis>core bridge</emphasis> (as described in
+            <xref linkend="core-bridges"/>) connection between it and the other node. This is
+            done transparently behind the scenes - you don't have to declare an explicit bridge for
+            each node. These cluster connections allow messages to flow between the nodes of the
+            cluster to balance load.</para>
+        <para>Nodes can be connected together to form a cluster in many different topologies; we
+            will discuss a couple of the more common topologies later in this chapter.</para>
+        <para>We'll also discuss client side load balancing, where we can 
balance client connections
+            across the nodes of the cluster, and we'll consider message 
redistribution where HornetQ
+            will redistribute messages between nodes to avoid 
starvation.</para>
+        <para>Another important part of clustering is <emphasis>server 
discovery</emphasis> where
+            servers can broadcast their connection details so clients or other 
servers can connect
+            to them with the minimum of configuration.</para>
+        <warning id="copy-warning">
+            <para>Once a cluster node has been configured it is common to 
simply copy that configuration
+            to other nodes to produce a symmetric cluster. However, care must 
be taken when copying the
+            HornetQ files. Do not copy the HornetQ <emphasis>data</emphasis> 
(i.e. the
+            <literal>bindings</literal>, <literal>journal</literal>, and 
<literal>large-messages</literal>
+            directories) from one node to another. When a node is started for 
the first time and initializes
+            its journal files it also persists a special identifier to the 
<literal>journal</literal>
+            directory. This id <emphasis>must</emphasis> be unique among nodes 
in the cluster or the
+            cluster will not form properly.</para>
+        </warning>
+    </section>
+    <section id="clusters.server-discovery">
+        <title>Server discovery</title>
+        <para>Server discovery is a mechanism by which servers can propagate 
their connection details to:</para>
+        <itemizedlist>
+            <listitem>
+                <para>
+                    Messaging clients. A messaging client wants to be able to 
connect
+                    to the servers of the cluster without having specific 
knowledge of which servers
+                    in the cluster are up at any one time.
+                </para>
+            </listitem>
+            <listitem>
+                <para>Other servers. Servers in a cluster want to be able to 
create
+                    cluster connections to each other without having prior 
knowledge of all the
+                    other servers in the cluster.</para>
+            </listitem>
+        </itemizedlist>
+        <para>
+            This information, let's call it the Cluster Topology, is sent over normal HornetQ
+            connections to clients, and between servers over cluster connections. This being the case we need a
+            way of establishing the initial connection. This can be done using
+            dynamic discovery techniques like <ulink url="http://en.wikipedia.org/wiki/User_Datagram_Protocol">UDP</ulink>
+            and <ulink url="http://www.jgroups.org/">JGroups</ulink>, or by
+            providing a list of initial connectors.
+        </para>
+        <section>
+            <title>Dynamic Discovery</title>
+            <para>
+                Server discovery uses <ulink url="http://en.wikipedia.org/wiki/User_Datagram_Protocol">UDP</ulink>
+                multicast or <ulink url="http://www.jgroups.org/">JGroups</ulink> to broadcast server connection settings.
+            </para>
+            <section id="clusters.broadcast-groups">
+                <title>Broadcast Groups</title>
+                <para>A broadcast group is the means by which a server 
broadcasts connectors over the
+                    network. A connector defines a way in which a client (or 
other server) can make
+                    connections to the server. For more information on what a 
connector is, please see
+                    <xref linkend="configuring-transports"/>.</para>
+                <para>The broadcast group takes a set of connector pairs; each connector pair contains
+                    connection settings for a live server and a backup server (if one exists), and broadcasts them on
+                    the network. Depending on which broadcasting technique you configure for the cluster, it
+                    uses either UDP or JGroups to broadcast the connector pair information.</para>
+                <para>Broadcast groups are defined in the server configuration 
file <literal
+                        >hornetq-configuration.xml</literal>. There can be 
many broadcast groups per
+                    HornetQ server. All broadcast groups must be defined in a 
<literal
+                            >broadcast-groups</literal> element.</para>
+                <para>Let's take a look at an example broadcast group from 
<literal
+                        >hornetq-configuration.xml</literal> that defines a 
UDP broadcast group:</para>
+                <programlisting>
+&lt;broadcast-groups>
+   &lt;broadcast-group name="my-broadcast-group">
+      &lt;local-bind-address>172.16.9.3&lt;/local-bind-address>
+      &lt;local-bind-port>5432&lt;/local-bind-port>
+      &lt;group-address>231.7.7.7&lt;/group-address>
+      &lt;group-port>9876&lt;/group-port>
+      &lt;broadcast-period>2000&lt;/broadcast-period>
+      &lt;connector-ref connector-name="netty-connector"/>
+   &lt;/broadcast-group>
+&lt;/broadcast-groups></programlisting>
+                <para>Some of the broadcast group parameters are optional and 
you'll normally use the
+                    defaults, but we specify them all in the above example for 
clarity. Let's discuss
+                    each one in turn:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para><literal>name</literal> attribute. Each 
broadcast group in the server must
+                            have a unique name. </para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>local-bind-address</literal>. This is 
the local bind address that
+                            the datagram socket is bound to. If you have 
multiple network interfaces on
+                            your server, you would specify which one you wish 
to use for broadcasts by
+                            setting this property. If this property is not 
specified then the socket
+                            will be bound to the wildcard address, an IP 
address chosen by the
+                            kernel. This is a UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>local-bind-port</literal>. If you want 
to specify a local port to
+                            which the datagram socket is bound you can specify 
it here. Normally you
+                            would just use the default value of 
<literal>-1</literal> which signifies
+                            that an anonymous port should be used. This 
parameter is always specified in conjunction with
+                            <literal>local-bind-address</literal>. This is a 
UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>group-address</literal>. This is the 
multicast address to which
+                            the data will be broadcast. It is a class D IP 
address in the range <literal
+                                    >224.0.0.0</literal> to 
<literal>239.255.255.255</literal>, inclusive.
+                            The address <literal>224.0.0.0</literal> is 
reserved and is not available
+                            for use. This parameter is mandatory. This is a 
UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>group-port</literal>. This is the UDP 
port number used for
+                            broadcasting. This parameter is mandatory. This is 
a UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>broadcast-period</literal>. This is the 
period in milliseconds
+                            between consecutive broadcasts. This parameter is 
optional, the default
+                            value is <literal>2000</literal> 
milliseconds.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>connector-ref</literal>. This specifies the connector and
+                            optional backup connector that will be broadcast (see <xref
+                                    linkend="configuring-transports"/> for more information on connectors).
+                            The connector to be broadcast is specified by the <literal
+                                    >connector-name</literal> attribute.</para>
+                    </listitem>
+                </itemizedlist>
+
+                <para id="clusters.jgroups-example">Here is another example 
broadcast group that defines a JGroups broadcast group:</para>
+                <programlisting>
+&lt;broadcast-groups>
+   &lt;broadcast-group name="my-broadcast-group">
+      &lt;jgroups-file>test-jgroups-file_ping.xml&lt;/jgroups-file>
+      &lt;jgroups-channel>hornetq_broadcast_channel&lt;/jgroups-channel>
+      &lt;broadcast-period>2000&lt;/broadcast-period>
+    &lt;connector-ref connector-name="netty-connector"/>
+   &lt;/broadcast-group>
+&lt;/broadcast-groups></programlisting>
+                <para>To be able to use JGroups to broadcast, you must specify two attributes,
+                    <literal>jgroups-file</literal> and <literal>jgroups-channel</literal>, discussed
+                    in detail below:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para><literal>jgroups-file</literal> attribute. This is the name of the JGroups configuration
+                            file. It will be used to initialize JGroups channels. Make sure the file is on the
+                            Java resource path so that HornetQ can load it.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>jgroups-channel</literal> attribute. 
The name that JGroups channels connect
+                        to for broadcasting.</para>
+                    </listitem>
+                </itemizedlist>
+                <note>
+                    <para>The JGroups attributes (<literal>jgroups-file</literal> and <literal>jgroups-channel</literal>)
+                    and the UDP specific attributes described above are mutually exclusive. Only one set can be
+                    specified in a broadcast group configuration. Don't mix them!</para>
+                </note>
+                <para id="clusters.jgroups-file">
+                   The following is an example of a JGroups file:
+                   <programlisting>
+&lt;config xmlns="urn:org:jgroups"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+   xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
+   &lt;TCP loopback="true"
+      recv_buf_size="20000000"
+      send_buf_size="640000"
+      discard_incompatible_packets="true"
+      max_bundle_size="64000"
+      max_bundle_timeout="30"
+      enable_bundling="true"
+      use_send_queues="false"
+      sock_conn_timeout="300"
+
+      thread_pool.enabled="true"
+      thread_pool.min_threads="1"
+      thread_pool.max_threads="10"
+      thread_pool.keep_alive_time="5000"
+      thread_pool.queue_enabled="false"
+      thread_pool.queue_max_size="100"
+      thread_pool.rejection_policy="run"
+
+      oob_thread_pool.enabled="true"
+      oob_thread_pool.min_threads="1"
+      oob_thread_pool.max_threads="8"
+      oob_thread_pool.keep_alive_time="5000"
+      oob_thread_pool.queue_enabled="false"
+      oob_thread_pool.queue_max_size="100"
+      oob_thread_pool.rejection_policy="run"/>
+
+   &lt;FILE_PING location="../file.ping.dir"/>
+   &lt;MERGE2 max_interval="30000"
+      min_interval="10000"/>
+   &lt;FD_SOCK/>
+   &lt;FD timeout="10000" max_tries="5" />
+   &lt;VERIFY_SUSPECT timeout="1500"  />
+   &lt;BARRIER />
+   &lt;pbcast.NAKACK
+      use_mcast_xmit="false"
+      retransmit_timeout="300,600,1200,2400,4800"
+      discard_delivered_msgs="true"/>
+   &lt;UNICAST timeout="300,600,1200" />
+   &lt;pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
+      max_bytes="400000"/>
+   &lt;pbcast.GMS print_local_addr="true" join_timeout="3000"
+      view_bundling="true"/>
+   &lt;FC max_credits="2000000"
+      min_threshold="0.10"/>
+   &lt;FRAG2 frag_size="60000"  />
+   &lt;pbcast.STATE_TRANSFER/>
+   &lt;pbcast.FLUSH timeout="0"/>
+&lt;/config></programlisting>
+                </para>
+                <para>
+                   As shown, the file content defines a JGroups protocol stack. If you want HornetQ
+                   to use this stack for channel creation, you have to make sure the value of
+                   <literal>jgroups-file</literal> in your broadcast-group/discovery-group configuration
+                   is the name of this JGroups configuration file. For example, if the above stack
+                   configuration is stored in a file named "jgroups-stacks.xml", then your
+                   <literal>jgroups-file</literal> should be
+                   <programlisting>
+&lt;jgroups-file>jgroups-stacks.xml&lt;/jgroups-file></programlisting>
+                </para>
+            </section>
+            <section id="clusters.discovery-groups">
+                <title>Discovery Groups</title>
+                <para>While the broadcast group defines how connector information is broadcast from a
+                    server, a discovery group defines how connector information is received from a
+                    broadcast endpoint (a UDP multicast address or JGroups channel).</para>
+                <para>A discovery group maintains a list of connector pairs - 
one for each broadcast by
+                    a different server. As it receives broadcasts on the 
broadcast endpoint from a
+                    particular server it updates its entry in the list for 
that server.</para>
+                <para>If it has not received a broadcast from a particular 
server for a length of time
+                    it will remove that server's entry from its list.</para>
+                <para>Discovery groups are used in two places in 
HornetQ:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para>By cluster connections so they know how to 
obtain an initial connection to download the topology</para>
+                    </listitem>
+                    <listitem>
+                        <para>By messaging clients so they know how to obtain 
an initial connection to download the topology</para>
+                    </listitem>
+                </itemizedlist>
+                <para>
+                    Although a discovery group will always accept broadcasts, its current list of available live and
+                    backup servers is only ever used when an initial connection is made; from then on, server discovery is
+                    done over the normal HornetQ connections.
+                </para>
+                <note>
+                    <para>
+                    Each discovery group must be configured with a broadcast endpoint (UDP or JGroups) that matches its broadcast
+                    group counterpart. For example, if the broadcast group is configured using UDP, the discovery group must also use UDP, and the same
+                    multicast address.
+                    </para>
+                </note>
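+                <para>As a minimal illustration of this matching requirement (the group names and
+                    addresses are only examples), here is a broadcast group and a discovery group
+                    that pair up over the same UDP multicast address and port:</para>
+                <programlisting>
+&lt;broadcast-groups>
+   &lt;broadcast-group name="bg-group1">
+      &lt;group-address>231.7.7.7&lt;/group-address>
+      &lt;group-port>9876&lt;/group-port>
+      &lt;connector-ref connector-name="netty-connector"/>
+   &lt;/broadcast-group>
+&lt;/broadcast-groups>
+
+&lt;discovery-groups>
+   &lt;discovery-group name="dg-group1">
+      &lt;group-address>231.7.7.7&lt;/group-address>
+      &lt;group-port>9876&lt;/group-port>
+   &lt;/discovery-group>
+&lt;/discovery-groups></programlisting>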
+            </section>
+            <section>
+                <title>Defining Discovery Groups on the Server</title>
+                <para>For cluster connections, discovery groups are defined in 
the server side
+                    configuration file 
<literal>hornetq-configuration.xml</literal>. All discovery
+                    groups must be defined inside a 
<literal>discovery-groups</literal> element. There
+                    can be many discovery groups defined per HornetQ server. Let's look at an
+                    example:</para>
+                <programlisting>
+&lt;discovery-groups>
+   &lt;discovery-group name="my-discovery-group">
+      &lt;local-bind-address>172.16.9.7&lt;/local-bind-address>
+      &lt;group-address>231.7.7.7&lt;/group-address>
+      &lt;group-port>9876&lt;/group-port>
+      &lt;refresh-timeout>10000&lt;/refresh-timeout>
+   &lt;/discovery-group>
+&lt;/discovery-groups></programlisting>
+                <para>We'll consider each parameter of the discovery 
group:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para><literal>name</literal> attribute. Each 
discovery group must have a unique
+                            name per server.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>local-bind-address</literal>. If you are running with multiple network interfaces on the same machine, you
+                            may want to specify that the discovery group listens only on a specific interface. To do this you can specify the interface
+                            address with this parameter. This parameter is optional. This is a UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>group-address</literal>. This is the 
multicast IP address of the
+                            group to listen on. It should match the 
<literal>group-address</literal> in
+                            the broadcast group that you wish to listen from. 
This parameter is
+                            mandatory.  This is a UDP specific 
attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>group-port</literal>. This is the UDP 
port of the multicast
+                            group. It should match the 
<literal>group-port</literal> in the broadcast
+                            group that you wish to listen from. This parameter 
is mandatory. This is a UDP specific attribute.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>refresh-timeout</literal>. This is the period the discovery group
+                            waits after receiving the last broadcast from a particular server before
+                            removing that server's connector pair entry from its list. You would normally
+                            set this to a value significantly higher than the <literal
+                                    >broadcast-period</literal> on the broadcast group, otherwise servers
+                            might intermittently disappear from the list due to slight differences in
+                            timing, even though they are still broadcasting. This parameter is
+                            optional, the default value is <literal>10000</literal> milliseconds (10
+                            seconds).</para>
+                    </listitem>
+                </itemizedlist>
+                <para>Here is another example that defines a JGroups discovery 
group:</para>
+                <programlisting>
+&lt;discovery-groups>
+   &lt;discovery-group name="my-discovery-group">
+      &lt;jgroups-file>test-jgroups-file_ping.xml&lt;/jgroups-file>
+      &lt;jgroups-channel>hornetq_broadcast_channel&lt;/jgroups-channel>
+      &lt;refresh-timeout>10000&lt;/refresh-timeout>
+   &lt;/discovery-group>
+&lt;/discovery-groups></programlisting>
+                <para>To receive broadcasts from JGroups channels, you must specify two attributes,
+                    <literal>jgroups-file</literal> and <literal>jgroups-channel</literal>, discussed
+                    in detail below:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para><literal>jgroups-file</literal> attribute. This is the name of the JGroups configuration
+                            file. It will be used to initialize JGroups channels. Make sure the file is on the
+                            Java resource path so that HornetQ can load it.</para>
+                    </listitem>
+                    <listitem>
+                        <para><literal>jgroups-channel</literal> attribute. 
The name that JGroups channels connect
+                        to for receiving broadcasts.</para>
+                    </listitem>
+                </itemizedlist>
+                <note>
+                    <para>The JGroups attributes (<literal>jgroups-file</literal> and <literal>jgroups-channel</literal>)
+                    and the UDP specific attributes described above are mutually exclusive. Only one set can be
+                    specified in a discovery group configuration. Don't mix them!</para>
+                </note>
+            </section>
+            <section id="clusters-discovery.groups.clientside">
+                <title>Discovery Groups on the Client Side</title>
+                <para>Let's discuss how to configure a HornetQ client to use 
discovery to discover a
+                    list of servers to which it can connect. The way to do 
this differs depending on
+                    whether you're using JMS or the core API.</para>
+                <section>
+                    <title>Configuring client discovery using JMS</title>
+                    <para>If you're using JMS and you're also using the JMS 
Service on the server to
+                        load your JMS connection factory instances into JNDI, 
then you can specify which
+                        discovery group to use for your JMS connection factory 
in the server side xml
+                        configuration <literal>hornetq-jms.xml</literal>. 
Let's take a look at an
+                        example:</para>
+                    <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   &lt;discovery-group-ref discovery-group-name="my-discovery-group"/>
+   &lt;entries>
+      &lt;entry name="ConnectionFactory"/>
+   &lt;/entries>
+&lt;/connection-factory></programlisting>
+                    <para>The element <literal>discovery-group-ref</literal> 
specifies the name of a
+                        discovery group defined in 
<literal>hornetq-configuration.xml</literal>.</para>
+                    <para>When this connection factory is downloaded from JNDI 
by a client application
+                        and JMS connections are created from it, those 
connections will be load-balanced
+                        across the list of servers that the discovery group 
maintains by listening on
+                        the multicast address specified in the discovery group 
configuration.</para>
+                    <para>If you're using JMS, but you're not using JNDI to look up a connection factory
+                        - you're instantiating the JMS connection factory directly - then you can specify
+                        the discovery group parameters directly when creating the JMS connection
+                        factory. Here's an example:</para>
+                    <programlisting>
+final String groupAddress = "231.7.7.7";
+
+final int groupPort = 9876;
+
+ConnectionFactory jmsConnectionFactory =
+HornetQJMSClient.createConnectionFactory(new 
DiscoveryGroupConfiguration(groupAddress, groupPort,
+                       new UDPBroadcastGroupConfiguration(groupAddress, 
groupPort, null, -1)), JMSFactoryType.CF);
+
+Connection jmsConnection1 = jmsConnectionFactory.createConnection();
+
+Connection jmsConnection2 = 
jmsConnectionFactory.createConnection();</programlisting>
+                    <para>The <literal>refresh-timeout</literal> can be set 
directly on the DiscoveryGroupConfiguration
+                        by using the setter method 
<literal>setDiscoveryRefreshTimeout()</literal> if you
+                        want to change the default value.</para>
+                    <para>There is also a further parameter settable on the DiscoveryGroupConfiguration using the
+                        setter method <literal>setDiscoveryInitialWaitTimeout()</literal>. If the connection
+                        factory is used immediately after creation then it may not have had enough time
+                        to receive broadcasts from all the nodes in the cluster. On first usage, the
+                        connection factory will make sure it waits this long since creation before
+                        creating the first connection. The default value for this parameter is <literal
+                                >10000</literal> milliseconds.</para>
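+                    <para>As a sketch only (reusing the variables from the previous example), the two
+                        setter methods described above could be applied like this before the
+                        connection factory is created:</para>
+                    <programlisting>
+DiscoveryGroupConfiguration groupConfiguration =
+   new DiscoveryGroupConfiguration(groupAddress, groupPort,
+      new UDPBroadcastGroupConfiguration(groupAddress, groupPort, null, -1));
+
+// Wait up to 5 seconds for broadcasts before the first connection is made
+groupConfiguration.setDiscoveryInitialWaitTimeout(5000);
+
+// Drop a server from the list if no broadcast is received for 15 seconds
+groupConfiguration.setDiscoveryRefreshTimeout(15000);
+
+ConnectionFactory jmsConnectionFactory =
+   HornetQJMSClient.createConnectionFactory(groupConfiguration, JMSFactoryType.CF);</programlisting>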
+                </section>
+                <section>
+                    <title>Configuring client discovery using Core</title>
+                    <para>If you're using the core API to directly instantiate
+                        <literal>ClientSessionFactory</literal> instances, 
then you can specify the
+                        discovery group parameters directly when creating the 
session factory. Here's an
+                        example:</para>
+                        <programlisting>
+final String groupAddress = "231.7.7.7";
+final int groupPort = 9876;
+ServerLocator locator = HornetQClient.createServerLocatorWithHA(new DiscoveryGroupConfiguration(groupAddress, groupPort,
+                           new UDPBroadcastGroupConfiguration(groupAddress, groupPort, null, -1)));
+ClientSessionFactory factory = locator.createSessionFactory();
+ClientSession session1 = factory.createSession();
+ClientSession session2 = factory.createSession();</programlisting>
+                    <para>The <literal>refresh-timeout</literal> can be set 
directly on the DiscoveryGroupConfiguration
+                        by using the setter method 
<literal>setDiscoveryRefreshTimeout()</literal> if you
+                        want to change the default value.</para>
+                    <para>There is also a further parameter settable on the DiscoveryGroupConfiguration using the
+                        setter method <literal>setDiscoveryInitialWaitTimeout()</literal>. If the session factory
+                        is used immediately after creation then it may not have had enough time to
+                        receive broadcasts from all the nodes in the cluster. On first usage, the
+                        session factory will make sure it waits this long since creation before creating
+                        the first session. The default value for this parameter is <literal
+                                >10000</literal> milliseconds.</para>
+                </section>
+            </section>
+        </section>
+        <section>
+            <title>Discovery using static Connectors</title>
+            <para>Sometimes it may be impossible to use UDP on the network you are using. In this case it's
+                possible to configure a connection with an initial list of possible servers. This could be just
+                one server that you know will always be available, or a list of servers where at least one will
+                be available.</para>
+            <para>This doesn't mean that you have to know where all your servers are going to be hosted; you
+                can configure these servers to use the reliable servers to connect to. Once they are connected,
+                their connection details will be propagated via the server they connect to.</para>
+            <section>
+                <title>Configuring a Cluster Connection</title>
+                <para>For cluster connections there is no extra configuration needed; you just need to make sure that any
+                    connectors are defined in the usual manner (see <xref linkend="configuring-transports"/> for more
+                    information on connectors). These are then referenced by the cluster connection configuration.</para>
+            </section>
+            <section>
+                <title>Configuring a Client Connection</title>
+                <para>A static list of possible servers can also be used by a 
normal client.</para>
+                <section>
+                    <title>Configuring client discovery using JMS</title>
+                    <para>If you're using JMS and you're also using the JMS 
Service on the server to
+                        load your JMS connection factory instances into JNDI, 
then you can specify which
+                        connectors to use for your JMS connection factory in 
the server side xml
+                        configuration <literal>hornetq-jms.xml</literal>. 
Let's take a look at an
+                        example:</para>
+                    <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   &lt;connectors>
+      &lt;connector-ref connector-name="netty-connector"/>
+      &lt;connector-ref connector-name="netty-connector2"/>
+      &lt;connector-ref connector-name="netty-connector3"/>
+   &lt;/connectors>
+   &lt;entries>
+      &lt;entry name="ConnectionFactory"/>
+   &lt;/entries>
+&lt;/connection-factory></programlisting>
+                    <para>
+                        The element <literal>connectors</literal> contains a list of pre-defined connectors in the
+                        <literal>hornetq-configuration.xml</literal> file. 
When this connection factory is downloaded
+                        from JNDI by a client application and JMS connections 
are created from it, those connections will
+                        be load-balanced across the list of servers defined by 
these connectors.
+                    </para>
+                    <para>
+                        If you're using JMS but you're not using JNDI to look up a connection
+                        factory, i.e. you're instantiating the JMS connection factory directly, then
+                        you can specify the connector list directly when creating the JMS connection
+                        factory. Here's an example:
+                    </para>
+                    <programlisting>
+HashMap&lt;String, Object> map = new HashMap&lt;String, Object>();
+map.put("host", "myhost");
+map.put("port", "5445");
+TransportConfiguration server1 = new 
TransportConfiguration(NettyConnectorFactory.class.getName(), map);
+HashMap&lt;String, Object> map2 = new HashMap&lt;String, Object>();
+map2.put("host", "myhost2");
+map2.put("port", "5446");
+TransportConfiguration server2 = new 
TransportConfiguration(NettyConnectorFactory.class.getName(), map2);
+
+HornetQConnectionFactory cf = 
HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, server1, 
server2);</programlisting>
+                </section>
+                <section>
+                    <title>Configuring client discovery using Core</title>
+                    <para>If you are using the core API then the same can be 
done as follows:</para>
+                    <programlisting>
+HashMap&lt;String, Object> map = new HashMap&lt;String, Object>();
+map.put("host", "myhost");
+map.put("port", "5445");
+TransportConfiguration server1 = new 
TransportConfiguration(NettyConnectorFactory.class.getName(), map);
+HashMap&lt;String, Object> map2 = new HashMap&lt;String, Object>();
+map2.put("host", "myhost2");
+map2.put("port", "5446");
+TransportConfiguration server2 = new 
TransportConfiguration(NettyConnectorFactory.class.getName(), map2);
+
+ServerLocator locator = HornetQClient.createServerLocatorWithHA(server1, 
server2);
+ClientSessionFactory factory = locator.createSessionFactory();
+ClientSession session = factory.createSession();</programlisting>
+                </section>
+            </section>
+        </section>
+    </section>
+    <section>
+        <title>Server-Side Message Load Balancing</title>
+        <para>If cluster connections are defined between nodes of a cluster, 
then HornetQ will load
+            balance messages arriving at a particular node from a 
client.</para>
+        <para>Let's take a simple example of a cluster of four nodes A, B, C, 
and D arranged in a
+            <emphasis>symmetric cluster</emphasis> (described in
+            <xref linkend="symmetric-cluster"/>). We have a queue called 
<literal>OrderQueue</literal>
+            deployed on each node of the cluster.</para>
+        <para>We have client Ca connected to node A, sending orders to the server. We also have
+            order processor clients Pa, Pb, Pc, and Pd connected to each of the nodes A, B, C, D. If
+            no cluster connection was defined on node A, then as order 
messages arrive on node A
+            they will all end up in the <literal>OrderQueue</literal> on node 
A, so will only get
+            consumed by the order processor client attached to node A, 
Pa.</para>
+        <para>If we define a cluster connection on node A, then as order messages arrive on node A,
+            instead of all of them going into the local 
<literal>OrderQueue</literal> instance, they
+            are distributed in a round-robin fashion between all the nodes of 
the cluster. The
+            messages are forwarded from the receiving node to other nodes of 
the cluster. This is
+            all done on the server side; the client maintains a single connection to node A.</para>
+        <para>For example, messages arriving on node A might be distributed in 
the following order
+            between the nodes: B, D, C, A, B, D, C, A, B, D. The exact order 
depends on the order
+            the nodes started up, but the algorithm used is round robin.</para>
+        <para>HornetQ cluster connections can be configured to always blindly 
load balance messages
+            in a round robin fashion irrespective of whether there are any 
matching consumers on
+            other nodes, but they can be a bit cleverer than that and also be 
configured to only
+            distribute to other nodes if they have matching consumers. We'll 
look at both these
+            cases in turn with some examples, but first we'll discuss 
configuring cluster
+            connections in general.</para>
+        <section id="clusters.cluster-connections">
+            <title>Configuring Cluster Connections</title>
+            <para>Cluster connections group servers into clusters so that 
messages can be load
+                balanced between the nodes of the cluster. Let's take a look 
at a typical cluster
+                connection. Cluster connections are always defined in <literal
+                        >hornetq-configuration.xml</literal> inside a <literal
+                        >cluster-connection</literal> element. There can be 
zero or more cluster
+                connections defined per HornetQ server.</para>
+            <programlisting>
+&lt;cluster-connections>
+   &lt;cluster-connection name="my-cluster">
+      &lt;address>jms&lt;/address>
+      &lt;connector-ref>netty-connector&lt;/connector-ref>
+      &lt;check-period>1000&lt;/check-period>
+      &lt;connection-ttl>5000&lt;/connection-ttl>
+      &lt;min-large-message-size>50000&lt;/min-large-message-size>
+      &lt;call-timeout>5000&lt;/call-timeout>
+      &lt;retry-interval>500&lt;/retry-interval>
+      &lt;retry-interval-multiplier>1.0&lt;/retry-interval-multiplier>
+      &lt;max-retry-interval>5000&lt;/max-retry-interval>
+      &lt;initial-connect-attempts>-1&lt;/initial-connect-attempts>
+      &lt;reconnect-attempts>-1&lt;/reconnect-attempts>
+      &lt;use-duplicate-detection>true&lt;/use-duplicate-detection>
+      &lt;forward-when-no-consumers>false&lt;/forward-when-no-consumers>
+      &lt;max-hops>1&lt;/max-hops>
+      &lt;confirmation-window-size>32000&lt;/confirmation-window-size>
+      &lt;call-failover-timeout>30000&lt;/call-failover-timeout>
+      &lt;notification-interval>1000&lt;/notification-interval>
+      &lt;notification-attempts>2&lt;/notification-attempts>
+      &lt;discovery-group-ref discovery-group-name="my-discovery-group"/>
+   &lt;/cluster-connection>
+&lt;/cluster-connections></programlisting>
+            <para>In the above cluster connection all parameters have been explicitly specified. The
+                following shows all the available configuration options:</para>
+            <itemizedlist>
+                <listitem id="clusters.address">
+                    <para><literal>address</literal>. Each cluster connection 
only applies to
+                        messages sent to an address that starts with this 
value. Note: this does
+                        not use wild-card matching.</para>
+                    <para>In this case, this cluster connection will load balance messages sent to
+                        addresses that start with <literal>jms</literal>. This cluster connection
+                        will, in effect, apply to all JMS queues and topics since they map to core
+                        queues that start with the substring "jms".</para>
+                    <para>The address can be any value and you can have many 
cluster connections
+                        with different values of <literal>address</literal>, 
simultaneously
+                        balancing messages for those addresses, potentially to 
different clusters of
+                        servers. By having multiple cluster connections on 
different addresses a
+                        single HornetQ Server can effectively take part in 
multiple clusters
+                        simultaneously.</para>
+                    <para>Be careful not to have multiple cluster connections 
with overlapping
+                        values of <literal>address</literal>, e.g. "europe" 
and "europe.news" since
+                        this could result in the same messages being 
distributed between more than
+                        one cluster connection, possibly resulting in 
duplicate deliveries.</para>
+                    <para>This parameter is mandatory.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>connector-ref</literal>. This is the 
connector which will be sent to other nodes in
+                    the cluster so they have the correct cluster 
topology.</para>
+                    <para>This parameter is mandatory.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>check-period</literal>. The period (in 
milliseconds) used to check if the cluster connection
+                        has failed to receive pings from another server. 
Default is 30000.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>connection-ttl</literal>. This is how long a 
cluster connection should stay alive if it
+                   stops receiving messages from a specific node in the 
cluster. Default is 60000.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>min-large-message-size</literal>. If the 
message size (in bytes) is larger than this
+                    value then it will be split into multiple segments when 
sent over the network to other cluster
+                    members. Default is 102400.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>call-timeout</literal>. When a packet is 
sent via a cluster connection and is a blocking
+                   call, i.e. for acknowledgements, this is how long it will 
wait (in milliseconds) for the reply before
+                   throwing an exception. Default is 30000.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>retry-interval</literal>. We mentioned 
before that, internally,
+                        cluster connections cause bridges to be created 
between the nodes of the
+                        cluster. If the cluster connection is created and the 
target node has not
+                        been started, or say, is being rebooted, then the 
cluster connections from
+                        other nodes will retry connecting to the target until 
it comes back up, in
+                        the same way as a bridge does.</para>
+                    <para>This parameter determines the interval in 
milliseconds between retry
+                        attempts. It has the same meaning as the 
<literal>retry-interval</literal>
+                        on a bridge (as described in <xref 
linkend="core-bridges"/>).</para>
+                    <para>This parameter is optional and its default value is 
<literal>500</literal>
+                        milliseconds.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>retry-interval-multiplier</literal>. This is 
a multiplier used to increase the
+                   <literal>retry-interval</literal> after each reconnect 
attempt, default is 1.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>max-retry-interval</literal>. The maximum 
delay (in milliseconds) for retries.
+                   Default is 2000.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>initial-connect-attempts</literal>. The 
number of times the system will
+                        try to connect to a node in the cluster initially. If the max-retry is achieved this
+                        node will be considered permanently down and the 
system will not route messages
+                        to this node. Default is -1 (infinite retries).</para>
+                </listitem>
+                <listitem>
+                    <para><literal>reconnect-attempts</literal>. The number of 
times the system will
+                        try to reconnect to a node in the cluster. If the 
max-retry is achieved this node will
+                        be considered permanently down and the system will 
stop routing messages to this
+                        node. Default is -1 (infinite retries).</para>
+                </listitem>
+                <listitem>
+                    <para><literal>use-duplicate-detection</literal>. 
Internally cluster connections
+                        use bridges to link the nodes, and bridges can be 
configured to add a
+                        duplicate id property in each message that is 
forwarded. If the target node
+                        of the bridge crashes and then recovers, messages 
might be resent from the
+                        source node. By enabling duplicate detection any 
duplicate messages will be
+                        filtered out and ignored on receipt at the target 
node.</para>
+                    <para>This parameter has the same meaning as 
<literal>use-duplicate-detection</literal>
+                        on a bridge. For more information on duplicate 
detection, please see
+                        <xref linkend="duplicate-detection"/>. Default is 
true.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>forward-when-no-consumers</literal>. This 
parameter determines
+                        whether messages will be distributed round robin 
between other nodes of the
+                        cluster <emphasis>regardless</emphasis> of whether or 
not there are matching or
+                        indeed any consumers on other nodes. </para>
+                    <para>If this is set to <literal>true</literal> then each 
incoming message will
+                        be round robin'd even though the same queues on the 
other nodes of the
+                        cluster may have no consumers at all, or they may have 
consumers that have
+                        non-matching message filters (selectors). Note that
HornetQ will
+                        <emphasis>not</emphasis> forward messages to other 
nodes if there are no
+                        <emphasis>queues</emphasis> of the same name on the 
other nodes, even if
+                        this parameter is set to 
<literal>true</literal>.</para>
+                    <para>If this is set to <literal>false</literal> then 
HornetQ will only forward
+                        messages to other nodes of the cluster if the address 
to which they are
+                        being forwarded has queues which have consumers, and 
if those consumers have
+                        message filters (selectors) at least one of those 
selectors must match the
+                        message.</para>
+                    <para>Default is false.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>max-hops</literal>. When a cluster 
connection decides the set of
+                        nodes to which it might load balance a message, those 
nodes do not have to
+                        be directly connected to it via a cluster connection. 
HornetQ can be
+                        configured to also load balance messages to nodes 
which might be connected
+                        to it only indirectly with other HornetQ servers as 
intermediates in a
+                        chain.</para>
+                    <para>This allows HornetQ to be configured in more complex 
topologies and still
+                        provide message load balancing. We'll discuss this 
more later in this
+                        chapter.</para>
+                    <para>The default value for this parameter is 
<literal>1</literal>, which means
+                        messages are only load balanced to other HornetQ servers which are directly
+                        connected to this server. This parameter is 
optional.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>confirmation-window-size</literal>. The size (in bytes) of the
+                   window used for sending confirmations from the server it is connected to. Once
+                   the server has received <literal>confirmation-window-size</literal> bytes it
+                   notifies its client. Default is 1048576. A value of -1 means no window.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>call-failover-timeout</literal>. Similar to 
<literal>call-timeout</literal> but used
+                   when a call is made during a failover attempt. Default is 
-1 (no timeout).</para>
+                </listitem>
+                <listitem>
+                   <para><literal>notification-interval</literal>. How often 
(in milliseconds) the cluster connection
+                   should broadcast itself when attaching to the cluster. 
Default is 1000.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>notification-attempts</literal>. How many 
times the cluster connection should
+                   broadcast itself when connecting to the cluster. Default is 
2.</para>
+                </listitem>
+                <listitem>
+                    <para><literal>discovery-group-ref</literal>. This 
parameter determines which
+                        discovery group is used to obtain the list of other 
servers in the cluster
+                        that this cluster connection will make connections 
to.</para>
+                </listitem>
+            </itemizedlist>
+            <para>
+                Alternatively if you would like your cluster connections to 
use a static list of
+                servers for discovery then you can do it like this.
+            </para>
+            <programlisting>
+&lt;cluster-connection name="my-cluster">
+   ...
+   &lt;static-connectors>
+      &lt;connector-ref>server0-connector&lt;/connector-ref>
+      &lt;connector-ref>server1-connector&lt;/connector-ref>
+   &lt;/static-connectors>
+&lt;/cluster-connection></programlisting>
+            <para>
+                Here we have defined 2 servers, and we know for sure that at least one of them will
+                be available. There may be many more servers in the cluster, but these will be
+                discovered via one of these connectors once an initial connection has been
+                made.</para>
+        </section>
+        <section id="clusters.clusteruser">
+            <title>Cluster User Credentials</title>
+            <para>When creating connections between nodes of a cluster to form 
a cluster connection,
+                HornetQ uses a cluster user and cluster password which is 
defined in <literal
+                        >hornetq-configuration.xml</literal>:</para>
+            <programlisting>
+&lt;cluster-user>HORNETQ.CLUSTER.ADMIN.USER&lt;/cluster-user>
+&lt;cluster-password>CHANGE ME!!&lt;/cluster-password></programlisting>
+            <warning>
+                <para>It is imperative that these values are changed from 
their default, or remote
+                    clients will be able to make connections to the server 
using the default values.
+                    If they are not changed from the default, HornetQ will 
detect this and pester
+                    you with a warning on every start-up.</para>
+            </warning>
+        </section>
+    </section>
+    <section id="clusters.client.loadbalancing">
+        <title>Client-Side Load Balancing</title>
+        <para>With HornetQ client-side load balancing, subsequent sessions 
created using a single
+            session factory can be connected to different nodes of the 
cluster. This allows sessions
+            to spread smoothly across the nodes of a cluster and not be 
"clumped" on any particular
+            node.</para>
+        <para>The load balancing policy to be used by the client factory is 
configurable. HornetQ
+            provides four out-of-the-box load balancing policies, and you can 
also implement your own
+            and use that.</para>
+        <para>The out-of-the-box policies are</para>
+        <itemizedlist>
+            <listitem>
+                <para>Round Robin. With this policy the first node is chosen 
randomly then each
+                    subsequent node is chosen sequentially in the same 
order.</para>
+                <para>For example nodes might be chosen in the order B, C, D, 
A, B, C, D, A, B or D,
+                    A, B, C, D, A, B, C, D or C, D, A, B, C, D, A, B, C.</para>
+                <para>Use 
<literal>org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy</literal>
+                    as the 
<literal>&lt;connection-load-balancing-policy-class-name></literal>.</para>
+            </listitem>
+            <listitem>
+                <para>Random. With this policy each node is chosen 
randomly.</para>
+                <para>Use 
<literal>org.hornetq.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy</literal>
+                    as the 
<literal>&lt;connection-load-balancing-policy-class-name></literal>.</para>
+            </listitem>
+            <listitem>
+                <para>Random Sticky. With this policy the first node is chosen 
randomly and then re-used for subsequent
+                    connections.</para>
+                <para>Use 
<literal>org.hornetq.api.core.client.loadbalance.RandomStickyConnectionLoadBalancingPolicy</literal>
+                    as the 
<literal>&lt;connection-load-balancing-policy-class-name></literal>.</para>
+            </listitem>
+            <listitem>
+                <para>First Element. With this policy the "first" (i.e. 0th) 
node is always returned.</para>
+                <para>Use 
<literal>org.hornetq.api.core.client.loadbalance.FirstElementConnectionLoadBalancingPolicy</literal>
+                    as the 
<literal>&lt;connection-load-balancing-policy-class-name></literal>.</para>
+            </listitem>
+        </itemizedlist>
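+        <para>The round robin order above (a random starting node, then sequential selection with
+            wrap-around) can be sketched with a small standalone helper. This is purely an
+            illustration of the selection order, not HornetQ API:</para>

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of the round-robin selection order described above:
// a random starting index, then sequential picks with wrap-around.
// Not HornetQ API; the names here are made up for the example.
public class RoundRobinOrder {
    public static List<String> order(String[] nodes, int start, int picks) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < picks; i++) {
            // wrap around the node list after the last element
            out.add(nodes[(start + i) % nodes.length]);
        }
        return out;
    }
}
```

+        <para>With nodes A, B, C, D and a starting index of 1, the first five picks come out as
+            B, C, D, A, B, matching one of the orders listed above.</para>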
+        <para>You can also implement your own policy by implementing the interface <literal
+            >org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy</literal>.</para>
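+        <para>As a minimal sketch, a custom policy might favour the first server in the list. The
+            class below assumes the interface exposes a single <literal>select(int max)</literal>
+            method returning the index of the server to use, mirroring the built-in policies; it is
+            written without the HornetQ import so it stands alone:</para>

```java
import java.util.Random;

// Sketch of a custom load balancing policy. In a real client this class would declare
//   implements org.hornetq.api.core.client.loadbalance.ConnectionLoadBalancingPolicy
// The select(int) signature below is an assumption based on the built-in policies.
public class PreferFirstPolicy {
    private final Random random = new Random();

    // Return the index of the server to connect to, out of max candidates:
    // pick server 0 half of the time, otherwise spread evenly over the rest.
    public int select(int max) {
        if (max <= 1 || random.nextBoolean()) {
            return 0;
        }
        return 1 + random.nextInt(max - 1);
    }
}
```

+        <para>The policy would then be referenced by its fully qualified class name, in the same way
+            as the out-of-the-box policies above.</para>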
+        <para>How you specify the load balancing policy differs depending on whether you are using JMS or the
+            core API. If you don't specify a policy then the default will be 
used which is <literal
+                    
>org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy</literal>.</para>
+        <para>If you're using JMS, and you're using JNDI on the server to put 
your JMS connection
+            factories into JNDI, then you can specify the load balancing 
policy directly in the
+            <literal>hornetq-jms.xml</literal> configuration file on the 
server as follows:</para>
+            <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   &lt;discovery-group-ref discovery-group-name="my-discovery-group"/>
+   &lt;entries>
+      &lt;entry name="ConnectionFactory"/>
+   &lt;/entries>
+   &lt;connection-load-balancing-policy-class-name>
+      
org.hornetq.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy
+   &lt;/connection-load-balancing-policy-class-name>
+&lt;/connection-factory></programlisting>
+        <para>The above example would deploy a JMS connection factory that 
uses the random connection load
+            balancing policy. </para>
+        <para>If you're using JMS but you're instantiating your connection 
factory directly on the
+            client side then you can set the load balancing policy using the 
setter on the
+            <literal>HornetQConnectionFactory</literal> before using it:</para>
+            <programlisting>
+ConnectionFactory jmsConnectionFactory = 
HornetQJMSClient.createConnectionFactory(...);
+jmsConnectionFactory.setLoadBalancingPolicyClassName("com.acme.MyLoadBalancingPolicy");</programlisting>
+        <para>If you're using the core API, you can set the load balancing 
policy directly on the
+            <literal>ServerLocator</literal> instance you are using:</para>
+            <programlisting>
+ServerLocator locator = HornetQClient.createServerLocatorWithHA(server1, 
server2);
+locator.setLoadBalancingPolicyClassName("com.acme.MyLoadBalancingPolicy");</programlisting>
+        <para>The set of servers over which the factory load balances can be 
determined in one of
+            two ways:</para>
+        <itemizedlist>
+            <listitem>
+                <para>Specifying servers explicitly.</para>
+            </listitem>
+            <listitem>
+                <para>Using discovery.</para>
+            </listitem>
+        </itemizedlist>
+    </section>
+    <section>
+        <title>Specifying Members of a Cluster Explicitly</title>
+        <para>
+            Sometimes you want to define a cluster more explicitly, that is, control which servers
+            connect to each other in the cluster. This is typically used to form non-symmetrical
+            clusters such as chain clusters or ring clusters. This can only be done using a static
+            list of connectors and is configured as follows:
+        </para>
+        <programlisting>
+&lt;cluster-connection name="my-cluster">
+   &lt;address>jms&lt;/address>
+   &lt;connector-ref>netty-connector&lt;/connector-ref>
+   &lt;retry-interval>500&lt;/retry-interval>
+   &lt;use-duplicate-detection>true&lt;/use-duplicate-detection>
+   &lt;forward-when-no-consumers>true&lt;/forward-when-no-consumers>
+   &lt;max-hops>1&lt;/max-hops>
+   &lt;static-connectors allow-direct-connections-only="true">
+      &lt;connector-ref>server1-connector&lt;/connector-ref>
+   &lt;/static-connectors>
+&lt;/cluster-connection></programlisting>
+        <para>
+            In this example we have set the attribute <literal>allow-direct-connections-only</literal>,
+            which means that the only server this server can create a cluster connection to is the
+            one defined by <literal>server1-connector</literal>. This means you can explicitly
+            create any cluster topology you want.
+        </para>
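+        <para>
+            For example, a three-node chain A -> B -> C could be formed by giving each node a
+            static connector that points only at the next node in the chain. A sketch for node A
+            follows; the connector names here are hypothetical:
+        </para>
+        <programlisting>
+&lt;cluster-connection name="my-cluster">
+   &lt;address>jms&lt;/address>
+   &lt;connector-ref>netty-connector&lt;/connector-ref>
+   &lt;max-hops>2&lt;/max-hops>
+   &lt;static-connectors allow-direct-connections-only="true">
+      &lt;connector-ref>node-b-connector&lt;/connector-ref>
+   &lt;/static-connectors>
+&lt;/cluster-connection></programlisting>
+        <para>
+            Node B would define a similar connection pointing only at node C. Setting
+            <literal>max-hops</literal> to <literal>2</literal> lets messages arriving at node A
+            reach node C through node B.
+        </para>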
+    </section>
+    <section id="clusters.message-redistribution">
+        <title>Message Redistribution</title>
+        <para>Another important part of clustering is message redistribution. Earlier we learned how
+            server side message load balancing round robins messages across the cluster. If <literal
+                    >forward-when-no-consumers</literal> is false, then messages won't be forwarded to
+            nodes which don't have matching consumers. This ensures that messages don't arrive on a
+            queue which has no consumers to consume them. However, there is a situation it doesn't
+            solve: what happens if the consumers on a queue close after the messages have been sent
+            to the node? If there are no consumers on the queue the messages won't get consumed and
+            we have a <emphasis>starvation</emphasis> situation.</para>
+        <para>This is where message redistribution comes in. With message 
redistribution HornetQ can
+            be configured to automatically <emphasis>redistribute</emphasis> 
messages from queues
+            which have no consumers back to other nodes in the cluster which 
do have matching
+            consumers.</para>
+        <para>Message redistribution can be configured to kick in immediately 
after the last
+            consumer on a queue is closed, or to wait a configurable delay 
after the last consumer
+            on a queue is closed before redistributing. By default message 
redistribution is
+            disabled.</para>
+        <para>Message redistribution can be configured on a per address basis, 
by specifying the
+            redistribution delay in the address settings, for more information 
on configuring
+            address settings, please see <xref 
linkend="queue-attributes"/>.</para>
+        <para>Here's an address settings snippet from 
<literal>hornetq-configuration.xml</literal>
+            showing how message redistribution is enabled for a set of 
queues:</para>
+        <programlisting>
+&lt;address-settings>
+   &lt;address-setting match="jms.#">
+      &lt;redistribution-delay>0&lt;/redistribution-delay>
+   &lt;/address-setting>
+&lt;/address-settings></programlisting>
+        <para>The above <literal>address-settings</literal> block would set a 
<literal
+                >redistribution-delay</literal> of <literal>0</literal> for 
any queue which is bound
+            to an address that starts with "jms.". All JMS queues and topic 
subscriptions are bound
+            to addresses that start with "jms.", so the above would enable 
instant (no delay)
+            redistribution for all JMS queues and topic subscriptions.</para>
+        <para>The attribute <literal>match</literal> can be an exact match or 
it can be a string
+            that conforms to the HornetQ wildcard syntax (described in <xref
+                    linkend="wildcard-syntax"/>).</para>
+        <para>The element <literal>redistribution-delay</literal> defines the 
delay in milliseconds
+            after the last consumer is closed on a queue before redistributing 
messages from that
+            queue to other nodes of the cluster which do have matching 
consumers. A delay of zero
+            means the messages will be immediately redistributed. A value of 
<literal>-1</literal>
+            signifies that messages will never be redistributed. The default 
value is <literal
+                    >-1</literal>.</para>
+        <para>It often makes sense to introduce a delay before redistributing, as it's common for a
+            consumer to close and another one to be quickly created on the same queue; in such a
+            case you probably don't want to redistribute immediately since the new consumer will
+            arrive shortly.</para>
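+        <para>
+            For instance, to wait five seconds after the last consumer closes rather than
+            redistributing immediately, the earlier snippet could be adjusted as follows (the delay
+            value is just an example):
+        </para>
+        <programlisting>
+&lt;address-settings>
+   &lt;address-setting match="jms.#">
+      &lt;redistribution-delay>5000&lt;/redistribution-delay>
+   &lt;/address-setting>
+&lt;/address-settings></programlisting>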
+    </section>
+    <section>
+        <title>Cluster topologies</title>
+        <para>HornetQ clusters can be connected together in many different topologies; let's
+            consider the two most common ones here.</para>
+        <section id="symmetric-cluster">
+            <title>Symmetric cluster</title>
+            <para>A symmetric cluster is probably the most common cluster topology, and you'll be
+                familiar with it if you've had experience of JBoss Application Server
+                clustering.</para>
+            <para>With a symmetric cluster every node in the cluster is connected to every other
+                node in the cluster. In other words, every node in the cluster is no more than one
+                hop away from every other node.</para>
+            <para>To form a symmetric cluster every node in the cluster defines a cluster connection
+                with the attribute <literal>max-hops</literal> set to <literal>1</literal>.
+                Typically the cluster connection will use server discovery in order to know what
+                other servers in the cluster it should connect to, although it is possible to
+                explicitly define each target server too in the cluster connection if, for example,
+                UDP is not available on your network.</para>
+            <para>With a symmetric cluster each node knows about all the queues that exist on all
+                the other nodes and what consumers they have. With this knowledge it can determine
+                how to load balance and redistribute messages around the nodes.</para>
+            <para>Don't forget <link linkend="copy-warning">this warning</link> when creating a
+                symmetric cluster.</para>
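+            <para>As a sketch, a cluster connection for a symmetric topology might look like the
+                following (the connection, connector and discovery group names are illustrative and
+                must match definitions made elsewhere in your configuration):</para>
+            <programlisting>
+&lt;cluster-connections>
+   &lt;cluster-connection name="my-symmetric-cluster">
+      &lt;address>jms&lt;/address>
+      &lt;connector-ref>netty-connector&lt;/connector-ref>
+      &lt;max-hops>1&lt;/max-hops>
+      &lt;discovery-group-ref discovery-group-name="my-discovery-group"/>
+   &lt;/cluster-connection>
+&lt;/cluster-connections></programlisting>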
+        </section>
+        <section>
+            <title>Chain cluster</title>
+            <para>With a chain cluster, each node is not directly connected to every other node in
+                the cluster. Instead, the nodes form a chain with a node on each end, and all other
+                nodes connect only to the previous and next nodes in the chain.</para>
+            <para>An example of this would be a three node chain consisting of nodes A, B and C.
+                Node A is hosted in one network and has many producer clients connected to it
+                sending order messages. Due to corporate policy, the order consumer clients need to
+                be hosted in a different network, and that network is only accessible via a third
+                network. In this setup node B acts as a mediator with no producers or consumers on
+                it. Any messages arriving on node A will be forwarded to node B, which will in turn
+                forward them to node C where they can be consumed. Node A does not need to connect
+                directly to C, but all the nodes can still act as a part of the cluster.</para>
+            <para>To set up a cluster in this way, node A would define a cluster connection that
+                connects to node B, and node B would define a cluster connection that connects to
+                node C. In this case we only want cluster connections in one direction since we're
+                only moving messages from node A->B->C and never from C->B->A.</para>
+            <para>For this topology we would set <literal>max-hops</literal> to
+                <literal>2</literal>. With a value of <literal>2</literal> the knowledge of what
+                queues and consumers exist on node C would be propagated from node C to node B to
+                node A. Node A would then know to distribute messages to node B when they arrive.
+                Even though node B has no consumers itself, node A knows that a further hop away is
+                node C, which does have consumers.</para>
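+            <para>A sketch of node A's cluster connection for this chain (the names here are
+                illustrative; node B would define a similar connection pointing at node C):</para>
+            <programlisting>
+&lt;cluster-connections>
+   &lt;cluster-connection name="chain-cluster">
+      &lt;address>jms&lt;/address>
+      &lt;connector-ref>netty-connector&lt;/connector-ref>
+      &lt;max-hops>2&lt;/max-hops>
+      &lt;static-connectors>
+         &lt;connector-ref>node-B-connector&lt;/connector-ref>
+      &lt;/static-connectors>
+   &lt;/cluster-connection>
+&lt;/cluster-connections></programlisting>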
+        </section>
+    </section>
+   <section>
+      <title>Scaling Down</title>
+      <para>HornetQ supports scaling down a cluster with no message loss (even for non-durable
+         messages). This is especially useful in certain environments (e.g. the cloud) where the
+         size of a cluster may change relatively frequently. When scaling up a cluster (i.e. adding
+         nodes) there is no risk of message loss, but when scaling down a cluster (i.e. removing
+         nodes) the messages on those nodes would be lost unless the broker sent them to another
+         node in the cluster. HornetQ can be configured to do just that.</para>
+      <para>The simplest way to enable this behavior is to set <literal>scale-down</literal> to
+         <literal>true</literal>. If the server is clustered and <literal>scale-down</literal> is
+         <literal>true</literal> then when the server is shut down gracefully (i.e. stopped without
+         crashing) it will find another node in the cluster and send <emphasis>all</emphasis> of
+         its messages (both durable and non-durable) to that node. The messages are processed in
+         order and go to the <emphasis>back</emphasis> of the respective queues on the other node
+         (just as if the messages were sent from an external client for the first time).</para>
+      <para>If more control over where the messages go is required then specify
+         <literal>scale-down-group-name</literal>. Messages will only be sent to another node in
+         the cluster that uses the same <literal>scale-down-group-name</literal> as the server
+         being shut down.</para>
+      <warning>
+         <para>Beware if cluster nodes are grouped together with different
+            <literal>scale-down-group-name</literal> values: if all the nodes in a single group are
+            shut down, then the messages from that group will be lost.</para>
+      </warning>
+      <para>If the server uses multiple <literal>cluster-connection</literal> elements then use
+         <literal>scale-down-clustername</literal> to identify the name of the
+         <literal>cluster-connection</literal> which should be used for scaling down.</para>
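+      <para>Putting these elements together, a configuration sketch might look like the following
+         (the group and cluster names are illustrative, and the exact placement of these elements
+         may vary between versions, so check the configuration schema for your release):</para>
+      <programlisting>
+&lt;scale-down>true&lt;/scale-down>
+&lt;scale-down-group-name>my-group&lt;/scale-down-group-name>
+&lt;scale-down-clustername>my-cluster&lt;/scale-down-clustername></programlisting>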
+   </section>
+</chapter>
