http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/message-grouping.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/message-grouping.xml 
b/docs/user-manual/en/message-grouping.xml
new file mode 100644
index 0000000..ce4e04c
--- /dev/null
+++ b/docs/user-manual/en/message-grouping.xml
@@ -0,0 +1,207 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="message-grouping">
+   <title>Message Grouping</title>
+   <para>Message groups are sets of messages that have the following 
characteristics:</para>
+   <itemizedlist>
+      <listitem>
+         <para>Messages in a message group share the same group id, i.e. they have the same group
+            identifier property (<literal>JMSXGroupID</literal> for JMS,
+            <literal>_HQ_GROUP_ID</literal> for the HornetQ Core API).</para>
+      </listitem>
+      <listitem>
+         <para>Messages in a message group are always consumed by the same consumer, even if there
+            are many consumers on a queue. This pins all messages with the same group id to the same
+            consumer. If that consumer closes, another consumer is chosen and will receive all
+            messages with the same group id.</para>
+      </listitem>
+   </itemizedlist>
+   <para>Message groups are useful when you want all messages for a certain 
value of the property to
+      be processed serially by the same consumer.</para>
+   <para>An example might be orders for a certain stock. You may want orders for any particular
+      stock to be processed serially by the same consumer. To do this you can create a pool of
+      consumers (perhaps one for each stock, but fewer will work too), then set the stock name as
+      the value of the <literal>_HQ_GROUP_ID</literal> property.</para>
+   <para>This will ensure that all messages for a particular stock will always 
be processed by the
+      same consumer.</para>
+   <note>
+      <para>Grouped messages can impact the concurrent processing of 
non-grouped messages due to the
+         underlying FIFO semantics of a queue. For example, if there is a 
chunk of 100 grouped messages at
+         the head of a queue followed by 1,000 non-grouped messages then all 
the grouped messages will need
+         to be sent to the appropriate client (which is consuming those 
grouped messages serially) before
+         any of the non-grouped messages can be consumed. The functional 
impact in this scenario is a
+         temporary suspension of concurrent message processing while all the 
grouped messages are processed.
+         This can be a performance bottleneck so keep it in mind when 
determining the size of your message
+         groups, and consider whether or not you should isolate your grouped 
messages from your non-grouped
+         messages.</para>
+   </note>
+   <section>
+      <title>Using Core API</title>
+      <para>The property name used to identify the message group is
+         <literal>_HQ_GROUP_ID</literal> (or the constant
+         <literal>MessageImpl.HDR_GROUP_ID</literal>). Alternatively, you can set
+         <literal>autogroup</literal> to true on the <literal>SessionFactory</literal> which will
+         pick a random unique id.</para>
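+      <para>As an illustrative sketch (assuming an existing Core API
+         <literal>ClientSession</literal> named <literal>session</literal> and a
+         <literal>ClientProducer</literal> named <literal>producer</literal>), the group id can be
+         set as a message property before sending:</para>
+      <programlisting>
+ // create a durable message and pin it to the "Group-0" message group
+ ClientMessage message = session.createMessage(true);
+ message.putStringProperty(MessageImpl.HDR_GROUP_ID, new SimpleString("Group-0"));
+ producer.send(message);</programlisting>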
+   </section>
+   <section id="message-grouping.jmsconfigure">
+      <title>Using JMS</title>
+      <para>The property name used to identify the message group is <literal
+         >JMSXGroupID</literal>.</para>
+      <programlisting>
+ // send 2 messages in the same group to ensure the same
+ // consumer will receive both
+ Message message = ...
+ message.setStringProperty("JMSXGroupID", "Group-0");
+ producer.send(message);
+
+ message = ...
+ message.setStringProperty("JMSXGroupID", "Group-0");
+ producer.send(message);</programlisting>
+      <para>Alternatively, you can set <literal>autogroup</literal> to true on the
+         <literal>HornetQConnectionFactory</literal> which will pick a random unique id. This can
+         also be set in the <literal>hornetq-jms.xml</literal> file like this:</para>
+      <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   &lt;connectors>
+      &lt;connector-ref connector-name="netty-connector"/>
+   &lt;/connectors>
+   &lt;entries>
+      &lt;entry name="ConnectionFactory"/>
+   &lt;/entries>
+   &lt;autogroup>true&lt;/autogroup>
+&lt;/connection-factory></programlisting>
+      <para>Alternatively you can set the group id via the connection factory. All messages sent
+         with producers created via this connection factory will have their
+         <literal>JMSXGroupID</literal> property set to the specified value. To configure the
+         group id, set it on the connection factory in the <literal>hornetq-jms.xml</literal>
+         config file as follows:
+         <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   &lt;connectors>
+      &lt;connector-ref connector-name="netty-connector"/>
+   &lt;/connectors>
+   &lt;entries>
+      &lt;entry name="ConnectionFactory"/>
+   &lt;/entries>
+   &lt;group-id>Group-0&lt;/group-id>
+&lt;/connection-factory></programlisting></para>
+   </section>
+   <section>
+      <title>Example</title>
+      <para>See <xref linkend="examples.message-group"/> for an example which 
shows how message
+         groups are configured and used with JMS.</para>
+   </section>
+   <section>
+      <title>Example</title>
+      <para>See <xref linkend="examples.message-group2"/> for an example which 
shows how message
+         groups are configured via a connection factory.</para>
+   </section>
+   <section>
+      <title>Clustered Grouping</title>
+      <para>Using message groups in a cluster is a bit more complex. This is because messages with
+         a particular group id can arrive on any node, so each node needs to know which group ids
+         are bound to which consumer on which node. The consumer handling messages for a particular
+         group id may be on a different node of the cluster, so each node needs this information so
+         it can route the message correctly to the node which has that consumer.</para>
+      <para>To solve this there is the notion of a grouping handler. Each node has its own grouping
+         handler, and when a message is sent with a group id assigned, the handlers decide between
+         them which route the message should take.</para>
+      <para id="message-grouping.type">There are two types of handlers: Local and Remote. Each
+         cluster should choose one node to have a local grouping handler and all the other nodes
+         should have remote handlers. It is the local handler that actually makes the decision as
+         to which route should be used; all the other remote handlers consult it. Here is a sample
+         configuration for both types of handler; this should be configured in the
+         <emphasis role="italic">hornetq-configuration.xml</emphasis>
+         file.<programlisting>
+&lt;grouping-handler name="my-grouping-handler">
+   &lt;type>LOCAL&lt;/type>
+   &lt;address>jms&lt;/address>
+   &lt;timeout>5000&lt;/timeout>
+&lt;/grouping-handler>
+
+&lt;grouping-handler name="my-grouping-handler">
+   &lt;type>REMOTE&lt;/type>
+   &lt;address>jms&lt;/address>
+   &lt;timeout>5000&lt;/timeout>
+&lt;/grouping-handler></programlisting></para>
+      <para id="message-grouping.address">The <emphasis role="italic">address</emphasis> attribute
+      refers to a <link linkend="clusters.address">cluster connection and the address it
+      uses</link>; refer to the clustering section on how to configure clusters. The
+         <emphasis role="italic">timeout</emphasis> attribute refers to how long to wait for a
+         decision to be made; an exception will be thrown during the send if this timeout is
+         reached. This ensures that strict ordering is kept.</para>
+      <para>The decision as to where a message should be routed is initially proposed by the node
+         that receives the message. The node will pick a suitable route as per the normal clustered
+         routing conditions, i.e. round robin the available queues, use a local queue first, and
+         choose a queue that has a consumer. If the proposal is accepted by the grouping handlers,
+         the node will route messages to this queue from that point on; if rejected, an alternative
+         route will be offered and the node will again route to that queue indefinitely. All other
+         nodes will also route to the queue chosen at proposal time. Once the message arrives at
+         the queue then normal single server message group semantics take over and the message is
+         pinned to a consumer on that queue.</para>
+      <para>You may have noticed that there is a single point of failure with the single local
+         handler. If this node crashes then no decisions will be able to be made. Any messages sent
+         will not be delivered and an exception will be thrown. To avoid this happening, the local
+         handler can be replicated on a backup node. Simply create your backup node and configure
+         it with the same local handler.</para>
+      <section>
+         <title>Clustered Grouping Best Practices</title>
+         <para>Some best practices should be followed when using clustered
+            grouping:<orderedlist>
+               <listitem>
+                  <para>Make sure your consumers are distributed evenly across the different nodes
+                     if possible. This is only an issue if you are creating and closing consumers
+                     regularly. Since messages are always routed to the same queue once pinned,
+                     removing a consumer from this queue may leave it with no consumers, meaning
+                     the queue will just keep receiving the messages. Avoid closing consumers or
+                     make sure that you always have plenty of consumers, i.e. if you have 3 nodes,
+                     have 3 consumers.</para>
+               </listitem>
+               <listitem>
+                  <para>Use durable queues if possible. If a queue is removed once a group is bound
+                     to it, it is possible that other nodes may still try to route messages to it.
+                     This can be avoided by making sure that the queue is deleted by the session
+                     that is sending the messages. This means that when the next message is sent it
+                     is sent to the node where the queue was deleted, meaning a new proposal can
+                     successfully take place. Alternatively you could just start using a different
+                     group id.</para>
+               </listitem>
+               <listitem>
+                  <para>Always make sure that the node that has the Local Grouping Handler is
+                     replicated. This means that on failover, grouping will still occur.</para>
+               </listitem>
+            <listitem>
+               <para>If you are using group-timeouts, the remote node should have a smaller
+                     group-timeout, at least half of the value on the main coordinator. This is
+                     because the group-timeout determines how often the last-time-used value is
+                     updated, with a round trip for a request to the group between the
+                     nodes.</para>
+            </listitem>
+            </orderedlist></para>
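+         <para>As an illustrative sketch of the last point (assuming your version supports the
+            <literal>group-timeout</literal> element on the grouping handler), the remote value is
+            kept smaller than the coordinator's:</para>
+         <programlisting>
+&lt;!-- on the node with the LOCAL handler (the main coordinator) -->
+&lt;grouping-handler name="my-grouping-handler">
+   &lt;type>LOCAL&lt;/type>
+   &lt;address>jms&lt;/address>
+   &lt;group-timeout>60000&lt;/group-timeout>
+&lt;/grouping-handler>
+
+&lt;!-- on the remote nodes: a smaller value, e.g. half of the coordinator's -->
+&lt;grouping-handler name="my-grouping-handler">
+   &lt;type>REMOTE&lt;/type>
+   &lt;address>jms&lt;/address>
+   &lt;group-timeout>30000&lt;/group-timeout>
+&lt;/grouping-handler></programlisting>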
+      </section>
+      <section>
+         <title>Clustered Grouping Example</title>
+         <para>See <xref linkend="examples.clustered.grouping"/> for an example of how to
+            configure message groups with a HornetQ cluster.</para>
+      </section>
+   </section>
+</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/messaging-concepts.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/messaging-concepts.xml 
b/docs/user-manual/en/messaging-concepts.xml
new file mode 100644
index 0000000..241fd98
--- /dev/null
+++ b/docs/user-manual/en/messaging-concepts.xml
@@ -0,0 +1,268 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="messaging-concepts">
+    <title>Messaging Concepts</title>
+    <para>HornetQ is an asynchronous messaging system, an example of <ulink
+            url="http://en.wikipedia.org/wiki/Message_oriented_middleware">Message Oriented
+            Middleware</ulink>; we'll just call these messaging systems in the remainder of this
+        book.</para>
+    <para>We'll first present a brief overview of what kind of things 
messaging systems do,
+        where they're useful and the kind of concepts you'll hear about in the 
messaging
+        world.</para>
+    <para>If you're already familiar with what a messaging system is and what 
it's capable of, then
+        you can skip this chapter.</para>
+    <section>
+        <title>Messaging Concepts</title>
+        <para>Messaging systems allow you to loosely couple heterogeneous 
systems together, whilst
+            typically providing reliability, transactions and many other 
features.</para>
+        <para>Unlike systems based on a <ulink
+                url="http://en.wikipedia.org/wiki/Remote_procedure_call">Remote Procedure
+                Call</ulink> (RPC) pattern, messaging systems primarily use an
asynchronous message
+            passing pattern with no tight relationship between requests and 
responses. Most
+            messaging systems also support a request-response mode but this is 
not a primary feature
+            of messaging systems.</para>
+        <para>Designing systems to be asynchronous from end-to-end allows you 
to really take
+            advantage of your hardware resources, minimizing the amount of 
threads blocking on IO
+            operations, and to use your network bandwidth to its full 
capacity. With an RPC approach
+            you have to wait for a response for each request you make so are 
limited by the network
+            round trip time, or <emphasis role="italic">latency</emphasis> of 
your network. With an
+            asynchronous system you can pipeline flows of messages in 
different directions, so are
+            limited by the network <emphasis 
role="italic">bandwidth</emphasis> not the latency.
+            This typically allows you to create much higher performance 
applications.</para>
+        <para>Messaging systems decouple the senders of messages from the 
consumers of messages. The
+            senders and consumers of messages are completely independent and 
know nothing of each
+            other. This allows you to create flexible, loosely coupled 
systems.</para>
+        <para>Often, large enterprises use a messaging system to implement a 
message bus which
+            loosely couples heterogeneous systems together. Message buses 
often form the core of an
+                <ulink url="http://en.wikipedia.org/wiki/Enterprise_service_bus">Enterprise Service
+                Bus</ulink> (ESB). Using a message bus to de-couple disparate
systems can allow the
+            system to grow and adapt more easily. It also allows more 
flexibility to add new systems
+            or retire old ones since they don't have brittle dependencies on 
each other.</para>
+    </section>
+    <section>
+        <title>Messaging styles</title>
+        <para>Messaging systems normally support two main styles of asynchronous messaging: <ulink
+                url="http://en.wikipedia.org/wiki/Message_queue">message queue</ulink> messaging
+            (also known as <emphasis role="italic">point-to-point messaging</emphasis>) and <ulink
+                url="http://en.wikipedia.org/wiki/Publish_subscribe">publish-subscribe</ulink>
+            messaging. We'll summarise them briefly here:</para>
+        <section>
+            <title>The Message Queue Pattern</title>
+            <para>With this type of messaging you send a message to a queue. 
The message is then
+                typically persisted to provide a guarantee of delivery, then 
some time later the
+                messaging system delivers the message to a consumer. The 
consumer then processes the
+                message and when it is done, it acknowledges the message. Once 
the message is
+                acknowledged it disappears from the queue and is not available 
to be delivered
+                again. If the system crashes before the messaging server 
receives an acknowledgement
+                from the consumer, then on recovery, the message will be 
available to be delivered
+                to a consumer again.</para>
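+            <para>In JMS terms, for example, the consume-then-acknowledge cycle might look like
+                this (a sketch, assuming a session created with
+                <literal>CLIENT_ACKNOWLEDGE</literal> mode and an existing queue):</para>
+            <programlisting>
+ MessageConsumer consumer = session.createConsumer(queue);
+ Message order = consumer.receive();
+ // ... process the order ...
+ order.acknowledge(); // only now is the message removed from the queue</programlisting>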
+            <para>With point-to-point messaging, there can be many consumers 
on the queue but a
+                particular message will only ever be consumed by a maximum of 
one of them. Senders
+                (also known as<emphasis role="italic"> producers</emphasis>) 
to the queue are
+                completely decoupled from receivers (also known as <emphasis 
role="italic"
+                    >consumers</emphasis>) of the queue - they do not know of 
each other's
+                existence.</para>
+            <para>A classic example of point to point messaging would be an 
order queue in a
+                company's book ordering system. Each order is represented as a 
message which is sent
+                to the order queue. Let's imagine there are many front end 
ordering systems which
+                send orders to the order queue. When a message arrives on the 
queue it is persisted
+                - this ensures that if the server crashes the order is not 
lost. Let's also imagine
+                there are many consumers on the order queue - each 
representing an instance of an
+                order processing component - these can be on different 
physical machines but
+                consuming from the same queue. The messaging system delivers 
each message to one and
+                only one of the ordering processing components. Different 
messages can be processed
+                by different order processors, but a single order is only 
processed by one order
+                processor - this ensures orders aren't processed twice.</para>
+            <para>As an order processor receives a message, it fulfills the 
order, sends order
+                information to the warehouse system and then updates the order 
database with the
+                order details. Once it's done that it acknowledges the message 
to tell the server
+                that the order has been processed and can be forgotten about. 
Often the send to the
+                warehouse system, update in database and acknowledgement will 
be completed in a
+                single transaction to ensure <ulink url="http://en.wikipedia.org/wiki/ACID"
+                    >ACID</ulink> properties.</para>
+        </section>
+        <section>
+            <title>The Publish-Subscribe Pattern</title>
+            <para>With publish-subscribe messaging many senders can send 
messages to an entity on
+                the server, often called a <emphasis 
role="italic">topic</emphasis> (e.g. in the JMS
+                world).</para>
+            <para>There can be many <emphasis>subscriptions</emphasis> on a 
topic, a subscription is
+                just another word for a consumer of a topic. Each subscription 
receives a
+                    <emphasis>copy</emphasis> of <emphasis 
role="italic">each</emphasis> message
+                sent to the topic. This differs from the message queue pattern 
where each message is
+                only consumed by a single consumer.</para>
+            <para>Subscriptions can optionally be <emphasis 
role="italic">durable</emphasis> which
+                means they retain a copy of each message sent to the topic 
until the subscriber
+                consumes them - even if the server crashes or is restarted in 
between. Non-durable
+                subscriptions only last a maximum of the lifetime of the 
connection that created
+                them.</para>
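+            <para>In JMS, for example, a durable subscription might be created like this (a
+                sketch, assuming an existing session and topic, and that the connection has a
+                client id set):</para>
+            <programlisting>
+ // messages sent to the topic while this subscriber is offline
+ // are retained until it reconnects and consumes them
+ TopicSubscriber subscriber = session.createDurableSubscriber(topic, "news-sub-1");</programlisting>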
+            <para>An example of publish-subscribe messaging would be a news 
feed. As news articles
+                are created by different editors around the world they are 
sent to a news feed
+                topic. There are many subscribers around the world who are 
interested in receiving
+                news items - each one creates a subscription and the messaging 
system ensures that a
+                copy of each news message is delivered to each 
subscription.</para>
+        </section>
+    </section>
+    <section>
+        <title>Delivery guarantees</title>
+        <para>A key feature of most messaging systems is <emphasis 
role="italic">reliable
+                messaging</emphasis>. With reliable messaging the server gives 
a guarantee that the
+            message will be delivered once and only once to each consumer of a 
queue or each durable
+            subscription of a topic, even in the event of system failure. This 
is crucial for many
+            businesses; e.g. you don't want your orders fulfilled more than 
once or any of your
+            orders to be lost.</para>
+        <para>In other cases you may not care about a once and only once 
delivery guarantee and are
+            happy to cope with duplicate deliveries or lost messages - an 
example of this might be
+            transient stock price updates - which are quickly superseded by 
the next update on the
+            same stock. The messaging system allows you to configure which 
delivery guarantees you
+            require.</para>
+    </section>
+    <section>
+        <title>Transactions</title>
+        <para>Messaging systems typically support the sending and acknowledgement of multiple
+            messages in a single local transaction. HornetQ also supports the sending and
+            acknowledgement of messages as part of a larger global transaction, using the Java
+            mapping of XA: JTA.</para>
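+        <para>As a brief sketch using the JMS API (assuming an existing connection and queue), a
+            local transaction batches sends and acknowledgements so they succeed or fail
+            together:</para>
+        <programlisting>
+ // create a transacted session - sends and acks are batched
+ Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
+ MessageProducer producer = session.createProducer(queue);
+ producer.send(session.createTextMessage("order-1"));
+ producer.send(session.createTextMessage("order-2"));
+ session.commit(); // both messages become visible atomically</programlisting>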
+    </section>
+    <section>
+        <title>Durability</title>
+        <para>Messages are either durable or non durable. Durable messages 
will be persisted in
+            permanent storage and will survive server failure or restart. Non 
durable messages will
+            not survive server failure or restart. Examples of durable 
messages might be orders or
+            trades, where they cannot be lost. An example of a non durable 
message might be a stock
+            price update which is transitory and doesn't need to survive a 
restart.</para>
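+        <para>In JMS, for example, durability is chosen per message when sending (a sketch,
+            assuming an existing producer and message; 4 is the default priority and 0 means no
+            expiry):</para>
+        <programlisting>
+ // durable: persisted, survives server restart
+ producer.send(message, DeliveryMode.PERSISTENT, 4, 0);
+
+ // non durable: faster, but lost on server failure
+ producer.send(message, DeliveryMode.NON_PERSISTENT, 4, 0);</programlisting>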
+    </section>
+    <section>
+        <title>Messaging APIs and protocols</title>
+        <para>How do client applications interact with messaging systems in 
order to send and
+            consume messages?</para>
+        <para>Several messaging systems provide their own proprietary APIs 
with which the client
+            communicates with the messaging system.</para>
+        <para>There are also some standard ways of operating with messaging 
systems and some
+            emerging standards in this space.</para>
+        <para>Let's take a brief look at these:</para>
+        <section>
+            <title>Java Message Service (JMS)</title>
+            <para><ulink url="http://en.wikipedia.org/wiki/Java_Message_Service">JMS</ulink> is part
+                of Sun's JEE specification. It's a Java API that encapsulates 
both message queue and
+                publish-subscribe messaging patterns. JMS is a lowest common 
denominator
+                specification - i.e. it was created to encapsulate common 
functionality of the
+                already existing messaging systems that were available at the 
time of its
+                creation.</para>
+            <para>JMS is a very popular API and is implemented by most 
messaging systems. JMS is
+                only available to clients running Java.</para>
+            <para>JMS does not define a standard wire format - it only defines 
a programmatic API so
+                JMS clients and servers from different vendors cannot directly 
interoperate since
+                each will use the vendor's own internal wire protocol.</para>
+            <para>HornetQ provides a fully compliant JMS 1.1 and JMS 2.0 
API.</para>
+        </section>
+        <section>
+            <title>System specific APIs</title>
+            <para>Many systems provide their own programmatic API with which to interact with the
+                messaging system. The advantage of this is that it allows the full set of system
+                functionality to be exposed to the client application. APIs like JMS are not
+                normally rich enough to expose all the extra features that most messaging systems
+                provide.</para>
+            <para>HornetQ provides its own core client API for clients to use 
if they wish to have
+                access to functionality over and above that accessible via the 
JMS API.</para>
+        </section>
+        <section>
+            <title>RESTful API</title>
+            <para><ulink url="http://en.wikipedia.org/wiki/Representational_State_Transfer"
+                    >REST</ulink> approaches to messaging are attracting a lot of interest
+                recently.</para>
+            <para>It seems plausible that API standards for cloud computing 
may converge on a REST
+                style set of interfaces and consequently a REST messaging 
approach is a very strong
+                contender for becoming the de-facto method for messaging 
interoperability.</para>
+            <para>With a REST approach messaging resources are manipulated as 
resources defined by a
+                URI and typically using a simple set of operations on those 
resources, e.g. PUT,
+                POST, GET etc. REST approaches to messaging often use HTTP as 
their underlying
+                protocol.</para>
+            <para>The advantage of a REST approach with HTTP is in its simplicity and the fact that
+                the internet is already tuned to deal with HTTP optimally.</para>
+            <para>Please see <xref linkend="rest"/> for using HornetQ's 
RESTful interface.</para>
+        </section>
+        <section>
+            <title>STOMP</title>
+            <para><ulink
+                    url="http://stomp.github.io/"
+                    >Stomp</ulink> is a very simple text protocol for
interoperating with messaging
+                systems. It defines a wire format, so theoretically any Stomp 
client can work with
+                any messaging system that supports Stomp. Stomp clients are 
available in many
+                different programming languages.</para>
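+            <para>For illustration, a minimal Stomp SEND frame looks like this (the frame is
+                terminated by a NUL byte, shown here as ^@):</para>
+            <programlisting>
+SEND
+destination:/queue/orders
+content-type:text/plain
+
+hello queue^@</programlisting>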
+            <para>Please see <xref linkend="stomp"/> for using STOMP with 
HornetQ.</para>
+        </section>
+        <section>
+            <title>AMQP</title>
+            <para><ulink url="http://en.wikipedia.org/wiki/AMQP">AMQP</ulink>
is a specification for
+                interoperable messaging. It also defines a wire format, so any 
AMQP client can work
+                with any messaging system that supports AMQP. AMQP clients are 
available in many
+                different programming languages.</para>
+            <para>HornetQ implements the <ulink
+                url="https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp">AMQP 1.0</ulink>
+            specification. Any client that supports the 1.0 specification will 
be able to interact with HornetQ.</para>
+         </section>
+    </section>
+    <section>
+        <title>High Availability</title>
+        <para>High Availability (HA) means that the system should remain 
operational after failure
+            of one or more of the servers. The degree of support for HA varies 
between various
+            messaging systems.</para>
+        <para>HornetQ provides automatic failover, where your sessions are automatically
+            reconnected to the backup server in the event of live server failure.</para>
+        <para>For more information on HA, please see <xref 
linkend="ha"/>.</para>
+    </section>
+    <section>
+        <title>Clusters</title>
+        <para>Many messaging systems allow you to create groups of messaging 
servers called
+                <emphasis role="italic">clusters</emphasis>. Clusters allow 
the load of sending and
+            consuming messages to be spread over many servers. This allows 
your system to scale
+            horizontally by adding new servers to the cluster.</para>
+        <para>The degree of support for clusters varies between messaging
+            systems, with some systems having fairly basic clusters where the
+            cluster members are hardly aware of each other.</para>
+        <para>HornetQ provides a highly configurable, state-of-the-art
+            clustering model where messages can be intelligently load balanced
+            between the servers in the cluster, according to the number of
+            consumers on each node and whether they are ready for messages.</para>
+        <para>HornetQ also has the ability to automatically redistribute 
messages between nodes of a
+            cluster to prevent starvation on any particular node.</para>
+        <para>For full details on clustering, please see <xref 
linkend="clusters"/>.</para>
+    </section>
+    <section>
+        <title>Bridges and routing</title>
+        <para>Some messaging systems allow isolated clusters or single nodes 
to be bridged together,
+            typically over unreliable connections like a wide area network 
(WAN), or the
+            internet.</para>
+        <para>A bridge normally consumes from a queue on one server and 
forwards messages to another
+            queue on a different server. Bridges cope with unreliable
+            connections, automatically reconnecting when the connection becomes
+            available again.</para>
+        <para>HornetQ bridges can be configured with filter expressions to 
only forward certain
+            messages, and transformation can also be hooked in.</para>
+        <para>HornetQ also allows routing between queues to be configured in
+            server-side configuration. This allows complex routing networks to
+            be set up, forwarding or copying messages from one destination to
+            another and forming a global network of interconnected brokers.</para>
+        <para>For more information please see <xref linkend="core-bridges"/> 
and <xref
+                linkend="diverts"/>.</para>
+    </section>
+</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/notice.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/notice.xml b/docs/user-manual/en/notice.xml
new file mode 100644
index 0000000..5ed879b
--- /dev/null
+++ b/docs/user-manual/en/notice.xml
@@ -0,0 +1,37 @@
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="notice">
+    <title>Legal Notice</title>
+        
+        <para>Copyright © 2010 Red Hat, Inc. and others.</para>
+        <para>The text of and illustrations in this document are licensed by 
Red Hat under
+            a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA").</para>
+        <para>An explanation of CC-BY-SA is available at 
+            <ulink 
url="http://creativecommons.org/licenses/by-sa/3.0/";>http://creativecommons.org/licenses/by-sa/3.0/</ulink>.
 
+            In accordance with CC-BY-SA, if you distribute this document or an 
adaptation
+            of it, you must provide the URL for the original version.</para>
+        <para>Red Hat, as the licensor of this document, waives the right to 
enforce, 
+            and agrees not to assert, Section 4d of CC-BY-SA to the fullest 
extent 
+            permitted by applicable law.</para>
+</chapter>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/paging.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/paging.xml b/docs/user-manual/en/paging.xml
new file mode 100644
index 0000000..4a69037
--- /dev/null
+++ b/docs/user-manual/en/paging.xml
@@ -0,0 +1,216 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="paging">
+    <title>Paging</title>
+    <para>HornetQ transparently supports huge queues containing millions of 
messages while the
+        server is running with limited memory.</para>
+    <para>In such a situation it's not possible to store all of the queues in 
memory at any one
+        time, so HornetQ transparently <emphasis>pages</emphasis> messages 
into and out of memory as
+        they are needed, thus allowing massive queues with a low memory 
footprint.</para>
+    <para>HornetQ will start paging messages to disk when the size of all
+        messages in memory for an address exceeds a configured maximum size.</para>
+    <para>By default, HornetQ does not page messages - paging must be
+        explicitly configured to activate it.</para>
+    <section>
+        <title>Page Files</title>
+        <para>Messages are stored per address on the file system. Each address 
has an individual
+            folder where messages are stored in multiple files (page files). 
Each file will contain
+            messages up to a max configured size 
(<literal>page-size-bytes</literal>). The system
+            will navigate the files as needed, and will remove a page file as
+            soon as all the messages in it are acknowledged.</para>
+        <para>Browsers will read through the page-cursor system.</para>
+        <para>Consumers with selectors will also navigate through the page
+            files, ignoring messages that don't match the criteria.</para>
+    </section>
+    <section id="paging.main.config">
+        <title>Configuration</title>
+        <para>You can configure the location of the paging folder.</para>
+        <para>Global paging parameters are specified on the main configuration 
file (<literal
+                >hornetq-configuration.xml</literal>).</para>
+        <programlisting>
+&lt;configuration xmlns="urn:hornetq"
+   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+   xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
+...
+&lt;paging-directory>/somewhere/paging-directory&lt;/paging-directory>
+...</programlisting>
+        <para>
+            <table frame="topbot">
+                <title>Paging Configuration Parameters</title>
+                <tgroup cols="3">
+                    <colspec colname="c1" colnum="1"/>
+                    <colspec colname="c2" colnum="2"/>
+                    <colspec colname="c3" colnum="3"/>
+                    <thead>
+                        <row>
+                            <entry>Property Name</entry>
+                            <entry>Description</entry>
+                            <entry>Default</entry>
+                        </row>
+                    </thead>
+                    <tbody>
+                        <row>
+                            <entry><literal>paging-directory</literal></entry>
+                            <entry>Where page files are stored. HornetQ will 
create one folder for
+                                each address being paged under this configured 
location.</entry>
+                            <entry>data/paging</entry>
+                        </row>
+                    </tbody>
+                </tgroup>
+            </table>
+        </para>
+    </section>
+    <section id="paging.mode">
+        <title>Paging Mode</title>
+        <para>As soon as messages delivered to an address exceed the 
configured size, that address
+            alone goes into page mode.</para>
+        <note>
+            <para>Paging is done individually per address. If you configure a 
max-size-bytes for an
+                address, that means each matching address will have a maximum 
size that you
+                specified. It DOES NOT mean that the total overall size of all 
matching addresses is
+                limited to max-size-bytes.</para>
+        </note>
+        <section>
+            <title>Configuration</title>
+            <para>Configuration is done via the address settings, in the main
+                configuration file
+                (<literal>hornetq-configuration.xml</literal>).</para>
+            <programlisting>
+&lt;address-settings>
+   &lt;address-setting match="jms.someaddress">
+      &lt;max-size-bytes>104857600&lt;/max-size-bytes>
+      &lt;page-size-bytes>10485760&lt;/page-size-bytes>
+      &lt;address-full-policy>PAGE&lt;/address-full-policy>
+   &lt;/address-setting>
+&lt;/address-settings></programlisting>
+            <para>This is the list of available parameters on the address 
settings.</para>
+            <para>
+                <table frame="topbot">
+                    <title>Paging Address Settings</title>
+                    <tgroup cols="3">
+                        <colspec colname="c1" colnum="1"/>
+                        <colspec colname="c2" colnum="2"/>
+                        <colspec colname="c3" colnum="3"/>
+                        <thead>
+                            <row>
+                                <entry>Property Name</entry>
+                                <entry>Description</entry>
+                                <entry>Default</entry>
+                            </row>
+                        </thead>
+                        <tbody>
+                            <row>
+                                
<entry><literal>max-size-bytes</literal></entry>
+                                <entry>The maximum memory the address may use
+                                    before entering page mode.</entry>
+                                <entry>-1 (disabled)</entry>
+                            </row>
+                            <row>
+                                
<entry><literal>page-size-bytes</literal></entry>
+                                <entry>The size of each page file used by the
+                                    paging system.</entry>
+                                <entry>10MiB (10 * 1024 * 1024 bytes)</entry>
+                            </row>
+                            <row>
+                                
<entry><literal>address-full-policy</literal></entry>
+                                <entry>This must be set to PAGE for paging to be
+                                    enabled. If the value is PAGE then further
+                                    messages will be paged to disk. If the value
+                                    is DROP then further messages will be
+                                    silently dropped. If the value is FAIL then
+                                    further messages will be dropped and the
+                                    client message producers will receive an
+                                    exception. If the value is BLOCK then client
+                                    message producers will block when they try
+                                    to send further messages.</entry>
+                                <entry>PAGE</entry>
+                            </row>
+                            <row>
+                                
<entry><literal>page-max-cache-size</literal></entry>
+                                <entry>The system will keep up to <literal
+                                        >page-max-cache-size</literal> page
+                                    files in memory to optimize IO during
+                                    paging navigation.</entry>
+                                <entry>5</entry>
+                            </row>
+                        </tbody>
+                    </tgroup>
+                </table>
+            </para>
+        </section>
+    </section>
+    <section>
+        <title>Dropping messages</title>
+        <para>Instead of paging messages when the max size is reached, an 
address can also be
+            configured to just drop messages when the address is full.</para>
+        <para>To do this just set the <literal>address-full-policy</literal> to
+            <literal>DROP</literal> in the address settings.</para>
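As a sketch, this mirrors the PAGE example shown earlier in this chapter with only the policy changed (the `jms.someaddress` match is illustrative):

```xml
<address-settings>
   <address-setting match="jms.someaddress">
      <max-size-bytes>104857600</max-size-bytes>
      <!-- once 100 MiB is in memory, silently drop further messages -->
      <address-full-policy>DROP</address-full-policy>
   </address-setting>
</address-settings>
```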
+    </section>
+    <section>
+        <title>Dropping messages and throwing an exception to producers</title>
+        <para>Instead of paging messages when the max size is reached, an 
address can also be
+            configured to drop messages and also throw an exception on the 
client-side
+            when the address is full.</para>
+        <para>To do this just set the <literal>address-full-policy</literal> to
+            <literal>FAIL</literal> in the address settings.</para>
+    </section>
+    <section>
+        <title>Blocking producers</title>
+        <para>Instead of paging messages when the max size is reached, an 
address can also be
+            configured to block producers from sending further messages when 
the address is full,
+            thus preventing the memory being exhausted on the server.</para>
+        <para>When memory is freed up on the server, producers will 
automatically unblock and be
+            able to continue sending.</para>
+        <para>To do this just set the <literal>address-full-policy</literal> to
+            <literal>BLOCK</literal> in the address settings.</para>
+        <para>In the default configuration, all addresses are configured to 
block producers after 10
+            MiB of data are in the address.</para>
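The stock configuration achieves this with a catch-all address setting roughly like the following (a sketch; the exact defaults shipped may vary by version):

```xml
<address-settings>
   <!-- default for all addresses: block producers once 10 MiB is in memory -->
   <address-setting match="#">
      <max-size-bytes>10485760</max-size-bytes>
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>
</address-settings>
```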
+    </section>
+    <section>
+        <title>Caution with Addresses with Multiple Queues</title>
+        <para>When a message is routed to an address that has multiple queues
+            bound to it, e.g. a JMS subscription on a Topic, there is only one
+            copy of the message in memory. Each queue deals only with a
+            reference to it. Because of this, the memory is only freed up once
+            all queues referencing the message have delivered it.</para>
+        <para>If you have a single lazy subscription, the entire address will
+            suffer an IO performance hit, as all the queues will have their
+            messages routed through extra storage on the paging system.</para>
+        <para>For example:</para>
+        <itemizedlist>
+            <listitem>
+                <para>An address has 10 queues </para>
+            </listitem>
+            <listitem>
+                <para>One of the queues does not deliver its messages (maybe 
because of a slow
+                    consumer).</para>
+            </listitem>
+            <listitem>
+                <para>Messages continually arrive at the address and paging is 
started.</para>
+            </listitem>
+            <listitem>
+                <para>The other 9 queues are empty even though messages have 
been sent.</para>
+            </listitem>
+        </itemizedlist>
+        <para>In this example, the other nine queues will all be consuming
+            messages from the paging system. This may cause performance issues
+            if this is an undesirable state.</para>
+    </section>
+    <section>
+        <title>Example</title>
+        <para>See <xref linkend="examples.paging"/> for an example which shows 
how to use paging
+            with HornetQ.</para>
+    </section>
+</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/perf-tuning.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/perf-tuning.xml 
b/docs/user-manual/en/perf-tuning.xml
new file mode 100644
index 0000000..da0c2ed
--- /dev/null
+++ b/docs/user-manual/en/perf-tuning.xml
@@ -0,0 +1,305 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="perf-tuning">
+    <title>Performance Tuning</title>
+    <para>In this chapter we'll discuss how to tune HornetQ for optimum 
performance.</para>
+    <section>
+        <title>Tuning persistence</title>
+        <itemizedlist>
+            <listitem>
+                <para>Put the message journal on its own physical volume. If 
the disk is shared with
+                    other processes e.g. transaction co-ordinator, database or 
other journals which
+                    are also reading and writing from it, then this may 
greatly reduce performance
+                    since the disk head may be skipping all over the place 
between the different
+                    files. One of the advantages of an append-only journal is
+                    that disk head movement is minimised - this advantage is
+                    destroyed if the disk is shared. If you're using paging or
+                    large messages, ideally make sure they're put on separate
+                    volumes too.</para>
+            </listitem>
+            <listitem>
+                <para>Minimum number of journal files. Set 
<literal>journal-min-files</literal> to a
+                    number of files that would fit your average sustainable
+                    rate. If you see new files being created in the journal data
+                    directory too often, i.e. lots of data is being persisted,
+                    you need to increase the minimum number of files; this way
+                    the journal will reuse files instead of creating new data
+                    files.</para>
+            </listitem>
+            <listitem>
+                <para>Journal file size. The journal file size should be 
aligned to the capacity of
+                    a cylinder on the disk. The default value 10MiB should be 
enough on most
+                    systems.</para>
+            </listitem>
+            <listitem>
+                <para>Use AIO journal. If using Linux, try to keep your 
journal type as AIO. AIO
+                    will scale better than Java NIO.</para>
+            </listitem>
+            <listitem>
+                <para>Tune <literal>journal-buffer-timeout</literal>. The 
timeout can be increased
+                    to increase throughput at the expense of latency.</para>
+            </listitem>
+            <listitem>
+                <para>If you're running AIO you might be able to get some 
better performance by
+                    increasing <literal>journal-max-io</literal>. DO NOT 
change this parameter if
+                    you are running NIO.</para>
+            </listitem>
+        </itemizedlist>
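Several of the journal settings above live in `hornetq-configuration.xml`; a throughput-oriented sketch might look like the following (the directory and values are illustrative starting points, not recommendations for every workload):

```xml
<configuration xmlns="urn:hornetq">
   <!-- keep the journal on its own physical volume -->
   <journal-directory>/journalvolume/journal</journal-directory>
   <!-- AIO scales better than NIO on Linux -->
   <journal-type>ASYNCIO</journal-type>
   <!-- raise this if new journal files appear too often -->
   <journal-min-files>10</journal-min-files>
   <!-- larger timeout: more throughput at the expense of latency -->
   <journal-buffer-timeout>500000</journal-buffer-timeout>
   <!-- AIO only; do not change this when running NIO -->
   <journal-max-io>500</journal-max-io>
</configuration>
```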
+    </section>
+    <section>
+        <title>Tuning JMS</title>
+        <para>There are a few areas where some tweaks can be made if you are
+            using the JMS API.</para>
+        <itemizedlist>
+            <listitem>
+                <para>Disable message id. Use the 
<literal>setDisableMessageID()</literal> method on
+                    the <literal>MessageProducer</literal> class to disable 
message ids if you don't
+                    need them. This decreases the size of the message and also 
avoids the overhead
+                    of creating a unique ID.</para>
+            </listitem>
+            <listitem>
+                <para>Disable message timestamp. Use the
+                    <literal>setDisableMessageTimestamp()</literal> method on
+                    the <literal>MessageProducer</literal> class to disable
+                    message timestamps if you don't need them.</para>
+            </listitem>
+            <listitem>
+                <para>Avoid <literal>ObjectMessage</literal>.
+                    <literal>ObjectMessage</literal> is convenient but it comes
+                    at a cost. The body of an <literal>ObjectMessage</literal>
+                    is serialized to bytes using Java serialization. The Java
+                    serialized form of even small objects is very verbose, so it
+                    takes up a lot of space on the wire; Java serialization is
+                    also slow compared to custom marshalling techniques. Only
+                    use <literal>ObjectMessage</literal> if you really can't use
+                    one of the other message types, i.e. if you really don't
+                    know the type of the payload until run-time.</para>
+            </listitem>
+            <listitem>
+                <para>Avoid <literal>AUTO_ACKNOWLEDGE</literal>.
+                    <literal>AUTO_ACKNOWLEDGE</literal> mode requires an
+                    acknowledgement to be sent from the server for each message
+                    received on the client, which means more traffic on the
+                    network. If you can, use
+                    <literal>DUPS_OK_ACKNOWLEDGE</literal>, or use
+                    <literal>CLIENT_ACKNOWLEDGE</literal> or a transacted
+                    session and batch up many acknowledgements with one
+                    acknowledge/commit.</para>
+            </listitem>
+            <listitem>
+                <para>Avoid durable messages. By default JMS messages are 
durable. If you don't
+                    really need durable messages then set them to be 
non-durable. Durable messages
+                    incur a lot more overhead in persisting them to 
storage.</para>
+            </listitem>
+            <listitem>
+                <para>Batch many sends or acknowledgements in a single 
transaction. HornetQ will
+                    only require a network round trip on the commit, not on 
every send or
+                    acknowledgement.</para>
+            </listitem>
+        </itemizedlist>
+    </section>
+    <section>
+        <title>Other Tunings</title>
+        <para>There are various other places in HornetQ where we can perform 
some tuning:</para>
+        <itemizedlist>
+            <listitem>
+                <para>Use Asynchronous Send Acknowledgements. If you need to 
send durable messages
+                    non transactionally and you need a guarantee that they 
have reached the server
+                    by the time the call to send() returns, don't set durable
+                    messages to be sent blocking. Instead use asynchronous send
+                    acknowledgements to get your send acknowledgements back in a
+                    separate stream; see <xref linkend="send-guarantees"/> for
+                    more information on this.</para>
+            </listitem>
+            <listitem>
+                <para>Use pre-acknowledge mode. With pre-acknowledge mode,
+                    messages are acknowledged <emphasis>before</emphasis> they
+                    are sent to the client. This reduces the amount of
+                    acknowledgement traffic on the wire. For more information on
+                    this, see <xref linkend="pre-acknowledge"/>.</para>
+            </listitem>
+            <listitem>
+                <para>Disable security. You may get a small performance boost 
by disabling security
+                    by setting the <literal>security-enabled</literal> 
parameter to <literal
+                        >false</literal> in 
<literal>hornetq-configuration.xml</literal>.</para>
+            </listitem>
+            <listitem>
+                <para>Disable persistence. If you don't need message 
persistence, turn it off
+                    altogether by setting 
<literal>persistence-enabled</literal> to false in
+                        <literal>hornetq-configuration.xml</literal>.</para>
+            </listitem>
+            <listitem>
+                <para>Sync transactions lazily. Setting <literal
+                        >journal-sync-transactional</literal> to 
<literal>false</literal> in
+                        <literal>hornetq-configuration.xml</literal> can give 
you better
+                    transactional persistent performance at the expense of 
some possibility of loss
+                    of transactions on failure. See <xref 
linkend="send-guarantees"/> for more
+                    information.</para>
+            </listitem>
+            <listitem>
+                <para>Sync non transactional lazily. Setting <literal
+                        >journal-sync-non-transactional</literal> to 
<literal>false</literal> in
+                        <literal>hornetq-configuration.xml</literal> can give 
you better
+                    non-transactional persistent performance at the expense of 
some possibility of
+                    loss of durable messages on failure. See <xref 
linkend="send-guarantees"/> for
+                    more information.</para>
+            </listitem>
+            <listitem>
+                <para>Send messages non-blocking. Set
+                    <literal>block-on-durable-send</literal> and
+                    <literal>block-on-non-durable-send</literal> to
+                    <literal>false</literal> in
+                    <literal>hornetq-jms.xml</literal> (if you're using JMS and
+                    JNDI) or directly on the ServerLocator. This means you don't
+                    have to wait a whole network round trip for every message
+                    sent. See <xref linkend="send-guarantees"/> for more
+                    information.</para>
+            </listitem>
+            <listitem>
+                <para>If you have very fast consumers, you can increase
+                    <literal>consumer-window-size</literal>; setting it to -1
+                    effectively disables consumer flow control.</para>
+            </listitem>
+            <listitem>
+                <para>Socket NIO vs Socket Old IO. By default HornetQ uses old
+                    (blocking) IO on the server and the client side (see the
+                    chapter on configuring transports for more information,
+                    <xref linkend="configuring-transports"/>). NIO is much more
+                    scalable but can give you some latency hit compared to old
+                    blocking IO. If you need to be able to service many
+                    thousands of connections on the server, then you should make
+                    sure you're using NIO on the server. However, if you don't
+                    expect many thousands of connections on the server you can
+                    keep the server acceptors using old IO, and might get a
+                    small performance advantage.</para>
+            </listitem>
+            <listitem>
+                <para>Use the core API not JMS. Using the JMS API you will 
have slightly lower
+                    performance than using the core API, since all JMS 
operations need to be
+                    translated into core operations before the server can 
handle them. If using the
+                    core API try to use methods that take 
<literal>SimpleString</literal> as much as
+                    possible. <literal>SimpleString</literal>, unlike
+                    <literal>java.lang.String</literal>, does not require
+                    copying before it is written to the wire, so if you re-use
+                    <literal>SimpleString</literal> instances between calls you
+                    can avoid some unnecessary copying.</para>
+            </listitem>
+        </itemizedlist>
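Pulling the configuration-file items above together, a sketch of the relevant fragments could look like this (values are illustrative; pick only the knobs that apply - disabling persistence makes the journal-sync settings moot):

```xml
<!-- hornetq-configuration.xml -->
<configuration xmlns="urn:hornetq">
   <security-enabled>false</security-enabled>
   <persistence-enabled>false</persistence-enabled>
   <journal-sync-transactional>false</journal-sync-transactional>
   <journal-sync-non-transactional>false</journal-sync-non-transactional>
</configuration>

<!-- hornetq-jms.xml: send without blocking, relax consumer flow control -->
<connection-factory name="ConnectionFactory">
   <block-on-durable-send>false</block-on-durable-send>
   <block-on-non-durable-send>false</block-on-non-durable-send>
   <consumer-window-size>-1</consumer-window-size>
</connection-factory>
```

A real `connection-factory` entry also needs its connectors and JNDI entries; they are omitted here for brevity.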
+    </section>
+    <section>
+        <title>Tuning Transport Settings</title>
+        <itemizedlist>
+            <listitem>
+                <para>TCP buffer sizes. If you have a fast network and fast 
machines you may get a
+                    performance boost by increasing the TCP send and receive 
buffer sizes. See the
+                        <xref linkend="configuring-transports"/> for more 
information on this. </para>
+                <note>
+                    <para>Note that some operating systems, like later versions
+                        of Linux, include TCP auto-tuning, and setting TCP
+                        buffer sizes manually can prevent auto-tuning from
+                        working and actually give you worse performance!</para>
+                </note>
+            </listitem>
+            <listitem>
+                <para>Increase limit on file handles on the server. If you 
expect a lot of
+                    concurrent connections on your servers, or if clients are 
rapidly opening and
+                    closing connections, you should make sure the user running 
the server has
+                    permission to create sufficient file handles.</para>
+                <para>This varies from operating system to operating system. 
On Linux systems you
+                    can increase the number of allowable open file handles in 
the file <literal
+                        >/etc/security/limits.conf</literal> e.g. add the lines
+                    <programlisting>
+serveruser     soft    nofile  20000
+serveruser     hard    nofile  20000</programlisting>
+                    This would allow up to 20000 file handles to be open by 
the user <literal
+                        >serveruser</literal>. </para>
+            </listitem>
+            <listitem>
+                <para>Use <literal>batch-delay</literal> and set 
<literal>direct-deliver</literal>
+                    to false for the best throughput for very small messages. 
HornetQ comes with a
+                    preconfigured connector/acceptor pair 
(<literal>netty-throughput</literal>) in
+                        <literal>hornetq-configuration.xml</literal> and JMS 
connection factory
+                        (<literal>ThroughputConnectionFactory</literal>) in 
<literal
+                        >hornetq-jms.xml</literal> which can be used to give 
the very best
+                    throughput, especially for small messages. See the <xref
+                        linkend="configuring-transports"/> for more 
information on this.</para>
+            </listitem>
+        </itemizedlist>
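+        <para>Putting these settings together, a connector/acceptor pair tuned 
for small-message
+            throughput might look like the following sketch in 
<literal>hornetq-configuration.xml</literal>.
+            The buffer sizes and batch delay are illustrative values, not 
recommendations:</para>
+        <programlisting><![CDATA[
+<connector name="netty-throughput">
+   <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
+   <param key="tcp-send-buffer-size" value="1048576"/>
+   <param key="tcp-receive-buffer-size" value="1048576"/>
+   <param key="batch-delay" value="50"/>
+</connector>
+
+<acceptor name="netty-throughput">
+   <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+   <param key="batch-delay" value="50"/>
+   <param key="direct-deliver" value="false"/>
+</acceptor>]]></programlisting>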
+    </section>
+    <section>
+        <title>Tuning the VM</title>
+        <para>We highly recommend you use the latest Java JVM for the best 
performance. We test
+            internally using the Sun JVM, so some of these tunings won't apply 
to JDKs from other
+            providers (e.g. IBM or JRockit).</para>
+        <itemizedlist>
+            <listitem>
+                <para>Garbage collection. For smooth server operation we 
recommend using a parallel
+                    garbage collection algorithm, e.g. using the JVM argument 
<literal
+                        >-XX:+UseParallelOldGC</literal> on Sun JDKs.</para>
+            </listitem>
+            <listitem id="perf-tuning.memory">
+                <para>Memory settings. Give as much memory as you can to the 
server. HornetQ can run
+                    in low memory by using paging (described in <xref 
linkend="paging"/>) but if it
+                    can run with all queues in RAM this will improve 
performance. The amount of
+                    memory you require will depend on the size and number of 
your queues and the
+                    size and number of your messages. Use the JVM arguments 
<literal>-Xms</literal>
+                    and <literal>-Xmx</literal> to set server available RAM. 
We recommend setting
+                    them to the same high value.</para>
+            </listitem>
+            <listitem>
+                <para>Aggressive options. Different JVMs provide different 
sets of JVM tuning
+                    parameters, for the Sun Hotspot JVM the full list of 
options is available <ulink
+                        
url="http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html"
+                        >here</ulink>. We recommend at least using <literal
+                        >-XX:+AggressiveOpts</literal> and <literal>
+                        -XX:+UseFastAccessorMethods</literal>. You may get 
some mileage with the
+                    other tuning parameters depending on your OS platform and 
application usage
+                    patterns.</para>
+            </listitem>
+        </itemizedlist>
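+        <para>For example, these options could be combined in the server launch 
script as in the
+            following sketch. The 2 GiB heap is an arbitrary illustration 
(size it to your queues
+            and messages), and the bootstrap class name may differ between 
versions:</para>
+        <programlisting>
+# Fixed-size heap plus parallel old-generation GC and aggressive optimisations
+JVM_ARGS="-Xms2048m -Xmx2048m -XX:+UseParallelOldGC \
+          -XX:+AggressiveOpts -XX:+UseFastAccessorMethods"
+java $JVM_ARGS -classpath $CLASSPATH \
+   org.hornetq.integration.bootstrap.HornetQBootstrapServer hornetq-beans.xml
+</programlisting>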
+    </section>
+    <section>
+        <title>Avoiding Anti-Patterns</title>
+        <itemizedlist>
+            <listitem>
+                <para>Re-use connections / sessions / consumers / producers. 
Probably the most
+                    common messaging anti-pattern we see is users who create a 
new
+                    connection/session/producer for every message they send or 
every message they
+                    consume. This is a poor use of resources. These objects 
take time to create and
+                    may involve several network round trips. Always re-use 
them.</para>
+                <note>
+                    <para>Some popular libraries such as the Spring JMS 
Template are known to use
+                        these anti-patterns. If you're using Spring JMS 
Template and you're getting
+                        poor performance, you know why. Don't blame HornetQ! 
The Spring JMS Template
+                        can only safely be used in an app server which caches 
JMS sessions (e.g.
+                        using JCA), and only then for sending messages. It 
cannot safely be used
+                        for synchronously consuming messages, even in an app 
server. </para>
+                </note>
+            </listitem>
+            <listitem>
+                <para>Avoid fat messages. Verbose formats such as XML take up 
a lot of space on the
+                    wire and performance will suffer as a result. Avoid XML in 
message bodies if you
+                    can.</para>
+            </listitem>
+            <listitem>
+                <para>Don't create temporary queues for each request. This 
common anti-pattern
+                    involves the temporary queue request-response pattern. 
With the temporary queue
+                    request-response pattern a message is sent to a target and 
a reply-to header is
+                    set with the address of a local temporary queue. When the 
recipient receives the
+                    message they process it then send back a response to the 
address specified in
+                    the reply-to. A common mistake made with this pattern is 
to create a new
+                    temporary queue on each message sent. This will 
drastically reduce performance.
+                    Instead the temporary queue should be re-used for many 
requests.</para>
+            </listitem>
+            <listitem>
+                <para>Don't use Message-Driven Beans for the sake of it. As 
soon as you start using
+                    MDBs you are greatly increasing the codepath for each 
message received compared
+                    to a straightforward message consumer, since a lot of 
extra application server
+                    code is executed. Ask yourself: do you really need MDBs? 
Can you accomplish the
+                    same task using just a normal message consumer?</para>
+            </listitem>
+        </itemizedlist>
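+        <para>The first and third points above can be sketched with the JMS API 
as follows. This
+            assumes a JMS 1.1 client library and an already-looked-up 
<literal>ConnectionFactory</literal>;
+            the queue name is hypothetical:</para>
+        <programlisting>
+// Create the expensive objects once...
+Connection connection = connectionFactory.createConnection();
+Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
+MessageProducer producer = session.createProducer(requestQueue);
+// ...and one temporary reply queue for the whole conversation,
+// not one per request.
+TemporaryQueue replyQueue = session.createTemporaryQueue();
+
+for (int i = 0; i &lt; 1000; i++)
+{
+   TextMessage request = session.createTextMessage("request " + i);
+   request.setJMSReplyTo(replyQueue);
+   // Re-uses the connection, session, producer and reply queue on every send
+   producer.send(request);
+}
+</programlisting>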
+    </section>
+</chapter>
