http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/ha.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/ha.xml b/docs/user-manual/en/ha.xml
deleted file mode 100644
index 2df9e76..0000000
--- a/docs/user-manual/en/ha.xml
+++ /dev/null
@@ -1,985 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- 
============================================================================= 
-->
-<!-- Licensed to the Apache Software Foundation (ASF) under one or more        
    -->
-<!-- contributor license agreements. See the NOTICE file distributed with      
    -->
-<!-- this work for additional information regarding copyright ownership.       
    -->
-<!-- The ASF licenses this file to You under the Apache License, Version 2.0   
    -->
-<!-- (the "License"); you may not use this file except in compliance with      
    -->
-<!-- the License. You may obtain a copy of the License at                      
    -->
-<!--                                                                           
    -->
-<!--     http://www.apache.org/licenses/LICENSE-2.0                            
    -->
-<!--                                                                           
    -->
-<!-- Unless required by applicable law or agreed to in writing, software       
    -->
-<!-- distributed under the License is distributed on an "AS IS" BASIS,         
    -->
-<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  
    -->
-<!-- See the License for the specific language governing permissions and       
    -->
-<!-- limitations under the License.                                            
    -->
-<!-- 
============================================================================= 
-->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "ActiveMQ_User_Manual.ent">
-%BOOK_ENTITIES;
-]>
-<chapter id="ha">
-    <title>High Availability and Failover</title>
-
-    <para>We define high availability as the <emphasis>ability for the system 
to continue
-       functioning after failure of one or more of the 
servers</emphasis>.</para>
-
-    <para>A part of high availability is <emphasis>failover</emphasis>, which we define as the
-       <emphasis>ability for client connections to migrate from one server to another in the event of
-          server failure so client applications can continue to operate</emphasis>.</para>
-    <section>
-        <title>Live - Backup Groups</title>
-
-        <para>ActiveMQ allows servers to be linked together as <emphasis>live - backup</emphasis>
-           groups where each live server can have one or more backup servers. A backup server is owned by
-           only one live server. Backup servers are not operational until failover occurs; however, one
-           chosen backup, which will be in passive mode, announces its status and waits to take over
-           the live server's work.</para>
-
-        <para>Before failover, only the live server serves the ActiveMQ clients while the backup
-           servers remain passive or await their turn to become the passive backup. When a live server crashes or
-           is brought down gracefully, the backup server currently in passive mode will become
-           live and another backup server will become passive. If a live server restarts after a
-           failover then it will have priority and be the next server to become live when the current
-           live server goes down; if the current live server is configured to allow automatic failback
-           then it will detect the original live server coming back up and automatically stop.</para>
-
-        <section id="ha.policies">
-            <title>HA Policies</title>
-            <para>ActiveMQ supports two different strategies for backing up a server: <emphasis>shared
-               store</emphasis> and <emphasis>replication</emphasis>. The strategy is configured via the
-               <literal>ha-policy</literal> configuration element.</para>
-           <programlisting>
-&lt;ha-policy>
-  &lt;replication/>
-&lt;/ha-policy>
-           </programlisting>
-           <para>
-              or
-           </para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;shared-store/>
-&lt;/ha-policy>
-           </programlisting>
-           <para>
-              As well as these two strategies there is also a third called <literal>live-only</literal>. This means there
-              will be no backup strategy; it is the default if none is provided, and is used to configure
-              <literal>scale-down</literal>, which we will cover in a later chapter.
-           </para>
-           <note>
-              <para>
-                 The <literal>ha-policy</literal> configuration replaces any previous HA configuration in the root of the
-                 <literal>activemq-configuration.xml</literal> file. All old configuration is now deprecated, although
-                 best efforts will be made to honour it if configured this way.
-              </para>
-           </note>
-            <note>
-                <para>Only persistent message data will survive failover. Any non-persistent message
-                   data will not be available after failover.</para>
-            </note>
-           <para>The <literal>ha-policy</literal> type configures which strategy a cluster should use to provide the
-              backing up of a server's data. Within this configuration element you configure how a server should behave
-              within the cluster, either as a master (live), slave (backup) or colocated (both live and backup). This
-              would look something like:</para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;master/>
-   &lt;/replication>
-&lt;/ha-policy>
-           </programlisting>
-           <para>
-              or
-           </para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;slave/>
-   &lt;/shared-store>
-&lt;/ha-policy>
-           </programlisting>
-           <para>
-              or
-           </para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;colocated/>
-   &lt;/replication>
-&lt;/ha-policy>
-           </programlisting>
-        </section>
-
-        <section id="ha.mode.replicated">
-            <title>Data Replication</title>
-            <para>Support for network-based data replication was added in 
version 2.3.</para>
-            <para>When using replication, the live and the backup servers do not share the same
-               data directories; all data synchronization is done over the network. Therefore all (persistent)
-               data received by the live server will be duplicated to the backup.</para>
-            <graphic fileref="images/ha-replicated-store.png" align="center"/>
-            <para>Notice that upon start-up the backup server will first need 
to synchronize all
-               existing data from the live server before becoming capable of 
replacing the live
-               server should it fail. So unlike when using shared storage, a 
replicating backup will
-               not be a fully operational backup right after start-up, but 
only after it finishes
-               synchronizing the data with its live server. The time it will 
take for this to happen
-               will depend on the amount of data to be synchronized and the 
connection speed.</para>
-
-            <note>
-                <para>Synchronization occurs in parallel with current network 
traffic so this won't cause any
-                  blocking on current clients.</para>
-            </note>
-            <para>Replication will create a copy of the data at the backup. One issue to be aware
-               of is: in case of a successful fail-over, the backup's data will be newer than
-               the data in the live server's storage. If you configure your live server to perform a
-               <link linkend="ha.allow-fail-back">'fail-back'</link> when restarted, it will synchronize
-               its data with the backup's. If both servers are shut down, the administrator will have
-               to determine which one has the latest data.</para>
-
-            <para>The replicating live and backup pair must be part of a 
cluster.  The Cluster
-               Connection also defines how backup servers will find the remote 
live servers to pair
-               with.  Refer to <xref linkend="clusters"/> for details on how 
this is done, and how
-               to configure a cluster connection. Notice that:</para>
-
-            <itemizedlist>
-                <listitem>
-                    <para>Both live and backup servers must be part of the 
same cluster.  Notice
-                       that even a simple live/backup replicating pair will 
require a cluster configuration.</para>
-                </listitem>
-                <listitem>
-                    <para>Their cluster user and password must match.</para>
-                </listitem>
-            </itemizedlist>
-
-            <para>Within a cluster, there are two ways that a backup server will locate a live server to replicate
-               from. These are:</para>
-
-            <itemizedlist>
-                <listitem>
-                    <para><literal>specifying a node group</literal>. You can specify a group of live servers that a backup
-                       server can connect to. This is done by configuring <literal>group-name</literal> in either the <literal>master</literal>
-                       or the <literal>slave</literal> element of the
-                       <literal>activemq-configuration.xml</literal>. A backup server will only connect to a live server that
-                       shares the same node group name.</para>
-                </listitem>
-                <listitem>
-                   <para><literal>connecting to any live</literal>. This will be the behaviour if <literal>group-name</literal>
-                      is not configured, allowing a backup server to connect to any live server.</para>
-                </listitem>
-            </itemizedlist>
-            <note>
-                <para>A <literal>group-name</literal> example: suppose you 
have 5 live servers and 6 backup
-                   servers:</para>
-                <itemizedlist>
-                    <listitem>
-                        <para><literal>live1</literal>, 
<literal>live2</literal>, <literal>live3</literal>: with
-                           <literal>group-name=fish</literal></para>
-                    </listitem>
-                    <listitem>
-                       <para><literal>live4</literal>, 
<literal>live5</literal>: with <literal>group-name=bird</literal></para>
-                    </listitem>
-                    <listitem>
-                       <para><literal>backup1</literal>, 
<literal>backup2</literal>, <literal>backup3</literal>,
-                          <literal>backup4</literal>: with 
<literal>group-name=fish</literal></para>
-                    </listitem>
-                    <listitem>
-                       <para><literal>backup5</literal>, 
<literal>backup6</literal>: with
-                          <literal>group-name=bird</literal></para>
-                    </listitem>
-                </itemizedlist>
-                <para>After joining the cluster, the backups with <literal>group-name=fish</literal> will
-                   search for live servers with <literal>group-name=fish</literal> to pair with. Since there
-                   is one backup too many, the <literal>fish</literal> group will be left with one spare backup.</para>
-                <para>The 2 backups with <literal>group-name=bird</literal> 
(<literal>backup5</literal> and
-                   <literal>backup6</literal>) will pair with live servers 
<literal>live4</literal> and
-                   <literal>live5</literal>.</para>
-            </note>
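-            <para>Purely as an illustrative sketch (the group name <literal>fish</literal> is just an
-               example value), pairing by node group would be configured on a live server as:</para>
-            <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;master>
-         &lt;group-name>fish&lt;/group-name>
-      &lt;/master>
-   &lt;/replication>
-&lt;/ha-policy>
-            </programlisting>
-            <para>
-               and on a matching backup as:
-            </para>
-            <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;slave>
-         &lt;group-name>fish&lt;/group-name>
-      &lt;/slave>
-   &lt;/replication>
-&lt;/ha-policy>
-            </programlisting>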
-            <para>The backup will search for any live server that it is 
configured to connect to. It then tries to
-               replicate with each live server in turn until it finds a live 
server that has no current backup
-               configured. If no live server is available it will wait until 
the cluster topology changes and
-               repeats the process.</para>
-            <note>
-               <para>This is an important distinction from a shared-store backup: if a shared-store backup starts and does not find
-                  a live server, it will just activate and start to serve client requests.
-                  In the replication case, the backup just keeps
-                  waiting for a live server to pair with. Note that in replication the backup server
-                  does not know whether any data it might have is up to date, so it really cannot
-                  decide to activate automatically. To activate a replicating backup server using the data
-                  it has, the administrator must change its configuration to make it a live server by changing
-                  <literal>slave</literal> to <literal>master</literal>.</para>
-            </note>
-
-            <para>Much like in the shared-store case, when the live server stops or crashes,
-               its replicating backup will become active and take over its duties. Specifically,
-               the backup will become active when it loses connection to its live server. This can
-               be problematic because a lost connection can also be caused by a temporary network
-               problem. In order to address this issue, the backup will try to determine whether it
-               can still connect to the other servers in the cluster. If it can connect to more
-               than half the servers, it will become active; if more than half the servers also
-               disappeared with the live, the backup will wait and try reconnecting with the live.
-               This avoids a split-brain situation.</para>
-
-            <section>
-                <title>Configuration</title>
-
-                <para>To configure the live and backup servers to be a replicating pair, configure
-                   the live server's <literal>activemq-configuration.xml</literal> to have:</para>
-
-                <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;master/>
-   &lt;/replication>
-&lt;/ha-policy>
-...
-&lt;cluster-connections>
-   &lt;cluster-connection name="my-cluster">
-      ...
-   &lt;/cluster-connection>
-&lt;/cluster-connections>
-                </programlisting>
-
-                <para>The backup server must be similarly configured, but as a <literal>slave</literal>.</para>
-
-                <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;slave/>
-   &lt;/replication>
-&lt;/ha-policy></programlisting>
-            </section>
-           <section>
-              <title>All Replication Configuration</title>
-
-              <para>The following table lists all the 
<literal>ha-policy</literal> configuration elements for HA strategy
-                 Replication for <literal>master</literal>:</para>
-              <table>
-                 <tgroup cols="2">
-                    <colspec colname="c1" colnum="1"/>
-                    <colspec colname="c2" colnum="2"/>
-                    <thead>
-                       <row>
-                          <entry>name</entry>
-                          <entry>Description</entry>
-                       </row>
-                    </thead>
-                    <tbody>
-                       <row>
-                          
<entry><literal>check-for-live-server</literal></entry>
-                          <entry>Whether to check the cluster for a (live) 
server using our own server ID when starting
-                             up. This option is only necessary for performing 
'fail-back' on replicating servers.</entry>
-                       </row>
-                       <row>
-                          <entry><literal>cluster-name</literal></entry>
-                          <entry>Name of the cluster configuration to use for 
replication. This setting is only necessary if you
-                             configure multiple cluster connections. If 
configured then the connector configuration of the
-                             cluster configuration with this name will be used 
when connecting to the cluster to discover
-                          if a live server is already running, see 
<literal>check-for-live-server</literal>. If unset then
-                          the default cluster connections configuration is 
used (the first one configured)</entry>
-                       </row>
-                       <row>
-                          <entry><literal>group-name</literal></entry>
-                          <entry>If set, backup servers will only pair with 
live servers with matching group-name</entry>
-                       </row>
-                    </tbody>
-                 </tgroup>
-              </table>
-              <para>The following table lists all the 
<literal>ha-policy</literal> configuration elements for HA strategy
-                 Replication for <literal>slave</literal>:</para>
-              <table>
-                 <tgroup cols="2">
-                    <colspec colname="c1" colnum="1"/>
-                    <colspec colname="c2" colnum="2"/>
-                    <thead>
-                       <row>
-                          <entry>name</entry>
-                          <entry>Description</entry>
-                       </row>
-                    </thead>
-                    <tbody>
-                       <row>
-                          <entry><literal>cluster-name</literal></entry>
-                          <entry>Name of the cluster configuration to use for 
replication. This setting is only necessary if you
-                             configure multiple cluster connections. If 
configured then the connector configuration of the
-                             cluster configuration with this name will be used 
when connecting to the cluster to discover
-                             if a live server is already running, see 
<literal>check-for-live-server</literal>. If unset then
-                             the default cluster connections configuration is 
used (the first one configured)</entry>
-                       </row>
-                       <row>
-                          <entry><literal>group-name</literal></entry>
-                          <entry>If set, backup servers will only pair with 
live servers with matching group-name</entry>
-                       </row>
-                       <row>
-                          <entry><literal>max-saved-replicated-journals-size</literal></entry>
-                          <entry>This specifies how many times a replicated backup server can restart after moving its files on start.
-                             Once there are this number of backup journal files, the server will stop permanently after it fails
-                             back.</entry>
-                       </row>
-                       <row>
-                          <entry><literal>allow-failback</literal></entry>
-                          <entry>Whether a server will automatically stop when another server places a request to take over
-                             its place. The typical use case is when the backup has failed over.</entry>
-                       </row>
-                       <row>
-                          <entry><literal>failback-delay</literal></entry>
-                          <entry>Delay to wait before fail-back occurs on the (failed-over) live server's restart</entry>
-                       </row>
-                    </tbody>
-                 </tgroup>
-              </table>
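-              <para>Purely for illustration, a slave configuration combining the elements above
-                 (all values shown are example values, not defaults) might look like:</para>
-              <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;slave>
-         &lt;group-name>fish&lt;/group-name>
-         &lt;max-saved-replicated-journals-size>2&lt;/max-saved-replicated-journals-size>
-         &lt;allow-failback>true&lt;/allow-failback>
-         &lt;failback-delay>5000&lt;/failback-delay>
-      &lt;/slave>
-   &lt;/replication>
-&lt;/ha-policy>
-              </programlisting>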
-           </section>
-        </section>
-
-        <section id="ha.mode.shared">
-            <title>Shared Store</title>
-            <para>When using a shared store, both live and backup servers share the
-               <emphasis>same</emphasis> entire data directory using a shared file system.
-               This includes the paging directory, journal directory, large messages directory and bindings
-               journal.</para>
-            <para>When failover occurs and a backup server takes over, it will 
load the
-               persistent storage from the shared file system and clients can 
connect to
-               it.</para>
-            <para>This style of high availability differs from data 
replication in that it
-               requires a shared file system which is accessible by both the 
live and backup
-               nodes. Typically this will be some kind of high performance 
Storage Area Network
-               (SAN). We do not recommend you use Network Attached Storage 
(NAS), e.g. NFS
-               mounts to store any shared journal (NFS is slow).</para>
-            <para>The advantage of shared-store high availability is that no replication occurs
-               between the live and backup nodes; this means it does not suffer any performance
-               penalties due to the overhead of replication during normal operation.</para>
-            <para>The disadvantage of shared-store high availability is that it requires a shared file
-               system, and when the backup server activates it needs to load the journal from
-               the shared store, which can take some time depending on the amount of data in the
-               store.</para>
-            <para>If you require the highest performance during normal operation, have access to
-               a fast SAN, and can live with a slightly slower failover (depending on the amount of
-               data), shared store is a good choice.</para>
-            <graphic fileref="images/ha-shared-store.png" align="center"/>
-
-            <section id="ha/mode.shared.configuration">
-                <title>Configuration</title>
-                <para>To configure the live and backup servers to share their store, configure
-                   it via the <literal>ha-policy</literal> configuration in <literal>activemq-configuration.xml</literal>:</para>
-               <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;master/>
-   &lt;/shared-store>
-&lt;/ha-policy>
-...
-&lt;cluster-connections>
-   &lt;cluster-connection name="my-cluster">
-...
-   &lt;/cluster-connection>
-&lt;/cluster-connections>
-               </programlisting>
-
-               <para>The backup server must be similarly configured, but as a <literal>slave</literal>.</para>
-
-               <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;slave/>
-   &lt;/shared-store>
-&lt;/ha-policy>
-               </programlisting>
-                <para>In order for live - backup groups to operate properly with a shared store,
-                   both servers must have the location of the journal directory configured to point
-                   to the <emphasis>same shared location</emphasis> (as explained in
-                   <xref linkend="configuring.message.journal"/>)</para>
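-                <para>For example, both servers might point their data directories at the same shared
-                   mount (the path <literal>/mnt/shared/broker</literal> is purely an example value):</para>
-                <programlisting>
-&lt;journal-directory>/mnt/shared/broker/journal&lt;/journal-directory>
-&lt;bindings-directory>/mnt/shared/broker/bindings&lt;/bindings-directory>
-&lt;large-messages-directory>/mnt/shared/broker/large-messages&lt;/large-messages-directory>
-&lt;paging-directory>/mnt/shared/broker/paging&lt;/paging-directory>
-                </programlisting>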
-                <para>Also, each node, live and backup, will need to have a cluster connection defined even if not
-                   part of a cluster. The cluster connection info defines how backup servers announce their presence
-                   to their live server or any other nodes in the cluster. Refer to <xref linkend="clusters"/> for details
-                   on how this is done.</para>
-            </section>
-        </section>
-        <section id="ha.allow-fail-back">
-            <title>Failing Back to the Live Server</title>
-            <para>After a live server has failed and a backup has taken over its duties, you may want to
-               restart the live server and have clients fail back.</para>
-            <para>In the case of "shared disk", simply restart the original live server and kill the new live server; you can
-               do this by killing the process itself. Alternatively you can set <literal>allow-failback</literal> to
-               <literal>true</literal> on the slave config, which will force the backup that has become live to automatically
-               stop. This configuration would look like:</para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;slave>
-         &lt;allow-failback>true&lt;/allow-failback>
-         &lt;failback-delay>5000&lt;/failback-delay>
-      &lt;/slave>
-   &lt;/shared-store>
-&lt;/ha-policy>
-           </programlisting>
-           <para>The <literal>failback-delay</literal> configures how long the backup must wait after automatically
-              stopping before it restarts. This gives the live server time to start and obtain its lock.</para>
-           <para id="hq.check-for-live-server">In replication HA mode you need to set an extra property <literal>check-for-live-server</literal>
-              to <literal>true</literal> in the <literal>master</literal> configuration. If set to true, during start-up
-              a live server will first search the cluster for another server using its nodeID. If it finds one, it will
-              contact this server and try to "fail-back". Since this is a remote replication scenario, the "starting live"
-              will have to synchronize its data with the server running with its ID; once they are in sync, it will
-              request the other server (which it assumes is a backup that has assumed its duties) to shut down so it can
-              take over. This is necessary because otherwise the live server has no means of knowing whether there was a
-              fail-over, and if there was, whether the server that took over its duties is still running.
-              Configure this option in your <literal>activemq-configuration.xml</literal> configuration file as follows:</para>
-           <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;master>
-         &lt;check-for-live-server>true&lt;/check-for-live-server>
-      &lt;/master>
-   &lt;/replication>
-&lt;/ha-policy></programlisting>
-           <warning>
-              <para>
-                 Be aware that if you restart a live server after failover has occurred then this value must be
-                 set to <literal><emphasis role="bold">true</emphasis></literal>. If not, the live server will restart and serve the same
-                 messages that the backup has already handled, causing duplicates.
-              </para>
-           </warning>
-            <para>It is also possible, in the case of shared store, to cause failover to occur on normal server shutdown.
-               To enable this, set the following property to true in the <literal>ha-policy</literal> configuration on either
-               the <literal>master</literal> or <literal>slave</literal> like so:</para>
-            <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;master>
-         &lt;failover-on-shutdown>true&lt;/failover-on-shutdown>
-      &lt;/master>
-   &lt;/shared-store>
-&lt;/ha-policy></programlisting>
-            <para>By default this is set to false. If you have left it set to false but still
-               want to stop the server normally and cause failover, then you can do this by using the management
-               API as explained at <xref linkend="management.core.server"/></para>
-            <para>You can also force the running live server to shut down when the old live server comes back up, allowing
-               the original live server to take over automatically, by setting the following property in the
-               <literal>activemq-configuration.xml</literal> configuration file:</para>
-            <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;slave>
-         &lt;allow-failback>true&lt;/allow-failback>
-      &lt;/slave>
-   &lt;/shared-store>
-&lt;/ha-policy></programlisting>
-
-           <section>
-              <title>All Shared Store Configuration</title>
-
-              <para>The following table lists all the 
<literal>ha-policy</literal> configuration elements for HA strategy
-                 shared store for <literal>master</literal>:</para>
-              <table>
-                 <tgroup cols="2">
-                    <colspec colname="c1" colnum="1"/>
-                    <colspec colname="c2" colnum="2"/>
-                    <thead>
-                       <row>
-                          <entry>name</entry>
-                          <entry>Description</entry>
-                       </row>
-                    </thead>
-                    <tbody>
-                       <row>
-                          <entry><literal>failback-delay</literal></entry>
-                          <entry>If a backup server is detected as being live, via the lock file, then the live server
-                          will announce itself as a backup and wait this amount of time (in ms) before starting as
-                          a live</entry>
-                       </row>
-                       <row>
-                          <entry><literal>failover-on-server-shutdown</literal></entry>
-                          <entry>If set to true then when this server is stopped normally the backup will become live,
-                          assuming failover. If false then the backup server will remain passive. Note that if false and you
-                             want failover to occur, you can use the management API as explained at <xref linkend="management.core.server"/></entry>
-                       </row>
-                    </tbody>
-                 </tgroup>
-              </table>
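-              <para>Purely for illustration, a master shared-store configuration using the element names
-                 listed above (the values shown are example values, not defaults) might look like:</para>
-              <programlisting>
-&lt;ha-policy>
-   &lt;shared-store>
-      &lt;master>
-         &lt;failback-delay>5000&lt;/failback-delay>
-         &lt;failover-on-server-shutdown>true&lt;/failover-on-server-shutdown>
-      &lt;/master>
-   &lt;/shared-store>
-&lt;/ha-policy>
-              </programlisting>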
-              <para>The following table lists all the 
<literal>ha-policy</literal> configuration elements for HA strategy
-                 Shared Store for <literal>slave</literal>:</para>
-              <table>
-                 <tgroup cols="2">
-                    <colspec colname="c1" colnum="1"/>
-                    <colspec colname="c2" colnum="2"/>
-                    <thead>
-                       <row>
-                          <entry>name</entry>
-                          <entry>Description</entry>
-                       </row>
-                    </thead>
-                    <tbody>
-                       <row>
-                          <entry><literal>failover-on-server-shutdown</literal></entry>
-                          <entry>This applies to a backup that has become live. If set to true then when this server
-                             is stopped normally the backup will become live, assuming failover. If false then the backup
-                             server will remain passive. Note that if false and you want failover to occur, you can use
-                             the management API as explained at <xref linkend="management.core.server"/></entry>
-                       </row>
-                       <row>
-                          <entry><literal>allow-failback</literal></entry>
-                          <entry>Whether a server will automatically stop when another server places a request to take over
-                             its place. The typical use case is when the backup has failed over.</entry>
-                       </row>
-                       <row>
-                          <entry><literal>failback-delay</literal></entry>
-                          <entry>After failover, when the slave has become live, this is set on the new live server.
-                             When starting, if a backup server is detected as being live, via the lock file, then the live server
-                             will announce itself as a backup and wait this amount of time (in ms) before starting as
-                             a live; however this is unlikely since this backup has just stopped anyway. It is also used
-                          as the delay after failback before this backup will restart (if <literal>allow-failback</literal>
-                          is set to true).</entry>
-                       </row>
-                    </tbody>
-                 </tgroup>
-              </table>
-           </section>
-
-        </section>
-        <section id="ha.colocated">
-            <title>Colocated Backup Servers</title>
-            <para>It is also possible, when running standalone, to colocate backup servers in the same
-                JVM as another live server. Live servers can be configured to request another live server in the cluster
-                to start a backup server in the same JVM, using either shared store or replication. The new backup server
-                will inherit its configuration from the live server that creates it, apart from its name, which will be set to
-                <literal>colocated_backup_n</literal> where n is the number of backups the server has created, and any directories
-                 and its connectors and acceptors, which are discussed later in this chapter. A live server can also
-                be configured to allow requests from backups and to limit how many backups it can start. This way
-                you can evenly distribute backups around the cluster. This is configured via the <literal>ha-policy</literal>
-                element in the <literal>activemq-configuration.xml</literal> file like so:</para>
-            <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;colocated>
-         &lt;request-backup>true&lt;/request-backup>
-         &lt;max-backups>1&lt;/max-backups>
-         &lt;backup-request-retries>-1&lt;/backup-request-retries>
-         
&lt;backup-request-retry-interval>5000&lt;/backup-request-retry-interval>
-         &lt;master/>
-         &lt;slave/>
-      &lt;/colocated>
-   &lt;/replication>
-&lt;/ha-policy>
-            </programlisting>
-            <para>The above example is configured to use replication; in this case the <literal>master</literal> and
-            <literal>slave</literal> configurations must match those for normal replication, as in the previous chapter.
-            <literal>shared-store</literal> is also supported.</para>
-
-           <graphic fileref="images/ha-colocated.png" align="center"/>
-           <section id="ha.colocated.connectorsandacceptors">
-              <title>Configuring Connectors and Acceptors</title>
-              <para>If the HA Policy is colocated then connectors and 
acceptors will be inherited from the live server
-                 creating it and offset depending on the setting of 
<literal>backup-port-offset</literal> configuration element.
-                 If this is set to say 100 (which is the default) and a 
connector is using port 5445 then this will be
-                 set to 5545 for the first server created, 5645 for the second 
and so on.</para>
-              <note><para>For INVM connectors and acceptors the id will have <literal>colocated_backup_n</literal> appended,
-              where n is the backup server number.</para></note>
-              <section id="ha.colocated.connectorsandacceptors.remote">
-                 <title>Remote Connectors</title>
-                 <para>It may be that some of the connectors configured are for external servers and hence should be excluded from the offset,
-                 for instance a connector used by the cluster connection to do quorum voting for a replicated backup server.
-                  These can be omitted from the offset by adding them to the <literal>ha-policy</literal> configuration like so:</para>
-                 <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;colocated>
-         &lt;excludes>
-            &lt;connector-ref>remote-connector&lt;/connector-ref>
-         &lt;/excludes>
-.........
-&lt;/ha-policy>
-                 </programlisting>
-              </section>
-           </section>
-           <section id="ha.colocated.directories">
-              <title>Configuring Directories</title>
-              <para>Directories for the journal, large messages and paging will be set according to the HA strategy.
-              If shared store, the requesting server will notify the target server of which directories to use. If replication
-              is configured then directories will be inherited from the creating server but have the new backup's name
-              appended.</para>
-           </section>
-
-           <para>The following table lists all the 
<literal>ha-policy</literal> configuration elements:</para>
-           <table>
-              <tgroup cols="2">
-                 <colspec colname="c1" colnum="1"/>
-                 <colspec colname="c2" colnum="2"/>
-                 <thead>
-                    <row>
-                       <entry>name</entry>
-                       <entry>Description</entry>
-                    </row>
-                 </thead>
-                 <tbody>
-                    <row>
-                       <entry><literal>request-backup</literal></entry>
-                       <entry>If true then the server will request a backup on 
another node</entry>
-                    </row>
-                    <row>
-                       <entry><literal>backup-request-retries</literal></entry>
-                       <entry>How many times the live server will try to request a backup; -1 means forever.</entry>
-                    </row>
-                    <row>
-                       
<entry><literal>backup-request-retry-interval</literal></entry>
-                       <entry>How long to wait between attempts to request a backup server.</entry>
-                    </row>
-                    <row>
-                       <entry><literal>max-backups</literal></entry>
-                       <entry>How many backup servers this live server can create in response to requests from
-                       other live servers.</entry>
-                    </row>
-                    <row>
-                       <entry><literal>backup-port-offset</literal></entry>
-                       <entry>The offset to use for the Connectors and 
Acceptors when creating a new backup server.</entry>
-                    </row>
-                 </tbody>
-              </tgroup>
-           </table>
-        </section>
-    </section>
-   <section id="ha.scaledown">
-      <title>Scaling Down</title>
-      <para>An alternative to using live/backup groups is to configure scale down. When configured for scale down, a server
-      can copy all its messages and transaction state to another live server. The advantage of this is that you don't need
-      full backups to provide some form of HA; however there are disadvantages with this approach, the first being that it
-         only deals with a server being stopped and not a server crash. The caveat here is if you configure a backup to scale down.</para>
-      <para>Another disadvantage is that it is possible to lose message ordering. This happens in the following scenario:
-      say you have 2 live servers and messages are distributed evenly between the servers from a single producer. If one
-         of the servers scales down then the messages sent to the other server will be placed in the queue after the ones
-         already there, so server 1 could have messages 1,3,5,7,9 and server 2 could have 2,4,6,8,10; if server 2 scales
-         down, the order in server 1 would be 1,3,5,7,9,2,4,6,8,10.</para>
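The ordering scenario above can be sketched with plain collections; this is an illustration of the behaviour, not broker code:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Toy model of scale down: messages copied from the scaled-down server are
// appended after the messages already queued on the target server.
public class ScaleDownOrdering {
    static List<Integer> scaleDown(Deque<Integer> target, Deque<Integer> source) {
        target.addAll(source); // scaled-down messages go to the back of the queue
        source.clear();
        return List.copyOf(target);
    }

    public static void main(String[] args) {
        Deque<Integer> server1 = new ArrayDeque<>(Arrays.asList(1, 3, 5, 7, 9));
        Deque<Integer> server2 = new ArrayDeque<>(Arrays.asList(2, 4, 6, 8, 10));
        // Server 2 scales down into server 1; global order is lost:
        System.out.println(scaleDown(server1, server2)); // [1, 3, 5, 7, 9, 2, 4, 6, 8, 10]
    }
}
```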
-      <graphic fileref="images/ha-scaledown.png" align="center"/>
-      <para>The configuration for a live server to scale down would be 
something like:</para>
-      <programlisting>
-&lt;ha-policy>
-   &lt;live-only>
-      &lt;scale-down>
-         &lt;connectors>
-            &lt;connector-ref>server1-connector&lt;/connector-ref>
-         &lt;/connectors>
-      &lt;/scale-down>
-   &lt;/live-only>
-&lt;/ha-policy>
-      </programlisting>
-      <para>In this instance the server is configured to use a specific connector to scale down. If a connector is not
-         specified then the first INVM connector is chosen; this is to make scale down from a backup server easy to configure.
-         It is also possible to use discovery to scale down, which would look like:</para>
-      <programlisting>
-&lt;ha-policy>
-   &lt;live-only>
-      &lt;scale-down>
-         &lt;discovery-group>my-discovery-group&lt;/discovery-group>
-      &lt;/scale-down>
-   &lt;/live-only>
-&lt;/ha-policy>
-      </programlisting>
-      <section id="ha.scaledown.group">
-         <title>Scale Down with groups</title>
-         <para>It is also possible to configure servers to only scale down to 
servers that belong in the same group. This
-         is done by configuring the group like so:</para>
-         <programlisting>
-&lt;ha-policy>
-   &lt;live-only>
-      &lt;scale-down>
-         ...
-         &lt;group-name>my-group&lt;/group-name>
-      &lt;/scale-down>
-   &lt;/live-only>
-&lt;/ha-policy>
-         </programlisting>
-         <para>In this scenario only servers that belong to the group <literal>my-group</literal> will be scaled down to.</para>
-      </section>
-      <section>
-         <title>Scale Down and Backups</title>
-         <para>It is also possible to mix scale down with HA via backup servers. If a slave is configured to scale down,
-         then after failover has occurred, instead of starting fully, the backup server will immediately scale down to
-         another live server. The most appropriate configuration for this is using the <literal>colocated</literal> approach;
-         it means that as you bring up live servers they will automatically be backed up, and as live servers are
-         shut down, their messages are made available on another live server. A typical configuration would look like:</para>
-         <programlisting>
-&lt;ha-policy>
-   &lt;replication>
-      &lt;colocated>
-         &lt;backup-request-retries>44&lt;/backup-request-retries>
-         
&lt;backup-request-retry-interval>33&lt;/backup-request-retry-interval>
-         &lt;max-backups>3&lt;/max-backups>
-         &lt;request-backup>false&lt;/request-backup>
-         &lt;backup-port-offset>33&lt;/backup-port-offset>
-         &lt;master>
-            &lt;group-name>purple&lt;/group-name>
-            &lt;check-for-live-server>true&lt;/check-for-live-server>
-            &lt;cluster-name>abcdefg&lt;/cluster-name>
-         &lt;/master>
-         &lt;slave>
-            &lt;group-name>tiddles&lt;/group-name>
-            
&lt;max-saved-replicated-journals-size>22&lt;/max-saved-replicated-journals-size>
-            &lt;cluster-name>33rrrrr&lt;/cluster-name>
-            &lt;restart-backup>false&lt;/restart-backup>
-            &lt;scale-down>
-               &lt;!--a grouping of servers that can be scaled down to-->
-               &lt;group-name>boo!&lt;/group-name>
-               &lt;!--either a discovery group-->
-               &lt;discovery-group>wahey&lt;/discovery-group>
-            &lt;/scale-down>
-         &lt;/slave>
-      &lt;/colocated>
-   &lt;/replication>
-&lt;/ha-policy>
-         </programlisting>
-      </section>
-   <section id="ha.scaledown.client">
-      <title>Scale Down and Clients</title>
-      <para>When a server is stopping and preparing to scale down, it will send a message to all its clients informing them
-      which server it is scaling down to before disconnecting them. At this point the client will reconnect; however, this
-      will only succeed once the server has completed the scale down. This is to ensure that any state, such as queues or transactions,
-      is there for the client when it reconnects. The normal reconnect settings apply when the client is reconnecting, so
-      these should be high enough to deal with the time needed to scale down.</para>
-      </section>
-   </section>
-    <section id="failover">
-        <title>Failover Modes</title>
-        <para>ActiveMQ defines two types of client failover:</para>
-        <itemizedlist>
-            <listitem>
-                <para>Automatic client failover</para>
-            </listitem>
-            <listitem>
-                <para>Application-level client failover</para>
-            </listitem>
-        </itemizedlist>
-        <para>ActiveMQ also provides 100% transparent automatic reattachment 
of connections to the
-            same server (e.g. in case of transient network problems). This is 
similar to failover,
-            except it is reconnecting to the same server and is discussed in
-            <xref linkend="client-reconnection"/></para>
-        <para>During failover, if the client has consumers on any non 
persistent or temporary
-            queues, those queues will be automatically recreated during 
failover on the backup node,
-            since the backup node will not have any knowledge of non 
persistent queues.</para>
-        <section id="ha.automatic.failover">
-            <title>Automatic Client Failover</title>
-            <para>ActiveMQ clients can be configured to receive knowledge of 
all live and backup servers, so
-                that in the event of failure of the client - live 
server connection, the
-                client will detect this and reconnect to the backup server. 
The backup server will
-                then automatically recreate any sessions and consumers that 
existed on each
-                connection before failover, thus saving the user from having 
to hand-code manual
-                reconnection logic.</para>
-            <para>ActiveMQ clients detect connection failure when they have not received packets from
-                the server within the time given by <literal>client-failure-check-period</literal>
-                as explained in section <xref linkend="connection-ttl"/>. If the client does not
-                receive data in good time, it will assume the connection has failed and attempt
-                failover. Also, if the socket is closed by the OS, usually when the server process is
-                killed rather than the machine itself crashing, then the client will failover straight away.
-                </para>
-            <para>ActiveMQ clients can be configured to discover the list of live-backup server groups in a
-                number of different ways. They can be configured explicitly, or, probably the most
-                common way of doing this, <emphasis>server discovery</emphasis> can be used for the
-                client to automatically discover the list. For full details on how to configure
-                server discovery, please see <xref linkend="clusters"/>.
-                Alternatively, the clients can explicitly connect to a specific server and download
-                the current servers and backups; see <xref linkend="clusters"/>.</para>
-            <para>To enable automatic client failover, the client must be 
configured to allow
-                non-zero reconnection attempts (as explained in <xref 
linkend="client-reconnection"
-                />).</para>
-            <para>By default failover will only occur after at least one 
connection has been made to
-                the live server. In other words, by default, failover will not 
occur if the client
-                fails to make an initial connection to the live server - in 
this case it will simply
-                retry connecting to the live server according to the 
reconnect-attempts property and
-                fail after this number of attempts.</para>
-            <section>
-                <title>Failing over on the Initial Connection</title>
-                <para>
-                    Since the client does not learn about the full topology 
until after the first
-                    connection is made there is a window where it does not 
know about the backup. If a failure happens at
-                    this point the client can only try reconnecting to the 
original live server. To configure
-                    how many attempts the client will make you can set the 
property <literal>initialConnectAttempts</literal>
-                    on the <literal>ClientSessionFactoryImpl</literal> or 
<literal >ActiveMQConnectionFactory</literal> or
-                    <literal>initial-connect-attempts</literal> in xml. The 
default for this is <literal>0</literal>, that
-                    is, try only once. Once that number of attempts has been made, an exception will be thrown.
-                </para>
-            </section>
-            <para>For examples of automatic failover with transacted and 
non-transacted JMS
-                sessions, please see <xref 
linkend="examples.transaction-failover"/> and <xref
-                    linkend="examples.non-transaction-failover"/>.</para>
-            <section id="ha.automatic.failover.noteonreplication">
-                <title>A Note on Server Replication</title>
-                <para>ActiveMQ does not replicate full server state between 
live and backup servers.
-                    When the new session is automatically recreated on the 
backup it won't have any
-                    knowledge of messages already sent or acknowledged in that 
session. Any
-                    in-flight sends or acknowledgements at the time of 
failover might also be
-                    lost.</para>
-                <para>By replicating full server state, theoretically we could 
provide a 100%
-                    transparent seamless failover, which would avoid any lost 
messages or
-                    acknowledgements, however this comes at a great cost: 
replicating the full
-                    server state (including the queues, session, etc.). This 
would require
-                    replication of the entire server state machine; every 
operation on the live
-                    server would have to replicated on the replica server(s) 
in the exact same
-                    global order to ensure a consistent replica state. This is 
extremely hard to do
-                    in a performant and scalable way, especially when one 
considers that multiple
-                    threads are changing the live server state 
concurrently.</para>
-                <para>It is possible to provide full state machine replication 
using techniques such
-                    as <emphasis role="italic">virtual synchrony</emphasis>, 
but this does not scale
-                    well and effectively serializes all operations to a single 
thread, dramatically
-                    reducing concurrency.</para>
-                <para>Other techniques for multi-threaded active replication 
exist such as
-                    replicating lock states or replicating thread scheduling 
but this is very hard
-                    to achieve at a Java level.</para>
-                <para>Consequently it was decided that it was not worth massively reducing performance
-                    and concurrency for the sake of 100% transparent failover. 
Even without 100%
-                    transparent failover, it is simple to guarantee <emphasis 
role="italic">once and
-                        only once</emphasis> delivery, even in the case of 
failure, by using a
-                    combination of duplicate detection and retrying of 
transactions. However this is
-                    not 100% transparent to the client code.</para>
-            </section>
-            <section id="ha.automatic.failover.blockingcalls">
-                <title>Handling Blocking Calls During Failover</title>
-                <para>If the client code is in a blocking call to the server, 
waiting for a response
-                    to continue its execution, when failover occurs, the new 
session will not have
-                    any knowledge of the call that was in progress. This call 
might otherwise hang
-                    for ever, waiting for a response that will never 
come.</para>
-                <para>To prevent this, ActiveMQ will unblock any blocking 
calls that were in progress
-                    at the time of failover by making them throw a <literal
-                        >javax.jms.JMSException</literal> (if using JMS), or an <literal
-                        >ActiveMQException</literal> with error code <literal
-                        >ActiveMQException.UNBLOCKED</literal>. It is up to 
the client code to catch
-                    this exception and retry any operations if desired.</para>
-                <para>If the method being unblocked is a call to commit(), or 
prepare(), then the
-                    transaction will be automatically rolled back and ActiveMQ 
will throw a <literal
-                        >javax.jms.TransactionRolledBackException</literal> 
(if using JMS), or an
-                        <literal>ActiveMQException</literal> with error code 
<literal
-                        >ActiveMQException.TRANSACTION_ROLLED_BACK</literal> 
if using the core
-                    API.</para>
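The catch-and-retry pattern described above can be sketched as follows. <literal>UnblockedException</literal> and <literal>sendWithRetry</literal> are illustrative stand-ins for catching an <literal>ActiveMQException</literal> with error code <literal>UNBLOCKED</literal> and retrying the operation, not the broker's API:

```java
// Sketch: retry an operation that may be unblocked mid-call by failover.
// UnblockedException stands in for ActiveMQException.UNBLOCKED; the "call"
// stands in for a blocking send or similar server round-trip.
public class RetryOnUnblock {
    static class UnblockedException extends Exception {}

    interface Call { String run() throws UnblockedException; }

    static String sendWithRetry(Call call, int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            try {
                return call.run();
            } catch (UnblockedException e) {
                if (attempt >= maxRetries) {
                    throw new RuntimeException("giving up after " + attempt + " retries", e);
                }
                // the session was recreated by automatic failover; just retry the call
            }
        }
    }

    public static void main(String[] args) {
        int[] failures = {2}; // fail twice (call unblocked by failover), then succeed
        String result = sendWithRetry(() -> {
            if (failures[0]-- > 0) throw new UnblockedException();
            return "sent";
        }, 5);
        System.out.println(result); // sent
    }
}
```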
-            </section>
-            <section id="ha.automatic.failover.transactions">
-                <title>Handling Failover With Transactions</title>
-                <para>If the session is transactional and messages have 
already been sent or
-                    acknowledged in the current transaction, then the server 
cannot be sure that
-                    messages sent or acknowledgements have not been lost 
during the failover.</para>
-                <para>Consequently the transaction will be marked as 
rollback-only, and any
-                    subsequent attempt to commit it will throw a <literal
-                        >javax.jms.TransactionRolledBackException</literal> 
(if using JMS), or an
-                        <literal>ActiveMQException</literal> with error code 
<literal
-                        >ActiveMQException.TRANSACTION_ROLLED_BACK</literal> 
if using the core
-                    API.</para>
-               <warning>
-                  <title>2 phase commit</title>
-                  <para>
-                     The caveat to this rule is when XA is used either via JMS 
or through the core API.
-                     If 2 phase commit is used and prepare has already been 
called then rolling back could
-                     cause a <literal>HeuristicMixedException</literal>. 
Because of this the commit will throw
-                     an <literal>XAException.XA_RETRY</literal> exception. This informs the Transaction Manager
-                     that it should retry the commit at some later point in time; a side effect of this is
-                     that any non persistent messages will be lost. To avoid 
this use persistent
-                     messages when using XA. With acknowledgements this is not 
an issue since they are
-                     flushed to the server before prepare gets called.
-                  </para>
-               </warning>
-                <para>It is up to the user to catch the exception, and perform 
any client side local
-                    rollback code as necessary. There is no need to manually 
rollback the session -
-                    it is already rolled back. The user can then just retry 
the transactional
-                    operations again on the same session.</para>
-                <para>ActiveMQ ships with a fully functioning example 
demonstrating how to do this,
-                    please see <xref 
linkend="examples.transaction-failover"/></para>
-                <para>If failover occurs when a commit call is being executed, 
the server, as
-                    previously described, will unblock the call to prevent a 
hang, since no response
-                    will come back. In this case it is not easy for the client 
to determine whether
-                    the transaction commit was actually processed on the live 
server before failure
-                    occurred.</para>
-               <note>
-                  <para>
-                     If XA is being used either via JMS or through the core 
API then an <literal>XAException.XA_RETRY</literal>
-                     is thrown. This is to inform Transaction Managers that a 
retry should occur at some point. At
-                     some later point in time the Transaction Manager will 
retry the commit. If the original
-                     commit has not occurred then the transaction will still exist and be committed; if it does not exist
-                     then it is assumed to have been committed, although the transaction manager may log a warning.
-                  </para>
-               </note>
-                <para>To remedy this, the client can simply enable duplicate 
detection (<xref
-                        linkend="duplicate-detection"/>) in the transaction, 
and retry the
-                    transaction operations again after the call is unblocked. 
If the transaction had
-                    indeed been committed on the live server successfully 
before failover, then when
-                    the transaction is retried, duplicate detection will 
ensure that any durable
-                    messages resent in the transaction will be ignored on the 
server to prevent them
-                    getting sent more than once.</para>
-                <note>
-                    <para>By catching the rollback exceptions and retrying, 
catching unblocked calls
-                        and enabling duplicate detection, once and only once 
delivery guarantees for
-                        messages can be provided in the case of failure, 
guaranteeing 100% no loss
-                        or duplication of messages.</para>
-                </note>
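The interaction of duplicate detection with transaction retries can be modelled with a toy in-memory "server"; the names here are illustrative, not the broker's API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of duplicate detection: the server remembers the duplicate IDs it
// has seen and silently ignores re-sends, so retrying a transaction whose
// commit was unblocked by failover cannot deliver a message twice.
public class DuplicateDetection {
    private final Set<String> seenIds = new HashSet<>();
    private final List<String> delivered = new ArrayList<>();

    void send(String duplicateId, String body) {
        if (seenIds.add(duplicateId)) { // add() returns false for a duplicate
            delivered.add(body);
        }
    }

    int deliveredCount() {
        return delivered.size();
    }

    public static void main(String[] args) {
        DuplicateDetection server = new DuplicateDetection();
        server.send("msg-1", "hello");
        server.send("msg-1", "hello"); // client retried after an unblocked commit
        System.out.println(server.deliveredCount()); // 1 - the retry was ignored
    }
}
```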
-            </section>
-            <section id="ha.automatic.failover.nontransactional">
-                <title>Handling Failover With Non Transactional 
Sessions</title>
-                <para>If the session is non transactional, messages or 
acknowledgements can be lost
-                    in the event of failover.</para>
-                <para>If you wish to provide <emphasis role="italic">once and 
only once</emphasis>
-                    delivery guarantees for non transacted sessions too, enable duplicate
-                    detection and catch unblock exceptions as described in 
<xref
-                        linkend="ha.automatic.failover.blockingcalls"/></para>
-            </section>
-        </section>
-        <section>
-            <title>Getting Notified of Connection Failure</title>
-            <para>JMS provides a standard mechanism for getting notified 
asynchronously of
-                connection failure: 
<literal>javax.jms.ExceptionListener</literal>. Please consult
-                the JMS javadoc or any good JMS tutorial for more information 
on how to use
-                this.</para>
-            <para>The ActiveMQ core API also provides a similar feature in the 
form of the class
-                    
<literal>org.apache.activemq.core.client.SessionFailureListener</literal></para>
-            <para>Any ExceptionListener or SessionFailureListener instance 
will always be called by
-                ActiveMQ in the event of connection failure, <emphasis role="bold"
-                    >irrespective</emphasis> of whether the connection was successfully failed over,
-                reconnected or reattached. However, you can find out if reconnect or reattach has happened,
-            either by the <literal>failedOver</literal> flag passed in on <literal>connectionFailed</literal>
-               on <literal>SessionFailureListener</literal> or by inspecting the error code on the
-               <literal>javax.jms.JMSException</literal> which will be one of 
the following:</para>
-           <table frame="topbot" border="2">
-              <title>JMSException error codes</title>
-              <tgroup cols="2">
-                 <colspec colname="c1" colnum="1"/>
-                 <colspec colname="c2" colnum="2"/>
-                 <thead>
-                    <row>
-                       <entry>error code</entry>
-                       <entry>Description</entry>
-                    </row>
-                 </thead>
-                 <tbody>
-                    <row>
-                       <entry>FAILOVER</entry>
-                       <entry>
-                          Failover has occurred and we have successfully 
reattached or reconnected.
-                       </entry>
-                    </row>
-                    <row>
-                       <entry>DISCONNECT</entry>
-                       <entry>
-                          No failover has occurred and we are disconnected.
-                       </entry>
-                    </row>
-                 </tbody>
-              </tgroup>
-           </table>
-        </section>
-        <section>
-            <title>Application-Level Failover</title>
-            <para>In some cases you may not want automatic client failover, 
and prefer to handle any
-                connection failure yourself, and code your own manually 
reconnection logic in your
-                own failure handler. We define this as 
<emphasis>application-level</emphasis>
-                failover, since the failover is handled at the user 
application level.</para>
-            <para>To implement application-level failover, if you're using JMS 
then you need to set
-                an <literal>ExceptionListener</literal> class on the JMS 
connection. The
-                <literal>ExceptionListener</literal> will be called by 
ActiveMQ in the event that
-                connection failure is detected. In your 
<literal>ExceptionListener</literal>, you
-                would close your old JMS connections, potentially look up new 
connection factory
-                instances from JNDI and create new connections. In this case 
you may well be using
-                <ulink 
url="http://www.jboss.org/community/wiki/JBossHAJNDIImpl">HA-JNDI</ulink>
-                to ensure that the new connection factory is looked up from a 
different server.</para>
-            <para>For a working example of application-level failover, please 
see
-                <xref linkend="application-level-failover"/>.</para>
-            <para>If you are using the core API, then the procedure is very 
similar: you would set a
-                    <literal>FailureListener</literal> on the core 
<literal>ClientSession</literal>
-                instances.</para>
-        </section>
-    </section>
-</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/images/activemq-logo.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/activemq-logo.jpg 
b/docs/user-manual/en/images/activemq-logo.jpg
new file mode 100644
index 0000000..d514448
Binary files /dev/null and b/docs/user-manual/en/images/activemq-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/intercepting-operations.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/intercepting-operations.md 
b/docs/user-manual/en/intercepting-operations.md
new file mode 100644
index 0000000..7d78976
--- /dev/null
+++ b/docs/user-manual/en/intercepting-operations.md
@@ -0,0 +1,84 @@
+Intercepting Operations
+=======================
+
+ActiveMQ supports *interceptors* to intercept packets entering and
+exiting the server. Incoming and outgoing interceptors are called for
+any packet entering or exiting the server respectively. This allows
+custom code to be executed, e.g. for auditing packets, filtering or
+other reasons. Interceptors can change the packets they intercept. This
+makes interceptors powerful, but also potentially dangerous.
+
+Implementing The Interceptors
+=============================
+
+An interceptor must implement the `Interceptor` interface:
+
+    package org.apache.activemq.api.core.interceptor;
+
+    public interface Interceptor
+    {   
+       boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException;
+    }
+
+The returned boolean value is important:
+
+-   if `true` is returned, the process continues normally
+
+-   if `false` is returned, the process is aborted, no other
+    interceptors will be called and the packet will not be processed
+    further by the server.
+
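The chain semantics can be sketched as follows (a minimal illustration in Python, not the actual Java server code; all names are hypothetical, and the real `intercept` method also receives the `RemotingConnection`):

```python
# Hypothetical sketch of interceptor chain behaviour: each interceptor
# sees the packet in turn; the first one returning False aborts further
# processing of that packet.
def run_interceptor_chain(interceptors, packet):
    """Return True if the packet should be processed further."""
    for interceptor in interceptors:
        if not interceptor(packet):
            # Abort: remaining interceptors are skipped and the packet
            # is not processed further by the server.
            return False
    return True

audit_log = []

def audit(packet):
    audit_log.append(packet["type"])   # observe (or modify), then continue
    return True

def block_logins(packet):
    return packet["type"] != "LOGIN"   # veto LOGIN packets

chain = [audit, block_logins]
print(run_interceptor_chain(chain, {"type": "SEND"}))   # True
print(run_interceptor_chain(chain, {"type": "LOGIN"}))  # False
```

Note that `audit` still runs for the vetoed packet: an interceptor only stops the interceptors *after* it in the chain.
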
+Configuring The Interceptors
+============================
+
+Both incoming and outgoing interceptors are configured in
+`activemq-configuration.xml`:
+
+    <remoting-incoming-interceptors>
+       <class-name>org.apache.activemq.jms.example.LoginInterceptor</class-name>
+       <class-name>org.apache.activemq.jms.example.AdditionalPropertyInterceptor</class-name>
+    </remoting-incoming-interceptors>
+
+    <remoting-outgoing-interceptors>
+       <class-name>org.apache.activemq.jms.example.LogoutInterceptor</class-name>
+       <class-name>org.apache.activemq.jms.example.AdditionalPropertyInterceptor</class-name>
+    </remoting-outgoing-interceptors>
+
+The interceptor classes (and their dependencies) must be added to the
+server classpath to be properly instantiated and called.
+
+Interceptors on the Client Side
+===============================
+
+The interceptors can also be run on the client side to intercept packets
+either sent by the client to the server or by the server to the client.
+This is done by adding the interceptor to the `ServerLocator` with the
+`addIncomingInterceptor(Interceptor)` or
+`addOutgoingInterceptor(Interceptor)` methods.
+
+As noted above, if an interceptor returns `false` then the sending of
+the packet is aborted, which means that no other interceptors will be
+called and the packet will not be processed further by the client.
+Typically this process happens transparently to the client (i.e. it has
+no idea if a packet was aborted or not). However, in the case of an
+outgoing packet that is sent in a `blocking` fashion an
+`ActiveMQException` will be thrown to the caller. The exception is
+thrown because blocking sends provide reliability and it is considered
+an error for them not to succeed. `Blocking` sends occur when, for
+example, an application invokes `setBlockOnNonDurableSend(true)` or
+`setBlockOnDurableSend(true)` on its `ServerLocator` or if an
+application is using a JMS connection factory retrieved from JNDI that
+has either `block-on-durable-send` or `block-on-non-durable-send` set to
+`true`. Blocking is also used for packets dealing with transactions
+(e.g. commit, rollback, etc.). The `ActiveMQException` thrown will
+contain the name of the interceptor that returned false.
+
+As on the server, the client interceptor classes (and their
+dependencies) must be added to the classpath to be properly instantiated
+and invoked.
+
+Example
+=======
+
+See the `interceptor` example which shows how to use interceptors to add
+properties to a message on the server.

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/intercepting-operations.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/intercepting-operations.xml 
b/docs/user-manual/en/intercepting-operations.xml
deleted file mode 100644
index f89e1a6..0000000
--- a/docs/user-manual/en/intercepting-operations.xml
+++ /dev/null
@@ -1,99 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!-- 
============================================================================= 
-->
-<!-- Licensed to the Apache Software Foundation (ASF) under one or more        
    -->
-<!-- contributor license agreements. See the NOTICE file distributed with      
    -->
-<!-- this work for additional information regarding copyright ownership.       
    -->
-<!-- The ASF licenses this file to You under the Apache License, Version 2.0   
    -->
-<!-- (the "License"); you may not use this file except in compliance with      
    -->
-<!-- the License. You may obtain a copy of the License at                      
    -->
-<!--                                                                           
    -->
-<!--     http://www.apache.org/licenses/LICENSE-2.0                            
    -->
-<!--                                                                           
    -->
-<!-- Unless required by applicable law or agreed to in writing, software       
    -->
-<!-- distributed under the License is distributed on an "AS IS" BASIS,         
    -->
-<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  
    -->
-<!-- See the License for the specific language governing permissions and       
    -->
-<!-- limitations under the License.                                            
    -->
-<!-- 
============================================================================= 
-->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
-<!ENTITY % BOOK_ENTITIES SYSTEM "ActiveMQ_User_Manual.ent">
-%BOOK_ENTITIES;
-]>
-
-<chapter id="intercepting-operations">
-   <title>Intercepting Operations</title>
-   <para>ActiveMQ supports <emphasis>interceptors</emphasis> to intercept 
packets entering
-       and exiting the server. Incoming and outgoing interceptors are be 
called for any packet
-       entering or exiting the server respectively. This allows custom code to 
be executed,
-       e.g. for auditing packets, filtering or other reasons. Interceptors can 
change the
-       packets they intercept. This makes interceptors powerful, but also 
potentially
-       dangerous.</para>
-   <section>
-      <title>Implementing The Interceptors</title>
-      <para>An interceptor must implement the <literal>Interceptor 
interface</literal>:</para>
-      <programlisting>
-package org.apache.activemq.api.core.interceptor;
-
-public interface Interceptor
-{   
-   boolean intercept(Packet packet, RemotingConnection connection) throws 
ActiveMQException;
-}</programlisting>
-      <para>The returned boolean value is important:</para>
-      <itemizedlist>
-         <listitem>
-            <para>if <literal>true</literal> is returned, the process 
continues normally</para>
-         </listitem>
-         <listitem>
-            <para>if <literal>false</literal> is returned, the process is 
aborted, no other interceptors
-                will be called and the packet will not be processed further by 
the server.</para>
-         </listitem>
-      </itemizedlist>
-   </section>
-   <section>
-      <title>Configuring The Interceptors</title>
-      <para>Both incoming and outgoing interceptors are configured in
-          <literal>activemq-configuration.xml</literal>:</para>
-      <programlisting>
-&lt;remoting-incoming-interceptors>
-   
&lt;class-name>org.apache.activemq.jms.example.LoginInterceptor&lt;/class-name>
-   
&lt;class-name>org.apache.activemq.jms.example.AdditionalPropertyInterceptor&lt;/class-name>
-&lt;/remoting-incoming-interceptors></programlisting>
-      <programlisting>
-&lt;remoting-outgoing-interceptors>
-   
&lt;class-name>org.apache.activemq.jms.example.LogoutInterceptor&lt;/class-name>
-   
&lt;class-name>org.apache.activemq.jms.example.AdditionalPropertyInterceptor&lt;/class-name>
-&lt;/remoting-outgoing-interceptors></programlisting>
-      <para>The interceptors classes (and their dependencies) must be added to 
the server classpath
-         to be properly instantiated and called.</para>
-   </section>
-   <section>
-      <title>Interceptors on the Client Side</title>
-      <para>The interceptors can also be run on the client side to intercept 
packets either sent by the
-         client to the server or by the server to the client. This is done by 
adding the interceptor to
-         the <code>ServerLocator</code> with the 
<code>addIncomingInterceptor(Interceptor)</code> or
-         <code>addOutgoingInterceptor(Interceptor)</code> methods.</para>
-      <para>As noted above, if an interceptor returns <literal>false</literal> 
then the sending of the
-         packet is aborted which means that no other interceptors are be 
called and the packet is not
-         be processed further by the client. Typically this process happens 
transparently to the client
-         (i.e. it has no idea if a packet was aborted or not). However, in the 
case of an outgoing packet
-         that is sent in a <literal>blocking</literal> fashion a 
<literal>ActiveMQException</literal> will
-         be thrown to the caller. The exception is thrown because blocking 
sends provide reliability and
-         it is considered an error for them not to succeed. 
<literal>Blocking</literal> sends occurs when,
-         for example, an application invokes 
<literal>setBlockOnNonDurableSend(true)</literal> or
-         <literal>setBlockOnDurableSend(true)</literal> on its 
<literal>ServerLocator</literal> or if an
-         application is using a JMS connection factory retrieved from JNDI 
that has either
-         <literal>block-on-durable-send</literal> or 
<literal>block-on-non-durable-send</literal>
-         set to <literal>true</literal>. Blocking is also used for packets 
dealing with transactions (e.g.
-         commit, roll-back, etc.). The <literal>ActiveMQException</literal> 
thrown will contain the name
-         of the interceptor that returned false.</para>
-      <para>As on the server, the client interceptor classes (and their 
dependencies) must be added to the classpath
-         to be properly instantiated and invoked.</para>
-   </section>
-   <section>
-      <title>Example</title>
-      <para>See <xref linkend="examples.interceptor" /> for an example which
-         shows how to use interceptors to add properties to a message on the 
server.</para>
-   </section>
-</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/interoperability.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/interoperability.md 
b/docs/user-manual/en/interoperability.md
new file mode 100644
index 0000000..58b2257
--- /dev/null
+++ b/docs/user-manual/en/interoperability.md
@@ -0,0 +1,365 @@
+Interoperability
+================
+
+Stomp
+=====
+
+[Stomp](http://stomp.github.com/) is a text-oriented wire protocol
+that allows Stomp clients to communicate with Stomp brokers. ActiveMQ
+now supports Stomp 1.0, 1.1 and 1.2.
+
+Stomp clients are available for several languages and platforms making
+it a good choice for interoperability.
+
+Native Stomp support
+--------------------
+
+ActiveMQ provides native support for Stomp. To be able to send and
+receive Stomp messages, you must configure a `NettyAcceptor` with a
+`protocols` parameter set to `stomp`:
+
+    <acceptor name="stomp-acceptor">
+       <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+       <param key="protocols"  value="STOMP"/>
+       <param key="port"  value="61613"/>
+    </acceptor>
+
+With this configuration, ActiveMQ will accept Stomp connections on
+port `61613` (which is the default port for Stomp brokers).
+
+See the `stomp` example which shows how to configure an ActiveMQ server
+with Stomp.
+
+### Limitations
+
+Message acknowledgements are not transactional. The ACK frame can not be
+part of a transaction (it will be ignored if its `transaction` header is
+set).
+
+### Stomp 1.1/1.2 Notes
+
+#### Virtual Hosting
+
+ActiveMQ currently doesn't support virtual hosting, which means the
+'host' header in the CONNECT frame will be ignored.
+
+#### Heart-beating
+
+ActiveMQ specifies a minimum value for both client and server heart-beat
+intervals. The minimum interval for both client and server heartbeats is
+500 milliseconds. That means if a client sends a CONNECT frame with
+heart-beat values lower than 500, the server will default the value to
+500 milliseconds regardless of the values of the 'heart-beat' header in
+the frame.
+
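The clamping described above can be sketched as a tiny helper (Python used purely for illustration; `effective_heartbeat` is a hypothetical name, not ActiveMQ API):

```python
# Sketch of the server-side minimum applied to a client's requested
# heart-beat interval (milliseconds) from the CONNECT frame.
MIN_HEARTBEAT_MS = 500

def effective_heartbeat(requested_ms):
    # Any requested interval below the 500 ms minimum is raised to 500.
    return max(requested_ms, MIN_HEARTBEAT_MS)

print(effective_heartbeat(100))   # 500
print(effective_heartbeat(2000))  # 2000
```
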
+Mapping Stomp destinations to ActiveMQ addresses and queues
+-----------------------------------------------------------
+
+Stomp clients deal with *destinations* when sending messages and
+subscribing. Destination names are simply strings which are mapped to
+some form of destination on the server - how the server translates these
+is left to the server implementation.
+
+In ActiveMQ, these destinations are mapped to *addresses* and *queues*.
+When a Stomp client sends a message (using a `SEND` frame), the
+specified destination is mapped to an address. When a Stomp client
+subscribes (or unsubscribes) for a destination (using a `SUBSCRIBE` or
+`UNSUBSCRIBE` frame), the destination is mapped to an ActiveMQ queue.
+
+STOMP and connection-ttl
+------------------------
+
+Well-behaved STOMP clients will always send a DISCONNECT frame before
+closing their connections. In this case the server will clean up any
+server-side resources such as sessions and consumers synchronously.
+However if STOMP clients exit without sending a DISCONNECT frame, or if
+they crash, the server will have no way of knowing immediately whether
+the client is still alive or not. STOMP connections therefore default to
+a connection-ttl value of 1 minute (see the chapter on
+[connection-ttl](#connection-ttl) for more information). This value can
+be overridden using connection-ttl-override.
+
+If you need a specific connection-ttl for your stomp connections without
+affecting the connection-ttl-override setting, you can configure your
+stomp acceptor with the "connection-ttl" property, which is used to set
+the ttl for connections that are created from that acceptor. For
+example:
+
+    <acceptor name="stomp-acceptor">
+       <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+       <param key="protocols"  value="STOMP"/>
+       <param key="port"  value="61613"/>
+       <param key="connection-ttl"  value="20000"/>
+    </acceptor>
+
+The above configuration will make sure that any stomp connection that is
+created from that acceptor will have its connection-ttl set to 20
+seconds.
+
+> **Note**
+>
+> Please note that the STOMP protocol version 1.0 does not contain any
+> heartbeat frame. It is therefore the user's responsibility to make
+> sure data is sent within connection-ttl or the server will assume the
+> client is dead and clean up server side resources. With `Stomp 1.1`
+> users can use heart-beats to maintain the life cycle of stomp
+> connections.
+
+Stomp and JMS interoperability
+------------------------------
+
+### Using JMS destinations
+
+As explained in the chapter on mapping JMS concepts to the core API,
+JMS destinations are also mapped to ActiveMQ addresses and queues. If
+you want to use Stomp to send messages to JMS destinations, the Stomp
+destinations must follow the same convention:
+
+-   send or subscribe to a JMS *Queue* by prefixing the queue name with
+    `jms.queue.`.
+
+    For example, to send a message to the `orders` JMS Queue, the Stomp
+    client must send the frame:
+
+        SEND
+        destination:jms.queue.orders
+
+        hello queue orders
+        ^@
+
+-   send or subscribe to a JMS *Topic* by prefixing the topic name with
+    `jms.topic.`.
+
+    For example to subscribe to the `stocks` JMS Topic, the Stomp client
+    must send the frame:
+
+        SUBSCRIBE
+        destination:jms.topic.stocks
+
+        ^@
+
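The convention above can be exercised with a small sketch that builds raw Stomp frames (Python for illustration; `stomp_frame` is a hypothetical helper, not part of any client library):

```python
# A Stomp frame is: command line, header lines, a blank line, the body,
# and a terminating NUL byte (shown as ^@ in the examples above).
NULL = "\x00"

def stomp_frame(command, headers, body=""):
    head = "\n".join(f"{k}:{v}" for k, v in headers.items())
    return f"{command}\n{head}\n\n{body}{NULL}"

# SEND to the 'orders' JMS Queue via the jms.queue. prefix
send = stomp_frame("SEND", {"destination": "jms.queue.orders"},
                   "hello queue orders")

# SUBSCRIBE to the 'stocks' JMS Topic via the jms.topic. prefix
subscribe = stomp_frame("SUBSCRIBE", {"destination": "jms.topic.stocks"})

print(send.startswith("SEND\ndestination:jms.queue.orders"))  # True
```
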
+### Sending and consuming Stomp message from JMS or ActiveMQ Core API
+
+Stomp is mainly a text-orientated protocol. To make it simpler to
+interoperate with JMS and ActiveMQ Core API, our Stomp implementation
+checks for presence of the `content-length` header to decide how to map
+a Stomp message to a JMS Message or a Core message.
+
+If the Stomp message does *not* have a `content-length` header, it will
+be mapped to a JMS *TextMessage* or a Core message with a *single
+nullable SimpleString in the body buffer*.
+
+Alternatively, if the Stomp message *has* a `content-length` header, it
+will be mapped to a JMS *BytesMessage* or a Core message with a *byte[]
+in the body buffer*.
+
+The same logic applies when mapping a JMS message or a Core message to
+Stomp. A Stomp client can check the presence of the `content-length`
+header to determine the type of the message body (String or bytes).
+
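The mapping rule can be sketched as follows (hypothetical helper, for illustration only):

```python
# Decide, from a Stomp frame's headers, whether the body maps to a text
# message (JMS TextMessage / Core SimpleString) or a bytes message
# (JMS BytesMessage / Core byte[]), per the rule described above.
def mapped_body_type(headers):
    return "bytes" if "content-length" in headers else "text"

print(mapped_body_type({"destination": "jms.queue.orders"}))  # text
print(mapped_body_type({"destination": "jms.queue.orders",
                        "content-length": "18"}))             # bytes
```
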
+### Message IDs for Stomp messages
+
+When receiving Stomp messages via a JMS consumer or a QueueBrowser, the
+messages have no properties like JMSMessageID by default. However this
+may be inconvenient for clients that want an ID for their own purposes.
+ActiveMQ Stomp provides a parameter to enable message ID on each
+incoming Stomp message. If you want each Stomp message to have a
+unique ID, just set `stomp-enable-message-id` to true. For example:
+
+    <acceptor name="stomp-acceptor">
+       <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+       <param key="protocols" value="STOMP"/>
+       <param key="port" value="61613"/>
+       <param key="stomp-enable-message-id" value="true"/>
+    </acceptor>
+
+When the server starts with the above setting, each stomp message sent
+through this acceptor will have an extra property added. The property
+key is `hq-message-id` and the value is a String representation of a
+long type internal message id prefixed with "`STOMP`", like:
+
+    hq-message-id : STOMP12345
+
+If `stomp-enable-message-id` is not specified in the configuration, the
+default is `false`.
+
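A client that wants the numeric internal id could strip the prefix, e.g. (hypothetical sketch):

```python
# Extract the long internal message id from an hq-message-id value
# such as "STOMP12345".
def internal_message_id(value):
    assert value.startswith("STOMP"), "unexpected hq-message-id format"
    return int(value[len("STOMP"):])

print(internal_message_id("STOMP12345"))  # 12345
```
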
+### Handling of Large Messages with Stomp
+
+Stomp clients may send frames with very large bodies which can exceed
+the size of the ActiveMQ server's internal buffer, causing unexpected
+errors. To prevent this situation from happening, ActiveMQ provides a
+stomp configuration attribute `stomp-min-large-message-size`. This
+attribute can be configured inside a stomp acceptor, as a parameter. For
+example:
+
+    <acceptor name="stomp-acceptor">
+       <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+       <param key="protocols" value="STOMP"/>
+       <param key="port" value="61613"/>
+       <param key="stomp-min-large-message-size" value="10240"/>
+    </acceptor>
+
+The type of this attribute is integer. When this attribute is
+configured, the ActiveMQ server will check the size of the body of each
+Stomp frame arriving from connections established with this acceptor. If
+the size of the body is equal to or greater than the value of
+`stomp-min-large-message-size`, the message will be persisted as a large
+message. When a large message is delivered to a stomp consumer, the
+ActiveMQ server will automatically handle the conversion from a large
+message to a normal message, before sending it to the client.
+
+If a large message is compressed, the server will uncompress it before
+sending it to stomp clients. The default value of
+`stomp-min-large-message-size` is the same as the default value of
+[min-large-message-size](#large-messages.core.config).
+
+Stomp Over Web Sockets
+----------------------
+
+ActiveMQ also supports Stomp over [Web
+Sockets](http://dev.w3.org/html5/websockets/). Modern web browsers which
+support Web Sockets can send and receive Stomp messages from ActiveMQ.
+
+To enable Stomp over Web Sockets, you must configure a `NettyAcceptor`
+with a `protocols` parameter set to `stomp_ws`:
+
+    <acceptor name="stomp-ws-acceptor">
+       <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+       <param key="protocols" value="STOMP_WS"/>
+       <param key="port" value="61614"/>
+    </acceptor>
+
+With this configuration, ActiveMQ will accept Stomp connections over Web
+Sockets on the port `61614` with the URL path `/stomp`. Web browsers can
+then connect to `ws://<server>:61614/stomp` using a Web Socket to send
+and receive Stomp messages.
+
+A companion JavaScript library to ease client-side development is
+available from [GitHub](http://github.com/jmesnil/stomp-websocket)
+(please see its [documentation](http://jmesnil.net/stomp-websocket/doc/)
+for a complete description).
+
+The `stomp-websockets` example shows how to configure an ActiveMQ server
+so that web browsers and Java applications can exchange messages on a
+JMS topic.
+
+StompConnect
+------------
+
+[StompConnect](http://stomp.codehaus.org/StompConnect) is a server that
+can act as a Stomp broker and proxy the Stomp protocol to the standard
+JMS API. Consequently, using StompConnect it is possible to turn
+ActiveMQ into a Stomp broker and use any of the available Stomp clients.
+These include clients written in C, C++, C\# and .NET, etc.
+
+To run StompConnect first start the ActiveMQ server and make sure that
+it is using JNDI.
+
+StompConnect requires the file `jndi.properties` to be available on the
+classpath. This should look something like:
+
+    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
+
+Configure any required JNDI resources in this file according to the
+documentation.
+
+Make sure this file is in the classpath along with the StompConnect jar
+and the ActiveMQ jars and simply run `java org.codehaus.stomp.jms.Main`.
+
+REST
+====
+
+Please see the chapter on the REST interface.
+
+AMQP
+====
+
+ActiveMQ supports the [AMQP
+1.0](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp)
+specification. To enable AMQP you must configure a Netty Acceptor to
+receive AMQP clients, like so:
+
+    <acceptor name="amqp-acceptor">
+    <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+    <param key="protocols"  value="AMQP"/>
+    <param key="port"  value="5672"/>
+    </acceptor>
+
+ActiveMQ will then accept AMQP 1.0 clients on port 5672 which is the
+default AMQP port.
+
+There are 2 AMQP examples available, proton-j and proton-ruby, which
+use the Qpid Java and Ruby clients respectively.
+
+AMQP and security
+-----------------
+
+The ActiveMQ Server accepts AMQP SASL Authentication and will use this
+to map onto the underlying session created for the connection so you can
+use the normal ActiveMQ security configuration.
+
+AMQP Links
+----------
+
+An AMQP Link is a unidirectional transport for messages between a
+source and a target, i.e. a client and the ActiveMQ broker. A link will
+have an endpoint, of which there are 2 kinds, a Sender and a Receiver.
+At the broker, a Sender will have its messages converted into an
+ActiveMQ message and forwarded to its destination or target. A Receiver
+will map onto an ActiveMQ server consumer and convert ActiveMQ messages
+back into AMQP messages before they are delivered.
+
+AMQP and destinations
+---------------------
+
+If an AMQP Link is dynamic then a temporary queue will be created and
+either the remote source or remote target address will be set to the
+name of the temporary queue. If the Link is not dynamic then the
+address of the remote target or source will be used for the queue. If
+this does not exist then an exception will be sent.
+
+> **Note**
+>
+> For the next version we will add a flag to auto-create durable queues,
+> but for now you will have to add them via the configuration.
+
+AMQP and Coordinators - Handling Transactions
+---------------------------------------------
+
+An AMQP link's target can also be a Coordinator; the Coordinator is used
+to handle transactions. If a coordinator is used then the underlying
+ActiveMQ server session will be transacted and will be either rolled
+back or committed via the coordinator.
+
+> **Note**
+>
+> AMQP allows the use of multiple transactions per session,
+> `amqp:multi-txns-per-ssn`, however in this version ActiveMQ will only
+> support a single transaction per session.
+
+OpenWire
+========
+
+ActiveMQ now supports the
+[OpenWire](http://activemq.apache.org/openwire.html) protocol so that an
+ActiveMQ JMS client can talk directly to an ActiveMQ server. To enable
+OpenWire support you must configure a Netty Acceptor, like so:
+
+    <acceptor name="openwire-acceptor">
+    <factory-class>org.apache.activemq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
+    <param key="protocols"  value="OPENWIRE"/>
+    <param key="port"  value="61616"/>
+    </acceptor>
+
+The ActiveMQ server will then listen on port 61616 for incoming
+openwire commands. Please note that the "protocols" parameter is not
+mandatory here. The openwire configuration conforms to ActiveMQ's
+"Single Port" feature. Please refer to [Configuring Single
+Port](#configuring-transports.single-port) for details.
+
+Please refer to the openwire example for more coding details.
+
+Currently we support ActiveMQ clients that use standard JMS APIs. In
+the future we will add support for more advanced, ActiveMQ-specific
+features.
