Author: buildbot
Date: Fri Dec  2 22:24:42 2016
New Revision: 1002074

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/kahadb-replication-experimental.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: 
websites/production/activemq/content/kahadb-replication-experimental.html
==============================================================================
--- websites/production/activemq/content/kahadb-replication-experimental.html 
(original)
+++ websites/production/activemq/content/kahadb-replication-experimental.html 
Fri Dec  2 22:24:42 2016
@@ -83,65 +83,10 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><div class="confluence-information-macro 
confluence-information-macro-warning"><p class="title">Note</p><span 
class="aui-icon aui-icon-small aui-iconfont-error 
confluence-information-macro-icon"></span><div 
class="confluence-information-macro-body">
-<p>This is under review - and not currently supported. See <a shape="rect" 
href="replicated-leveldb-store.html">Replicated LevelDB Store</a> for the 
successor.</p>
-</div></div> 
-
-
-<h2 id="KahaDBReplication(Experimental)-Overview">Overview</h2>
-
-<p>The new KahaDB store supports a very fast and flexible replication system.  
It features:</p>
-
-<ul><li>Journal level replication (this translates into lower overhead on the 
master when replicating records).</li><li>Support for multiple 
slaves.</li><li>Support for dynamically adding slaves at runtime.</li><li>Uses 
multiple concurrent data transfer sessions to do an initial slave 
synchronization.</li><li>Large slave synchronizations can be resumed so 
synchronization progress is not lost if a slave is restarted.</li><li>A 
configurable minimum number of replicas allows you to pause processing until 
the data has been guaranteed to be replicated enough times.</li></ul>
-
-
-
-<h2 id="KahaDBReplication(Experimental)-MasterElection">Master Election</h2>
-
-<p>KahaDB supports a pluggable Master Election algorithm, but the only current 
implementation is one based on <a shape="rect" class="external-link" 
href="http://hadoop.apache.org/zookeeper">ZooKeeper</a>.</p>
-
-<p>ZooKeeper is used to implement the master election algorithm.  ZooKeeper is 
a very fast, replicated, in-memory database with features that make it easy to 
implement cluster control algorithms.  It is an Apache project which you can <a 
shape="rect" class="external-link" 
href="http://hadoop.apache.org/zookeeper/releases.html">freely download</a>. 
You must install and have at least one ZooKeeper server running before 
setting up a KahaDB Master Slave configuration.</p>
-
-<h2 id="KahaDBReplication(Experimental)-ConfiguringaBroker:">Configuring a 
Broker:</h2>
-
-<p>The ActiveMQ binary distribution includes a KahaDB HA broker configuration 
at <strong>$ACTIVEMQ_HOME/conf/ha.xml</strong>.  </p>
-
-<p>It is set up to look for a ZooKeeper 3.0.0 server on localhost at port 2181. 
 Edit the configuration if this is not where you are running your ZooKeeper 
server.</p>
-
-<p>Start the configuration by running:</p>
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent 
panelContent pdl">
-<pre class="brush: java; gutter: false; theme: Default" 
style="font-size:12px;">
-prompt&gt; $ACTIVEMQ_HOME/bin/activemq xbean:ha.xml
+<div class="wiki-content maincontent"><div class="confluence-information-macro 
confluence-information-macro-warning"><p class="title">Note</p><span 
class="aui-icon aui-icon-small aui-iconfont-error 
confluence-information-macro-icon"></span><div 
class="confluence-information-macro-body"><p>This is under review - and not 
currently supported.</p></div></div><h2 
id="KahaDBReplication(Experimental)-Overview">Overview</h2><p>The new KahaDB 
store supports a very fast and flexible replication system. It 
features:</p><ul><li>Journal level replication (this translates into lower 
overhead on the master when replicating records).</li><li>Support for multiple 
slaves.</li><li>Support for dynamically adding slaves at runtime.</li><li>Uses 
multiple concurrent data transfer sessions to do an initial slave 
synchronization.</li><li>Large slave synchronizations can be resumed so 
synchronization progress is not lost if a slave is restarted.</li><li>A 
configurable minimum number of replicas allows you to pause processing until 
the data has been guaranteed to be replicated enough times.</li></ul><h2 
id="KahaDBReplication(Experimental)-MasterElection">Master 
Election</h2><p>KahaDB supports a pluggable Master Election algorithm, but the 
only current implementation is one based on <a shape="rect" 
class="external-link" 
href="http://hadoop.apache.org/zookeeper">ZooKeeper</a>.</p><p>ZooKeeper is 
used to implement the master election algorithm. ZooKeeper is a very fast, 
replicated, in-memory database with features that make it easy to implement 
cluster control algorithms. It is an Apache project which you can <a 
shape="rect" class="external-link" 
href="http://hadoop.apache.org/zookeeper/releases.html">freely download</a>. 
You must install and have at least one ZooKeeper server running before 
setting up a KahaDB Master Slave configuration.</p><h2 
id="KahaDBReplication(Experimental)-ConfiguringaBroker:">Configuring a 
Broker:</h2><p>The ActiveMQ binary distribution includes a KahaDB HA broker 
configuration at <strong>$ACTIVEMQ_HOME/conf/ha.xml</strong>.</p><p>It is set 
up to look for a ZooKeeper 3.0.0 server on localhost at port 2181. Edit the 
configuration if this is not where you are running your ZooKeeper 
server.</p><p>Start the configuration by running:</p><div class="code panel 
pdl" style="border-width: 
1px;"><div class="codeContent panelContent pdl">
+<pre class="brush: java; gutter: false; theme: Default" 
style="font-size:12px;">prompt&gt; $ACTIVEMQ_HOME/bin/activemq xbean:ha.xml
 </pre>
-</div></div>
-
-<p>The actual contents of the configuration file follow:</p>
-
-<div class="error"><span class="error">Error formatting macro: snippet: 
java.lang.IndexOutOfBoundsException: Index: 20, Size: 20</span> </div>
-
-<h2 
id="KahaDBReplication(Experimental)-UnderstandingthekahadbReplicationXMLelement">Understanding
 the kahadbReplication XML element</h2>
-
-<h3 id="KahaDBReplication(Experimental)-ThebrokerURIAttribute">The brokerURI 
Attribute</h3>
-
-<p>Notice that the brokerURI attribute points at another broker 
configuration file.  The ha-broker.xml contains the actual broker configuration 
that the broker uses when the node takes over as master.  The ha-broker.xml 
configuration file is a standard broker configuration except in these 
aspects:</p>
-
-<ul><li>It MUST set the start="false" attribute on the broker 
element.</li><li>It MUST not configure a persistenceAdapter.</li></ul>
-
-
-<p>The above rules allow the replication system to inject the replicated 
KahaDB store into the Master when it's starting up.</p>
-
-<h3 id="KahaDBReplication(Experimental)-TheminimumReplicasAttribute">The 
minimumReplicas Attribute</h3>
-
-<p>The minimumReplicas attribute specifies how many copies of the database are 
required before synchronous update operations are deemed successful.  Setting 
this to 0 allows a broker to continue operating even if there are no slaves 
attached.  If the value is set to 1 or greater, and there are no slaves 
attached, the broker's persistent message processing will be suspended until 
the minimum number of slaves are attached and the data synchronized.</p>
-
-
-<h3 id="KahaDBReplication(Experimental)-TheuriAttribute">The uri Attribute</h3>
-
-<p>The uri attribute should always be configured with a 
<strong>kdbr://</strong> based URI.  KDBR stands for 'KahaDB Replication' and 
this is the replication protocol used between the master and the slaves.  The 
master binds the specified port, and slaves subsequently connect to it to 
establish replication sessions.  The host name in the uri MUST be updated to 
the actual machine's host name, since this is also used to identify the nodes 
in the cluster.</p>
-
-<h3 id="KahaDBReplication(Experimental)-ThedirectoryAttribute">The directory 
Attribute</h3>
-
-<p>This is the data directory where KahaDB will store its persistence 
files.</p></div>
+</div></div><p>The actual contents of the configuration file follow:</p><div 
class="error"><span class="error">Error formatting macro: snippet: 
java.lang.IndexOutOfBoundsException: Index: 20, Size: 20</span> </div><h2 
id="KahaDBReplication(Experimental)-UnderstandingthekahadbReplicationXMLelement">Understanding
 the kahadbReplication XML element</h2><h3 
id="KahaDBReplication(Experimental)-ThebrokerURIAttribute">The brokerURI 
Attribute</h3><p>Notice that the brokerURI attribute points at another 
broker configuration file. The ha-broker.xml contains the actual broker 
configuration that the broker uses when the node takes over as master. The 
ha-broker.xml configuration file is a standard broker configuration except in 
these aspects:</p><ul><li>It MUST set the start="false" attribute on the 
broker element.</li><li>It MUST not configure a 
persistenceAdapter.</li></ul><p>The above rules allow the replication system 
to inject the replicated KahaDB store into the Master when it's starting 
up.</p><h3 
id="KahaDBReplication(Experimental)-TheminimumReplicasAttribute">The 
minimumReplicas Attribute</h3><p>The minimumReplicas attribute specifies how 
many copies of the database are required before synchronous update operations 
are deemed successful. Setting this to 0 allows a broker to continue operating 
even if there are no slaves attached. If the value is set to 1 or greater, and 
there are no slaves attached, the broker's persistent message processing will be 
suspended until the minimum number of slaves are attached and the data 
synchronized.</p><h3 id="KahaDBReplication(Experimental)-TheuriAttribute">The 
uri Attribute</h3><p>The uri attribute should always be configured with a 
<strong>kdbr://</strong> based URI. KDBR stands for 'KahaDB Replication' and 
this is the replication protocol used between the master and the slaves. The 
master binds the specified port, and slaves subsequently connect to it to 
establish replication sessions. The host name in the uri MUST be updated to 
the actual machine's host name, since this is also used to identify the nodes 
in the cluster.</p><h3 
id="KahaDBReplication(Experimental)-ThedirectoryAttribute">The directory 
Attribute</h3><p>This is the data directory where KahaDB will store its 
persistence files.</p></div>
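
The configuration snippet on the page above failed to render ("Error 
formatting macro: snippet"), so for orientation here is a minimal sketch of 
what the kahadbReplication element might look like, assembled purely from the 
attribute descriptions in the page text (brokerURI, minimumReplicas, uri, 
directory). The element nesting, namespace, and every value shown are 
illustrative assumptions, not taken from the actual ha.xml:

```xml
<!-- Hypothetical sketch only: reconstructed from the attribute descriptions
     in the page text, not from the real ha.xml (whose snippet failed to
     render). All paths, ports, and host names are illustrative. -->
<kahadbReplication
    directory="${activemq.base}/data/kahadb"
    brokerURI="xbean:ha-broker.xml"
    uri="kdbr://localhost:60001"
    minimumReplicas="1"/>

<!-- ha-broker.xml: a standard broker configuration, except that it MUST set
     start="false" on the broker element and MUST NOT configure a
     persistenceAdapter, so the replication system can inject the replicated
     KahaDB store when the node takes over as master. -->
<broker xmlns="http://activemq.apache.org/schema/core" start="false">
  <!-- usual broker configuration here, minus any persistenceAdapter -->
</broker>
```

Per the text, the kdbr:// host name would need to be the machine's actual 
host name rather than localhost, since it also identifies the node in the 
cluster.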
         </td>
         <td valign="top">
           <div class="navigation">

