Author: buildbot
Date: Fri May 17 14:21:33 2013
New Revision: 862263
Log:
Production update by buildbot for activemq
Modified:
websites/production/activemq/content/cache/main.pageCache
websites/production/activemq/content/replicated-leveldb-store.html
Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Fri May 17 14:21:33 2013
@@ -130,16 +130,17 @@ failover:(tcp:<span class="code-comment"
<p>All the broker nodes that are part of the same replication set should have
matching <tt>brokerName</tt> XML attributes. The following configuration
properties should be the same on all the broker nodes that are part of the same
replication set:</p>
<div class="table-wrap">
-<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> replicas </td><td colspan="1" rowspan="1"
class="confluenceTd"> 2 </td><td colspan="1" rowspan="1" class="confluenceTd">
The number of store replicas that will exist in the cluster. At least
(replicas/2)+1 nodes must be online to avoid service outage. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> securityToken </td><td
colspan="1" rowspan="1" class="confluenceTd"> </td><td colspan="1"
rowspan="1" class="confluenceTd"> A security token which must match on all
replication nodes for them to accept each others replication requests.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkAddress
</td><td colspan="1" rowspan="1" class="confluenceTd">
127.0.0.1:2181 </td><td colspan="1" rowspan="1" class="confluenceTd"> A comma
separated list of ZooKeeper servers. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> zkPassword </td><td colspan="1" rowspan="1"
class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"> The password to use when connecting to the ZooKeeper
server. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkPath
</td><td colspan="1" rowspan="1" class="confluenceTd"> /default </td><td
colspan="1" rowspan="1" class="confluenceTd"> The path to the ZooKeeper
directory where Master/Slave election information will be exchanged.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkSessionTmeout
</td><td colspan="1" rowspan="1" class="confluenceTd"> 2s </td><td colspan="1"
rowspan="1" class="confluenceTd"> How quickly a node failure will be detected
by ZooKeeper. </td></tr></tbody></table>
+<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> replicas </td><td colspan="1" rowspan="1"
class="confluenceTd"> 2 </td><td colspan="1" rowspan="1" class="confluenceTd">
The number of store replicas that will exist in the cluster. At least
(replicas/2)+1 nodes must be online to avoid service outage. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> securityToken </td><td
colspan="1" rowspan="1" class="confluenceTd"> </td><td colspan="1"
rowspan="1" class="confluenceTd"> A security token which must match on all
replication nodes for them to accept each other's replication requests.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkAddress
</td><td colspan="1" rowspan="1" class="confluenceTd">
127.0.0.1:2181 </td><td colspan="1" rowspan="1" class="confluenceTd"> A
comma-separated list of ZooKeeper servers. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> zkPassword </td><td colspan="1" rowspan="1"
class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"> The password to use when connecting to the ZooKeeper
server. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkPath
</td><td colspan="1" rowspan="1" class="confluenceTd"> /default </td><td
colspan="1" rowspan="1" class="confluenceTd"> The path to the ZooKeeper
directory where Master/Slave election information will be exchanged.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> zkSessionTmeout
</td><td colspan="1" rowspan="1" class="confluenceTd"> 2s </td><td colspan="1"
rowspan="1" class="confluenceTd"> How quickly a node failure will be detected
by ZooKeeper. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
sync </td><td colspan="1" rowspan="1" class="confluenceTd"> quorum_mem </td><td colspan="1"
rowspan="1" class="confluenceTd"> Controls where updates reside before
being considered complete. This setting is a comma-separated list of the
following options: local_mem, local_disk, remote_mem, remote_disk, quorum_mem,
quorum_disk. If you combine two settings for a target, the stronger guarantee
is used. For example, configuring 'local_mem, local_disk' results in
'local_disk'. 'quorum_mem' is the same as 'local_mem, remote_mem' and
'quorum_disk' is the same as 'local_disk, remote_disk'.
</td></tr></tbody></table>
</div>
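For context, a sketch of how the shared properties above might be set on a broker's persistence adapter. This is illustrative only, assuming the `<replicatedLevelDB>` persistence adapter element; the hostnames, token, and password values are hypothetical placeholders, and `replicas="3"` is an example rather than the default:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
  <persistenceAdapter>
    <!-- These attributes must be identical on every node in the
         replication set; values here are illustrative placeholders. -->
    <replicatedLevelDB
        replicas="3"
        securityToken="example-secret"
        zkAddress="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
        zkPassword="example-password"
        zkPath="/activemq/leveldb-stores"
        sync="quorum_mem"/>
  </persistenceAdapter>
</broker>
```

With `replicas="3"`, at least (3/2)+1 = 2 nodes must be online for the service to stay available.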
+
<p>Different replication sets can share the same <tt>zkPath</tt> as long as they
have different <tt>brokerName</tt>.</p>
<p>The following configuration properties can be unique per node:</p>
<div class="table-wrap">
-<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> bind </td><td colspan="1" rowspan="1"
class="confluenceTd"> tcp://0.0.0.0:61619 </td><td colspan="1" rowspan="1"
class="confluenceTd"> When this node becomes a master, it will bind the
configured address and port to service the replication protocol.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> hostname
</td><td colspan="1" rowspan="1" class="confluenceTd"> </td><td
colspan="1" rowspan="1" class="confluenceTd"> The host name used to advertise
the replication service when this node becomes the master. If not set it will
be automatically determined. </td></tr></tbody></table>
+<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> bind </td><td colspan="1" rowspan="1"
class="confluenceTd"> tcp://0.0.0.0:61619 </td><td colspan="1" rowspan="1"
class="confluenceTd"> When this node becomes a master, it will bind the
configured address and port to service the replication protocol. Using dynamic
ports is also supported. Just configure with
'tcp://0.0.0.0:0'.</td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> hostname </td><td colspan="1" rowspan="1"
class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"> The host name used to advertise the replication service
when this node becomes the master. If not set it will be automatically
determined. </td></tr></tbody></table>
</div>
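The per-node properties above could be sketched like this, again assuming the `<replicatedLevelDB>` element; the hostname is a hypothetical placeholder:

```xml
<!-- bind="tcp://0.0.0.0:0" requests a dynamically assigned port, as the
     table above describes. hostname is only needed when the advertised
     name cannot be determined automatically; broker1.example.com is a
     hypothetical value. -->
<replicatedLevelDB
    bind="tcp://0.0.0.0:0"
    hostname="broker1.example.com"/>
```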
@@ -148,7 +149,7 @@ failover:(tcp:<span class="code-comment"
<h3><a shape="rect"
name="ReplicatedLevelDBStore-StandardLevelDBStoreProperties"></a>Standard
LevelDB Store Properties</h3>
<div class="table-wrap">
-<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> directory </td><td colspan="1" rowspan="1"
class="confluenceTd"> "LevelDB" </td><td colspan="1" rowspan="1"
class="confluenceTd"> The directory which the store will use to hold it's data
files. The store will create the directory if it does not already exist.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> readThreads
</td><td colspan="1" rowspan="1" class="confluenceTd"> 10 </td><td colspan="1"
rowspan="1" class="confluenceTd"> The number of concurrent IO read threads to
allowed. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> sync
</td><td colspan="1" rowspan="1" class="confluenceTd"> true </td><td
colspan="1" rowspan="1" class="confluenceTd">
If set to false, then the store does not sync logging operations to disk
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> logSize
</td><td colspan="1" rowspan="1" class="confluenceTd"> 104857600 (100 MB)
</td><td colspan="1" rowspan="1" class="confluenceTd"> The max size (in bytes)
of each data log file before log file rotation occurs. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> logWriteBufferSize </td><td
colspan="1" rowspan="1" class="confluenceTd"> 4194304 (4 MB) </td><td
colspan="1" rowspan="1" class="confluenceTd"> That maximum amount of log data
to build up before writing to the file system. </td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"> verifyChecksums </td><td colspan="1"
rowspan="1" class="confluenceTd"> false </td><td colspan="1" rowspan="1"
class="confluenceTd"> Set to true to force checksum verification of all data
that is read from the file system. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd">
paranoidChecks </td><td colspan="1" rowspan="1" class="confluenceTd"> false
</td><td colspan="1" rowspan="1" class="confluenceTd"> Make the store error out
as soon as possible if it detects internal corruption. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> indexFactory </td><td colspan="1"
rowspan="1" class="confluenceTd"> org.fusesource.leveldbjni.JniDBFactory,
org.iq80.leveldb.impl.Iq80DBFactory </td><td colspan="1" rowspan="1"
class="confluenceTd"> The factory classes to use when creating the LevelDB
indexes </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
indexMaxOpenFiles </td><td colspan="1" rowspan="1" class="confluenceTd"> 1000
</td><td colspan="1" rowspan="1" class="confluenceTd"> Number of open files
that can be used by the index. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> indexBlockRestartInterval </td><td colspan="1"
rowspan="1" class="confluenceTd"> 16 </td><td colspan="1" rowspan="1"
class="confluenceTd"> Number keys between restart points for delta encoding of keys.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
indexWriteBufferSize </td><td colspan="1" rowspan="1" class="confluenceTd">
6291456 (6 MB) </td><td colspan="1" rowspan="1" class="confluenceTd"> Amount of
index data to build up in memory before converting to a sorted on-disk file.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> indexBlockSize
</td><td colspan="1" rowspan="1" class="confluenceTd"> 4096 (4 K) </td><td
colspan="1" rowspan="1" class="confluenceTd"> The size of index data packed per
block. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
indexCacheSize </td><td colspan="1" rowspan="1" class="confluenceTd"> 268435456
(256 MB) </td><td colspan="1" rowspan="1" class="confluenceTd"> The maximum
amount of off-heap memory to use to cache index blocks. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> indexCompression </td><td
colspan="1" rowspan="1"
class="confluenceTd"> snappy </td><td colspan="1" rowspan="1"
class="confluenceTd"> The type of compression to apply to the index blocks.
Can be snappy or none. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> logCompression </td><td colspan="1" rowspan="1"
class="confluenceTd"> none </td><td colspan="1" rowspan="1"
class="confluenceTd"> The type of compression to apply to the log records. Can
be snappy or none. </td></tr></tbody></table>
+<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1"
class="confluenceTh"> default value </th><th colspan="1" rowspan="1"
class="confluenceTh"> Comments </th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> directory </td><td colspan="1" rowspan="1"
class="confluenceTd"> "LevelDB" </td><td colspan="1" rowspan="1"
class="confluenceTd"> The directory which the store will use to hold its data
files. The store will create the directory if it does not already exist.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> readThreads
</td><td colspan="1" rowspan="1" class="confluenceTd"> 10 </td><td colspan="1"
rowspan="1" class="confluenceTd"> The number of concurrent IO read threads
allowed. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
logSize </td><td colspan="1" rowspan="1" class="confluenceTd"> 104857600 (100
MB) </td><td colspan="1" rowspan="1" class="confluenceTd"> The max size (in
bytes) of each data log file before log file
rotation occurs. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> logWriteBufferSize </td><td colspan="1" rowspan="1"
class="confluenceTd"> 4194304 (4 MB) </td><td colspan="1" rowspan="1"
class="confluenceTd"> The maximum amount of log data to build up before
writing to the file system. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> verifyChecksums </td><td colspan="1" rowspan="1"
class="confluenceTd"> false </td><td colspan="1" rowspan="1"
class="confluenceTd"> Set to true to force checksum verification of all data
that is read from the file system. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> paranoidChecks </td><td colspan="1" rowspan="1"
class="confluenceTd"> false </td><td colspan="1" rowspan="1"
class="confluenceTd"> Make the store error out as soon as possible if it
detects internal corruption. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> indexFactory </td><td colspan="1" rowspan="1"
class="confluenceTd"> org.fusesource.leveldbjni.JniDBFactory,
org.iq80.leveldb.impl.Iq80DBFactory </td><td colspan="1" rowspan="1"
class="confluenceTd"> The factory classes to use when creating the LevelDB
indexes </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
indexMaxOpenFiles </td><td colspan="1" rowspan="1" class="confluenceTd"> 1000
</td><td colspan="1" rowspan="1" class="confluenceTd"> Number of open files
that can be used by the index. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> indexBlockRestartInterval </td><td colspan="1"
rowspan="1" class="confluenceTd"> 16 </td><td colspan="1" rowspan="1"
class="confluenceTd"> Number of keys between restart points for delta encoding of
keys. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
indexWriteBufferSize </td><td colspan="1" rowspan="1" class="confluenceTd">
6291456 (6 MB) </td><td colspan="1" rowspan="1" class="confluenceTd"> Amount
of index data to build up in memory before converting to
a sorted on-disk file. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> indexBlockSize </td><td colspan="1" rowspan="1"
class="confluenceTd"> 4096 (4 K) </td><td colspan="1" rowspan="1"
class="confluenceTd"> The size of index data packed per block.
</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> indexCacheSize
</td><td colspan="1" rowspan="1" class="confluenceTd"> 268435456 (256 MB)
</td><td colspan="1" rowspan="1" class="confluenceTd"> The maximum amount of
off-heap memory to use to cache index blocks. </td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"> indexCompression </td><td colspan="1"
rowspan="1" class="confluenceTd"> snappy </td><td colspan="1" rowspan="1"
class="confluenceTd"> The type of compression to apply to the index blocks.
Can be snappy or none. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> logCompression </td><td colspan="1" rowspan="1"
class="confluenceTd"> none </td><td colspan="1" rowspan="1"
class="confluenceTd"> The type of compression to apply to the log records. Can
be snappy or none. </td></tr></tbody></table>
</div>
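A tuning sketch using the standard store properties from the table above, again assuming the `<replicatedLevelDB>` element. Every attribute value shown is the documented default from the table except `directory` and `logSize`, which are changed for illustration (`209715200` is 200 MB):

```xml
<!-- Illustrative tuning example; only directory and logSize deviate
     from the documented defaults listed in the table above. -->
<replicatedLevelDB
    directory="activemq-data/leveldb"
    readThreads="10"
    logSize="209715200"
    verifyChecksums="false"
    paranoidChecks="false"
    indexMaxOpenFiles="1000"
    indexBlockRestartInterval="16"
    indexWriteBufferSize="6291456"
    indexBlockSize="4096"
    indexCacheSize="268435456"
    indexCompression="snappy"
    logCompression="none"/>
```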
</div>