Author: buildbot
Date: Fri Dec 9 16:22:55 2016
New Revision: 1002515
Log:
Production update by buildbot for activemq
Modified:
websites/production/activemq/content/cache/main.pageCache
websites/production/activemq/content/kahadb.html
Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
Modified: websites/production/activemq/content/kahadb.html
==============================================================================
--- websites/production/activemq/content/kahadb.html (original)
+++ websites/production/activemq/content/kahadb.html Fri Dec 9 16:22:55 2016
@@ -81,23 +81,21 @@
<tbody>
<tr>
<td valign="top" width="100%">
-<div class="wiki-content maincontent"><p>KahaDB is a file-based persistence
database that is local to the message broker that is using it. It has been
optimised for fast persistence and is the default storage mechanism from
ActiveMQ 5.4 onwards. KahaDB uses fewer file descriptors and provides faster
recovery than its predecessor, the <a shape="rect"
href="amq-message-store.html">AMQ Message Store</a>.</p><h2
id="KahaDB-Configuration">Configuration</h2><p>You can configure ActiveMQ to
use KahaDB for its persistence adapter - like below:</p><div class="code panel
pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
-<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;">
- <broker brokerName="broker" ... >
+<div class="wiki-content maincontent"><p>KahaDB is a file-based persistence
database that is local to the message broker that is using it. It has been
optimized for fast persistence. It is the default storage mechanism since
<strong>ActiveMQ 5.4</strong>. KahaDB uses fewer file descriptors and provides
faster recovery than its predecessor, the <a shape="rect"
href="amq-message-store.html">AMQ Message Store</a>.</p><h2
id="KahaDB-Configuration">Configuration</h2><p>To use KahaDB as the broker's
persistence adapter, configure ActiveMQ as follows (example):</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
+<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"> <broker brokerName="broker">
<persistenceAdapter>
<kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
</persistenceAdapter>
- ...
</broker>
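        <!-- Illustrative sketch only (not part of the original example):
             any property from the KahaDB Properties table below can be set
             as an attribute on the kahaDB element; the values shown here are
             hypothetical, not recommendations.
        <kahaDB directory="activemq-data"
                journalMaxFileLength="32mb"
                checkForCorruptJournalFiles="true"
                journalDiskSyncStrategy="periodic"
                journalDiskSyncInterval="1000"/>
        -->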
-
</pre>
-</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>property name</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>default value</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, corrupted indexes
found at startup will be archived (not deleted).</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>If enabled, will move a message data log to
the archive directory instead of deleting it.</p></t
d></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will check for
corrupted journal files on startup and try to recover
them.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkpointInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Create a checksum for a journal
file - to enable checking for corrupted journals.</p><p>Before <strong>ActiveMQ
5.9.0</strong>: the default i
s <strong><code>false</code></strong>.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>cleanupInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Time (ms) between checks for
discarding/moving message data logs that are no longer
used.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled this
value controls how many store GC cycles must be completed with no other files
being cleaned up before the compaction logic is triggered to possibly compact
older acknowledgements spread across journal files into a new log file.
The lower the value, the sooner compaction may occur, which can impact
performance if it runs too often.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled
this value controls whether compaction is run when the store is still growing
or if it should only occur when the store has stopped growing (either due to
idle or store limits reached).  If enabled the compaction runs regardless
of the store still having room or being active which can decrease overall
performance but reclaim space faster. </p></td></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confl
uenceTd"><p>Enable the dispatching of Queue messages to interested clients to
happen concurrently with message storage.</p></td></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic
messages to interested clients to happen concurrently with message
storage</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small
aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body">Enabling this property is not
recommended.</div></div></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>directory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to
use to store the message store data and log
files.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>directoryArchive</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to
once all the messages they contain have been
consumed.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: this setting controls whether the store will perform
periodic compaction of older journal log files that contain only Message
acknowledgements.  By compacting these older acknowledgements into new
journal log files the older files can be removed freeing space and allowing
the message store to
continue to operate without hitting store size limits.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If set, will asynchronously
write indexes.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal
write is followed by a disk sync (JMS durability requirement).</span></p><div
class="confluence-information-macro confluence-information-macro-warning"><span
class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body">This property is deprecated as of
ActiveMQ 5.14.0, see <span style="color: rgb(34,34,34)
;">journalDiskSyncStrategy for version 5.14.0 and
newer.</span></div></div></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><code><span>journalDiskSyncStrategy</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd">always</td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this
setting configures the disk sync policy. The default strategy is set to
<strong>always</strong>.</p><ul><li><strong><code>always</code></strong>
<span>Ensure every journal write is followed by a disk sync (JMS durability
requirement). This is the safest option but is also the slowest because it
requires a sync after every message write. This is equivalent to the deprecated
property enableJournalDiskSyncs being set to
true.</span></li><li><strong><code>periodic</code></strong> <span style="color:
rgb(34,34,34);">The disk will be synced at set intervals (if a write has
occurred) instead of after every journal write which will reduce the load o
n the disk and should improve throughput</span>. The disk will also be synced
when rolling over to a new journal file. The default setting is set to 1 second
which generally provides very good performance while being safer than never
disk syncing as only up to 1 second of data can be lost. See
<strong>journalDiskSyncInterval</strong> to change the frequency of disk
syncs.</li><li><strong><code>never</code></strong> A sync will never be
explicitly called and it will be up to the operating system to flush to disk.
This is equivalent to setting the deprecated enableJournalDiskSyncs property
to false. This is the fastest option but is the least safe as there's no
guarantee as to when data is flushed to disk so message loss can occur on
failure.</li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><code><span>journalDiskSyncInterval</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd">1000</td><td colspan="1"
rowspan="1" class="confluenceTd">Interval (ms
) for when to perform a disk sync when
<strong>journalDiskSyncStrategy</strong> is set to <strong>periodic</strong>. A
sync will only be performed if a write has occurred to the journal since the
last disk sync or when the journal rolls over to a new journal
file.</td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will ignore a
missing message log file.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexCacheSize</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Number of index pages cached in
memory.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexDirectory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"> <
/td><td colspan="1" rowspan="1" class="confluenceTd"><p><span>From
<strong>ActiveMQ 5.10.0</strong>: If set, configures where the KahaDB index
files (<strong><code>db.data</code></strong>
and <strong><code>db.redo</code></strong>) will be stored. If not set, the
index files are stored in the directory specified by
the <strong><code>directory</code></strong>
attribute.</span></p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a
batch.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size
of the message data logs.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The maximum number of
asynchronous messages that will be queued awaiting storage (should be the same
as the number of concurrent MessageProducers).</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>preallocationScope</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><code>entire_journal</code></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this
setting configures how journal data files are preallocated. The default
strategy preallocates the journal file on first use using the appender
thread. </p><ul><li><strong><code>entire_journal_async</code></strong>
will preallocate ahead of time in a separate
thread.</li><li><strong><code>none</code></strong> disables
preallocation.</li></ul><p>On SSD,
using <strong><code>entire_journal_async</code></strong> avoids delaying
writes pending preallocation on first use.</p><p><strong>Note</strong>: on HDD
the additional thread contention for disk has a negative impact. Therefore use
the default.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>preallocationStrategy</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>sparse_file</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.12.0</strong>: This setting configures how the broker will try to
preallocate the journal files when a new journal file is
needed.</p><ul><li><strong><code>sparse_file</code></strong> - sets the file
length, but does not populate it with any
data.</li><li><strong><code>os_kernel_copy</code></strong> - delegates the
preallocation to the Operating
System.</li><li><strong><code>zeros</code></strong>  - each preallocated
journal file contains nothing but <strong><
code>0x00</code></strong> throughout.</li></ul></td></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>storeOpenWireVersion</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>11</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Determines the version of
OpenWire commands that are marshaled to the KahaDB journal. </p><p>Before
<strong>ActiveMQ 5.12.0</strong>: the default value is
<strong><code>6</code></strong>.</p><p>Some features of the broker depend on
information stored in the OpenWire commands from newer protocol revisions and
these may not work correctly if the store version is set to a lower
value.  KahaDB stores from broker versions greater than 5.9.0 will in many
cases still be readable by the broker but will cause the broker to continue
using the older store version meaning newer features may not work as
intended. </p><p>For KahaDB stores that were created in versions prior to
<strong>ActiveMQ 5.9.0</str
ong> it will be necessary to manually set
<strong><code>storeOpenWireVersion="6"</code></strong> in order to start a
broker without error.</p></td></tr></tbody></table></div><p>For tuning locking
properties please take a look at <a shape="rect"
href="pluggable-storage-lockers.html">Pluggable storage lockers</a></p><h3
id="KahaDB-Slowfilesystemaccessdiagnosticlogging">Slow file system access
diagnostic logging</h3><p>You can configure a non-zero threshold in
milliseconds for database updates. If a database operation is slower than that
threshold (for example if you set it to 500), you may see messages like:</p><div
class="panel" style="border-width: 1px;"><div class="panelContent">
+</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>property name</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>default value</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, corrupted indexes found at startup will be
archived (not deleted).</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>, will
move a message data log t
o the archive directory instead of deleting it.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, will check for corrupt journal files on
startup and try to recover them.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkpointInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Create a checksum for a journal
file. The presence of a checks
um is required in order for the persistence adapter to be able to detect
corrupt journal files.</p><p>Before <strong>ActiveMQ 5.9.0</strong>: the
default is <strong><code>false</code></strong>.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>cleanupInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>The interval (in ms) between consecutive
checks that determine which journal files, if any, are eligible for removal
from the message store. An eligible journal file is one that has no outstanding
references.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled this
value controls how
many store GC cycles must be completed with no other files being cleaned up
before the compaction logic is triggered to possibly compact older
acknowledgements spread across journal files into a new log file. The
lower the value, the sooner compaction may occur, which can impact
performance if it runs too often.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled
this value controls whether compaction is run when the store is still growing
or if it should only occur when the store has stopped growing (either due to
idle or store limits reached). If enabled, compaction runs regardless
of whether the store still has room or is active, which can decrease overall
performance but reclaims space faster.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Queue
messages to interested clients to happen concurrently with message
storage.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic
messages to interested clients to happen concurrently with message
storage.</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small
aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body">Enabling this prop
erty is not recommended.</div></div></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>directory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to
use to store the message store data and log files.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>directoryArchive</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to
once all the messages they contain have been
consumed.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: this setting controls wheth
er the store will perform periodic compaction of older journal log files that
contain only Message acknowledgements. By compacting these older
acknowledgements into new journal log files the older files can be removed
freeing space and allowing the message store to continue to operate without
hitting store size limits.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, the index is updated
asynchronously.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal
write is followed by a disk sync (JMS durability requirement).</span></p><div
class="co
nfluence-information-macro confluence-information-macro-warning"><span
class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>This property is deprecated as of
<strong>ActiveMQ</strong> <strong>5.14.0</strong>.</p><p>From
<strong>ActiveMQ</strong> <strong>5.14.0</strong>: see <span style="color:
rgb(34,34,34);"><strong><code>journalDiskSyncStrategy</code></strong>.</span></p></div></div></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><code><span>journalDiskSyncStrategy</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>always</code></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: this setting configures the disk sync policy. The
available sync strategies are (in order of decreasing safety and increasing
performance):</p><ul><li><strong><code>always</code></strong> <span>Ensure
every journal write is follow
ed by a disk sync (JMS durability requirement). This is the safest option but
is also the slowest because it requires a sync after every message write. This
is equivalent to the deprecated
property <strong><code>enableJournalDiskSyncs=true</code></strong>.</span></li><li><strong><code>periodic</code></strong>
<span style="color: rgb(34,34,34);">The disk will be synced at set intervals
(if a write has occurred) instead of after every journal write which will
reduce the load on the disk and should improve throughput</span>. The disk will
also be synced when rolling over to a new journal file. The default interval is
1 second. The default interval offers very good performance, whilst being safer
than <strong><code>never</code></strong> disk syncing, as data loss is
limited to a maximum of 1 second's worth. See
<strong><code>journalDiskSyncInterval</code></strong> to change the frequency
of disk syncs.</li><li><strong><code>never</code></strong> A sync will never be
explicitly
called and it will be up to the operating system to flush to disk. This is
equivalent to setting the deprecated property
<strong><code>enableJournalDiskSyncs=false</code></strong>. This is the fastest
option but is the least safe as there's no guarantee as to when data is flushed
to disk. Consequently message loss <em>can</em> occur on broker
failure.</li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><code><span>journalDiskSyncInterval</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>1000</code></td><td
colspan="1" rowspan="1" class="confluenceTd">Interval (ms) for when to perform
a disk sync
when <strong><code>journalDiskSyncStrategy=periodic</code></strong>. A
sync will only be performed if a write has occurred to the journal since the
last disk sync or when the journal rolls over to a new journal
file.</td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1"
rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
reports of missing journal files are ignored.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>indexCacheSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of index pages cached in
memory.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexDirectory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"><p><span>From <strong>ActiveMQ 5.10.0</strong>: If set,
configures where the KahaDB index files (<strong><code>db.data</code></strong>
and <strong><code>db.redo</code></strong>) will be stored. If not set, the
index files are stored in the directory specified by
the <strong><code>directory</code>
</strong> attribute.</span></p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a
batch.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size
of the message data logs.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>The maximum number of asynchronous messages
that will be queued awaiting storage (should be the same as the number of
concurrent MessageProducers).</p></td></t
r><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>preallocationScope</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><code>entire_journal</code></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: this setting configures how journal data files are
preallocated. The default strategy preallocates the journal file on first use
using the appender
thread. </p><ul><li><strong><code>entire_journal_async</code></strong>
will preallocate ahead of time in a separate
thread.</li><li><strong><code>none</code></strong> disables
preallocation.</li></ul><p>On SSD,
using <strong><code>entire_journal_async</code></strong> avoids delaying
writes pending preallocation on first use.</p><p><strong>Note</strong>: on HDD
the additional thread contention for disk has a negative impact. Therefore use
the default.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>preallocationStrategy</code></p></t
d><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>sparse_file</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.12.0</strong>: This setting configures how the broker will try to
preallocate the journal files when a new journal file is
needed.</p><ul><li><strong><code>sparse_file</code></strong> - sets the file
length, but does not populate it with any
data.</li><li><strong><code>os_kernel_copy</code></strong> - delegates the
preallocation to the Operating
System.</li><li><strong><code>zeros</code></strong>  - each preallocated
journal file contains nothing but <strong><code>0x00</code></strong>
throughout.</li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>storeOpenWireVersion</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>11</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Determines the version of
OpenWire commands that are marshaled to the KahaDB
journal. </p><p>Before <strong>ActiveMQ 5.12.0</strong>: the default
value is <strong><code>6</code></strong>.</p><p>Some features of the broker
depend on information stored in the OpenWire commands from newer protocol
revisions and these may not work correctly if the store version is set to a
lower value.  KahaDB stores from broker versions greater than 5.9.0 will
in many cases still be readable by the broker but will cause the broker to
continue using the older store version meaning newer features may not work as
intended. </p><p>For KahaDB stores that were created in versions prior to
<strong>ActiveMQ 5.9.0</strong> it will be necessary to manually set
<strong><code>storeOpenWireVersion="6"</code></strong> in order to start a
broker without error.</p></td></tr></tbody></table></div><div
class="confluence-information-macro
confluence-information-macro-information"><span class="aui-icon aui-icon-small
aui-iconfont-info confluence-information-macro-icon"></span><div c
lass="confluence-information-macro-body">For tuning locking properties see the
options listed at <a shape="rect"
href="pluggable-storage-lockers.html">Pluggable storage
lockers</a>.</div></div><h3
id="KahaDB-SlowFileSystemAccessDiagnosticLogging">Slow File System Access
Diagnostic Logging</h3><p>You can configure a non-zero threshold, in
milliseconds, for database updates. If a database operation is slower than that
threshold (for example, if you set it to <strong><code>500</code></strong>), you
may see messages like:</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
<p><code>Slow KahaDB access: cleanup took 1277 |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal Checkpoint
Worker</code></p>
</div></div><p>The threshold used to log these messages is set via a system
property; adjust it to your disk speed so that runtime anomalies are easy to
spot:</p><div class="panel" style="border-width:
1px;"><div class="panelContent">
<p><code>-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500</code></p>
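<p>For example (a sketch assuming a Unix shell and a startup script that passes <code>ACTIVEMQ_OPTS</code> through to the broker JVM), the threshold can be raised to 1500 ms before launching the broker:</p>
<p><code>export ACTIVEMQ_OPTS="-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500"</code></p>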
</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it is possible to
distribute destination stores across multiple kahaDB persistence adapters. When
would you do this? If you have one fast producer/consumer destination and
another periodic producer destination with irregular batch consumption, then
disk usage can grow out of hand as unconsumed messages become distributed
across multiple journal files. Having a separate journal for each ensures
minimal journal usage. Also, some destinations may be critical and require disk
synchronization while others may not. In these cases you can use
the <strong><code>mKahaDB</code></strong> persistence adapter and filter
destinations using wildcards, just like with destination policy entries.</p><h3
id="KahaDB-Transactions">Transactions</h3><p>Transactions can span multiple
journals if the destinations are distributed. This means that two-phase
completion is necessary, which does impose a performance (additional disk sync)
penalty to record the commit outcome. This penalty is only imposed if more than
one journal is involved in a transaction.</p><h2
id="KahaDB-Configuration.1">Configuration</h2><p>Each instance
of <strong><code>kahaDB</code></strong> can be configured independently.
If no destination is supplied to a
<strong><code>filteredKahaDB</code></strong>, the implicit default value will
match any destination, queue or topic. This is a handy catch-all. If no
matching persistence adapter can be found, destination creation will fail with
an exception. The <strong><code>filteredKahaDB</code></strong> shares its
wildcard matching rules with <a shape="rect"
href="per-destination-policies.html">Per Destination Policies</a>.</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"><broker brokerName="broker">

 <persistenceAdapter>
<mKahaDB directory="${activemq.base}/data/kahadb">
<filteredPersistenceAdapters>
<!-- match all queues -->
...
</filteredPersistenceAdapters>
</mKahaDB>
</persistenceAdapter>
</broker>
</pre>
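<p>For illustration (this fragment is a sketch; the <code>journalMaxFileLength</code> value is an assumption, not a recommendation), a filtered entry that matches all queues uses the same wildcard syntax as per destination policies:</p>
<pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;"><filteredKahaDB queue=">">
  <persistenceAdapter>
    <!-- dedicated store for everything matching the wildcard -->
    <kahaDB journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</filteredKahaDB>
</pre>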
</div></div><h3
id="KahaDB-AutomaticPerDestinationPersistenceAdapter">Automatic Per Destination
Persistence Adapter</h3><p>Set
<strong><code>perDestination="true"</code></strong> on the catch-all
<strong><code>filteredKahaDB</code></strong> entry, i.e. the one with no
explicit destination set. Each matching destination will then be assigned its
own <strong><code>kahaDB</code></strong> instance.</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"><broker brokerName="broker" ... >
<persistenceAdapter>
<mKahaDB directory="${activemq.base}/data/kahadb">
...
</broker>
</pre>
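<p>A fuller sketch of this arrangement (the nested adapter settings are assumptions for illustration only) places <code>perDestination="true"</code> on a catch-all entry that carries no destination attribute:</p>
<pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;"><mKahaDB directory="${activemq.base}/data/kahadb">
  <filteredPersistenceAdapters>
    <!-- one kahaDB instance per matching destination -->
    <filteredKahaDB perDestination="true">
      <persistenceAdapter>
        <kahaDB journalMaxFileLength="32mb"/>
      </persistenceAdapter>
    </filteredKahaDB>
  </filteredPersistenceAdapters>
</mKahaDB>
</pre>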
</div></div><div class="confluence-information-macro
confluence-information-macro-information"><p class="title">Note:</p><span
class="aui-icon aui-icon-small aui-iconfont-info
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Specifying both
<strong><code>perDestination="true"</code></strong>
<em>and</em> <strong><code>queue=">"</code></strong> on the
same <strong><code>filteredKahaDB</code></strong> entry has not been
tested. It <em>may</em> result in:</p><pre>Reason:
java.io.IOException: File '/opt/java/apache-activemq-5.9.0/data/mKahaDB/lock'
could not be locked as lock is already held for this jvm.
</pre></div></div></div>
</td>
<td valign="top">
<div class="navigation">