Author: buildbot
Date: Thu Jul 19 18:24:32 2018
New Revision: 1032753
Log:
Production update by buildbot for activemq
Modified:
websites/production/activemq/content/cache/main.pageCache
websites/production/activemq/content/kahadb.html
Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
Modified: websites/production/activemq/content/kahadb.html
==============================================================================
--- websites/production/activemq/content/kahadb.html (original)
+++ websites/production/activemq/content/kahadb.html Thu Jul 19 18:24:32 2018
@@ -81,18 +81,18 @@
<tr>
<td valign="top" width="100%">
<div class="wiki-content maincontent"><p>KahaDB is a file based persistence
database that is local to the message broker that is using it. It has been
optimized for fast persistence. It has been the default storage mechanism since
<strong>ActiveMQ 5.4</strong>. KahaDB uses fewer file descriptors and provides
faster recovery than its predecessor, the <a shape="rect"
href="amq-message-store.html">AMQ Message Store</a>.</p><h2
id="KahaDB-Configuration">Configuration</h2><p>To use KahaDB as the broker's
persistence adapter configure ActiveMQ as follows (example):</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
-<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"> <broker brokerName="broker">
+<pre class="brush: xml; gutter: false; theme: Default"> <broker
brokerName="broker">
<persistenceAdapter>
<kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
</persistenceAdapter>
</broker>
</pre>
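For reference, a fuller sketch combining several of the properties documented in the table below; the attribute values are illustrative examples, not recommendations:

```xml
<!-- Illustrative kahaDB configuration; all attributes are documented below,
     but the values shown here are examples only -->
<broker brokerName="broker">
  <persistenceAdapter>
    <kahaDB directory="activemq-data"
            journalMaxFileLength="32mb"
            journalDiskSyncStrategy="periodic"
            journalDiskSyncInterval="1000"
            checkForCorruptJournalFiles="true"
            cleanupInterval="30000"/>
  </persistenceAdapter>
</broker>
```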
-</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>Property</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Default</p></th><th colspan="1" rowspan="1"
class="confluenceTh"><p>Comments</p></th></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, corrupted indexes found at startup will be
archived (not deleted).</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>, will
move a message data log to the archive directory instead of deleting
it.</p></td></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, will check for corrupt journal files on
startup and try to recover them.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkpointInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Create a checksum for a journal
file. The presence of a checksum is required in order for the persistence
adapter to be able to detect corrupt journal
files.</p><p>Before <strong>ActiveMQ 5.9.0</strong>: the default is
<strong><code>false</code></strong>.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>cleanupInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The interval (in ms) between
consecutive checks that determine which journal files, if any, are eligible for
removal from the message store. An eligible journal file is one that has no
outstanding references.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled this
value controls how many store
GC cycles must be completed with no other files being cleaned up before the
compaction logic is triggered to possibly compact older acknowledgements spread
across journal files into a new log file. The lower the value, the faster
compaction may occur, which can impact performance if it runs too
often.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled
this value controls whether compaction is run when the store is still growing
or if it should only occur when the store has stopped growing (either due to
idle or store limits reached). If enabled, compaction runs regardless of
whether the store still has room or is active, which can decrease overall
performance but reclaims space faster.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Queue
messages to interested clients to happen concurrently with message
storage.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic
messages to interested clients to happen concurrently with message
storage.</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small
aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Enabling this property is
not recommended.</p></div></div></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>directory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to
use to store the message store data and log files.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>directoryArchive</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to
when all the messages they contain have been
consumed.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: this setting controls whether the store will perform
periodic compaction of older journal log files that
contain only Message acknowledgements. By compacting these older
acknowledgements into new journal log files the older files can be removed
freeing space and allowing the message store to continue to operate without
hitting store size limits.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, the index is updated
asynchronously.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal
write is followed by a disk sync (JMS durability requirement).</span></p><div
class="confluence-information-macro confluence-information-macro-warning"><span
class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>This property is deprecated as of
<strong>ActiveMQ</strong> <strong>5.14.0</strong>.</p><p>From
<strong>ActiveMQ</strong> <strong>5.14.0</strong>: see <span style="color:
rgb(34,34,34);"><strong><code>journalDiskSyncStrategy</code></strong>.</span></p></div></div></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, reports of missing journal files are
ignored.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexCacheSize</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of index pages cached in
memory.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexDirectory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"><p><span>From <strong>ActiveMQ 5.10.0</strong>: If set,
configures where the KahaDB index files (<strong><code>db.data</code></strong>
and <strong><code>db.redo</code></strong>) will be stored. If not set, the
index files are stored in the directory specified by
the <strong><code>directory</code></strong>
attribute.</span></p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a
batch.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code><span>journalDiskSyncInterval</span></code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>1000</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p>Interval (ms) for when to perform a disk sync
when <strong><code>journalDiskSyncStrategy=periodic</code></strong>. A
sync will only be performed if a write has occurred to the journal since the
last disk sync or when the journal rolls over to a new journal
file.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code><span>journalDiskSyncStrategy</span></code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>always</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this setting
configures the disk sync policy. The available sync strategies are (in
order of decreasing safety, and increasing
performance):</p><ul><li><p><strong><code>always</code></strong> <span>Ensure
every journal write is followed by a disk sync (JMS
durability requirement). This is the safest option but is also the slowest
because it requires a sync after every message write. This is equivalent to the
deprecated
property <strong><code>enableJournalDiskSyncs=true</code></strong>.</span></p></li><li><p><strong><code>periodic</code></strong>
<span style="color: rgb(34,34,34);">The disk will be synced at set intervals
(if a write has occurred) instead of after every journal write which will
reduce the load on the disk and should improve throughput</span>. The disk will
also be synced when rolling over to a new journal file. The default interval is
1 second. The default interval offers very good performance, whilst being safer
than <strong><code>never</code></strong> disk syncing, as data loss is
limited to a maximum of 1 second's worth. See
<strong><code>journalDiskSyncInterval</code></strong> to change the frequency
of disk syncs.</p></li><li><p><strong><code>never</code></strong> A sync will
never be explicitly called
and it will be up to the operating system to flush to disk. This is
equivalent to setting the deprecated property
<strong><code>enableJournalDiskSyncs=false</code></strong>. This is the fastest
option but is the least safe as there's no guarantee as to when data is flushed
to disk. Consequently message loss <em>can</em> occur on broker
failure.</p></li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size
of the message data logs.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>The maximum number of asynchronous messages
that will be queued awaiting storage (should be the same as the
number of concurrent MessageProducers).</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>preallocationScope</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>entire_journal</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this
setting configures how journal data files are preallocated. The default
strategy preallocates the journal file on first use using the appender
thread. </p><ul><li><strong><code>entire_journal</code></strong> will
preallocate the journal file on first use using the appender thread<br
clear="none"><p> </p></li><li><p><strong><code>entire_journal_async</code></strong>
will preallocate the journal file ahead of time in a separate
thread.</p></li><li><p><strong><code>none</code></strong> disables
preallocation.</p></li></ul><p>On SSD,
using <strong><code>entire_journal_async</code></strong> avoids delaying
writes pending preallocation on first use.</p><p><strong>Note</strong>: on
HDD the additional thread contention for disk has a
negative impact. Therefore use the default.</p></td></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>preallocationStrategy</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>sparse_file</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.12.0</strong>: This setting configures how the broker will try to
preallocate the journal files when a new journal file is
needed.</p><ul><li><p><strong><code>sparse_file</code></strong> - sets the file
length, but does not populate it with any
data.</p></li><li><p><strong><code>os_kernel_copy</code></strong> - delegates
the preallocation to the Operating
System.</p></li><li><p><strong><code>zeros</code></strong>  - each
preallocated journal file contains nothing but
<strong><code>0x00</code></strong> throughout.</p></li></ul></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd">
<p><code>storeOpenWireVersion</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>11</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p>Determines the version of OpenWire commands that are
marshaled to the KahaDB journal. </p><p>Before <strong>ActiveMQ
5.12.0</strong>: the default value is
<strong><code>6</code></strong>.</p><p>Some features of the broker depend on
information stored in the OpenWire commands from newer protocol revisions and
these may not work correctly if the store version is set to a lower
value.  KahaDB stores from broker versions greater than 5.9.0 will in many
cases still be readable by the broker but will cause the broker to continue
using the older store version meaning newer features may not work as
intended. </p><p>For KahaDB stores that were created in versions prior to
<strong>ActiveMQ 5.9.0</strong> it will be necessary to manually set
<strong><code>storeOpenWireVersion="6"</code></strong> in order to start
a broker without error.</p></td></tr></tbody></table></div><div
class="confluence-information-macro
confluence-information-macro-information"><span class="aui-icon aui-icon-small
aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>For tuning locking properties see
the options listed at <a shape="rect"
href="pluggable-storage-lockers.html">Pluggable storage
lockers.</a></p></div></div><p> </p><h3
id="KahaDB-SlowFileSystemAccessDiagnosticLogging">Slow File System Access
Diagnostic Logging</h3><p>You can configure a non-zero threshold in
milliseconds for database updates. If a database operation is slower than that
threshold (for example if you set it to <strong><code>500</code></strong>), you
may see messages like:</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
+</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="fixed-table confluenceTable"><colgroup
span="1"><col span="1" style="width: 290.0px;"><col span="1" style="width:
139.0px;"><col span="1" style="width: 1138.0px;"></colgroup><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p>Property</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>Default</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td colspan="1"
rowspan="1"
class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, corrupted indexes found at startup will be
archived (not deleted).</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p>If <strong><code>true</code></strong>, will move a
message data log to the archive directory instead of deleting
it.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, will check for corrupt journal files on
startup and try to recover them.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checkpointInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p>Create a checksum for a journal file. The presence of a
checksum is required in order for the persistence adapter to be able to detect
corrupt journal files.</p><p>Before <strong>ActiveMQ 5.9.0</strong>: the
default is <strong><code>false</code></strong>.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>cleanupInterval</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>The interval (in ms) between consecutive
checks that determine which journal files, if any, are eligible for removal
from the message store. An eligible journal file is one that has no outstanding
references.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the
acknowledgement compaction feature is enabled this value controls how many
store GC cycles must be completed with no other files being cleaned up before
the compaction logic is triggered to possibly compact older acknowledgements
spread across journal files into a new log file. The lower the value, the
faster compaction may occur, which can impact performance if it runs too
often.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.14.0</strong>: when the acknowledgement compaction feature is enabled
this value controls whether compaction is run when the store is still growing
or if it should only occur when the store has stopped growing (either due to
idle or store limits
reached). If enabled, compaction runs regardless of whether the store still
has room or is active, which can decrease overall performance but reclaims
space faster.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Queue
messages to interested clients to happen concurrently with message
storage.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic
messages to interested clients to happen concurrently with message
storage.</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small
aui-iconfont-error
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Enabling this property is not
recommended.</p></div></div></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>directory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to
use to store the message store data and log files.</p></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>directoryArchive</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to
when all the messages they contain have been
consumed.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this setting
controls whether the store will perform periodic compaction of older journal
log files that contain only Message acknowledgements. By compacting these older
acknowledgements into new journal log files the older files can be removed
freeing space and allowing the message store to continue to operate without
hitting store size limits.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, the index is updated
asynchronously.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal write is
followed by a disk sync (JMS durability requirement).</span></p><div
class="confluence-information-macro confluence-information-macro-warning"><span
class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>This property is deprecated as of
<strong>ActiveMQ</strong> <strong>5.14.0</strong>.</p><p>From
<strong>ActiveMQ</strong> <strong>5.14.0</strong>: see <span style="color:
rgb(34,34,34);"><strong><code>journalDiskSyncStrategy</code></strong>.</span></p></div></div></td></tr><tr><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If
<strong><code>true</code></strong>, reports of missing journal files are
ignored.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexCacheSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of index pages cached in
memory.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexDirectory</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"> </td><td colspan="1" rowspan="1"
class="confluenceTd"><p><span>From <strong>ActiveMQ 5.10.0</strong>: If set,
configures where the KahaDB index files (<strong><code>db.data</code></strong>
and <strong><code>db.redo</code></strong>) will be stored. If not set, the
index files are stored in the directory specified by
the <strong><code>directory</code></strong>
attribute.</span></p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a
batch.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code><span>journalDiskSyncInterval</span></code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Interval (ms) for when to
perform a disk sync
when <strong><code>journalDiskSyncStrategy=periodic</code></strong>. A
sync will only be performed if a write has occurred to the journal since the
last disk sync or when the journal rolls over to a new journal
file.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code><span>journalDiskSyncStrategy</span></code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>always</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this setting
configures the disk sync policy. The available sync strategies are (in
order of decreasing safety, and increasing
performance):</p><ul><li><p><strong><code>always</code></strong> <span>Ensure
every journal write is followed by a disk sync (JMS durability requirement).
This is the safest option but is also the slowest because it requires a sync
after every message write. This is equivalent to the deprecated
property <strong><code>enableJournalDiskSyncs=true</code></strong>.</span></p></li><li><p><strong><code>periodic</code></strong>
<span style="color: rgb(34,34,34);">The disk will be synced at set intervals
(if a write has occurred) instead of after every journal write which will
reduce the load on the disk and should improve throughput</span>. The disk will
also be synced when rolling over to a new journal file. The default interval is
1 second. The default interval offers very good performance, whilst being safer
than <strong><code>never</code></strong> disk syncing, as data loss is
limited to a maximum of 1 second's worth. See
<strong><code>journalDiskSyncInterval</code></strong> to change the frequency of disk
syncs.</p></li><li><p><strong><code>never</code></strong> A sync will never be
explicitly called and it will be up to the operating system to flush to disk.
This is equivalent to setting the deprecated property
<strong><code>enableJournalDiskSyncs=false</code></strong>. This is the fastest
option but is the least safe as there's no guarantee as to when data is flushed
to disk. Consequently message loss <em>can</em> occur on broker
failure.</p></li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size
of the message data logs.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The maximum number of
asynchronous messages that will be queued awaiting storage (should be the same
as the number of concurrent MessageProducers).</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>preallocationScope</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>entire_journal</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>: this
setting configures how journal data files are preallocated. The default
strategy preallocates the journal file on first use using the appender
thread. </p><ul><li><strong><code>entire_journal</code></strong> will
preallocate the journal file on first use using the appender thread<br
clear="none"><p> </p></li><li><p><strong><code>entire_journal_async</code></strong>
will preallocate the journal file ahead of time in a separate
thread.</p></li><li><p><strong><code>none</code></strong> disables
preallocation.</p></li></ul><p>On SSD,
using <strong><code>entire_journal_async</code></strong> avoids delaying
writes pending preallocation on first use.</p><p><strong>Note</strong>: on HDD
the additional thread contention for disk has a negative impact. Therefore use
the default.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>preallocationStrategy</code></p></td><td
colspan="1" rowspan="1"
class="confluenceTd"><p><code>sparse_file</code></p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ
5.12.0</strong>: This setting configures how the broker will try to
preallocate the journal files when a new journal file is
needed.</p><ul><li><p><strong><code>sparse_file</code></strong> - sets the file
length, but does not populate it with any
data.</p></li><li><p><strong><code>os_kernel_copy</code></strong> - delegates
the preallocation to the Operating
System.</p></li><li><p><strong><code>zeros</code></strong>  - each
preallocated
journal file contains nothing but <strong><code>0x00</code></strong>
throughout.</p></li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><code>purgeRecoveredXATransactions</code></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>false</code></td><td
colspan="1" rowspan="1" class="confluenceTd">From <strong>ActiveMQ
5.15.5</strong>: If <code><strong>true</strong></code>, will purge
<code>preparedTransactions</code> during recovery. This feature clears all
<code>preparedTransactions</code>; use the <code>RecoverXATransaction</code>
MBean to manually review specific transactions for commit or
rollback.</td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>storeOpenWireVersion</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>11</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Determines the version of
OpenWire commands that are marshaled to the KahaDB journal.</p><p>Before
<strong>ActiveMQ 5.12.0</strong>: the default value is
<strong><code>6</code></strong>.</p><p>Some broker features depend on
information carried in OpenWire commands from newer protocol revisions; these
may not work correctly if the store version is set to a lower value. A KahaDB
store created by a broker version newer than 5.9.0 will in many cases still
be readable, but the broker will keep using the older store version, so newer
features may not work as intended.</p><p>For KahaDB stores created prior to
<strong>ActiveMQ 5.9.0</strong>, it is necessary to manually set
<strong><code>storeOpenWireVersion="6"</code></strong> to start the
broker without error.</p></td></tr></tbody></table></div><div
class="confluence-information-macro
confluence-information-macro-information"><span class="aui-icon aui-icon-small
aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body
"><p>For tuning locking properties, see the options listed at <a shape="rect"
href="pluggable-storage-lockers.html">Pluggable Storage
Lockers</a>.</p></div></div><h3
id="KahaDB-SlowFileSystemAccessDiagnosticLogging">Slow File System Access
Diagnostic Logging</h3><p>You can configure a non-zero threshold in
milliseconds for database updates. If a database operation is slower than
that threshold (for example, if you set it to
<strong><code>500</code></strong>), you may see messages
like:</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
<p><code>Slow KahaDB access: cleanup took 1277 |
org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal Checkpoint
Worker</code></p>
</div></div><p>You can set this threshold with a system property; adjust it
to your disk speed so that runtime anomalies are easy to
spot.</p><div class="panel" style="border-width:
1px;"><div class="panelContent">
<p><code>-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500</code></p>
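<p><code># Illustrative only: one way to pass the property is via the<br clear="none"># ACTIVEMQ_OPTS environment variable read by the broker start script<br clear="none"># (whether your setup uses this hook is an assumption):<br clear="none">ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500"</code></p>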
</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it is possible
to distribute destination stores across multiple kahaDB persistence adapters.
When would you do this? If you have one fast producer/consumer destination
and another periodic-producer destination with irregular batch consumption,
disk usage can grow out of hand as unconsumed messages become distributed
across multiple journal files. Having a separate journal for each ensures
minimal journal usage. Also, some destinations may be critical and require
disk synchronization while others may not. In these cases you can use
the <strong><code>mKahaDB</code></strong> persistence adapter and filter
destinations using wildcards, just as with destination policy entries.</p><h3
id="KahaDB-Transactions">Transactions</h3><p>Transactions can span multiple
journals if the destinations are distributed. This means that two-phase
completion is necessary, which imposes a performance penalty (an additional
disk sync) to record the commit outcome. The penalty is only incurred if more
than one journal is involved in a transaction.</p><h3
id="KahaDB-Configuration.1">Configuration</h3><p>Each instance
of <strong><code>kahaDB</code></strong> can be configured independently.
If no destination is supplied to a
<strong><code>filteredKahaDB</code></strong>, the implicit default value
matches any destination, queue or topic. This is a handy catch-all. If no
matching persistence adapter can be found, destination creation fails with an
exception. The <strong><code>filteredKahaDB</code></strong> shares its
wildcard matching rules with <a shape="rect"
href="per-destination-policies.html">Per Destination Policies</a>.</p><p>From
<strong>ActiveMQ 5.15</strong>, <strong><code>filteredKahaDB</code></strong>
supports a StoreUsage attribute named <strong><code>usage</code></strong>.
This allows individual disk limits to be imposed on matching
queues.</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
-<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"><broker brokerName="broker">
+<pre class="brush: xml; gutter: false; theme: Default"><broker
brokerName="broker">
 <persistenceAdapter>
<mKahaDB directory="${activemq.base}/data/kahadb">
@@ -120,7 +120,7 @@
</broker>
</pre>
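<p>As an illustrative sketch (element placement assumed from the description
above rather than taken from a shipped example), a per-filter disk limit via
the <strong><code>usage</code></strong> attribute might look like:</p>
<pre class="brush: xml; gutter: false; theme: Default"><filteredKahaDB perDestination="true">
  <usage>
    <storeUsage limit="1g"/>
  </usage>
  <persistenceAdapter>
    <kahaDB journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</filteredKahaDB>
</pre>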
</div></div><h3
id="KahaDB-AutomaticPerDestinationPersistenceAdapter">Automatic Per Destination
Persistence Adapter</h3><p>Set
<strong><code>perDestination="true"</code></strong> on the catch-all
<strong><code>filteredKahaDB</code></strong> entry (the one with no explicit
destination set). Each matching destination
will be assigned its own <strong><code>kahaDB</code></strong> instance.</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
-<pre class="brush: xml; gutter: false; theme: Default"
style="font-size:12px;"><broker brokerName="broker">
+<pre class="brush: xml; gutter: false; theme: Default"><broker
brokerName="broker">
 <persistenceAdapter>
<mKahaDB directory="${activemq.base}/data/kahadb">