Author: buildbot
Date: Mon Nov 28 09:23:05 2016
New Revision: 1001748
Log:
Production update by buildbot for activemq
Modified:
websites/production/activemq/content/cache/main.pageCache
websites/production/activemq/content/what-is-the-prefetch-limit-for.html
Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
Modified:
websites/production/activemq/content/what-is-the-prefetch-limit-for.html
==============================================================================
--- websites/production/activemq/content/what-is-the-prefetch-limit-for.html
(original)
+++ websites/production/activemq/content/what-is-the-prefetch-limit-for.html
Mon Nov 28 09:23:05 2016
@@ -91,7 +91,7 @@
<pre class="brush: java; gutter: false; theme: Default"
style="font-size:12px;">queue = new
ActiveMQQueue("TEST.QUEUE?consumer.prefetchSize=10");
consumer = session.createConsumer(queue);
</pre>
-</div></div><h3
id="WhatisthePrefetchLimitFor?-PooledConsumersandPrefetch">Pooled Consumers and
Prefetch</h3><p>Consuming messages from a pool of consumers can be problematic
due to prefetch. Unconsumed prefetched messages are normally released only when
a consumer is closed, but with a pooled consumer the close is deferred (for
reuse) until the consumer pool closes. This leaves prefetched messages
unconsumed until the consumer is reused. This behavior can be desirable from a
performance perspective, but it can lead to out-of-order message delivery
when there is more than one consumer in the pool. For this reason, the <a
shape="rect" class="external-link"
href="http://activemq.apache.org/maven/apidocs/org/apache/activemq/jms/pool/PooledConnectionFactory.html">org.apache.activemq.pool.PooledConnectionFactory</a>
does <strong>not</strong> pool consumers. The problem is visible with the
Spring DMLC when the cache level is set
to <strong><code>CACHE_CONSUMER</code></strong> and there are
multiple concurrent consumers. One solution is to use a
prefetch of <strong><code>0</code></strong> for a pooled consumer; this
way, the consumer polls the broker for a message on each call to
<strong><code>receive(timeout)</code></strong>. Another option is to enable the
<a shape="rect" class="external-link"
href="http://activemq.apache.org/maven/apidocs/org/apache/activemq/broker/region/policy/AbortSlowAckConsumerStrategy.html">AbortSlowAckConsumerStrategy</a>
on the broker to disconnect consumers that have not acknowledged a message
within a configurable time period.</p><h3
id="WhatisthePrefetchLimitFor?-Ramvs.PerformanceTrade-off">RAM vs. Performance
Trade-off</h3><p>A relatively high prefetch value leads to higher
performance, so the default values are typically greater than 1000, with
higher defaults for topics and higher still for non-persistent messages. The
prefetch size dictates how many messages may be held in RAM on the client, so
if client RAM is limited you may want to set a low value such as 1 or
10.</p><p> </p></div>
+</div></div><h3
id="WhatisthePrefetchLimitFor?-PooledConsumersandPrefetch">Pooled Consumers and
Prefetch</h3><p>Consuming messages from a pool of consumers can be problematic
due to prefetch. Unconsumed prefetched messages are normally released only when
a consumer is closed, but with a pooled consumer the close is deferred (for
reuse) until the consumer pool closes. This leaves prefetched messages
unconsumed until the consumer is reused. This behavior can be desirable from a
performance perspective, but it can lead to out-of-order message delivery
when there is more than one consumer in the pool. For this reason, the <a
shape="rect" class="external-link"
href="http://activemq.apache.org/maven/apidocs/org/apache/activemq/jms/pool/PooledConnectionFactory.html">org.apache.activemq.pool.PooledConnectionFactory</a>
does <strong>not</strong> pool consumers.</p><p>Pooling consumers is supported
by Spring's CachingConnectionFactory (although it is turned off by default). If
you use the CachingConnectionFactory with multiple consumer threads configured
in Spring's DefaultMessageListenerContainer (DMLC), then you should either
leave consumer pooling in the CachingConnectionFactory turned off (its
default) or use a prefetch of 0 when pooling consumers. With a prefetch of 0
the consumer polls the broker for a message on each call to
<strong><code>receive(timeout)</code></strong>. It is generally recommended to
turn off consumer caching in Spring's CachingConnectionFactory and in any
other framework that pools JMS consumers.</p><p>Note that Spring's
DefaultMessageListenerContainer (DMLC) with its
<strong><code>CACHE_CONSUMER</code></strong> cache level is not affected by
this problem. Spring's DMLC does not pool consumers in the sense of keeping an
internal pool with multiple consumer instances. Instead it caches
the consumer, i.e. it re-uses the same JMS consumer object to receive all
messages for the lifetime of the DMLC instance. So it behaves much like
properly hand-written JMS code, where you create the JMS connection, session,
and consumer and then use that consumer instance to receive all your
messages.<br clear="none">Hence there is no problem with using
<strong><code>CACHE_CONSUMER</code></strong> in Spring's DMLC, even with
multiple consumer threads, unless you are using XA transactions. XA
transactions do not work with <strong><code>CACHE_CONSUMER</code></strong>.
Local JMS transactions and non-transacted consumers, however, are fine to
use with <strong><code>CACHE_CONSUMER</code></strong> in Spring's
DMLC.</p><p>Also note that Camel's <a shape="rect" class="external-link"
href="http://camel.apache.org/jms.html">JMS</a> or <a shape="rect"
class="external-link" href="http://camel.apache.org/activemq.html">ActiveMQ</a>
components use Spring's DMLC internally, so everything said above about
Spring's DMLC and <strong><code>CACHE_CONSUMER</code></strong> applies to
these two Camel components as well.</p><h3
id="WhatisthePrefetchLimitFor?-Ramvs.PerformanceTrade-off">RAM vs. Performance
Trade-off</h3><p>A relatively high prefetch value leads to higher
performance, so the default values are typically greater than 1000, with
higher defaults for topics and higher still for non-persistent messages. The
prefetch size dictates how many messages may be held in RAM on the client, so
if client RAM is limited you may want to set a low value such as 1 or
10.</p><p> </p></div>
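The out-of-order hazard described above can be illustrated with a small, self-contained toy model. This is illustrative only, not ActiveMQ's dispatch code: the pool of two consumers, the round-robin borrowing, and the eager buffer-fill policy are simplifying assumptions made for the sketch.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

/** Toy model of prefetch dispatch to a pool of two consumers; NOT ActiveMQ code. */
public class PrefetchToy {

    /**
     * Simulates the delivery order when the application borrows consumers
     * round-robin from a pool of two and calls receive(timeout) once per borrow.
     */
    static List<Integer> deliver(int prefetch, int nMessages) {
        Queue<Integer> broker = new ArrayDeque<>();
        for (int i = 1; i <= nMessages; i++) broker.add(i);

        // One prefetch buffer per pooled consumer.
        List<Queue<Integer>> buffers = new ArrayList<>();
        buffers.add(new ArrayDeque<>());
        buffers.add(new ArrayDeque<>());

        List<Integer> seen = new ArrayList<>();
        int turn = 0;
        while (seen.size() < nMessages) {
            Queue<Integer> mine = buffers.get(turn % 2);
            if (prefetch > 0) {
                // The broker eagerly tops up every consumer's prefetch buffer,
                // including consumers the application is not currently reading.
                for (Queue<Integer> b : buffers)
                    while (b.size() < prefetch && !broker.isEmpty()) b.add(broker.poll());
            } else if (mine.isEmpty() && !broker.isEmpty()) {
                // Prefetch 0: the consumer pulls one message on demand.
                mine.add(broker.poll());
            }
            if (!mine.isEmpty()) seen.add(mine.poll()); // one receive() per borrow
            turn++;
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println("prefetch=2: " + deliver(2, 6)); // out of order: [1, 3, 2, 4, 5, 6]
        System.out.println("prefetch=0: " + deliver(0, 6)); // in order:     [1, 2, 3, 4, 5, 6]
    }
}
```

With a prefetch of 2 the application observes 1, 3, 2, 4, ... because each pooled consumer is holding prefetched messages that the application is not currently reading; with a prefetch of 0 each receive pulls exactly one message from the broker, so order is preserved.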
</td>
<td valign="top">
<div class="navigation">