One old message in the DLQ is all it takes to keep every single KahaDB data
file after it from being deleted.  Not thousands, not hundreds, not tens.
One.  As currently implemented, KahaDB forces you to choose between a DLQ
policy that keeps DLQ messages around and a storage footprint that stays
proportional to the volume of live messages.

mKahaDB might allow you to have both if you use a separate KahaDB instance
for the DLQ, though I haven't yet heard anyone report configuring that
setup successfully, so I can't say for sure that it would work properly.

It would be easy to test whether the DLQ messages are the root cause of
this behavior: delete them all and see whether your store usage drops to
the level you expect shortly thereafter.  (No more than 30 seconds, unless
you've overridden the cleanupInterval property.)
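For reference, cleanupInterval is set (in milliseconds) on the KahaDB
persistence adapter; 30000 is the default, shown here explicitly just to
illustrate where the knob lives:

```xml
<persistenceAdapter>
  <!-- cleanupInterval: how often (ms) KahaDB checks for unreferenced
       data files to delete; 30000 is the default -->
  <kahaDB directory="${activemq.data}/kahadb" cleanupInterval="30000"/>
</persistenceAdapter>
```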

Tim

On Tue, Jul 28, 2015 at 2:39 PM, Scammell <mark.har...@meridianenergy.co.nz>
wrote:

> Thanks for your thoughts.
>
> The web console shows that the messages are being dequeued at the same rate
> as they are being queued so I was presuming that they were being deleted
> successfully.
>
> I will investigate how to use JMX to view the broker and report back.
>
> There are a few messages in the DLQ but they number tens, not hundreds of
> thousands. These are placed in the DLQ for a valid error scenario which we
> are aware of.
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Store-percent-used-tp4699945p4700008.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
