Hi,

I think you need to configure max-size-bytes in address-settings. See [1] for 
more information. Basically, you specify how much memory each queue/address can 
consume. You can also set address-full-policy to BLOCK if you don't want 
messages paged to disk when max-size-bytes is exceeded; producers will then be 
blocked until some messages are consumed.
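A minimal broker.xml sketch of what I mean (the match pattern and the 10 MiB limit are just illustrative values, adjust them for your addresses):

```xml
<address-settings>
   <!-- match pattern and size are examples; tune for your setup -->
   <address-setting match="jms.queue.#">
      <!-- cap in-memory size per matching address at ~10 MiB -->
      <max-size-bytes>10485760</max-size-bytes>
      <!-- block producers instead of paging to disk when the limit is hit -->
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>
</address-settings>
```

Since you run the broker embedded, the same settings can also be applied programmatically through the Configuration/AddressSettings API instead of XML.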

I would also recommend configuring consumer-window-size to 0 if your consumers 
are slow. That avoids buffering messages in the client-side message buffer, so 
those messages can instead be consumed by other consumers on the queue that are 
idle.
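For example, with a JMS client this can be set via the connection URI (host and port here are placeholders):

```
tcp://localhost:61616?consumerWindowSize=0
```

With the core client API the equivalent is ServerLocator.setConsumerWindowSize(0).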

I hope this helps,
Mirek

[1] 
https://github.com/apache/activemq-artemis/blob/master/docs/user-manual/en/paging.md#configuration-1

----- Original Message -----
> From: "Денис Кирпиченков" <denis.kirpichen...@gmail.com>
> To: dev@activemq.apache.org
> Sent: Friday, June 2, 2017 11:20:12 AM
> Subject: org.apache.activemq.artemis.core.journal.impl.JournalImpl
> 
> Hello, All!
> 
> I work on an application that uses Artemis in embedded mode, and sometimes
> (due to unexpected external conditions) the app processes each message
> very slowly. However, Artemis still has a constantly high rate of incoming
> messages. For example, Artemis receives 100-200 messages per second, but
> due to slow consumers it can deliver only 10-20 messages per second.
> As you can imagine, these conditions eventually cause an OutOfMemoryError.
> After an investigation I found that the ConcurrentHashMap<Long,
> JournalRecord> records field in
> org.apache.activemq.artemis.core.journal.impl.JournalImpl holds a lot of
> information about messages which are waiting to be delivered.
> 
> Artemis is configured with a persisted journal, but JournalImpl.records
> nevertheless grows with every newly received message, as far as the JVM
> heap allows.
> 
> So my questions: is there any workaround? Maybe configuration tweaks?
> Do you think it would be OK to make JournalImpl.records backed by
> disk storage, to limit its heap usage?
> 
> --
> Best regards,
> Denis
> 
