[
https://issues.apache.org/jira/browse/ARTEMIS-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Diederick updated ARTEMIS-5481:
-------------------------------
Description:
We have a lot of problems with Artemis version 2.36.0 and higher (in an HA setup
with a shared file system). Our clients use activemq.management requests to see
which node is active (primary), etc.
From version 2.36.0 onwards there is an accumulation of these
activemq.management queues and addresses. There is also a big increase in
connections (the number of client connections is unchanged). This accumulation
eats all the resources on the node (CPU and memory). Eventually the node first
reports an out-of-memory error in the GUI, and then the node itself becomes
unresponsive and dies (OOM kill) (no failover).
As you can see in this graph (green line), with version 2.36.0 there is an
increase in connections (around 800). The behaviour is that management queues
and addresses slowly accumulate, eating resources (CPU and memory).
After the gap you see version 2.35.0, which has a 'normal' number of
connections. With 2.35.0 the accumulated management addresses and queues also
get cleaned up with the default parameters.
!image-2025-05-15-13-14-20-461.png!
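For reference, the cleanup behaviour mentioned above would, assuming the reply queues are auto-created rather than temporary, be governed by the broker's address-settings. A minimal broker.xml sketch making the documented defaults explicit (the catch-all match pattern, and whether these settings apply to the management reply queues at all, are assumptions on our side):

```xml
<address-settings>
   <!-- catch-all match; with 2.35.0 the default parameters were enough
        to clean up the accumulated management queues and addresses -->
   <address-setting match="#">
      <auto-delete-queues>true</auto-delete-queues>
      <auto-delete-queues-delay>0</auto-delete-queues-delay>
      <auto-delete-addresses>true</auto-delete-addresses>
      <auto-delete-addresses-delay>0</auto-delete-addresses-delay>
   </address-setting>
</address-settings>
```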
> Accumulation of activemq.management queues and addresses, increase in connections
> ---------------------------------------------------------------------------------
>
> Key: ARTEMIS-5481
> URL: https://issues.apache.org/jira/browse/ARTEMIS-5481
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Components: ActiveMQ-Artemis-Native
> Affects Versions: 2.36.0, 2.37.0, 2.38.0, 2.39.0, 2.40.0, 2.41.0
> Reporter: Diederick
> Assignee: Clebert Suconic
> Priority: Major
> Attachments: image-2025-05-15-13-14-20-461.png
>
>
> We have a lot of problems with Artemis version 2.36.0 and higher (in an HA
> setup with a shared file system). Our clients use activemq.management requests
> to see which node is active (primary), etc.
> From version 2.36.0 onwards there is an accumulation of these
> activemq.management queues and addresses. There is also a big increase in
> connections (the number of client connections is unchanged). This accumulation
> eats all the resources on the node (CPU and memory). Eventually the node first
> reports an out-of-memory error in the GUI, and then the node itself becomes
> unresponsive and dies (OOM kill) (no failover).
> As you can see in this graph (green line), with version 2.36.0 there is an
> increase in connections (around 800). The behaviour is that management queues
> and addresses slowly accumulate, eating resources (CPU and memory).
> After the gap you see version 2.35.0, which has a 'normal' number of
> connections. With 2.35.0 the accumulated management addresses and queues also
> get cleaned up with the default parameters.
> !image-2025-05-15-13-14-20-461.png!
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
For further information, visit: https://activemq.apache.org/contact