Thank you for your reply.

Yes, it's usually a Java heap OutOfMemoryError, which in turn crashes one of 
the threads, be it the Jolokia thread or one of the cluster connection threads.

I will try to increase the memory limit, and Prometheus is also on the horizon, 
but in the meantime, do you think this number of addresses/queues could be an 
issue for the broker itself? It got me thinking: is running a queue per client 
an acceptable design? It was created to separate client data, but we also have 
another related issue. Essentially, we must now create thousands of roles and 
somehow map those roles to queues. Can the artemis-roles.properties file handle 
that many roles?
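For context, with the default PropertiesLoginModule that would mean thousands 
of lines like these (the role and user names below are invented for 
illustration):

```properties
# etc/artemis-roles.properties (PropertiesLoginModule)
# Format: role-name = comma-separated list of users; one line per client role.
client-acme = acme-user
client-globex = globex-user
```

And, as far as I understand, each such role would also need a matching 
security-setting entry in broker.xml to actually restrict it to its queue.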

-- 
    Vilius

-----Original Message-----
From: Justin Bertram <jbert...@apache.org> 
Sent: Tuesday, March 1, 2022 10:49 PM
To: users@activemq.apache.org
Subject: Re: ActiveMQ Artemis crashes with a lot of addresses/queues

> Is this a known issue?

I'm not sure I'd categorize this as a "known issue" as (in my opinion) that 
implies it is a bug. I would simply say that you're reaching the limits of the 
current design. The broker uses JMX for most management and monitoring.
Most resources (e.g. the broker itself, addresses, queues, diverts, bridges, 
acceptors, etc.) have a corresponding JMX MBean, and each of these MBeans has 
individual attributes and operations. When a user first loads the web console, 
Hawtio requests all this information from the broker via Jolokia. If you have a 
lot of addresses and queues, that's a lot of MBeans and therefore a lot of data 
that the broker has to collect and serialize into JSON. Then Hawtio has to 
parse that JSON so it can build the interface, populate the attributes, etc. 
The more MBeans you have, the slower this will be, of course.

> Console is really slow and sometimes even crashes the broker.

You don't really provide any details about exactly how or why the broker is 
crashing, so I can only assume that you're hitting an OutOfMemoryError. If 
that's the case, then I recommend you give your broker more memory in order to 
deal with the overhead incurred by management.
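For example, on a standalone broker instance the heap size comes from 
JAVA_ARGS in etc/artemis.profile. A sketch (the values are just examples; keep 
whatever other flags your profile already has on that line):

```shell
# etc/artemis.profile
# Raise -Xmx (and optionally -Xms) to give the broker more headroom for
# the management/JMX overhead. 4G here is only an example value.
JAVA_ARGS="-Xms2G -Xmx4G"
```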

Aside from that, you might consider using a metrics plugin [1] (e.g. the 
Prometheus metrics plugin [2]) for monitoring instead of using the web console. 
It doesn't use JMX at all, and it exports only the metrics that are relevant 
for monitoring rather than every single MBean attribute, so, generally 
speaking, the overhead is much lower. You can also turn metrics on/off via the 
enable-metrics address-setting to reduce overhead even further.
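For reference, the broker.xml wiring is roughly the following. The plugin 
class name below is an assumption, so verify it against the README of the 
version you install, and the address match is hypothetical:

```xml
<!-- inside <core> of broker.xml -->
<metrics>
   <plugin class-name="org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/>
</metrics>

<address-settings>
   <!-- disable metrics for addresses you don't need to monitor -->
   <address-setting match="some.unmonitored.address.#">
      <enable-metrics>false</enable-metrics>
   </address-setting>
</address-settings>
```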


Justin

[1]
https://activemq.apache.org/components/artemis/documentation/latest/metrics.html
[2] https://github.com/jbertram/artemis-prometheus-metrics-plugin

On Tue, Mar 1, 2022 at 1:51 PM Vilius Šumskas <vilius.sums...@rivile.lt>
wrote:

> Hi,
>
> we have an Artemis 2.20.0 cluster which is used by external 
> producers/consumers. Every such producer/consumer pair represents a 
> different commercial client (company), so we are using one queue per 
> client to separate their messages. Our developers are using the 
> auto-created JMS address/queue combination to serve these clients. 
> These auto-created queues are durable, as per the Artemis documentation.
>
> Now, the issue is we have thousands of these clients, so naturally we 
> have thousands of addresses and almost the same number of queues in our 
> cluster.
> When trying to access Hawtio console the broker uses all CPU and RAM 
> available. Console is really slow and sometimes even crashes the 
> broker. Is this a known issue?
>
> This is on a completely idle system with just a couple of consumers 
> attached at any given time (addresses are precreated in advance). The 
> machines are multi-core Xeons with 8 GB of RAM. Address memory is barely 
> used, ~12 MB.
>
> Or maybe we are doing something wrong? Maybe switching to temporary 
> queues would help?
>
> --
>    Best Regards,
>
>     Vilius Šumskas
>     Rivile
>     IT manager
>     +370 614 75713
>
>
