For what it's worth, we run a setup very similar to Alexander's. We have some
global addresses/queues plus hundreds of addresses/queues per tenant, with
RBAC configured via management roles and tenant-level read/write roles. The
only difference is that we manage these queues, roles and users dynamically,
using the Core API and ActiveMQBasicSecurityManager, i.e. they are stored in
the journal instead of static XML files.
The latest Console version is very fast with such a setup (thanks for your hard
work on that, Grzegorz, BTW!). I just did a small test on an environment with 300
addresses and 500 queues. Login took a second or two. The JSON cache file
generated by Jolokia is ~300KB.
--
Vilius
>-----Original Message-----
>From: Grzegorz Grzybek <[email protected]>
>Sent: Wednesday, April 15, 2026 1:31 PM
>To: [email protected]
>Subject: Re: Possible bug with management ACLs
>
>On Wed, 15 Apr 2026 at 08:57, Alexander Milovidov <[email protected]> wrote:
>>
>> I have tested in the latest version (2.53.0).
>
>Thanks - I took your etc/ files and ran them with the latest 2.54.0-SNAPSHOT,
>ensuring that the optimization (the jolokia-integration-artemis library) is in
>place.
>
>Here's the curl:
>
>curl -s -H 'Origin: http://localhost:8161' \
>  -H 'Content-Type: application/json' \
>  -u admin:admin -d '{"type":"list"}' \
>  -o /data/tmp/xxx.json \
>  'http://localhost:8161/console/jolokia/?maxDepth=9&maxCollectionSize=50000&ignoreErrors=true&canonicalNaming=false&mimeType=application/json&listCache=true'
>
>It took 1 minute to fetch, and the JSON (not pretty-printed) is 36MB.
>
>Here's proof that the result is the same as with unoptimized versions of
>Hawtio and Artemis:
>
>$ jq '.value.cache | keys' /data/tmp/xxx.json | grep artemis.queue | wc -l
>1001
>
>While the `cache` and `domains` fields in the resulting JSON are in place (see
>https://jolokia.org/reference/html/manual/jolokia_protocol.html#optimized-response-list),
>your RBAC configuration makes each queue a "special queue" - there's nothing to
>optimize. The algorithm I used in the Jolokia integration roughly amounts to
>providing the JSON information once per _distinct_ MBeanInfo (for address,
>queue, ...). It works VERY well even if you have 100000 queues, because each
>queue looks the same, so there's no point in JSONifying the same MBeanInfo
>over and over again.
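
The dedup idea above can be sketched with plain shell tools (a toy model only — the file name and "shape" labels are invented for illustration, this is not the actual Jolokia cache format):

```shell
# Toy model of the listCache dedup: each queue maps to an MBeanInfo
# "shape"; identical shapes collapse into a single cache entry, so only
# the distinct shapes need to be JSONified.
printf '%s\n' \
  'queue.order.1  shapeA' \
  'queue.order.2  shapeA' \
  'queue.billing  shapeB' \
  'queue.order.3  shapeA' > /tmp/mbean-shapes.txt

# 4 queues, but only 2 distinct shapes to JSONify:
awk '{print $2}' /tmp/mbean-shapes.txt | sort -u | wc -l
```

With per-queue RBAC, every queue effectively becomes its own shape, so the distinct count approaches the total queue count and the cache stops helping.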
>
>However see:
> - https://issues.apache.org/jira/browse/ARTEMIS-5905
> - https://issues.apache.org/jira/browse/ARTEMIS-5910
>
>so it's fine if you have groups of queues configured with special RBAC
>permissions -
>the cache grows a bit, but it's still a good optimization. However if EVERY
>queue is
>different, we have a problem...
>
>In theory I could implement a different algorithm - the full MBeanInfo
>(operations, attributes, ...) stays cached for each queue, but the RBAC
>information itself is extracted separately. It shouldn't be hard to implement
>(it should be faster than explaining the problem to ChatGPT :D), but I'd have
>to be convinced that your configuration is a real-world one ;)
>
>But seriously - I'm here to help. Can you generate a configuration with more
>queues but fewer individual RBAC configs, so we can check where the problem is?
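
One way to produce such a grouped-RBAC test configuration is a small generator script. This is a hypothetical sketch: the `<role-access>`/`<match>`/`<access>` element names follow Artemis management.xml conventions, but the output path, tenant names, role names and wildcard keys are all made up — adapt it to the real reproducer before use:

```shell
# Generate a role-access fragment where MANY queues are covered by FEW
# wildcard match entries, instead of one ACL per queue.
out=/tmp/role-access-fragment.xml
{
  echo '  <role-access>'
  for tenant in tenantA tenantB tenantC; do
    cat <<EOF
    <match domain="org.apache.activemq.artemis" key="queue=${tenant}.*">
      <access method="list*" roles="amq,${tenant}-ro"/>
      <access method="get*" roles="amq,${tenant}-ro"/>
      <access method="*" roles="amq,${tenant}-rw"/>
    </match>
EOF
  done
  echo '  </role-access>'
} > "$out"

# Only 3 match entries, regardless of how many queues each wildcard covers:
grep -c '<match ' "$out"
```

If the broker stays fast with this shape but slows down once each queue gets its own `<match>`, that would confirm the per-queue-MBeanInfo diagnosis above.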
>
>kind regards
>Grzegorz Grzybek
>
>>
>> On Wed, 15 Apr 2026 at 08:44, Grzegorz Grzybek <[email protected]> wrote:
>>>
>>> Hello
>>>
>>> On Tue, 14 Apr 2026 at 18:52, Alexander Milovidov <[email protected]> wrote:
>>> >
>>> > Hi All,
>>> >
>>> > I have created a project with sample configuration files to reproduce the
>>> > slow loading of the management console:
>>> > https://github.com/alexander-milovidov/artemis-slow-console-reproducer
>>> >
>>> > To make the management console behave much worse in an isolated
>>> > environment, I had to create 2000 users (each assigned to 4 different
>>> > roles), create 1000 addresses with anycast queues, and set permissions for
>>> > each address to its roles. I also set permissions on the corresponding DLQ
>>> > queues.
>>> >
>>> > It takes about 2 minutes to load the management console in this
>>> > configuration. While opening the management console with a single user,
>>> > the JVM metric jvm_gc_memory_allocated_bytes_total rises from 0 to approx.
>>> > 1.20 GB/second and drops back after loading.
>>> >
>>> > Unfortunately, I didn't reproduce other problems that we have:
>>> > - excessive garbage collection and high CPU usage;
>>> > - blocked threads and unresponsive message broker;
>>> > - slow loading of the list of queues in the Queues tab;
>>> > - some users (very rarely) complain that they wait 10-30 minutes to log on
>>> >   to the console, which usually opens in 10 seconds.
>>> >
>>> > I can also add jinja2 templates of these files (and an ansible
>>> > role/playbook to create the files from templates) if someone needs them.
>>>
>>> Thanks for the link to the github repository with the etc/ files. I'll try
>>> to run it as is. 2 minutes is too much - I used configurations with 50K
>>> queues and the Jolokia optimization, and the performance (speed and memory
>>> on the browser side) was much better.
>>>
>>> Which exact version of Artemis are you using?
>>>
>>> kind regards
>>> Grzegorz Grzybek
>>>
>>> >
>>> >
>>> > On Wed, 25 Mar 2026 at 19:39, Alexander Milovidov <[email protected]> wrote:
>>> >>
>>> >> Hello!
>>> >>
>>> >> On Mon, 23 Mar 2026 at 10:58, Grzegorz Grzybek <[email protected]> wrote:
>>> >>>
>>> >>> Can we check (roughly) what your RBAC configuration looks like?
>>> >>>
>>> >>
>>> >> There are about 3000 addresses and 4000 queues. The number of distinct
>>> >> roles in management.xml is about 750. Each ACL for an address has 12
>>> >> access methods (browse*, send*, etc.) with 4-8 roles.
>>> >>
>>> >> I plan to create a reproducer with templated configuration files that
>>> >> creates N addresses with queues and N*2 users with different roles, and
>>> >> assigns permissions for roles to each address (amq, a read-only role and
>>> >> a full-access role) in management.xml. I'll be back with the results.
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: [email protected]
>>> For additional commands, e-mail: [email protected]
>>>
>