[jira] [Updated] (ARTEMIS-4802) Deprecated master,slave,check-for-live-server tags in examples/features/ha/replicated-failback sample

2024-06-07 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4802:
---
Summary: Deprecated master,slave,check-for-live-server tags in 
examples/features/ha/replicated-failback sample  (was: Depricated 
master,slave,check-for-live-server tags in 
examples/features/ha/replicated-failback sample)

> Deprecated master,slave,check-for-live-server tags in 
> examples/features/ha/replicated-failback sample
> -
>
> Key: ARTEMIS-4802
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4802
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: ActiveMQ-Artemis-Examples
>Affects Versions: 2.34.0
>Reporter: Susinda Perera
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When starting the ActiveMQ Artemis HA samples, the following log messages are printed:
>  
> {code:java}
> INFO [org.apache.activemq.artemis.core.server] AMQ221038: Configuration 
> option 'master' is deprecated and will be removed in a future version. Use 
> 'primary' instead. Consult the manual for details.
> INFO [org.apache.activemq.artemis.core.server] AMQ221038: Configuration 
> option 'check-for-live-server' is deprecated and will be removed in a future 
> version. Use 'check-for-active-server' instead. Consult the manual for 
> details.
> {code}
>  
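
For reference, a minimal sketch of the non-deprecated replication settings in broker.xml, assuming the replicated-failback example's setup (element values are illustrative, not copied from the example): `<master>` becomes `<primary>`, `<slave>` becomes `<backup>`, and `check-for-live-server` becomes `check-for-active-server`.

```xml
<!-- Sketch, not the exact example file: primary broker -->
<ha-policy>
   <replication>
      <primary>
         <check-for-active-server>true</check-for-active-server>
      </primary>
   </replication>
</ha-policy>

<!-- Sketch: backup broker -->
<ha-policy>
   <replication>
      <backup>
         <allow-failback>true</allow-failback>
      </backup>
   </replication>
</ha-policy>
```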



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@activemq.apache.org
For additional commands, e-mail: issues-h...@activemq.apache.org
For further information, visit: https://activemq.apache.org/contact




[jira] [Commented] (ARTEMIS-4797) Failover connection references are not always cleaned up in NettyAcceptor, leaking memory

2024-06-06 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852965#comment-17852965
 ] 

Erwin Dondorp commented on ARTEMIS-4797:


[~Josh B] thx!

> Failover connection references are not always cleaned up in NettyAcceptor, 
> leaking memory
> -
>
> Key: ARTEMIS-4797
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4797
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: OpenWire
>Reporter: Josh Byster
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I'm still trying to parse through exactly what conditions this occurs in, 
> since I'm able to reproduce it in a very specific production setup but not in 
> an isolated environment locally.
> For context, we have custom slow consumer detection that closes connection 
> IDs with slow consumers. These connections are connected via failover 
> transport using client ActiveMQ Classic 5.16.4 (OpenWire). This seems to be 
> specific to Netty.
> It appears this specific order of events causes the connection to not get 
> cleaned up and retained indefinitely on the broker. With frequent kicking of 
> connections, this ends up causing the broker to eventually OOM.
> 1. Connection is created, {{ActiveMQServerChannelHandler}} is created as well
> 2. {{ActiveMQServerChannelHandler#createConnection}} is called, {{active}} 
> flag is set {{true}}.
> 3. A few minutes go by, then we call 
> {{ActiveMQServerControl#closeConnectionWithID}} with the connection ID.
> 4. {{ActiveMQChannelHandler#exceptionCaught}} gets called—*this is the key 
> point that causes issues*. The connection is cleaned up if and only if this 
> is *not* called. The root cause of the exception is 
> {{AbstractChannel.close(ChannelPromise)}}, however the comment above it says 
> this is normal for failover.
> 5. The {{active}} flag is set to {{false}}.
> 6. {{ActiveMQChannelHandler#channelInactive}} gets called, but does *not* 
> call {{listener.connectionDestroyed}} since the {{active}} flag is false.
> 7. The connection is never removed from the {{connections}} map in 
> {{NettyAcceptor}}, causing a leak and eventual OOM of the broker if it 
> happens frequently enough.
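
The event ordering above can be condensed into a small, hypothetical sketch (the class, field, and method names below are illustrative stand-ins, not the actual NettyAcceptor/ActiveMQChannelHandler code): once the exception path clears the {{active}} flag before {{channelInactive}} runs, the connection entry is never removed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the ordering described in steps 1-7 above;
// not the actual Artemis source.
public class LeakSketch {
    static final Map<String, Object> connections = new HashMap<>(); // stands in for NettyAcceptor's map
    static boolean active;

    static void createConnection(String id) {  // steps 1-2: register the connection, mark it active
        connections.put(id, new Object());
        active = true;
    }

    static void exceptionCaught() {            // steps 4-5: the "normal for failover" close path clears the flag
        active = false;
    }

    static void channelInactive(String id) {   // step 6: destruction is guarded by the flag...
        if (active) {
            connections.remove(id);            // ...so step 7 (removal from the map) never runs here
        }
    }

    public static void main(String[] args) {
        createConnection("conn-1");
        exceptionCaught();                     // closeConnectionWithID triggers the exception path first
        channelInactive("conn-1");
        System.out.println(connections.size()); // prints 1: the entry is retained (the leak)
    }
}
```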







[jira] [Commented] (ARTEMIS-4797) Failover connection references are not always cleaned up in NettyAcceptor, leaking memory

2024-06-06 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17852677#comment-17852677
 ] 

Erwin Dondorp commented on ARTEMIS-4797:


[~Josh B]
I'm trying to explain the 'ghost' connections between cluster nodes that are 
created in my cluster(s) during startup, as reported in ARTEMIS-3157. This is 
somewhat similar to this report, though in my case the problem does not grow 
over time.
 * Are the old connections in your case still visible in the "Browse 
Connections" screen?
 * Do they look different from 'good' connections?








[jira] [Commented] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-28 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17850225#comment-17850225
 ] 

Erwin Dondorp commented on ARTEMIS-4781:


> Is the "simple extra consumer on the address ExpiryQueue" receiving and 
> acknowledging those (now large) messages?
That one is a simple JMS consumer that uses auto-acknowledge to confirm every 
message it reads.
The messages in the data/large-messages directory remained even after all 
clients were stopped.

> Please clarify
Let me start over.
After more testing, I found that files were left in data/large-messages when 
sending AMQP messages with a byte payload between 99304 and 99704 bytes (tested 
in increments of 100 bytes, starting from 96 KiB). These are exactly the 
messages that, on creation, stay below my setting of 
amqpMinLargeMessageSize=10; and which after expiry+transfer became slightly 
larger than that. It only happens when the messages are also transferred 
between nodes on expiry, i.e. it does not happen on a single-node installation.
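
A hypothetical numeric sketch of this boundary (the threshold and header-overhead constants below are assumptions chosen for illustration; the actual amqpMinLargeMessageSize value is truncated in the comment above):

```java
// Illustrative only: MIN_LARGE and EXPIRY_OVERHEAD are assumed values, chosen so
// that the reported 99304-99704 byte payloads straddle the boundary.
public class BoundarySketch {
    static final int MIN_LARGE = 100_000;   // assumed amqpMinLargeMessageSize
    static final int EXPIRY_OVERHEAD = 400; // assumed bytes added by expiry/transfer headers

    static boolean isLargeOnCreation(int payload) {
        return payload >= MIN_LARGE;
    }

    static boolean isLargeAfterExpiry(int payload) {
        return payload + EXPIRY_OVERHEAD >= MIN_LARGE;
    }

    public static void main(String[] args) {
        int payload = 99_700; // within the reported problem range
        // A standard message on creation that becomes a large message after expiry:
        System.out.println(!isLargeOnCreation(payload) && isLargeAfterExpiry(payload)); // prints true
    }
}
```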

> on-disk files for large messages are not always removed on expiry
> -
>
> Key: ARTEMIS-4781
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4781
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Clustering
>Affects Versions: 2.33.0
>Reporter: Erwin Dondorp
>Priority: Major
>
> SETUP:
> Using a broker cluster.
> The tests are executed with durable and non-durable messages: 3 durable and 3 
> non-durable messages are produced every 60 seconds, almost at the same time, 
> on the 1st broker.
> We produce large AMQP messages and leave them on a durable queue. MSG/TMP 
> files are created in the `large-messages` directory for them, as expected.
> After the configured amount of time, the messages expire as expected, and the 
> original MSG/TMP files are removed as expected.
> For monitoring, we have a simple extra consumer on the address `ExpiryQueue`, 
> connected to a 2nd broker in the same cluster.
> OBSERVATION:
> MSG/TMP files are also left on the disk of the 2nd broker every 60 seconds. 
> This is unexpected. No related log lines are seen on either broker.
> The content of the MSG/TMP files is (based on size) related to the original 
> MSG/TMP files. The files have different names, likely because they have been 
> recreated in the context of the ExpiryQueue address. They are slightly 
> larger, likely because of a few added expiry-related headers.
> Note that this also happens for messages that were originally just a few 
> bytes too small to become large messages, but became large messages through 
> the expiry process.







[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
SETUP:

Using a broker cluster.

The tests are executed with durable and non-durable messages: 3 durable and 3 
non-durable messages are produced every 60 seconds, almost at the same time, on 
the 1st broker.

We produce large AMQP messages and leave them on a durable queue. MSG/TMP files 
are created in the `large-messages` directory for them, as expected.

After the configured amount of time, the messages expire as expected, and the 
original MSG/TMP files are removed as expected.

For monitoring, we have a simple extra consumer on the address `ExpiryQueue`, 
connected to a 2nd broker in the same cluster.

OBSERVATION:

MSG/TMP files are also left on the disk of the 2nd broker every 60 seconds. 
This is unexpected. No related log lines are seen on either broker.
The content of the MSG/TMP files is (based on size) related to the original 
MSG/TMP files. The files have different names, likely because they have been 
recreated in the context of the ExpiryQueue address. They are slightly larger, 
likely because of a few added expiry-related headers.

Note that this also happens for messages that were originally just a few bytes 
too small to become large messages, but became large messages through the 
expiry process.

  was:
SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

We are producing large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

After the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

For monitoring, we have an simple extra consumer on the address `ExpiryQueue` 
connected to a 2nd broker in the same cluster.

OBSERVATION:

The MSG/TMP files are left on the disk of the 2nd broker also every 60 seconds. 
This is unexpected. No related logfile lines are seen on either broker.
The content of the MSG/TMP files is (based on it size) related to the original 
MSG/TMP files. These files have a different names, likely because they have 
been recreated in the context of the ExpiryQueue address. The files are 
slightly larger, likely because of the addition of a few expiry related headers.

Note that this also happens for messages that originally were just a few bytes 
too small to become a large message, but have become large messages due to the 
expiry process.









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

We are producing large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

After the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

For monitoring, we have an simple extra consumer on the address `ExpiryQueue` 
connected to a 2nd broker in the same cluster.

OBSERVATION:

The MSG/TMP files are left on the disk of the 2nd broker also every 60 seconds. 
This is unexpected. No related logfile lines are seen on either broker.
The content of the MSG/TMP files is (based on it size) related to the original 
MSG/TMP files. These files have a different names, likely because they have 
been recreated in the context of the ExpiryQueue address. The files are 
slightly larger, likely because of the addition of a few expiry related headers.

Note that this also happens for messages that originally were just a few bytes 
too small to become a large message, but have become large messages due to the 
expiry process.

  was:
SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

We are producing large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

After the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

For monitoring, we have an simple extra consumer on the address `ExpiryQueue` 
connected to a 2nd broker in the same cluster.

OBSERVATION:

The MSG/TMP files are left on the disk of the 2nd broker also every 60 seconds. 
This is unexpected. No related logfile lines are seen on either broker.
The content of the MSG/TMP files is (based on it size) related to the original 
MSG/TMP files. These files have a different names, likely because they have 
been recreated in the context of the ExpiryQueue address. The files are 
slightly larger, likely because of the addition of a few expiry related headers.









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
SETUP:

Using a broker-cluster.

The tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

We are producing large AMQP messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

After the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

For monitoring, we have an simple extra consumer on the address `ExpiryQueue` 
connected to a 2nd broker in the same cluster.

OBSERVATION:

The MSG/TMP files are left on the disk of the 2nd broker also every 60 seconds. 
This is unexpected. No related logfile lines are seen on either broker.
The content of the MSG/TMP files is (based on it size) related to the original 
MSG/TMP files. These files have a different names, likely because they have 
been recreated in the context of the ExpiryQueue address. The files are 
slightly larger, likely because of the addition of a few expiry related headers.

  was:
DRAFT! text below is not complete yet...



SETUP:

using a broker-cluster.

the tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:
for monitoring, we have a simple consumer on the address `ExpiryQueue` on a 2nd 
broker in the same cluster:

3 TMP files are left on the disk of the 2nd broker every 60 seconds. this is 
unexpected. no related logfile lines are seen on either broker.
the content of the TMP files is (based on it size) related to the original 









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
DRAFT! text below is not complete yet...



SETUP:

using a broker-cluster.

the tests are executed with durable and non-durable messages. 3 durable 
messages and 3 non-durable messages are produced every 60 seconds (almost) at 
the same time on the 1st broker.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:
for monitoring, we have a simple consumer on the address `ExpiryQueue` on a 2nd 
broker in the same cluster:

3 TMP files are left on the disk of the 2nd broker every 60 seconds. this is 
unexpected. no related logfile lines are seen on either broker.
the content of the TMP files is (based on it size) related to the original 

  was:
DRAFT! text below is not complete yet...



SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
DRAFT! text below is not complete yet...



SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.

  was:
DRAFT!!!




SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
DRAFT!!!




SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.

  was:


SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.









[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 


SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.

  was:
SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.


> on-disk files for large messages are not always removed on expiry
> -
>
> Key: ARTEMIS-4781
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4781
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Clustering
>Affects Versions: 2.33.0
>Reporter: Erwin Dondorp
>Priority: Major
>
> SETUP:
> using a broker-cluster.
> the results below happen for durable and non-durable messages.
> we are producing large amqp messages and leave them on a durable queue. 
> MSG/TMP files are created in directory `large-messages` for this as expected.
> after the configured amount of time, the messages expire as expected. the 
> original MSG/TMP files are removed as expected.
> OBSERVATION:
> when there is a consumer on the address `ExpiryQueue` on another broker in 
> the same cluster:
> the MSG/TMP files are replaced with the same amount of new MSG/TMP files on 
> the original broker. but this time the files seem to remain forever. this is 
> unexpected.







[jira] [Closed] (ARTEMIS-4479) supply metrics for cluster topology

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4479.
--
Resolution: Duplicate

> supply metrics for cluster topology
> ---
>
> Key: ARTEMIS-4479
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4479
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Clustering
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Major
>
> Artemis integrates well with Prometheus for monitoring to see whether the 
> internals of a broker work properly using the 
> artemis-prometheus-metrics-plugin plugin.
> I found that information about the presence of the broker in a broker-cluster 
> is missing there. that information can already be found:
>  * in the console, tab "Status", the number of live brokers and number of 
> backup brokers is listed
>  * in the JMX interfaces, the listNetworkTopology function has similar 
> information, even with more details
> suggestion is to add 2 new simple metrics:
>  * artemis_cluster_lives_count
>  * artemis_cluster_backups_count







[jira] [Commented] (ARTEMIS-4479) supply metrics for cluster topology

2024-05-27 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17849719#comment-17849719
 ] 

Erwin Dondorp commented on ARTEMIS-4479:


I goofed when creating ARTEMIS-4497, as it was a duplicate of this one.
But ARTEMIS-4497 is now closed, so this one should be closed as well.

> supply metrics for cluster topology
> ---
>
> Key: ARTEMIS-4479
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4479
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Clustering
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Major
>
> Artemis integrates well with Prometheus for monitoring to see whether the 
> internals of a broker work properly using the 
> artemis-prometheus-metrics-plugin plugin.
> I found that information about the presence of the broker in a broker-cluster 
> is missing there. that information can already be found:
>  * in the console, tab "Status", the number of live brokers and number of 
> backup brokers is listed
>  * in the JMX interfaces, the listNetworkTopology function has similar 
> information, even with more details
> suggestion is to add 2 new simple metrics:
>  * artemis_cluster_lives_count
>  * artemis_cluster_backups_count
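If metrics like these were added, they could drive a simple alert on cluster size. A sketch of a Prometheus alerting rule, assuming the metric names proposed above (which do not exist yet) and an illustrative expected cluster size of 3:

```yaml
# Hypothetical rule: artemis_cluster_lives_count is the metric name
# *proposed* in this issue, not one that Artemis currently exports.
groups:
  - name: artemis-cluster
    rules:
      - alert: ArtemisClusterMemberMissing
        # 3 = expected number of live brokers (illustrative value)
        expr: artemis_cluster_lives_count < 3
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Artemis cluster reports fewer live brokers than expected"
```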







[jira] [Created] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4781:
--

 Summary: on-disk files for large messages are not always removed 
on expiry
 Key: ARTEMIS-4781
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4781
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Clustering
Affects Versions: 2.33.0
Reporter: Erwin Dondorp


SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.
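For context, expiry of this kind is normally driven by address-settings in broker.xml. A minimal sketch of the relevant configuration; the match pattern and 60-second delay are illustrative, as the report does not give the exact values used:

```xml
<!-- Sketch only: expiry routing as described above. The match pattern "#"
     and the 60s expiry-delay are illustrative, not taken from the report. -->
<address-settings>
   <address-setting match="#">
      <!-- expired messages are moved to this address -->
      <expiry-address>ExpiryQueue</expiry-address>
      <!-- force expiry after 60 seconds (value is in milliseconds) -->
      <expiry-delay>60000</expiry-delay>
   </address-setting>
</address-settings>
```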







[jira] [Updated] (ARTEMIS-4781) on-disk files for large messages are not always removed on expiry

2024-05-27 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4781:
---
Description: 
SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION:

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.

  was:
SETUP:

using a broker-cluster.

the results below happen for durable and non-durable messages.

we are producing large amqp messages and leave them on a durable queue. MSG/TMP 
files are created in directory `large-messages` for this as expected.

after the configured amount of time, the messages expire as expected. the 
original MSG/TMP files are removed as expected.

OBSERVATION

when there is a consumer on the address `ExpiryQueue` on another broker in the 
same cluster:

the MSG/TMP files are replaced with the same amount of new MSG/TMP files on the 
original broker. but this time the files seem to remain forever. this is 
unexpected.


> on-disk files for large messages are not always removed on expiry
> -
>
> Key: ARTEMIS-4781
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4781
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Clustering
>Affects Versions: 2.33.0
>Reporter: Erwin Dondorp
>Priority: Major
>
> SETUP:
> using a broker-cluster.
> the results below happen for durable and non-durable messages.
> we are producing large amqp messages and leave them on a durable queue. 
> MSG/TMP files are created in directory `large-messages` for this as expected.
> after the configured amount of time, the messages expire as expected. the 
> original MSG/TMP files are removed as expected.
> OBSERVATION:
> when there is a consumer on the address `ExpiryQueue` on another broker in 
> the same cluster:
> the MSG/TMP files are replaced with the same amount of new MSG/TMP files on 
> the original broker. but this time the files seem to remain forever. this is 
> unexpected.







[jira] [Commented] (ARTEMIS-4182) fill client-id for cluster connections

2024-04-22 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839877#comment-17839877
 ] 

Erwin Dondorp commented on ARTEMIS-4182:


> I was asking for a screenshot of where, "the field 'Client ID' is filled in 
> with the remote hostname when using broker-connection/amqp-connection."

I was wrong and I cannot reproduce that.
The broker-connection/amqp-connection always uses the remote node-id as the 
client-id.
Here is a screenshot of such a connection:
!screenshot-2.png!

> I think the best we could do would be to use the node ID which is 
> automatically generated when the broker is created and uniquely identifies 
> the broker from all others

The node-ID is an internal value that requires a lookup to find the 
better-known name.
But, agreed, it is consistent with the way broker-connection/amqp-connection 
works and already a nice improvement.

> or use the {{name}} of the node set in {{broker.xml}}

that would be even more useful.

> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png, screenshot-2.png
>
>
> when running Artemis in a cluster, the brokers have connections between them.
> these are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in setting 
> `cluster-user`.
> but it is unclear where each connection goes to.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`





[jira] [Updated] (ARTEMIS-4182) fill client-id for cluster connections

2024-04-22 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4182:
---
Attachment: screenshot-2.png

> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png, screenshot-2.png
>
>
> when running Artemis in a cluster, the brokers have connections between them.
> these are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in setting 
> `cluster-user`.
> but it is unclear where each connection goes to.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`





[jira] [Updated] (ARTEMIS-4182) fill client-id for cluster connections

2024-04-22 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4182:
---
Attachment: (was: screenshot-1.png)

> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png, screenshot-2.png
>
>
> when running Artemis in a cluster, the brokers have connections between them.
> these are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in setting 
> `cluster-user`.
> but it is unclear where each connection goes to.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`





[jira] [Updated] (ARTEMIS-4182) fill client-id for cluster connections

2024-04-22 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4182:
---
Attachment: screenshot-1.png

> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png, screenshot-1.png
>
>
> when running Artemis in a cluster, the brokers have connections between them.
> these are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in setting 
> `cluster-user`.
> but it is unclear where each connection goes to.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`





[jira] [Commented] (ARTEMIS-4182) fill client-id for cluster connections

2024-04-21 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839378#comment-17839378
 ] 

Erwin Dondorp commented on ARTEMIS-4182:


[~jbertram] 

The screenshot is already in the description. The image shows that 
cluster-connections do not fill the ClientID column.

The remote address column indeed shows the ip-number of the remote node. But 
(for me) these ip-numbers are assigned dynamically. From these ip-numbers, I 
cannot determine which node did not connect to the cluster. e.g. Assume the 
screenshot is from a 6-node cluster. The "own" node (where we are viewing from) 
is never in the list, but which one of the other 5 nodes is not connected and 
therefore not in this list? To answer that question, I must either look up the 
ip-addresses and eliminate the good ones to find the missing one, or open the 
consoles of all 5 other nodes.

> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png
>
>
> when running Artemis in a cluster, the brokers have connections between them.
> these are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in setting 
> `cluster-user`.
> but it is unclear where each connection goes to.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`





[jira] [Updated] (ARTEMIS-4715) Consuming from wildcard queues stopped working with 2.33.0

2024-04-08 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4715:
---
Issue Type: Bug  (was: Task)

> Consuming from wildcard queues stopped working with 2.33.0
> --
>
> Key: ARTEMIS-4715
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4715
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.33.0
>Reporter: Otavio Rodolfo Piske
>Priority: Major
> Attachments: camel-activemq-test-artemis-2.32.log, 
> camel-activemq-test-artemis-2.33.log
>
>
> One of our 
> [tests|https://github.com/apache/camel/blob/main/components/camel-activemq/src/test/java/org/apache/camel/component/activemq/ActiveMQRouteIT.java#L66-L71]
>  that [consumes data from a wildcard 
> queue|https://github.com/apache/camel/blob/main/components/camel-activemq/src/test/java/org/apache/camel/component/activemq/ActiveMQRouteIT.java#L111-L113]
>  has stopped working after migrating to Artemis 2.33.0.
>  
> This test works without problems when using Artemis 2.32.0. I attached the 
> logs from executing the test with 2.32 and 2.33.





[jira] [Updated] (ARTEMIS-4715) Consuming from wildcard queues stopped working with 2.33.0

2024-04-08 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4715:
---
Issue Type: Task  (was: Bug)

> Consuming from wildcard queues stopped working with 2.33.0
> --
>
> Key: ARTEMIS-4715
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4715
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Affects Versions: 2.33.0
>Reporter: Otavio Rodolfo Piske
>Priority: Major
> Attachments: camel-activemq-test-artemis-2.32.log, 
> camel-activemq-test-artemis-2.33.log
>
>
> One of our 
> [tests|https://github.com/apache/camel/blob/main/components/camel-activemq/src/test/java/org/apache/camel/component/activemq/ActiveMQRouteIT.java#L66-L71]
>  that [consumes data from a wildcard 
> queue|https://github.com/apache/camel/blob/main/components/camel-activemq/src/test/java/org/apache/camel/component/activemq/ActiveMQRouteIT.java#L111-L113]
>  has stopped working after migrating to Artemis 2.33.0.
>  
> This test works without problems when using Artemis 2.32.0. I attached the 
> logs from executing the test with 2.32 and 2.33.





[jira] [Created] (ARTEMIS-4688) add idle subscription removal after timeout

2024-03-14 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4688:
--

 Summary: add idle subscription removal after timeout
 Key: ARTEMIS-4688
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4688
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.32.0
Reporter: Erwin Dondorp


In Artemis, messages expire by their individual message setting. With some 
variations possible via address settings in the broker configuration.

But unused subscription-queues are never removed, except for the non-durable 
ones.

The request is to add an idle-timeout for queues of durable subscriptions. The 
subscription and its queue should then be deleted when there have been no 
subscribers for a given time-period.

I believe ActiveMQ Classic has this via the offlineDurableSubscriberTimeout 
setting, see also 
https://activemq.apache.org/components/classic/documentation/manage-durable-subscribers.

 

This subject was also mentioned a while back in 
https://stackoverflow.com/questions/56115426/apache-activemq-artemis-durable-subscription-ttl
but, for our use-cases, using only message-expiry is not sufficient. This is 
because each abandoned durable subscription still stores messages, and is thus 
using resources.
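For comparison, the ActiveMQ Classic mechanism referred to above is configured on the broker element. A sketch with illustrative timeout values:

```xml
<!-- ActiveMQ Classic (not Artemis) sketch; timeout values are illustrative.
     Durable subscribers offline longer than offlineDurableSubscriberTimeout
     (in ms) are removed; the cleanup task runs every
     offlineDurableSubscriberTaskSchedule ms. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        offlineDurableSubscriberTimeout="86400000"
        offlineDurableSubscriberTaskSchedule="300000">
    <!-- rest of the broker configuration -->
</broker>
```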





[jira] [Commented] (ARTEMIS-4680) Upgrade the console to use HawtIO 4

2024-03-12 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825859#comment-17825859
 ] 

Erwin Dondorp commented on ARTEMIS-4680:


[~andytaylor] the new gui looks nice. I like it that the tree-panel is 
completely gone and that the additional commands per object-type are now no 
longer in the menu-bar, but in separate dropdown menus, etc.

I could not build the current version due to "import TopologyDragDropDemo" in 
ArtemisTabView.tsx. Once that was commented out, it seemed to work, but I'm not 
sure whether I'm now missing something because of that :-)

 

I noticed a typo in MessageView.tsx:

       
          durable
          \{currentMessage.address}
        
that value is not the intended value.

> Upgrade the console to use HawtIO 4
> ---
>
> Key: ARTEMIS-4680
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4680
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Web Console
>Reporter: Andy Taylor
>Assignee: Andy Taylor
>Priority: Major
>
> The current console is based upon HawtIO 1 which in turn is built on 
> Bootstrap. Bootstrap is old and no longer actively being maintained.
>  
> This improvement is to migrate the current console to use HawtIO 4 which is 
> based on TypeScript, React and PatternFly.
>  
> A WIP can be found 
> [here|https://github.com/andytaylor/activemq-artemis/tree/artemis-console-ng]





[jira] [Updated] (ARTEMIS-4574) Broker diagram no longer works

2024-01-17 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4574:
---
Priority: Major  (was: Critical)

> Broker diagram no longer works
> --
>
> Key: ARTEMIS-4574
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4574
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Reporter: Erwin Dondorp
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since very recently, the broker diagram no longer shows.
> The web-console shows the following error: {{TypeError: thisBroker.primary is 
> undefined}}
> since this is a recent problem I guessed that commit 
> 85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
> terms") was the reason. builds before+after confirmed that.
> [~jbertram]: -I have enough insight in this part of the code (e.g. file 
> {{{}diagram.js{}}}) to fix this. do you want a PR?- a PR is added.





[jira] [Updated] (ARTEMIS-4574) Broker diagram no longer works

2024-01-17 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4574:
---
Description: 
Since very recently, the broker diagram no longer shows.

The web-console shows the following error: {{TypeError: thisBroker.primary is 
undefined}}

since this is a recent problem I guessed that commit 
85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
terms") was the reason. builds before+after confirmed that.

[~jbertram]: -I have enough insight in this part of the code (e.g. file 
{{{}diagram.js{}}}) to fix this. do you want a PR?- a PR is added.

  was:
Since very recently, the broker diagram no longer shows.

The web-console shows the following error: {{TypeError: thisBroker.primary is 
undefined}}

since this is a recent problem I guessed that commit 
85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
terms") was the reason. builds before+after confirmed that.

[~jbertram]: I have enough insight in this part of the code (e.g. file 
{{diagram.js}}) to fix this. do you want a PR?


> Broker diagram no longer works
> --
>
> Key: ARTEMIS-4574
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4574
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Reporter: Erwin Dondorp
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since very recently, the broker diagram no longer shows.
> The web-console shows the following error: {{TypeError: thisBroker.primary is 
> undefined}}
> since this is a recent problem I guessed that commit 
> 85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
> terms") was the reason. builds before+after confirmed that.
> [~jbertram]: -I have enough insight in this part of the code (e.g. file 
> {{{}diagram.js{}}}) to fix this. do you want a PR?- a PR is added.





[jira] [Updated] (ARTEMIS-4574) Broker diagram no longer works

2024-01-17 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4574:
---
Description: 
Since very recently, the broker diagram no longer shows.

The web-console shows the following error: {{TypeError: thisBroker.primary is 
undefined}}

since this is a recent problem I guessed that commit 
85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
terms") was the reason. builds before+after confirmed that.

[~jbertram]: I have enough insight in this part of the code (e.g. file 
{{diagram.js}}) to fix this. do you want a PR?

  was:
Since very recently, the broker diagram no longer shows.

The web-console shows the following error: {{TypeError: thisBroker.primary is 
undefined}}

since this is a recent problem I guessed that commit 
85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
terms") was the reason. builds before+after confirmed that.

[~jbertram]: I have enough insight in this part of the code (e.g. file 
`diagram.js`) to fix this. do you want a PR?


> Broker diagram no longer works
> --
>
> Key: ARTEMIS-4574
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4574
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Reporter: Erwin Dondorp
>Priority: Critical
>
> Since very recently, the broker diagram no longer shows.
> The web-console shows the following error: {{TypeError: thisBroker.primary is 
> undefined}}
> since this is a recent problem I guessed that commit 
> 85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
> terms") was the reason. builds before+after confirmed that.
> [~jbertram]: I have enough insight in this part of the code (e.g. file 
> {{diagram.js}}) to fix this. do you want a PR?





[jira] [Created] (ARTEMIS-4574) Broker diagram no longer works

2024-01-17 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4574:
--

 Summary: Broker diagram no longer works
 Key: ARTEMIS-4574
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4574
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Reporter: Erwin Dondorp


Since very recently, the broker diagram no longer shows.

The web-console shows the following error: {{TypeError: thisBroker.primary is 
undefined}}

since this is a recent problem I guessed that commit 
85b2f4b126ee4598cae345d3aef575bdb2cbb03e (ARTEMIS-3474 "replace non-inclusive 
terms") was the reason. builds before+after confirmed that.

[~jbertram]: I have enough insight in this part of the code (e.g. file 
`diagram.js`) to fix this. do you want a PR?





[jira] [Updated] (ARTEMIS-4565) 'artemis help' shows wrong text for 'queue'

2024-01-12 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4565:
---
Description: 
when issuing the command {{artemis help}}, the text for 'queue' is not correct, 
an obvious copy-paste problem.
{noformat}
$ bin/artemis help
Usage: artemis [COMMAND]
ActiveMQ Artemis Command Line
Commands:
  help   use 'help ' for more information
  auto-complete  Generates the auto complete script file to be used in bash or
   zsh.
  shell  JLine3 shell helping using the CLI
  producer   Send message(s) to a broker.
  transfer   Move messages from one destination towards another destination.
  consumer   Consume messages from a queue.
  browserBrowse messages on a queue.
  mask   Mask a password and print it out.
  versionPrint version information.
  perf   use 'help perf' for sub commands list
  check  use 'help check' for sub commands list
  queue  use 'help check' for sub commands list <=== HERE 
  addressuse 'help address' for sub commands list
  activation use 'help activation' for sub commands list
  data   use 'help data' for sub commands list
  user   file-based user management. Use 'help user' for sub commands
   list.
  runRun the broker.
  stop   Stop the broker.
  kill   Kill a broker started with --allow-kill.
  perf-journal   Calculate the journal-buffer-timeout to use with the current
   data folder.
{noformat}

a small PR is added.

  was:
when issuing the command {{artemis help}}, the text for 'queue' is not correct, 
an obvious copy-paste problem.
{noformat}
$ bin/artemis help
Usage: artemis [COMMAND]
ActiveMQ Artemis Command Line
Commands:
  help   use 'help ' for more information
  auto-complete  Generates the auto complete script file to be used in bash or
   zsh.
  shell  JLine3 shell helping using the CLI
  producer   Send message(s) to a broker.
  transfer   Move messages from one destination towards another destination.
  consumer   Consume messages from a queue.
  browserBrowse messages on a queue.
  mask   Mask a password and print it out.
  versionPrint version information.
  perf   use 'help perf' for sub commands list
  check  use 'help check' for sub commands list
  queue  use 'help check' for sub commands list <=== HERE >
  addressuse 'help address' for sub commands list
  activation use 'help activation' for sub commands list
  data   use 'help data' for sub commands list
  user   file-based user management. Use 'help user' for sub commands
   list.
  runRun the broker.
  stop   Stop the broker.
  kill   Kill a broker started with --allow-kill.
  perf-journal   Calculate the journal-buffer-timeout to use with the current
   data folder.
{noformat}

a small PR is added.


> 'artemis help' shows wrong text for 'queue'
> ---
>
> Key: ARTEMIS-4565
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4565
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Trivial
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> when issuing the command {{artemis help}}, the text for 'queue' is not 
> correct, an obvious copy-paste problem.
> {noformat}
> $ bin/artemis help
> Usage: artemis [COMMAND]
> ActiveMQ Artemis Command Line
> Commands:
>   help   use 'help ' for more information
>   auto-complete  Generates the auto complete script file to be used in bash or
>zsh.
>   shell  JLine3 shell helping using the CLI
>   producer   Send message(s) to a broker.
>   transfer   Move messages from one destination towards another 
> destination.
>   consumer   Consume messages from a queue.
>   browserBrowse messages on a queue.
>   mask   Mask a password and print it out.
>   versionPrint version information.
>   perf   use 'help perf' for sub commands list
>   check  use 'help check' for sub commands list
>   queue  use 'help check' for sub commands list <=== HERE 
>   addressuse 'help address' for sub commands list
>   activation use 'help activation' for sub commands list
>   data   use 'help data' for sub commands list
>   user   file-based user management. Use 'help user' for sub commands
>list.
>   runRun the broker.
>   stop   Stop the broker.
>   kill   Kill a broker started with --allow-kill.
>   perf-journal   Calculate the 

[jira] [Updated] (ARTEMIS-4565) 'artemis help' shows wrong text for 'queue'

2024-01-12 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4565:
---
Description: 
when issuing the command {{artemis help}}, the text for 'queue' is not correct, 
an obvious copy-paste problem.
{noformat}
$ bin/artemis help
Usage: artemis [COMMAND]
ActiveMQ Artemis Command Line
Commands:
  help   use 'help ' for more information
  auto-complete  Generates the auto complete script file to be used in bash or
   zsh.
  shell  JLine3 shell helping using the CLI
  producer   Send message(s) to a broker.
  transfer   Move messages from one destination towards another destination.
  consumer   Consume messages from a queue.
  browserBrowse messages on a queue.
  mask   Mask a password and print it out.
  versionPrint version information.
  perf   use 'help perf' for sub commands list
  check  use 'help check' for sub commands list
  queue  use 'help check' for sub commands list <=== HERE >
  addressuse 'help address' for sub commands list
  activation use 'help activation' for sub commands list
  data   use 'help data' for sub commands list
  user   file-based user management. Use 'help user' for sub commands
   list.
  runRun the broker.
  stop   Stop the broker.
  kill   Kill a broker started with --allow-kill.
  perf-journal   Calculate the journal-buffer-timeout to use with the current
   data folder.
{noformat}

a small PR is added.

  was:
when issuing the command {{artemis help}}, the text for 'queue' is not correct, 
an obvious copy-paste problem.
{noformat}
$ bin/artemis help
Usage: artemis [COMMAND]
ActiveMQ Artemis Command Line
Commands:
  help   use 'help ' for more information
  auto-complete  Generates the auto complete script file to be used in bash or
   zsh.
  shell  JLine3 shell helping using the CLI
  producer   Send message(s) to a broker.
  transfer   Move messages from one destination towards another destination.
  consumer   Consume messages from a queue.
  browserBrowse messages on a queue.
  mask   Mask a password and print it out.
  versionPrint version information.
  perf   use 'help perf' for sub commands list
  check  use 'help check' for sub commands list
  queue  use 'help check' for sub commands list
  addressuse 'help address' for sub commands list
  activation use 'help activation' for sub commands list
  data   use 'help data' for sub commands list
  user   file-based user management. Use 'help user' for sub commands
   list.
  runRun the broker.
  stop   Stop the broker.
  kill   Kill a broker started with --allow-kill.
  perf-journal   Calculate the journal-buffer-timeout to use with the current
   data folder.
{noformat}

a small PR is added.


> 'artemis help' shows wrong text for 'queue'
> ---
>
> Key: ARTEMIS-4565
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4565
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Trivial
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> when issuing the command {{artemis help}}, the text for 'queue' is not 
> correct, an obvious copy-paste problem.
> {noformat}
> $ bin/artemis help
> Usage: artemis [COMMAND]
> ActiveMQ Artemis Command Line
> Commands:
>   help   use 'help ' for more information
>   auto-complete  Generates the auto complete script file to be used in bash or
>zsh.
>   shell  JLine3 shell helping using the CLI
>   producer   Send message(s) to a broker.
>   transfer   Move messages from one destination towards another 
> destination.
>   consumer   Consume messages from a queue.
>   browserBrowse messages on a queue.
>   mask   Mask a password and print it out.
>   versionPrint version information.
>   perf   use 'help perf' for sub commands list
>   check  use 'help check' for sub commands list
>   queue  use 'help check' for sub commands list <=== HERE >
>   addressuse 'help address' for sub commands list
>   activation use 'help activation' for sub commands list
>   data   use 'help data' for sub commands list
>   user   file-based user management. Use 'help user' for sub commands
>list.
>   runRun the broker.
>   stop   Stop the broker.
>   kill   Kill a broker started with --allow-kill.
>   perf-journal   Calculate the 

[jira] [Created] (ARTEMIS-4565) 'artemis help' shows wrong text for 'queue'

2024-01-12 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4565:
--

 Summary: 'artemis help' shows wrong text for 'queue'
 Key: ARTEMIS-4565
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4565
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


When issuing the command {{artemis help}}, the text for 'queue' is not correct: 
an obvious copy-paste problem.
{noformat}
$ bin/artemis help
Usage: artemis [COMMAND]
ActiveMQ Artemis Command Line
Commands:
  help   use 'help ' for more information
  auto-complete  Generates the auto complete script file to be used in bash or
   zsh.
  shell  JLine3 shell helping using the CLI
  producer   Send message(s) to a broker.
  transfer   Move messages from one destination towards another destination.
  consumer   Consume messages from a queue.
  browserBrowse messages on a queue.
  mask   Mask a password and print it out.
  versionPrint version information.
  perf   use 'help perf' for sub commands list
  check  use 'help check' for sub commands list
  queue  use 'help check' for sub commands list
  addressuse 'help address' for sub commands list
  activation use 'help activation' for sub commands list
  data   use 'help data' for sub commands list
  user   file-based user management. Use 'help user' for sub commands
   list.
  runRun the broker.
  stop   Stop the broker.
  kill   Kill a broker started with --allow-kill.
  perf-journal   Calculate the journal-buffer-timeout to use with the current
   data folder.
{noformat}

a small PR is added.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-3427) Page size doesn't change when selecting new page size.

2024-01-08 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804309#comment-17804309
 ] 

Erwin Dondorp edited comment on ARTEMIS-3427 at 1/8/24 3:34 PM:


-FYI: The behavior of the page-size-selector is the standard behavior of the 
PatternFly Pagination component-. And btw, with the page reload all other 
context is also lost: page number, search filter, and (on other pages) sort 
order.

Ignore that... Artemis is not using the standard PatternFly Pagination 
component, but a copy of it. Let's investigate more...


was (Author: erwindon):
FYI: The behavior of the page-size-selector is the standard behavior of the 
PatternFly Pagination component. And btw, with the page reload all other 
context is also lost: page number, search filter, and (on other pages) sort 
order.

> Page size doesn't change when selecting new page size. 
> ---
>
> Key: ARTEMIS-3427
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3427
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When there are many messages in the queue and we try to browse them we can 
> change the page size to show more messages on one page.
> How to reproduce:
>  # make sure there are many messages in the queue so we get multiple pages in 
> the queue browser
>  # select new page size in the drop down. (the page doesn't get reloaded).
>  # Click page navigation buttons to switch to next page. this triggers page 
> reload with new size.
>  # Click page navigation button to go back to page 1 to view desired result.
>  
> After a refresh the page size is back to the default 10, which is way too low 
> for a production environment.
> This ping-pong clicking to get the new page size is disruptive, especially in 
> combination with the bug related to the select-all checkbox (ARTEMIS-3428).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3427) Page size doesn't change when selecting new page size.

2024-01-08 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804309#comment-17804309
 ] 

Erwin Dondorp commented on ARTEMIS-3427:


FYI: The behavior of the page-size-selector is the standard behavior of the 
PatternFly Pagination component. And btw, with the page reload all other 
context is also lost: page number, search filter, and (on other pages) sort 
order.

> Page size doesn't change when selecting new page size. 
> ---
>
> Key: ARTEMIS-3427
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3427
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When there are many messages in the queue and we try to browse them we can 
> change the page size to show more messages on one page.
> How to reproduce:
>  # make sure there are many messages in the queue so we get multiple pages in 
> the queue browser
>  # select new page size in the drop down. (the page doesn't get reloaded).
>  # Click page navigation buttons to switch to next page. this triggers page 
> reload with new size.
>  # Click page navigation button to go back to page 1 to view desired result.
>  
> After a refresh the page size is back to the default 10, which is way too low 
> for a production environment.
> This ping-pong clicking to get the new page size is disruptive, especially in 
> combination with the bug related to the select-all checkbox (ARTEMIS-3428).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2024-01-02 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801991#comment-17801991
 ] 

Erwin Dondorp commented on ARTEMIS-3428:


I created a custom version of Artemis where the checkbox column is also made 
visible on the {{Addresses}} page and on the {{Queues}} page.
In both cases, the table on that page then shows exactly the same behavior as 
on the {{Browse queue}} page: after two table-page switches, the 'select-all' 
checkbox no longer works properly. Note also that any operation that causes 
data to be reloaded contributes to this problem; this includes the {{Reset}} 
button, applying a sort order, or applying a filter.
Since these two pages are much simpler, this rules out that the problem is 
caused by the extra complexity of the {{Browse queue}} page.
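
The lost-handler mechanism behind this behavior can be illustrated with a small DOM-free stand-in (the class and names are illustrative, not from the Artemis or PatternFly code): replacing an element with a rewritten copy produces something that looks the same but has none of the original listeners.

```javascript
// Minimal stand-in for a DOM element, to show why a rewritten table
// header stops reacting: listeners live on the old object and are not
// carried over to the rewritten copy.
class FakeElement {
  constructor() { this.listeners = {}; }
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  dispatch(type) {
    (this.listeners[type] || []).forEach(fn => fn());
  }
  cloneWithoutListeners() {
    // analogous to rewriting the header's HTML: same markup, no handlers
    return new FakeElement();
  }
}

let selected = 0;
let header = new FakeElement();
header.addEventListener('click', () => { selected += 1; });

header.dispatch('click');                 // select-all works once
header = header.cloneWithoutListeners();  // header gets rewritten
header.dispatch('click');                 // handler is gone; nothing happens
console.log(selected);                    // prints 1, not 2
```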

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When clicking checkbox to select all messages after switching pages, only the 
> clicked checkbox changes the state and no message is selected.
> How to reproduce
> very easy.
> make sure you have more messages than fits on one page (e.g. page size 10, 
> then 15 messages, page size 50, then 60 messages)
> click browse messages
> change page to the next page (2.. or more)
> change page back to the page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected.
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. You then 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10, though sometimes the new size persists between 
> refreshes.
> This makes deleting/retrying messages in the DLQ quite an ugly task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2024-01-02 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801887#comment-17801887
 ] 

Erwin Dondorp commented on ARTEMIS-3428:


The example that you referenced is in the context of "pagination" and happens 
to be without a select-all facility. See 
https://pf3.patternfly.org/v3/pattern-library/content-views/table-view/ for 
other examples of table-view; both samples use a "select-all" facility. It is 
safe to say that the "select-all" facility is part of the PatternFly library.

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When clicking checkbox to select all messages after switching pages, only the 
> clicked checkbox changes the state and no message is selected.
> How to reproduce
> very easy.
> make sure you have more messages than fits on one page (e.g. page size 10, 
> then 15 messages, page size 50, then 60 messages)
> click browse messages
> change page to the next page (2.. or more)
> change page back to the page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected.
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. You then 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10, though sometimes the new size persists between 
> refreshes.
> This makes deleting/retrying messages in the DLQ quite an ugly task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2023-12-30 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801409#comment-17801409
 ] 

Erwin Dondorp edited comment on ARTEMIS-3428 at 12/31/23 2:17 AM:
--

[~Mustermann] 

I have bisected this problem.

The problem occurs since the introduction of Hawtio2 in commit 
https://github.com/apache/activemq-artemis/commit/950e087c383a9a8451a9d7577c3ef6eec5332587.
 See also ARTEMIS-2838.

This is a commit between 2.15.0 and 2.16.0. The problem actually also exists in 
2.16.0.

From a technical point of view, the visible error is caused by the fact that 
the table header is rewritten. That causes all connected event-handlers to be 
lost, i.e. the "select-all" checkbox is reduced to a simple checkbox. The 
error seems to occur in the "patternfly" library that handles the GUI 
elements. For "table", see 
[https://pf3.patternfly.org/v3/pattern-library/content-views/table-view/]. For 
"paging", see 
[https://pf3.patternfly.org/v3/pattern-library/navigation/pagination/]. At 
this moment, I have no clue whether the problem is caused by these libraries, 
or whether the libraries are somehow misused.


was (Author: erwindon):
[~Mustermann] 

I have bisected this problem.

The problem occurs since the introduction of Hawtio2 in commit 
950e087c383a9a8451a9d7577c3ef6eec5332587. See also ARTEMIS-2838.

This is a commit between 2.15.0 and 2.16.0. The problem actually also exists in 
2.16.0.

From a technical point of view, the visible error is caused by the fact that 
the table header is rewritten. That causes all connected event-handlers to be 
lost, i.e. the "select-all" checkbox is reduced to a simple checkbox. The 
error seems to occur in the "patternfly" library that handles the GUI 
elements. For "table", see 
[https://pf3.patternfly.org/v3/pattern-library/content-views/table-view/]. For 
"paging", see 
[https://pf3.patternfly.org/v3/pattern-library/navigation/pagination/]. At 
this moment, I have no clue whether the problem is caused by these libraries, 
or whether the libraries are somehow misused.

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When clicking checkbox to select all messages after switching pages, only the 
> clicked checkbox changes the state and no message is selected.
> How to reproduce
> very easy.
> make sure you have more messages than fits on one page (e.g. page size 10, 
> then 15 messages, page size 50, then 60 messages)
> click browse messages
> change page to the next page (2.. or more)
> change page back to the page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected.
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. You then 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10, though sometimes the new size persists between 
> refreshes.
> This makes deleting/retrying messages in the DLQ quite an ugly task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2023-12-30 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801409#comment-17801409
 ] 

Erwin Dondorp commented on ARTEMIS-3428:


[~Mustermann] 

I have bisected this problem.

The problem occurs since the introduction of Hawtio2 in commit 
950e087c383a9a8451a9d7577c3ef6eec5332587. See also ARTEMIS-2838.

This is a commit between 2.15.0 and 2.16.0. The problem actually also exists in 
2.16.0.

From a technical point of view, the visible error is caused by the fact that 
the table header is rewritten. That causes all connected event-handlers to be 
lost, i.e. the "select-all" checkbox is reduced to a simple checkbox. The 
error seems to occur in the "patternfly" library that handles the GUI 
elements. For "table", see 
[https://pf3.patternfly.org/v3/pattern-library/content-views/table-view/]. For 
"paging", see 
[https://pf3.patternfly.org/v3/pattern-library/navigation/pagination/]. At 
this moment, I have no clue whether the problem is caused by these libraries, 
or whether the libraries are somehow misused.

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When clicking checkbox to select all messages after switching pages, only the 
> clicked checkbox changes the state and no message is selected.
> How to reproduce
> very easy.
> make sure you have more messages than fits on one page (e.g. page size 10, 
> then 15 messages, page size 50, then 60 messages)
> click browse messages
> change page to the next page (2.. or more)
> change page back to the page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected.
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. You then 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10, though sometimes the new size persists between 
> refreshes.
> This makes deleting/retrying messages in the DLQ quite an ugly task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4549) confirmation for delete/purge-queue mention "address" instead of "queue"

2023-12-29 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4549:
---
Description: 
When deleting or purging a queue, the title of the dialog box says:
 * "Confirm delete address" (see screenshot below); or
 * "Confirm purge address".

But these operations relate to a queue. Therefore the text should be: "Confirm 
delete queue" or "Confirm purge queue" instead.

On further investigation, the same happens for the related error messages from 
these operations.

A PR is added.

!image-2023-12-29-16-28-54-626.png!

  was:
When deleting or purging a queue, the title of the dialog box says:
 * "Confirm delete address" (see screenshot below); or
 * "Confirm purge address".

But these operations relate to a queue. Therefore the text should be: "Confirm 
delete queue" or "Confirm purge queue" instead.

on further investigation, the same happens for the related error messages from 
these operations.

a PR will be added.

!image-2023-12-29-16-28-54-626.png!


> confirmation for delete/purge-queue mention "address" instead of "queue"
> 
>
> Key: ARTEMIS-4549
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4549
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-12-29-16-28-54-626.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When deleting or purging a queue, the title of the dialog box says:
>  * "Confirm delete address" (see screenshot below); or
>  * "Confirm purge address".
> But these operations relate to a queue. Therefore the text should be: 
> "Confirm delete queue" or "Confirm purge queue" instead.
> on further investigation, the same happens for the related error messages 
> from these operations.
> a PR is added.
> !image-2023-12-29-16-28-54-626.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4549) confirmation for delete/purge-queue mention "address" instead of "queue"

2023-12-29 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4549:
---
Description: 
When deleting or purging a queue, the title of the dialog box says:
 * "Confirm delete address" (see screenshot below); or
 * "Confirm purge address".

But these operations relate to a queue. Therefore the text should be: "Confirm 
delete queue" or "Confirm purge queue" instead.

on further investigation, the same happens for the related error messages from 
these operations.

a PR will be added.

!image-2023-12-29-16-28-54-626.png!

  was:
When deleting or purging a queue, the title of the dialog box says:
 * "Confirm delete address" (see screenshot below); or
 * "Confirm purge address".

But these operations relate to a queue. Therefore the text should be: "Confirm 
delete queue" or "Confirm purge queue" instead.

a PR will be added.

!image-2023-12-29-16-28-54-626.png!


> confirmation for delete/purge-queue mention "address" instead of "queue"
> 
>
> Key: ARTEMIS-4549
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4549
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-12-29-16-28-54-626.png
>
>
> When deleting or purging a queue, the title of the dialog box says:
>  * "Confirm delete address" (see screenshot below); or
>  * "Confirm purge address".
> But these operations relate to a queue. Therefore the text should be: 
> "Confirm delete queue" or "Confirm purge queue" instead.
> on further investigation, the same happens for the related error messages 
> from these operations.
> a PR will be added.
> !image-2023-12-29-16-28-54-626.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4549) confirmation for delete/purge-queue mention "address" instead of "queue"

2023-12-29 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4549:
--

 Summary: confirmation for delete/purge-queue mention "address" 
instead of "queue"
 Key: ARTEMIS-4549
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4549
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.31.2
Reporter: Erwin Dondorp
 Attachments: image-2023-12-29-16-28-54-626.png

When deleting or purging a queue, the title of the dialog box says:
 * "Confirm delete address" (see screenshot below); or
 * "Confirm purge address".

But these operations relate to a queue. Therefore the text should be: "Confirm 
delete queue" or "Confirm purge queue" instead.

a PR will be added.

!image-2023-12-29-16-28-54-626.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2023-12-28 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801008#comment-17801008
 ] 

Erwin Dondorp commented on ARTEMIS-3428:


> Am I the only one using queue browser?
No, you are not. But don't forget that you have a specific use case: you are 
deleting individual messages manually. My guess is that most people just use 
the "delete-all-messages" function in the GUI for this. Personally, I have set 
up expiry on the ExpiryQueue/DLQ itself to minimize cleanup work, e.g. 
min-expiry-delay=1d, max-expiry-delay=1d, expiry-address=empty.
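
As a sketch only, such an expiry setup could look roughly like the following address-setting in broker.xml. The element names follow the Artemis address-settings schema, but the match value, the millisecond values (1 day = 86400000 ms), and the empty expiry-address are assumptions based on the comment above, not configuration taken from this thread:

```xml
<!-- Hypothetical sketch: force messages on the DLQ to expire after ~1 day
     and discard them instead of re-routing them. Values are illustrative. -->
<address-setting match="DLQ">
   <!-- clamp every message's expiry on this address to 1 day -->
   <min-expiry-delay>86400000</min-expiry-delay>
   <max-expiry-delay>86400000</max-expiry-delay>
   <!-- empty expiry-address: expired messages are dropped, not forwarded -->
   <expiry-address/>
</address-setting>
```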

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0, 2.19.0, 2.20.0, 2.21.0, 2.22.0, 2.23.0, 
> 2.24.0, 2.25.0, 2.26.0, 2.27.0, 2.28.0, 2.29.0, 2.30.0, 2.31.0, 2.31.1, 2.31.2
>Reporter: Max
>Priority: Major
>
> When clicking checkbox to select all messages after switching pages, only the 
> clicked checkbox changes the state and no message is selected.
> How to reproduce
> very easy.
> make sure you have more messages than fits on one page (e.g. page size 10, 
> then 15 messages, page size 50, then 60 messages)
> click browse messages
> change page to the next page (2.. or more)
> change page back to the page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected.
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. You then 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10, though sometimes the new size persists between 
> refreshes.
> This makes deleting/retrying messages in the DLQ quite an ugly task.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-3428) Select all button in web console doesn't work as expected

2023-12-21 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17472356#comment-17472356
 ] 

Erwin Dondorp edited comment on ARTEMIS-3428 at 12/21/23 11:17 PM:
---

I think this is a duplicate of ARTEMIS-3408. However, this use case is simpler: 
on the other issue the effect was thought to occur after deleting a page of 
messages. The side-effect of that is of course that an implicit page-navigation 
is done, thus making it the same issue.


was (Author: erwindon):
I think this is a duplicate of ARTEMIS-3428. however this use case is simpler 
as on the other issue the effect was thought to be after deleting a page of 
messages. The side-effect of that is of course that an implicit page-navigation 
is done, thus making it the same.

> Select all button in web console doesn't work as expected
> -
>
> Key: ARTEMIS-3428
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3428
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.17.0, 2.18.0
>Reporter: Max
>Priority: Major
>
> When clicking the checkbox to select all messages after switching pages, only 
> the clicked checkbox changes state and no message is selected.
> How to reproduce (very easy):
> make sure you have more messages than fit on one page (e.g. page size 10, 
> then 15 messages; page size 50, then 60 messages)
> click browse messages
> change page to the next page (2 or more)
> change back to page 1
> click select all. At this moment it should have selected all messages as 
> usual. Instead, only the clicked select-all checkbox changes state to 
> "selected" but the messages don't get selected. 
> This is a very annoying bug, especially in combination with another bug 
> related to page size change, ARTEMIS-3427. So after clicking back and forth 
> to get the desired page size displayed, select all doesn't work. Then you 
> need to refresh the page to get select all working again, but the page size 
> is back to the default 10 (though sometimes the new size persists between 
> refreshes).
> This makes deleting/retrying messages in the DLQ quite an ugly task.





[jira] [Closed] (ARTEMIS-3190) AMQ222289: Did not route to any matching bindings on dead-letter-address DLQ and auto-create-dead-letter-resources is true

2023-12-21 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-3190.
--
Resolution: Invalid

> AMQ222289: Did not route to any matching bindings on dead-letter-address DLQ 
> and auto-create-dead-letter-resources is true
> --
>
> Key: ARTEMIS-3190
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3190
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.17.0
>Reporter: Erwin Dondorp
>Assignee: Justin Bertram
>Priority: Major
>
> When a client rolls back the offered messages, the following error is visible:
> {noformat}
> WARN  [org.apache.activemq.artemis.core.server] AMQ222289: Did not route to 
> any matching bindings on dead-letter-address DLQ and 
> auto-create-dead-letter-resources is true; dropping message: 
> Reference[375]:RELIABLE:AMQPStandardMessage( [durable=true, messageID=375, 
> address=x1, size=189, applicationProperties={}, 
> properties=Properties{messageId=8bedcf71-a878-469f-9fdf-937c3dc44a02, 
> userId=null, to='x1', subject='null', replyTo='null', correlationId=null, 
> contentType=null, contentEncoding=null, absoluteExpiryTime=null, 
> creationTime=Fri Mar 19 12:07:02 UTC 2021, groupId='null', 
> groupSequence=null, replyToGroupId='null'}, extraProperties = 
> TypedProperties[_AMQ_AD=x1]]
> {noformat}
> The original source code of Artemis says "this shouldn't happen, but in case 
> it does it's better to log a message than just drop the message silently". 
> And indeed the message is dropped.
> setup:
> * use a client that uses createSession(true, Session.CLIENT_ACKNOWLEDGE)
> * let the client always rollback the offered message
> * server has auto-create-dead-letter-resources=true for all addresses
> after 10 tries, the server gives up on the message and tries to move it to 
> the DLQ address --> OK.
> when the DLQ address does not exist yet, it is created --> OK
> when the queue under DLQ does not exist yet, it is created --> OK
> but moving the message to that queue fails with the above message --> FAIL
> this results in message loss.





[jira] [Commented] (ARTEMIS-3190) AMQ222289: Did not route to any matching bindings on dead-letter-address DLQ and auto-create-dead-letter-resources is true

2023-12-21 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17799608#comment-17799608
 ] 

Erwin Dondorp commented on ARTEMIS-3190:


I've not seen this error in a long time. Closing it.

> AMQ222289: Did not route to any matching bindings on dead-letter-address DLQ 
> and auto-create-dead-letter-resources is true
> --
>
> Key: ARTEMIS-3190
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3190
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.17.0
>Reporter: Erwin Dondorp
>Assignee: Justin Bertram
>Priority: Major
>
> When a client rolls back the offered messages, the following error is visible:
> {noformat}
> WARN  [org.apache.activemq.artemis.core.server] AMQ222289: Did not route to 
> any matching bindings on dead-letter-address DLQ and 
> auto-create-dead-letter-resources is true; dropping message: 
> Reference[375]:RELIABLE:AMQPStandardMessage( [durable=true, messageID=375, 
> address=x1, size=189, applicationProperties={}, 
> properties=Properties{messageId=8bedcf71-a878-469f-9fdf-937c3dc44a02, 
> userId=null, to='x1', subject='null', replyTo='null', correlationId=null, 
> contentType=null, contentEncoding=null, absoluteExpiryTime=null, 
> creationTime=Fri Mar 19 12:07:02 UTC 2021, groupId='null', 
> groupSequence=null, replyToGroupId='null'}, extraProperties = 
> TypedProperties[_AMQ_AD=x1]]
> {noformat}
> The original source code of Artemis says "this shouldn't happen, but in case 
> it does it's better to log a message than just drop the message silently". 
> And indeed the message is dropped.
> setup:
> * use a client that uses createSession(true, Session.CLIENT_ACKNOWLEDGE)
> * let the client always rollback the offered message
> * server has auto-create-dead-letter-resources=true for all addresses
> after 10 tries, the server gives up on the message and tries to move it to 
> the DLQ address --> OK.
> when the DLQ address does not exist yet, it is created --> OK
> when the queue under DLQ does not exist yet, it is created --> OK
> but moving the message to that queue fails with the above message --> FAIL
> this results in message loss.





[jira] [Created] (ARTEMIS-4547) empty message shows "Unsupported message body type which cannot be displayed by hawtio"

2023-12-21 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4547:
--

 Summary: empty message shows "Unsupported message body type which 
cannot be displayed by hawtio"
 Key: ARTEMIS-4547
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4547
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


When an empty message is sent and then viewed in the web console, the error 
message "Unsupported message body type which cannot be displayed by hawtio" is 
shown.

I think the following will solve the problem:
{code}
--- 
a/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
+++ 
b/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
@@ -757,7 +757,7 @@ var Artemis;
 */
 function createBodyText(message) {
 Artemis.log.debug("loading message:" + message);
-if (message.text) {
+if (message.text !== undefined) {
 var body = message.text;
 var lenTxt = "" + body.length;
 message.textMode = "text (" + lenTxt + " chars)";
{code}

but I need to test it before I submit the PR
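
The underlying JavaScript pitfall is easy to demonstrate: an empty string is 
falsy, so `if (message.text)` treats an empty body the same as a missing one, 
while the `!== undefined` check in the patch keeps them apart. A minimal 
sketch; the `message` object shape and helper names are assumptions for 
illustration:

```javascript
const emptyMessage = { text: "" };   // message with an empty text body
const bodylessMessage = {};          // message with no text body at all

// old check: plain truthiness, misclassifies an empty body as "no body"
function hasBodyOld(message) {
  return Boolean(message.text);
}

// proposed check: only a truly absent body is rejected
function hasBodyNew(message) {
  return message.text !== undefined;
}

console.log(hasBodyOld(emptyMessage));    // false -- empty body wrongly rejected
console.log(hasBodyNew(emptyMessage));    // true  -- empty body accepted
console.log(hasBodyNew(bodylessMessage)); // false -- truly absent body rejected
```

With the old check, an empty message falls through to the "unsupported body 
type" branch; with the new one it is rendered as text of length 0.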





[jira] [Updated] (ARTEMIS-4547) empty message shows "Unsupported message body type which cannot be displayed by hawtio"

2023-12-21 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4547:
---
Description: 
When an empty message is sent and then viewed in the web console, the error 
message "Unsupported message body type which cannot be displayed by hawtio" is 
shown.
This includes empty messages sent from the console itself.

I think the following will solve the problem:
{code}
--- 
a/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
+++ 
b/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
@@ -757,7 +757,7 @@ var Artemis;
 */
 function createBodyText(message) {
 Artemis.log.debug("loading message:" + message);
-if (message.text) {
+if (message.text !== undefined) {
 var body = message.text;
 var lenTxt = "" + body.length;
 message.textMode = "text (" + lenTxt + " chars)";
{code}

but I need to test it before I submit the PR

  was:
When an empty message is sent and then viewed in the web console, the error 
message "Unsupported message body type which cannot be displayed by hawtio" is 
shown.

I think the following will solve the problem:
{code}
--- 
a/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
+++ 
b/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
@@ -757,7 +757,7 @@ var Artemis;
 */
 function createBodyText(message) {
 Artemis.log.debug("loading message:" + message);
-if (message.text) {
+if (message.text !== undefined) {
 var body = message.text;
 var lenTxt = "" + body.length;
 message.textMode = "text (" + lenTxt + " chars)";
{code}

but I need to test it before I submit the PR


> empty message shows "Unsupported message body type which cannot be displayed 
> by hawtio"
> ---
>
> Key: ARTEMIS-4547
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4547
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>
> When an empty message is sent and then viewed in the web console, the error 
> message "Unsupported message body type which cannot be displayed by hawtio" 
> is shown.
> This includes empty messages sent from the console itself.
> I think the following will solve the problem:
> {code}
> --- 
> a/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
> +++ 
> b/artemis-hawtio/artemis-plugin/src/main/webapp/plugin/js/components/browse.js
> @@ -757,7 +757,7 @@ var Artemis;
>  */
>  function createBodyText(message) {
>  Artemis.log.debug("loading message:" + message);
> -if (message.text) {
> +if (message.text !== undefined) {
>  var body = message.text;
>  var lenTxt = "" + body.length;
>  message.textMode = "text (" + lenTxt + " chars)";
> {code}
> but I need to test it before I submit the PR





[jira] [Commented] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-15 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17797153#comment-17797153
 ] 

Erwin Dondorp commented on ARTEMIS-4535:


PR is added

> invalid filter in GUI gives large stack-trace in logfile
> 
>
> Key: ARTEMIS-4535
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4535
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using the message browser screen (artemisBrowseQueue) with a lot of messages.
> Tried to search a message by giving its message-id (just "2149244887") in the 
> search-field.
> This resulted in no change in the gui, but a very large stack-trace in the 
> logfile.
> I know the expression syntax is described in 
> https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html.
> It's just that this stack-trace is a bit much...
> The stack-trace and a few extra lines at the beginning for context:
> {noformat}
> 2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 WARN  
> [org.apache.activemq.artemis.core.management.impl.QueueControlImpl] 
> AMQ229020: Invalid filter: 2149244887
> org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException:
>  AMQ229020: Invalid filter: 2149244887
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> ~[?:?]
>     at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
>  ~[?:?]
>     at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at 
> org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
> ~[jolokia-core-1.7.2.jar:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
> ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:99)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> 

[jira] [Commented] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-15 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17797145#comment-17797145
 ] 

Erwin Dondorp commented on ARTEMIS-4535:


When an invalid filter is entered, two identical error lines with code 
AMQ224006 also appear in the logfile. These are raised from 
artemis-server/src/main/java/org/apache/activemq/artemis/core/filter/impl/FilterImpl.java
 (lines 89 and 124); see the first two lines of the log in the issue 
description. By that time, the exception is of type FilterException, which 
makes these harder to distinguish from other failures. The priority to 
suppress them is lower because each is just one line in the logfile, so 
I'll leave them unchanged.


> invalid filter in GUI gives large stack-trace in logfile
> 
>
> Key: ARTEMIS-4535
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4535
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using the message browser screen (artemisBrowseQueue) with a lot of messages.
> Tried to search a message by giving its message-id (just "2149244887") in the 
> search-field.
> This resulted in no change in the gui, but a very large stack-trace in the 
> logfile.
> I know the expression syntax is described in 
> https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html.
> It's just that this stack-trace is a bit much...
> The stack-trace and a few extra lines at the beginning for context:
> {noformat}
> 2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 WARN  
> [org.apache.activemq.artemis.core.management.impl.QueueControlImpl] 
> AMQ229020: Invalid filter: 2149244887
> org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException:
>  AMQ229020: Invalid filter: 2149244887
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> ~[?:?]
>     at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
>  ~[?:?]
>     at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at 
> org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
> ~[jolokia-core-1.7.2.jar:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
> ~[jolokia-core-1.7.2.jar:?]
>     at 
> 

[jira] [Commented] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-14 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796974#comment-17796974
 ] 

Erwin Dondorp commented on ARTEMIS-4535:


I'll create a PR for that... (tomorrow)...

> invalid filter in GUI gives large stack-trace in logfile
> 
>
> Key: ARTEMIS-4535
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4535
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>
> Using the message browser screen (artemisBrowseQueue) with a lot of messages.
> Tried to search a message by giving its message-id (just "2149244887") in the 
> search-field.
> This resulted in no change in the gui, but a very large stack-trace in the 
> logfile.
> I know the expression syntax is described in 
> https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html.
> It's just that this stack-trace is a bit much...
> The stack-trace and a few extra lines at the beginning for context:
> {noformat}
> 2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 WARN  
> [org.apache.activemq.artemis.core.management.impl.QueueControlImpl] 
> AMQ229020: Invalid filter: 2149244887
> org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException:
>  AMQ229020: Invalid filter: 2149244887
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> ~[?:?]
>     at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
>  ~[?:?]
>     at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at 
> org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
> ~[jolokia-core-1.7.2.jar:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
> ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:99)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> 

[jira] [Commented] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-14 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17796951#comment-17796951
 ] 

Erwin Dondorp commented on ARTEMIS-4535:


The log seems to originate from an extra log statement in 
artemis-server/src/main/java/org/apache/activemq/artemis/core/management/impl/QueueControlImpl.java (line 1642).

> invalid filter in GUI gives large stack-trace in logfile
> 
>
> Key: ARTEMIS-4535
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4535
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>
> Using the message browser screen (artemisBrowseQueue) with a lot of messages.
> Tried to search a message by giving its message-id (just "2149244887") in the 
> search-field.
> This resulted in no change in the gui, but a very large stack-trace in the 
> logfile.
> I know the expression syntax is described in 
> https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html.
> It's just that this stack-trace is a bit much...
> The stack-trace and a few extra lines at the beginning for context:
> {noformat}
> 2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
> AMQ224006: Invalid filter: 2149244887
> 2023-12-14 20:24:42,957 WARN  
> [org.apache.activemq.artemis.core.management.impl.QueueControlImpl] 
> AMQ229020: Invalid filter: 2149244887
> org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException:
>  AMQ229020: Invalid filter: 2149244887
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at 
> org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>  ~[?:?]
>     at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) 
> ~[?:?]
>     at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) 
> ~[?:?]
>     at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
>  ~[?:?]
>     at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
> ~[?:?]
>     at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
> ~[?:?]
>     at 
> jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:?]
>     at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
>     at 
> org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
>  ~[artemis-server-2.31.2.jar:2.31.2]
>     at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
> ~[jolokia-core-1.7.2.jar:?]
>     at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
> ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
>  ~[jolokia-core-1.7.2.jar:?]
>     at 
> 

[jira] [Updated] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-14 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4535:
---
Description: 
Using the message browser screen (artemisBrowseQueue) with a lot of messages, I tried to find a message by entering its message-id (just "2149244887") in the search field.
This resulted in no change in the GUI, but a very large stack-trace in the logfile.

I know the expression syntax is described in 
https://activemq.apache.org/components/artemis/documentation/latest/filter-expressions.html.
It's just that this stack-trace is a bit much...
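An Artemis filter expression must be a boolean predicate over message properties, which a bare number is not. A minimal sketch of the kind of guard a console could apply before forwarding a search string to the broker (a hypothetical helper, not existing Artemis code; the operator list is an assumption, while `AMQSize` is one of the documented built-in filter identifiers):

```java
// Sketch only: a hypothetical pre-check for console search strings. The
// operator list below is an assumption; the real grammar is defined by the
// Artemis filter expression parser on the broker side.
public class FilterPreCheck {

    // Returns true when the string contains a comparison or boolean
    // operator, i.e. when it at least has a chance of being a predicate.
    static boolean looksLikePredicate(String s) {
        String[] operators = {"=", "<", ">", " LIKE ", " IN ", " IS ",
                              " AND ", " OR ", " NOT ", " BETWEEN "};
        String upper = " " + s.toUpperCase() + " ";
        for (String op : operators) {
            if (upper.contains(op)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A bare message id is not a predicate and would trigger AMQ229020.
        System.out.println(looksLikePredicate("2149244887"));  // false
        // A real predicate passes the guard.
        System.out.println(looksLikePredicate("AMQSize > 0")); // true
    }
}
```

Such a guard would let the console show "not a valid filter" instead of forwarding the string and logging the full stack-trace above.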

The stack-trace and a few extra lines at the beginning for context:
{noformat}
2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 WARN  
[org.apache.activemq.artemis.core.management.impl.QueueControlImpl] AMQ229020: 
Invalid filter: 2149244887
org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException: 
AMQ229020: Invalid filter: 2149244887
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
~[?:?]
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) ~[?:?]
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) ~[?:?]
    at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
    at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
 ~[?:?]
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at 
org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:99)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.callRequestDispatcher(BackendManager.java:429)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.handleRequest(BackendManager.java:158) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.executeRequest(HttpRequestHandler.java:197) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.handlePostRequest(HttpRequestHandler.java:137)
 ~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet$3.handleRequest(AgentServlet.java:493) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet.handleSecurely(AgentServlet.java:383) 
~[jolokia-core-1.7.2.jar:?]
    at 

[jira] [Updated] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-14 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4535:
---
Description: 
Using the message browser screen (artemisBrowseQueue) with a lot of messages, I tried to find a message by entering its message-id (just "2149244887") in the search field.
This resulted in no change in the GUI, but a very large stack-trace in the logfile.
It is likely related to the search string looking numeric, as searching for "x2149244887" gave the expected result: no messages selected and no error message in the logfile.

The stack-trace and a few extra lines at the beginning for context:
{noformat}
2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 WARN  
[org.apache.activemq.artemis.core.management.impl.QueueControlImpl] AMQ229020: 
Invalid filter: 2149244887
org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException: 
AMQ229020: Invalid filter: 2149244887
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
~[?:?]
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) ~[?:?]
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) ~[?:?]
    at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
    at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
 ~[?:?]
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at 
org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:99)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.callRequestDispatcher(BackendManager.java:429)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.handleRequest(BackendManager.java:158) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.executeRequest(HttpRequestHandler.java:197) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.handlePostRequest(HttpRequestHandler.java:137)
 ~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet$3.handleRequest(AgentServlet.java:493) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet.handleSecurely(AgentServlet.java:383) 
~[jolokia-core-1.7.2.jar:?]
    at 

[jira] [Created] (ARTEMIS-4535) invalid filter in GUI gives large stack-trace in logfile

2023-12-14 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4535:
--

 Summary: invalid filter in GUI gives large stack-trace in logfile
 Key: ARTEMIS-4535
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4535
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


Using the message browser screen (artemisBrowseQueue) with a lot of messages, I tried to find a message by entering its message-id (just "2149244887") in the search field.

This resulted in no change in the GUI, but a very large stack-trace in the logfile.


The stack-trace and a few extra lines at the beginning for context:
{noformat}
2023-12-14 20:24:42,956 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 ERROR [org.apache.activemq.artemis.core.server] 
AMQ224006: Invalid filter: 2149244887
2023-12-14 20:24:42,957 WARN  
[org.apache.activemq.artemis.core.management.impl.QueueControlImpl] AMQ229020: 
Invalid filter: 2149244887
org.apache.activemq.artemis.api.core.ActiveMQInvalidFilterExpressionException: 
AMQ229020: Invalid filter: 2149244887
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:90)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.filter.impl.FilterImpl.createFilter(FilterImpl.java:72)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at 
org.apache.activemq.artemis.core.management.impl.QueueControlImpl.browse(QueueControlImpl.java:1614)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71) ~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:260) ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 ~[?:?]
    at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) 
~[?:?]
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) ~[?:?]
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) ~[?:?]
    at javax.management.StandardMBean.invoke(StandardMBean.java:405) ~[?:?]
    at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:809)
 ~[?:?]
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) 
~[?:?]
    at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source) 
~[?:?]
    at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at 
org.apache.activemq.artemis.core.server.management.ArtemisMBeanServerBuilder$MBeanInvocationHandler.invoke(ArtemisMBeanServerBuilder.java:96)
 ~[artemis-server-2.31.2.jar:2.31.2]
    at com.sun.proxy.$Proxy31.invoke(Unknown Source) ~[?:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:98) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:161)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:99)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.callRequestDispatcher(BackendManager.java:429)
 ~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.backend.BackendManager.handleRequest(BackendManager.java:158) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.executeRequest(HttpRequestHandler.java:197) 
~[jolokia-core-1.7.2.jar:?]
    at 
org.jolokia.http.HttpRequestHandler.handlePostRequest(HttpRequestHandler.java:137)
 ~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet$3.handleRequest(AgentServlet.java:493) 
~[jolokia-core-1.7.2.jar:?]
    at org.jolokia.http.AgentServlet.handleSecurely(AgentServlet.java:383) 
~[jolokia-core-1.7.2.jar:?]
    at 

[jira] [Commented] (ARTEMIS-4521) Deleting divert using management API doesn't remove binding from journal

2023-12-01 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792055#comment-17792055
 ] 

Erwin Dondorp commented on ARTEMIS-4521:


For completeness: this also relates to retroactive addresses, as one of the objects created for a retroactive address is a divert.

> Deleting divert using management API doesn't remove binding from journal
> 
>
> Key: ARTEMIS-4521
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4521
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.31.2
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.32.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After a JMX {{createDivert}} followed by a JMX {{destroyDivert}} followed by 
> a broker restart the divert is visible again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4497) export summary on cluster-topology as metrics

2023-11-17 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787430#comment-17787430
 ] 

Erwin Dondorp commented on ARTEMIS-4497:


I've closed this issue (and its corresponding PR) because I now have a small Java program available that gathers this data using JMX and then makes it available as Prometheus metrics.
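The external-collector approach described above can be sketched as follows. The JMX service URL, the broker ObjectName, and the "live"/"backup" key names are placeholder assumptions for illustration, and the substring count stands in for real JSON parsing of the `listNetworkTopology` result:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch of an external collector: read the cluster topology over JMX and
// derive the two counts. Service URL, broker name, and JSON key names are
// placeholder assumptions.
public class TopologyCounts {

    // Counts occurrences of a key in the topology JSON string; a real
    // implementation would use a JSON parser instead of substring counting.
    static long countKey(String topologyJson, String key) {
        long n = 0;
        int i = topologyJson.indexOf(key);
        while (i >= 0) {
            n++;
            i = topologyJson.indexOf(key, i + key.length());
        }
        return n;
    }

    // Fetches the topology JSON via the broker's listNetworkTopology JMX
    // operation (requires a reachable broker; intentionally not run in main).
    static String fetchTopology(String serviceUrl, String brokerMBean) throws Exception {
        try (JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(serviceUrl))) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            return (String) conn.invoke(new ObjectName(brokerMBean),
                    "listNetworkTopology", new Object[0], new String[0]);
        }
    }

    public static void main(String[] args) {
        // Redacted sample shaped like a listNetworkTopology result.
        String sample = "[{\"live\":\"host1:61616\",\"backup\":\"host2:61616\"},"
                      + "{\"live\":\"host3:61616\"}]";
        System.out.println("artemis_cluster_lives_count " + countKey(sample, "\"live\":"));
        System.out.println("artemis_cluster_backups_count " + countKey(sample, "\"backup\":"));
    }
}
```

In use, `fetchTopology` would be called with something like `service:jmx:rmi:///jndi/rmi://broker-host:1099/jmxrmi` and `org.apache.activemq.artemis:broker="NAME"`, and the two printed lines exposed on a Prometheus scrape endpoint.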

> export summary on cluster-topology as metrics
> -
>
> Key: ARTEMIS-4497
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4497
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Clustering
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The broker has functions to export metrics to a metrics-interface-provider 
> (e.g. [https://github.com/rh-messaging/artemis-prometheus-metrics-plugin] for 
> collection using REST).
> This is only a subset of what can be collected using the JMX interface. (and 
> the JMX interface has real commands too).
> Information on the cluster-topology was not yet available in these metrics.
> Due to the numeric nature of the metrics, providing the full topology of the 
> cluster (as JMX operation {{listNetworkTopology}} does) is not possible. But 
> just providing the broker-count is already very valuable.
> the following 2 metrics can be exported:
>  * artemis_cluster_lives_count
>  * artemis_cluster_backups_count
> With this information available, administrators can be alerted about various 
> issues in the cluster. Detailed analysis can be done using the JMX info 
> and/or the Console.
> a PR has been added to add that information to the set of metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (ARTEMIS-4497) export summary on cluster-topology as metrics

2023-11-17 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4497.
--
Resolution: Abandoned

> export summary on cluster-topology as metrics
> -
>
> Key: ARTEMIS-4497
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4497
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Clustering
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The broker has functions to export metrics to a metrics-interface-provider 
> (e.g. [https://github.com/rh-messaging/artemis-prometheus-metrics-plugin] for 
> collection using REST).
> This is only a subset of what can be collected using the JMX interface. (and 
> the JMX interface has real commands too).
> Information on the cluster-topology was not yet available in these metrics.
> Due to the numeric nature of the metrics, providing the full topology of the 
> cluster (as JMX operation {{listNetworkTopology}} does) is not possible. But 
> just providing the broker-count is already very valuable.
> the following 2 metrics can be exported:
>  * artemis_cluster_lives_count
>  * artemis_cluster_backups_count
> With this information available, administrators can be alerted about various 
> issues in the cluster. Detailed analysis can be done using the JMX info 
> and/or the Console.
> a PR has been added to add that information to the set of metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4497) export summary on cluster-topology as metrics

2023-11-10 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4497:
--

 Summary: export summary on cluster-topology as metrics
 Key: ARTEMIS-4497
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4497
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Clustering
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


The broker has functions to export metrics to a metrics-interface-provider 
(e.g. [https://github.com/rh-messaging/artemis-prometheus-metrics-plugin] for 
collection using REST).

This is only a subset of what can be collected using the JMX interface (and the JMX interface offers real commands too).

Information on the cluster-topology was not yet available in these metrics.

Due to the numeric nature of the metrics, providing the full topology of the 
cluster (as JMX operation {{listNetworkTopology}} does) is not possible. But 
just providing the broker-count is already very valuable.

The following two metrics can be exported:
 * artemis_cluster_lives_count
 * artemis_cluster_backups_count

With this information available, administrators can be alerted about various 
issues in the cluster. Detailed analysis can be done using the JMX info and/or 
the Console.

A PR has been added to add this information to the set of metrics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4486) update metrics documentation

2023-11-02 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4486:
---
Description: 
In ARTEMIS-4456, a callback function to help the metrics plugin was added. 
However, the documentation for it was not updated yet.

Additionally, a few entries were missing from the list of metrics.

A PR has been added to update the metrics documentation.

  was:
In ARTEMIS-4456, a callback function to help the metrics plugin. However, the 
documentation for it was not updated yet.

Additionally, in the list of metrics, there were a few ones missing, these are 
added too.

a PR is added to update the metrics documentation.


> update metrics documentation
> 
>
> Key: ARTEMIS-4486
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4486
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ARTEMIS-4456, a callback function to help the metrics plugin was added. 
> However, the documentation for it was not updated yet.
> Additionally, a few entries were missing from the list of metrics.
> A PR has been added to update the metrics documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4486) update metrics documentation

2023-11-02 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782212#comment-17782212
 ] 

Erwin Dondorp commented on ARTEMIS-4486:


[~brusdev] fyi

> update metrics documentation
> 
>
> Key: ARTEMIS-4486
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4486
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In ARTEMIS-4456, a callback function to help the metrics plugin was added. 
> However, the documentation for it was not updated yet.
> Additionally, a few entries were missing from the list of metrics.
> A PR has been added to update the metrics documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4486) update metrics documentation

2023-11-02 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4486:
--

 Summary: update metrics documentation
 Key: ARTEMIS-4486
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4486
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


In https://issues.apache.org/jira/browse/ARTEMIS-4456, a callback function to help the metrics plugin was added. However, the documentation for it was not updated yet.

Additionally, a few entries were missing from the list of metrics; these are added too.

A PR has been added to update the metrics documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4486) update metrics documentation

2023-11-02 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4486:
---
Description: 
In ARTEMIS-4456, a callback function to help the metrics plugin was added. However, the documentation for it was not updated yet.

Additionally, a few entries were missing from the list of metrics; these are added too.

A PR has been added to update the metrics documentation.

  was:
In https://issues.apache.org/jira/browse/ARTEMIS-4456, a callback function to 
help the metrics plugin. However, the documentation for it was not updated yet.

Additionally, in the list of metrics, there were a few ones missing, these are 
added too.

a PR is added to update the metrics documentation.


> update metrics documentation
> 
>
> Key: ARTEMIS-4486
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4486
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.31.2
>Reporter: Erwin Dondorp
>Priority: Minor
>
> In ARTEMIS-4456, a callback function to help the metrics plugin was added.
> However, the documentation for it was not updated yet.
> Additionally, a few entries were missing from the list of metrics; these
> are added too.
> A PR has been added to update the metrics documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4485) console shows broker-attributes instead of the requested address-attributes

2023-11-02 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4485:
---
Description: 
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I have seen it several times now, so I am confident it is not caused by my own actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-template-cache] request for template at: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Getting template: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Found template for URL: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Adding template: attributeModal.html 
app-efb360a568.js:1:9987
[hawtio-jmx] attribute - nid:  
root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-console-assembly] Updated session. Response: 
Object { data: "ok", status: 200, headers: Wn(t)
, config: {…}, statusText: "OK", xhrStatus: "complete" }
app-efb360a568.js:1:9987
[hawtio-jmx] Updated attributes info cache for mbean 
org.apache.activemq.artemis:broker="XYZ-ABC-123" 
Object { op: {…}, attr: {…}, class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl", 
desc: "Information on the management interface of the MBean" }
attr: Object { AddressMemoryUsage: {…}, ManagementAddress: {…}, 
ConnectorServices: {…}, … }
class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl"
desc: "Information on the management interface of the MBean"
op: Object { removeAddressSettings: {…}, listSessions: (2) […], scaleDown: {…}, 
… }
: Object { … }
{noformat}

My observation is that the "targetNID" is incorrect.
The broker name that appears in it is truncated at the first "-" character.
In the redacted output, this is visible as "XYZ" (truncated) vs "XYZ-ABC-123" (correct).
When I manually fix the redirect URL to include the full broker name, the requested information is shown, which further confirms this.

After a shallow investigation:
I think it is likely a defect in (or misuse of) the function {{getRootNid}} in the file {{addresses.js}}.
It seems that {{getRootNid}} assumes that the broker name does not contain a {{-}} itself.
Therefore it accidentally shortens the broker name, leading to the problem above.
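The suspected truncation can be illustrated as follows. This is a transliteration into Java for illustration, not the actual console code; the node-id layout "root-org.apache.activemq.artemis-&lt;broker&gt;-addresses-&lt;address&gt;" is inferred from the console log above:

```java
// Illustration of the suspected getRootNid defect (addresses.js),
// transliterated to Java. The node-id layout is inferred from the captured
// console log; the real JavaScript code may differ.
public class RootNid {

    static final String PREFIX = "root-org.apache.activemq.artemis-";

    // Suspected behaviour: splitting on '-' assumes the broker name
    // contains no '-', so "XYZ-ABC-123" collapses to "XYZ".
    static String brokerNameNaive(String nid) {
        return nid.split("-")[2];
    }

    // Safer sketch: take everything between the fixed prefix and the
    // "-addresses-" marker, so dashes inside the broker name survive.
    static String brokerNameSafe(String nid) {
        int start = PREFIX.length();
        int end = nid.indexOf("-addresses-", start);
        return end < 0 ? nid.substring(start) : nid.substring(start, end);
    }

    public static void main(String[] args) {
        String nid = PREFIX + "XYZ-ABC-123" + "-addresses-FULL/ADDRESS/NAME/HERE";
        System.out.println(brokerNameNaive(nid)); // XYZ (truncated)
        System.out.println(brokerNameSafe(nid));  // XYZ-ABC-123
    }
}
```

With the naive split the redirect URL is built with the truncated name "XYZ", which matches the broker MBean instead of the address and would explain the broker-attributes being shown.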

  was:
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I've seen it several times 
now and not doubting my actions.

this time I was able to capture the brower console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ 

[jira] [Updated] (ARTEMIS-4485) console shows broker-attributes instead of the requested address-attributes

2023-11-02 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4485:
---
Description: 
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I have seen it several times now, so I am confident it is not caused by my own actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-template-cache] request for template at: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Getting template: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Found template for URL: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Adding template: attributeModal.html 
app-efb360a568.js:1:9987
[hawtio-jmx] attribute - nid:  
root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-console-assembly] Updated session. Response: 
Object { data: "ok", status: 200, headers: Wn(t)
, config: {…}, statusText: "OK", xhrStatus: "complete" }
app-efb360a568.js:1:9987
[hawtio-jmx] Updated attributes info cache for mbean 
org.apache.activemq.artemis:broker="XYZ-ABC-123" 
Object { op: {…}, attr: {…}, class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl", 
desc: "Information on the management interface of the MBean" }
attr: Object { AddressMemoryUsage: {…}, ManagementAddress: {…}, 
ConnectorServices: {…}, … }
class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl"
desc: "Information on the management interface of the MBean"
op: Object { removeAddressSettings: {…}, listSessions: (2) […], scaleDown: {…}, 
… }
: Object { … }
{noformat}

My observation is that the "targetNID" is incorrect: the broker name that
appears in it is truncated at the first "-" character. In the redacted output,
this is visible as "XYZ" (truncated) vs "XYZ-ABC-123" (correct).
When I manually fix the redirect URL to include the full broker name, the
requested information is shown, which confirms this further.

After a shallow investigation: I think this is likely a defect in (or misuse
of) the function {{getRootNid}} in the file {{addresses.js}}.
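A minimal, hypothetical Java sketch of the suspected failure mode: if the broker name is recovered from the node id by splitting on "-", a broker name that itself contains "-" is cut at the first one. The method names and extraction logic here are illustrative assumptions, not the actual addresses.js code.

```java
// Hypothetical illustration of the suspected truncation; the real logic
// lives in the console's addresses.js and may differ.
public class NidExample {
    static final String PREFIX = "root-org.apache.activemq.artemis-";

    // Naive extraction: split the remainder on '-'.
    // A broker name containing '-' is truncated at the first '-'.
    static String naiveBrokerName(String nid) {
        String rest = nid.substring(PREFIX.length());
        return rest.split("-")[0]; // "XYZ-ABC-123" becomes "XYZ"
    }

    // Safer extraction: strip the known prefix and the known "-addresses-..."
    // tail instead of splitting on every '-'.
    static String fixedBrokerName(String nid) {
        String rest = nid.substring(PREFIX.length());
        int tail = rest.indexOf("-addresses-");
        return tail >= 0 ? rest.substring(0, tail) : rest;
    }

    public static void main(String[] args) {
        String nid = "root-org.apache.activemq.artemis-XYZ-ABC-123-addresses-NAME";
        System.out.println(naiveBrokerName(nid)); // XYZ
        System.out.println(fixedBrokerName(nid)); // XYZ-ABC-123
    }
}
```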

  was:
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I've seen it several times
now and I am not doubting my actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 

[jira] [Updated] (ARTEMIS-4485) console shows broker-attributes instead of the requested address-attributes

2023-11-02 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4485:
---
Description: 
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I've seen it several times
now and I am not doubting my actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-template-cache] request for template at: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Getting template: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Found template for URL: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Adding template: attributeModal.html 
app-efb360a568.js:1:9987
[hawtio-jmx] attribute - nid:  
root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-console-assembly] Updated session. Response: 
Object { data: "ok", status: 200, headers: Wn(t)
, config: {…}, statusText: "OK", xhrStatus: "complete" }
app-efb360a568.js:1:9987
[hawtio-jmx] Updated attributes info cache for mbean 
org.apache.activemq.artemis:broker="XYZ-ABC-123" 
Object { op: {…}, attr: {…}, class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl", 
desc: "Information on the management interface of the MBean" }
attr: Object { AddressMemoryUsage: {…}, ManagementAddress: {…}, 
ConnectorServices: {…}, … }
class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl"
desc: "Information on the management interface of the MBean"
op: Object { removeAddressSettings: {…}, listSessions: (2) […], scaleDown: {…}, 
… }
: Object { … }
{noformat}

My observation is that the "targetNID" is incorrect: the broker name that
appears in it is truncated at the first "-" character. In the redacted output,
this is visible as "XYZ" (truncated) vs "XYZ-ABC-123" (correct).
When I manually fix the redirect URL to include the full broker name, the
requested information is shown, which confirms this further.

  was:
When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I've seen it several times
now and I am not doubting my actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;

[jira] [Created] (ARTEMIS-4485) console shows broker-attributes instead of the requested address-attributes

2023-11-02 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4485:
--

 Summary: console shows broker-attributes instead of the requested 
address-attributes
 Key: ARTEMIS-4485
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4485
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


When using the "attributes"-button at the end of a table-row in the Addresses 
page/table, sometimes the broker-attributes are shown instead of the expected 
address-attributes.

Unfortunately, this is not 100% reproducible, but I've seen it several times
now and I am not doubting my actions.

This time I was able to capture the browser console log:
{noformat}
[artemis-plugin] current 
nid=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[artemis-plugin] 
targetNID=root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing tasks: LocationChangeStartTasks 
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: ConParam with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-tasks] Executing task: RefreshUserSession with parameters: 
Array(3) [ {…}, 
"http://artemis-apps-0:58161/console/artemis/attributes?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;,
 
"http://artemis-apps-0:58161/console/artemis/artemisAddresses?tab=artemis=root-org.apache.activemq.artemis-XYZ-addresses-FULL%2FADDRESS%2FNAME%2FHERE;
 ]
app-efb360a568.js:1:9987
[hawtio-core-template-cache] request for template at: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Getting template: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Found template for URL: 
plugins/jmx/html/attributes/attributes.html app-efb360a568.js:1:9987
[hawtio-core-template-cache] Adding template: attributeModal.html 
app-efb360a568.js:1:9987
[hawtio-jmx] attribute - nid:  
root-org.apache.activemq.artemis-XYZ-addresses-FULL/ADDRESS/NAME/HERE 
app-efb360a568.js:1:9987
[hawtio-console-assembly] Updated session. Response: 
Object { data: "ok", status: 200, headers: Wn(t)
, config: {…}, statusText: "OK", xhrStatus: "complete" }
app-efb360a568.js:1:9987
[hawtio-jmx] Updated attributes info cache for mbean 
org.apache.activemq.artemis:broker="XYZ-ABC-123" 
Object { op: {…}, attr: {…}, class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl", 
desc: "Information on the management interface of the MBean" }
attr: Object { AddressMemoryUsage: {…}, ManagementAddress: {…}, 
ConnectorServices: {…}, … }
class: 
"org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl"
desc: "Information on the management interface of the MBean"
op: Object { removeAddressSettings: {…}, listSessions: (2) […], scaleDown: {…}, 
… }
: Object { … }
{noformat}

My observation is that the "targetNID" is incorrect: the broker name that
appears in it is truncated at the first "-" character. In the redacted output,
this is visible as "XYZ" (truncated) vs "XYZ-ABC-123" (correct).
When I manually fix the redirect URL to include the full broker name, the
requested information is shown, which confirms this further.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4483) Log.warn messages with AMQP during regular closing of AMQP clients.

2023-11-02 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782133#comment-17782133
 ] 

Erwin Dondorp commented on ARTEMIS-4483:


[~clebertsuconic] 

Which error message codes does this apply to?

Is it these two?:
{code:java}
2023-11-02 12:30:18,557 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222107: Cleared up resources for session 6e9fbac0-796a-11ee-87c0-fe52de550f9b
2023-11-02 12:30:18,557 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222061: Client connection failed, clearing up resources for session 
6ea18f81-796a-11ee-87c0-fe52de550f9b{code}
And what does the updated version change? Does it skip these messages, log
them at a lower level, or something else?
The referenced PR was just a little too extensive for me to predict the new
situation...

> Log.warn messages with AMQP during regular closing of AMQP clients.
> ---
>
> Key: ARTEMIS-4483
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4483
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.31.2
>Reporter: Clebert Suconic
>Assignee: Clebert Suconic
>Priority: Major
> Fix For: 2.32.0
>
>
> As I was fixing ARTEMIS-4476, I realized why regular closes in AMQP are 
> throwing log.warn in the log for cleaning up sessions and even invalid 
> connections at certain points.
> So, this is a complementary task to ARTEMIS-4476. It will be part of the same 
> git commit, but it's an unrelated issue that is going to be fixed along with it



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4479) supply metrics for cluster topology

2023-10-30 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4479:
--

 Summary: supply metrics for cluster topology
 Key: ARTEMIS-4479
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4479
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Clustering
Affects Versions: 2.31.2
Reporter: Erwin Dondorp


Artemis integrates well with Prometheus, via the
artemis-prometheus-metrics-plugin, for monitoring whether the internals of a
broker are working properly.

I found that information about the presence of the broker in a broker-cluster
is missing there. That information can already be found:
 * in the console, on the "Status" tab, where the number of live brokers and
the number of backup brokers are listed
 * in the JMX interface, where the listNetworkTopology function has similar
information, with even more details

The suggestion is to add two new simple metrics:
 * artemis_cluster_lives_count
 * artemis_cluster_backups_count
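A minimal sketch of how the two suggested gauge values could be derived from the kind of member list that listNetworkTopology returns. The Member shape here is an assumption for illustration; a real plugin would read the broker's actual topology.

```java
import java.util.List;

// Hypothetical sketch: counting live and backup members of a cluster topology.
// These two values would back artemis_cluster_lives_count and
// artemis_cluster_backups_count respectively.
public class ClusterTopologyMetrics {
    // Assumed, simplified view of one topology entry.
    record Member(String nodeId, boolean hasLive, boolean hasBackup) {}

    static long livesCount(List<Member> members) {
        return members.stream().filter(Member::hasLive).count();
    }

    static long backupsCount(List<Member> members) {
        return members.stream().filter(Member::hasBackup).count();
    }

    public static void main(String[] args) {
        List<Member> topology = List.of(
            new Member("node-1", true, true),   // live with a backup attached
            new Member("node-2", true, false)); // live without a backup
        System.out.println(livesCount(topology));   // 2
        System.out.println(backupsCount(topology)); // 1
    }
}
```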



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (ARTEMIS-4463) navigating to address attributes (using addresses tab) does not show these attributes

2023-10-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4463.
--
Resolution: Cannot Reproduce

> navigating to address attributes (using addresses tab) does not show these 
> attributes
> -
>
> Key: ARTEMIS-4463
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4463
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.0
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When viewing the list of addresses using the Addresses tab, a button 
> "Attributes" is visible for each row. The expectation is that navigation is 
> to the attributes for the corresponding row.
> Instead, it navigates to the attributes for whatever is selected in the 
> navigation/jmx tree.
> The similar function on the Queues tab works fine.
> I'll add a PR (it is not as repeatable as I thought, so please be patient)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4463) navigating to address attributes (using addresses tab) does not show these attributes

2023-10-23 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17778785#comment-17778785
 ] 

Erwin Dondorp commented on ARTEMIS-4463:


This is taking me too much time to reliably reproduce; I may return to it
later if I find a solution.

> navigating to address attributes (using addresses tab) does not show these 
> attributes
> -
>
> Key: ARTEMIS-4463
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4463
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.0
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When viewing the list of addresses using the Addresses tab, a button 
> "Attributes" is visible for each row. The expectation is that navigation is 
> to the attributes for the corresponding row.
> Instead, it navigates to the attributes for whatever is selected in the 
> navigation/jmx tree.
> The similar function on the Queues tab works fine.
> I'll add a PR (it is not as repeatable as I thought, so please be patient)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4463) navigating to address attributes (using addresses tab) does not show these attributes

2023-10-19 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4463:
---
Description: 
When viewing the list of addresses using the Addresses tab, a button 
"Attributes" is visible for each row. The expectation is that navigation is to 
the attributes for the corresponding row.

Instead, it navigates to the attributes for whatever is selected in the 
navigation/jmx tree.

The similar function on the Queues tab works fine.

I'll add a PR (it is not as repeatable as I thought, so please be patient)

  was:
When viewing the list of addresses using the Addresses tab, a button 
"Attributes" is visible for each row. The expectation is that navigation is to 
the attributes for the corresponding row.

Instead, it navigates to the attributes for whatever is selected in the 
navigation/jmx tree.

The similar function on the Queues tab works fine.

I've added a PR


> navigating to address attributes (using addresses tab) does not show these 
> attributes
> -
>
> Key: ARTEMIS-4463
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4463
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.0
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When viewing the list of addresses using the Addresses tab, a button 
> "Attributes" is visible for each row. The expectation is that navigation is 
> to the attributes for the corresponding row.
> Instead, it navigates to the attributes for whatever is selected in the 
> navigation/jmx tree.
> The similar function on the Queues tab works fine.
> I'll add a PR (it is not as repeatable as I thought, so please be patient)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4463) navigating to address attributes (using addresses tab) does not show these attributes

2023-10-19 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4463:
---
Description: 
When viewing the list of addresses using the Addresses tab, a button 
"Attributes" is visible for each row. The expectation is that navigation is to 
the attributes for the corresponding row.

Instead, it navigates to the attributes for whatever is selected in the 
navigation/jmx tree.

The similar function on the Queues tab works fine.

I've added a PR

  was:
When viewing the list of addresses using the Addresses tab, a button 
"Attributes" is visible for each row. The expectation is that navigation is to 
the attributes for the corresponding row.

Instead, it navigates to the attributes for whatever is selected in the 
navigation/jmx tree.

The similar function on the Queues tab works fine.

I will create a PR for this.


> navigating to address attributes (using addresses tab) does not show these 
> attributes
> -
>
> Key: ARTEMIS-4463
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4463
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.31.0
>Reporter: Erwin Dondorp
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When viewing the list of addresses using the Addresses tab, a button 
> "Attributes" is visible for each row. The expectation is that navigation is 
> to the attributes for the corresponding row.
> Instead, it navigates to the attributes for whatever is selected in the 
> navigation/jmx tree.
> The similar function on the Queues tab works fine.
> I've added a PR



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4463) navigating to address attributes (using addresses tab) does not show these attributes

2023-10-19 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4463:
--

 Summary: navigating to address attributes (using addresses tab) 
does not show these attributes
 Key: ARTEMIS-4463
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4463
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.31.0
Reporter: Erwin Dondorp


When viewing the list of addresses using the Addresses tab, a button 
"Attributes" is visible for each row. The expectation is that navigation is to 
the attributes for the corresponding row.

Instead, it navigates to the attributes for whatever is selected in the 
navigation/jmx tree.

The similar function on the Queues tab works fine.

I will create a PR for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4182) fill client-id for cluster connections

2023-10-10 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4182:
---
Description: 
When running Artemis in a cluster, the brokers have connections between them.
These are easily identifiable in the list of connections because the "Users"
field is filled in with the username that was specified in the setting
`cluster-user`.
But it is unclear where each connection goes.
!image-2023-02-25-13-27-08-542.png!

 

additional information:
the field "Client ID" is filled in with the remote hostname when using 
broker-connection/amqp-connection.

wish:
(also) fill in field ClientID of the cluster connections.
e.g. with the broker-name or from a new parameter `cluster-clientid`

  was:
When running Artemis in a cluster, the brokers have connections between them.
These are easily identifiable in the list of connections because the "Users"
field is filled in with the username that was specified in the setting
`cluster-user`.
But it is unclear where each connection goes.
!image-2023-02-25-13-27-08-542.png!

 

additional information:
the field "ClienID" is filled in with the remote hostname when using 
broker-connection/amqp-connection.

wish:
(also) fill in field ClientID of the cluster connections.
e.g. with the broker-name or from a new parameter `cluster-clientid`


> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png
>
>
> When running Artemis in a cluster, the brokers have connections between them.
> These are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in the setting 
> `cluster-user`.
> But it is unclear where each connection goes.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "Client ID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4182) fill client-id for cluster connections

2023-10-10 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4182:
---
Description: 
When running Artemis in a cluster, the brokers have connections between them.
These are easily identifiable in the list of connections because the "Users"
field is filled in with the username that was specified in the setting
`cluster-user`.
But it is unclear where each connection goes.
!image-2023-02-25-13-27-08-542.png!

 

additional information:
the field "ClienID" is filled in with the remote hostname when using 
broker-connection/amqp-connection.

wish:
(also) fill in field ClientID of the cluster connections.
e.g. with the broker-name or from a new parameter `cluster-clientid`

  was:
When running Artemis in a cluster, the brokers have connections between them.
These are easily identifiable in the list of connections because the "Users"
field is filled in with the username that was specified in the setting
`cluster-user`.
But it is unclear where each connection goes.
!image-2023-02-25-13-27-08-542.png! 

wish:
fill in field ClientID of the cluster connections.
e.g. with the broker-name or from a new parameter `cluster-clientid`


> fill client-id for cluster connections
> --
>
> Key: ARTEMIS-4182
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Major
> Attachments: image-2023-02-25-13-27-08-542.png
>
>
> When running Artemis in a cluster, the brokers have connections between them.
> These are easily identifiable in the list of connections because the "Users" 
> field is filled in with the username that was specified in the setting 
> `cluster-user`.
> But it is unclear where each connection goes.
> !image-2023-02-25-13-27-08-542.png!
>  
> additional information:
> the field "ClienID" is filled in with the remote hostname when using 
> broker-connection/amqp-connection.
> wish:
> (also) fill in field ClientID of the cluster connections.
> e.g. with the broker-name or from a new parameter `cluster-clientid`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (ARTEMIS-4350) MQTT queue creation does not honor the delimiter and always uses '.'

2023-07-06 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4350.
--
Resolution: Won't Fix

too much work and risk

> MQTT queue creation does not honor the delimiter and always uses '.'
> 
>
> Key: ARTEMIS-4350
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4350
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: MQTT
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: 1.png, 2.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When subscribing using an MQTT client on an address, a subscription queue is 
> created.
> For MQTT, the subscription-queue has a name that is constructed using the 
> client-id and the address-name. The parts are always joined using the {{.}} 
> character.
> It is more consistent when the _delimiter_ character is used for that.
> a PR will be added
> using {{/}} as separator, before:
> !1.png!
> using {{/}} as separator, after:
> !2.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4350) MQTT queue creation does not honor the delimiter and always uses '.'

2023-07-06 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4350:
---
 Attachment: 2.png
 1.png
Description: 
When subscribing using an MQTT client on an address, a subscription queue is 
created.
For MQTT, the subscription-queue has a name that is constructed using the 
client-id and the address-name. The parts are always joined using the {{.}} 
character.
It would be more consistent if the _delimiter_ character were used for that.

a PR will be added

using {{/}} as separator, before:
!1.png!

using {{/}} as separator, after:
!2.png!

  was:
When subscribing using an MQTT client on an address, a subscription queue is 
created.
For MQTT, the subscription-queue has a name that is constructed using the 
client-id and the address-name. The parts are always joined using the `.` 
character.
It would be more consistent if the _delimiter_ character were used for that.

a PR will be added


> MQTT queue creation does not honor the delimiter and always uses '.'
> 
>
> Key: ARTEMIS-4350
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4350
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: MQTT
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: 1.png, 2.png
>
>
> When subscribing using an MQTT client on an address, a subscription queue is 
> created.
> For MQTT, the subscription-queue has a name that is constructed using the 
> client-id and the address-name. The parts are always joined using the {{.}} 
> character.
> It is more consistent when the _delimiter_ character is used for that.
> a PR will be added
> using {{/}} as separator, before:
> !1.png!
> using {{/}} as separator, after:
> !2.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4350) MQTT queue creation does not honor the delimiter and always uses '.'

2023-07-06 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4350:
--

 Summary: MQTT queue creation does not honor the delimiter and 
always uses '.'
 Key: ARTEMIS-4350
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4350
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: MQTT
Affects Versions: 2.29.0
Reporter: Erwin Dondorp


When subscribing using an MQTT client on an address, a subscription queue is 
created.
For MQTT, the subscription-queue has a name that is constructed using the 
client-id and the address-name. The parts are always joined using the `.` 
character.
It would be more consistent if the _delimiter_ character were used for that.

a PR will be added
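A minimal sketch of the naming inconsistency described above: the subscription queue name is built from the client-id and the address, but the joining character is hard-coded to '.' instead of the configured wildcard delimiter. The method names are illustrative assumptions, not the broker's actual code.

```java
// Hypothetical illustration of MQTT subscription-queue naming.
public class MqttQueueName {
    // Current behavior: always joins with '.', ignoring the delimiter setting.
    static String hardCoded(String clientId, String address) {
        return clientId + "." + address;
    }

    // Suggested behavior: join with the configured wildcard delimiter,
    // consistent with the rest of the address naming.
    static String honoringDelimiter(String clientId, String address, char delimiter) {
        return clientId + delimiter + address;
    }

    public static void main(String[] args) {
        // With '/' configured as the delimiter:
        System.out.println(hardCoded("client1", "sensors/temp"));              // client1.sensors/temp
        System.out.println(honoringDelimiter("client1", "sensors/temp", '/')); // client1/sensors/temp
    }
}
```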



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4331) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17736534#comment-17736534
 ] 

Erwin Dondorp commented on ARTEMIS-4331:


I was already working on that and have added it.
But I have no way to actually test this reliably...

> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4331
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4331
> Project: ActiveMQ Artemis
>  Issue Type: Dependency upgrade
>  Components: clustering
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I encountered an NPE in the JGroups facility of Artemis.
> I found that it was already solved upstream in 
> https://issues.redhat.com/browse/JGRP-2707
> I am hereby requesting that the updated JGroups be included in Artemis.
> {noformat}
> 2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: 
> Artemis Console available at http://0.0.0.0:8161/console
> java.lang.NullPointerException
> at 
> org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
> at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
> at 
> org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
> ...
> {noformat}
> (first line is not part of the error message, but provided as context)
> The current version of JGroups is 5.2.0.Final; the fix is in 5.2.15.





[jira] [Updated] (ARTEMIS-4331) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4331:
---
Description: 
I encountered an NPE in the JGroups facility of Artemis.
I found that it was already solved upstream in
https://issues.redhat.com/browse/JGRP-2707
and hereby request that the updated JGroups be included in Artemis.


{noformat}
2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: Artemis 
Console available at http://0.0.0.0:8161/console
java.lang.NullPointerException
at 
org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
at 
org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
...
{noformat}
(first line is not part of the error message, but provided as context)

The current version of JGroups is 5.2.0.Final; the fix is in 5.2.15.

  was:
I encountered a NPE in the JGROUPS facility of Artemis.
I found that it was already solved upstream in 
https://issues.redhat.com/browse/JGRP-2707
hereby requesting that the updated JGroups is included in Artemis.


{noformat}
2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: Artemis 
Console available at http://0.0.0.0:8161/console
java.lang.NullPointerException
at 
org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
at 
org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
...
{noformat}
(first line is not part of the error message, but provided as context)


> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4331
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4331
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: clustering
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Priority: Major
>
> I encountered an NPE in the JGroups facility of Artemis.
> I found that it was already solved upstream in
> https://issues.redhat.com/browse/JGRP-2707
> and hereby request that the updated JGroups be included in Artemis.
> {noformat}
> 2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: 
> Artemis Console available at http://0.0.0.0:8161/console
> java.lang.NullPointerException
> at 
> org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
> at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
> at 
> org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
> ...
> {noformat}
> (first line is not part of the error message, but provided as context)
> The current version of JGroups is 5.2.0.Final; the fix is in 5.2.15.





[jira] [Updated] (ARTEMIS-4331) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4331:
---
Description: 
I encountered an NPE in the JGroups facility of Artemis.
I found that it was already solved upstream in
https://issues.redhat.com/browse/JGRP-2707
and hereby request that the updated JGroups be included in Artemis.


{noformat}
2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: Artemis 
Console available at http://0.0.0.0:8161/console
java.lang.NullPointerException
at 
org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
at 
org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
...
{noformat}
(first line is not part of the error message, but provided as context)

  was:
I encountered a NPE in the JGROUPS facility of Artemis.
I found that it was already solved upstream in 
https://issues.redhat.com/browse/JGRP-2707
hereby requesting that the updated JGroups is included in Artemis.


> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4331
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4331
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: clustering
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Priority: Major
>
> I encountered an NPE in the JGroups facility of Artemis.
> I found that it was already solved upstream in
> https://issues.redhat.com/browse/JGRP-2707
> and hereby request that the updated JGroups be included in Artemis.
> {noformat}
> 2023-06-23 14:37:59,999 INFO  [org.apache.activemq.artemis] AMQ241004: 
> Artemis Console available at http://0.0.0.0:8161/console
> java.lang.NullPointerException
> at 
> org.jgroups.protocols.FD_SOCK2.getPhysicalAddresses(FD_SOCK2.java:440)
> at org.jgroups.protocols.FD_SOCK2.connectTo(FD_SOCK2.java:390)
> at 
> org.jgroups.protocols.FD_SOCK2.connectToNextPingDest(FD_SOCK2.java:371)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:342)
> at org.jgroups.protocols.FD_SOCK2.handle(FD_SOCK2.java:31)
> ...
> {noformat}
> (first line is not part of the error message, but provided as context)





[jira] [Created] (ARTEMIS-4331) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4331:
--

 Summary: Upgrade JGroups
 Key: ARTEMIS-4331
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4331
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: clustering
Affects Versions: 2.29.0
Reporter: Erwin Dondorp


I encountered an NPE in the JGroups facility of Artemis.
I found that it was already solved upstream in
https://issues.redhat.com/browse/JGRP-2707
and hereby request that the updated JGroups be included in Artemis.





[jira] [Closed] (ARTEMIS-4330) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4330.
--
Resolution: Invalid

> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4330
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4330
> Project: ActiveMQ Artemis
>  Issue Type: Dependency upgrade
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Assignee: Justin Bertram
>Priority: Major
>
> I have noticed with the OWASP dependency-check plugin 
> (org.owasp:dependency-check-maven:5.0.0) that the currently used 
> org.jgroups:jgroups:3.6.13.Final has a [CWE-300: Channel Accessible by 
> Non-Endpoint 
> ('Man-in-the-Middle')|https://ossindex.sonatype.org/vuln/7c83fdab-9665-4e79-bc81-cc67fbb96417]
>  vulnerability. The problem has not been reported in the NVD database, 
> therefore there is no CVE record.
> The vulnerability has been 
> [addressed|https://github.com/belaban/JGroups/pull/348] in version 
> org.jgroups:jgroups:4.0.2.Final (at the moment the latest version is 
> org.jgroups:jgroups:4.1.1.Final).
> The org.jgroups:jgroups dependency would require an upgrade to resolve the 
> vulnerability.
>  





[jira] [Updated] (ARTEMIS-4330) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4330:
---
Affects Version/s: 2.29.0
   (was: 2.6.4)

> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4330
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4330
> Project: ActiveMQ Artemis
>  Issue Type: Dependency upgrade
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.21.0
>
>
> I have noticed with the OWASP dependency-check plugin 
> (org.owasp:dependency-check-maven:5.0.0) that the currently used 
> org.jgroups:jgroups:3.6.13.Final has a [CWE-300: Channel Accessible by 
> Non-Endpoint 
> ('Man-in-the-Middle')|https://ossindex.sonatype.org/vuln/7c83fdab-9665-4e79-bc81-cc67fbb96417]
>  vulnerability. The problem has not been reported in the NVD database, 
> therefore there is no CVE record.
> The vulnerability has been 
> [addressed|https://github.com/belaban/JGroups/pull/348] in version 
> org.jgroups:jgroups:4.0.2.Final (at the moment the latest version is 
> org.jgroups:jgroups:4.1.1.Final).
> The org.jgroups:jgroups dependency would require an upgrade to resolve the 
> vulnerability.
>  





[jira] [Updated] (ARTEMIS-4330) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4330:
---
Fix Version/s: (was: 2.21.0)

> Upgrade JGroups
> ---
>
> Key: ARTEMIS-4330
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4330
> Project: ActiveMQ Artemis
>  Issue Type: Dependency upgrade
>Affects Versions: 2.29.0
>Reporter: Erwin Dondorp
>Assignee: Justin Bertram
>Priority: Major
>
> I have noticed with the OWASP dependency-check plugin 
> (org.owasp:dependency-check-maven:5.0.0) that the currently used 
> org.jgroups:jgroups:3.6.13.Final has a [CWE-300: Channel Accessible by 
> Non-Endpoint 
> ('Man-in-the-Middle')|https://ossindex.sonatype.org/vuln/7c83fdab-9665-4e79-bc81-cc67fbb96417]
>  vulnerability. The problem has not been reported in the NVD database, 
> therefore there is no CVE record.
> The vulnerability has been 
> [addressed|https://github.com/belaban/JGroups/pull/348] in version 
> org.jgroups:jgroups:4.0.2.Final (at the moment the latest version is 
> org.jgroups:jgroups:4.1.1.Final).
> The org.jgroups:jgroups dependency would require an upgrade to resolve the 
> vulnerability.
>  





[jira] [Created] (ARTEMIS-4330) Upgrade JGroups

2023-06-23 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4330:
--

 Summary: Upgrade JGroups
 Key: ARTEMIS-4330
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4330
 Project: ActiveMQ Artemis
  Issue Type: Dependency upgrade
Affects Versions: 2.6.4
Reporter: Erwin Dondorp
Assignee: Justin Bertram
 Fix For: 2.21.0


I have noticed with the OWASP dependency-check plugin 
(org.owasp:dependency-check-maven:5.0.0) that the currently used 
org.jgroups:jgroups:3.6.13.Final has a [CWE-300: Channel Accessible by 
Non-Endpoint 
('Man-in-the-Middle')|https://ossindex.sonatype.org/vuln/7c83fdab-9665-4e79-bc81-cc67fbb96417]
 vulnerability. The problem has not been reported in the NVD database, 
therefore there is no CVE record.

The vulnerability has been 
[addressed|https://github.com/belaban/JGroups/pull/348] in version 
org.jgroups:jgroups:4.0.2.Final (at the moment the latest version is 
org.jgroups:jgroups:4.1.1.Final).

The org.jgroups:jgroups dependency would require an upgrade to resolve the 
vulnerability.

 





[jira] [Commented] (ARTEMIS-3692) Extend Functionality of Temporary Queue Namespace to Security Settings

2023-04-04 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17708386#comment-17708386
 ] 

Erwin Dondorp commented on ARTEMIS-3692:


[~scuilion] The same security setting, but on "{{*}}", should also be possible;
that would restrict it a bit more.

[~jbertram] The related PR was closed. Are there any other developments on
this?

> Extend Functionality of Temporary Queue Namespace to Security Settings
> --
>
> Key: ARTEMIS-3692
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3692
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Kevin O'Neal
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently the temporary-queue-namespace is only relevant for 
> address-settings, not security-settings. Therefore, the only way to enforce 
> security settings on temporary queues is to use the match "#".





[jira] [Closed] (ARTEMIS-4214) cluster without slaves should not show an error-icon

2023-03-22 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4214.
--
Resolution: Invalid

That is what "live-only" is for.

> cluster without slaves should not show an error-icon
> 
>
> Key: ARTEMIS-4214
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4214
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-03-22-11-18-36-785.png
>
>
> When an Artemis cluster has only live-nodes and no backup-nodes, the status
> screen shows an error-icon, but that combination is likely intended and thus
> should show an ok-icon.
> The icon state is determined by the value that is also shown for
> "replicating:".
> My proposal is to use the ok-icon when there are no backup nodes.
> A PR will be added.
> original status display:
> !image-2023-03-22-11-18-36-785.png!





[jira] [Created] (ARTEMIS-4214) cluster without slaves should not show an error-icon

2023-03-22 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4214:
--

 Summary: cluster without slaves should not show an error-icon
 Key: ARTEMIS-4214
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4214
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Web Console
Affects Versions: 2.28.0
Reporter: Erwin Dondorp
 Attachments: image-2023-03-22-11-18-36-785.png

When an Artemis cluster has only live-nodes and no backup-nodes, the status
screen shows an error-icon, but that combination is likely intended and thus
should show an ok-icon.

The icon state is determined by the value that is also shown for "replicating:".

My proposal is to use the ok-icon when there are no backup nodes.
A PR will be added.

original status display:
!image-2023-03-22-11-18-36-785.png!
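The proposed icon choice boils down to a rule like the following (a minimal sketch with hypothetical names; the real console logic differs):

```javascript
// Sketch of the proposed rule: a cluster with zero backup nodes is
// considered intentional, so only flag an error when backups exist
// but replication is not running. Names are hypothetical.
function statusIcon(backupCount, replicating) {
  if (backupCount === 0) {
    return 'ok'; // no backups configured: treat as intended
  }
  return replicating ? 'ok' : 'error';
}

console.log(statusIcon(0, false)); // cluster without backups -> ok
console.log(statusIcon(2, false)); // backups present, not replicating -> error
```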





[jira] [Comment Edited] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693705#comment-17693705
 ] 

Erwin Dondorp edited comment on ARTEMIS-4183 at 2/26/23 9:03 PM:
-

ANALYSIS:

The GUI figures out what to display in the function {{load()}}.
Just after the initial call to {{load()}}, an assignment to the variable
{{ctrl.hiddenRelations}} is made.
But when the same function is called from the Refresh-button, that assignment
is not made, so the old value is kept.

The variable {{ctrl.hiddenRelations}} was introduced on 6-jun-2021 for
ARTEMIS-3043.

SOLUTIONS:

* solution 1a:
also make the assignment after the {{load()}} call for the Refresh-button
* solution 1b:
move the assignment into the {{load()}} function
* solution 2:
do not maintain the variable {{ctrl.hiddenRelations}} and only use the
variable {{ctrl.relations}}

SOLUTION:

As the use of the variable {{ctrl.hiddenRelations}} is unclear to me,
solution #2 has been implemented in the PR.

 [~andytaylor]: can you comment on this solution?


was (Author: erwindon):
ANALYSIS:

the GUI figures out what to display in function {{load()}}
just after the initial call to {{load()}}, an assignment to variable 
{{ctrl.hiddenRelations}} is done.
but when the same function is called from the Refresh-button, such assignment 
is not done, thus keeping the old value.

The variable {{ctrl.hiddenRelations}} was introduced on 6-jun-2021 for 
ARTEMIS-3043

SOLUTIONS:

solution 1a:
add the assignment also after the {{load()}} call for the Refresh-button
solution 1b:
move the assignment to the {{load()}} function
solution 2:
do not maintain variable {{ctrl.hiddenRelations}} and only use variable 
{{ctrl.relations}}

SOLUTION:

As the use of variable {{ctrl.hiddenRelations}} is unclear to me, solution #2 
has been implemented in the PR.

 [~andytaylor]: can you comment on this solution?

> Broker diagram is not properly updated when new nodes become available
> --
>
> Key: ARTEMIS-4183
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-02-26-16-36-25-761.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using an auto discovery cluster.
> When the number of nodes is reduced, the broker diagram is properly updated 
> when the Refresh button is used.
> When the number of nodes is enlarged, the broker diagram is _not_ properly 
> updated when the Refresh button is used. The new nodes are visible, but their 
> cluster-connections are not shown. The diagram can easily be fixed by using 
> the browser refresh button instead, or by temporarily switching tabs within 
> the Artemis console.
> The following diagram shows the effect:
> !image-2023-02-26-16-36-25-761.png! 
> left image: initial situation with 5 nodes
> middle image: 3 nodes are added, and after Refresh button is used
> right image: after page refresh
> -I know the JS code from the Console fairly well. I suspect a synchronisation 
> error between the variables {{relations}} and {{{}hiddenRelations{}}}. I'll 
> investigate and try to add a PR.-
> A PR is added.





[jira] [Updated] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4183:
---
Description: 
Using an auto discovery cluster.
When the number of nodes is reduced, the broker diagram is properly updated 
when the Refresh button is used.
When the number of nodes is enlarged, the broker diagram is _not_ properly 
updated when the Refresh button is used. The new nodes are visible, but their 
cluster-connections are not shown. The diagram can easily be fixed by using the 
browser refresh button instead, or by temporarily switching tabs within the 
Artemis console.

The following diagram shows the effect:
!image-2023-02-26-16-36-25-761.png! 
left image: initial situation with 5 nodes
middle image: 3 nodes are added, and after Refresh button is used
right image: after page refresh

-I know the JS code from the Console fairly well. I suspect a synchronisation 
error between the variables {{relations}} and {{{}hiddenRelations{}}}. I'll 
investigate and try to add a PR.-
A PR is added.

  was:
Using an auto discovery cluster.
When the number of nodes is reduced, the broker diagram is properly updated 
when the Refresh button is used.
When the number of nodes is enlarged, the broker diagram is _not_ properly 
updated when the Refresh button is used. The new nodes are visible, but their 
cluster-connections are not shown. The diagram can easily be fixed by using the 
browser refresh button instead, or by temporarily switching tabs within the 
Artemis console.

The following diagram shows the effect:
!image-2023-02-26-16-36-25-761.png! 
left image: initial situation with 5 nodes
middle image: 3 nodes are added, and after Refresh button is used
right image: after page refresh

I know the JS code from the Console fairly well. I suspect a synchronisation 
error between the variables {{relations}} and {{hiddenRelations}}. I'll 
investigate and try to add a PR.


> Broker diagram is not properly updated when new nodes become available
> --
>
> Key: ARTEMIS-4183
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-02-26-16-36-25-761.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using an auto discovery cluster.
> When the number of nodes is reduced, the broker diagram is properly updated 
> when the Refresh button is used.
> When the number of nodes is enlarged, the broker diagram is _not_ properly 
> updated when the Refresh button is used. The new nodes are visible, but their 
> cluster-connections are not shown. The diagram can easily be fixed by using 
> the browser refresh button instead, or by temporarily switching tabs within 
> the Artemis console.
> The following diagram shows the effect:
> !image-2023-02-26-16-36-25-761.png! 
> left image: initial situation with 5 nodes
> middle image: 3 nodes are added, and after Refresh button is used
> right image: after page refresh
> -I know the JS code from the Console fairly well. I suspect a synchronisation 
> error between the variables {{relations}} and {{{}hiddenRelations{}}}. I'll 
> investigate and try to add a PR.-
> A PR is added.





[jira] [Comment Edited] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693705#comment-17693705
 ] 

Erwin Dondorp edited comment on ARTEMIS-4183 at 2/26/23 8:46 PM:
-

ANALYSIS:

The GUI figures out what to display in the function {{load()}}.
Just after the initial call to {{load()}}, an assignment to the variable
{{ctrl.hiddenRelations}} is made.
But when the same function is called from the Refresh-button, that assignment
is not made, so the old value is kept.

The variable {{ctrl.hiddenRelations}} was introduced on 6-jun-2021 for
ARTEMIS-3043.

SOLUTIONS:

solution 1a:
also make the assignment after the {{load()}} call for the Refresh-button
solution 1b:
move the assignment into the {{load()}} function
solution 2:
do not maintain the variable {{ctrl.hiddenRelations}} and only use the
variable {{ctrl.relations}}

SOLUTION:

As the use of the variable {{ctrl.hiddenRelations}} is unclear to me,
solution #2 has been implemented in the PR.

 [~andytaylor]: can you comment on this solution?


was (Author: erwindon):
ANALYSIS:

the GUI figures out what to display in function {{load()}}
just after the initial call to {{load()}}, an assignment to variable 
{{ctrl.hiddenRelations}} is done.
but when the same function is called from the Refresh-button, such assignment 
is not done, thus keeping the old value.

The variable {{ctrl.hiddenRelations}} was introduced on 6-jun-2021 for 
ARTEMIS-3043

SOLUTIONS:

solution 1a:
add the assignment also after the {{load()}} call for the Refresh-button
solution 1b:
move the assignment to the {{load()}} function
solution 2:
do not maintain variable {{ctrl.hiddenRelations}} and only use variable 
{{ctrl.relations}}

SOLUTION:

As the use of variable {{ctrl.hiddenRelations}} is unclear to me, solution #2 
has been implemented in the PR.

@andytaylor: can you comment on this solution?

> Broker diagram is not properly updated when new nodes become available
> --
>
> Key: ARTEMIS-4183
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-02-26-16-36-25-761.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Using an auto discovery cluster.
> When the number of nodes is reduced, the broker diagram is properly updated 
> when the Refresh button is used.
> When the number of nodes is enlarged, the broker diagram is _not_ properly 
> updated when the Refresh button is used. The new nodes are visible, but their 
> cluster-connections are not shown. The diagram can easily be fixed by using 
> the browser refresh button instead, or by temporarily switching tabs within 
> the Artemis console.
> The following diagram shows the effect:
> !image-2023-02-26-16-36-25-761.png! 
> left image: initial situation with 5 nodes
> middle image: 3 nodes are added, and after Refresh button is used
> right image: after page refresh
> I know the JS code from the Console fairly well. I suspect a synchronisation 
> error between the variables {{relations}} and {{hiddenRelations}}. I'll 
> investigate and try to add a PR.





[jira] [Commented] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693705#comment-17693705
 ] 

Erwin Dondorp commented on ARTEMIS-4183:


ANALYSIS:

The GUI figures out what to display in the function {{load()}}.
Just after the initial call to {{load()}}, an assignment to the variable
{{ctrl.hiddenRelations}} is made.
But when the same function is called from the Refresh-button, that assignment
is not made, so the old value is kept.

The variable {{ctrl.hiddenRelations}} was introduced on 6-jun-2021 for
ARTEMIS-3043.

SOLUTIONS:

solution 1a:
also make the assignment after the {{load()}} call for the Refresh-button
solution 1b:
move the assignment into the {{load()}} function
solution 2:
do not maintain the variable {{ctrl.hiddenRelations}} and only use the
variable {{ctrl.relations}}

SOLUTION:

As the use of the variable {{ctrl.hiddenRelations}} is unclear to me,
solution #2 has been implemented in the PR.

@andytaylor: can you comment on this solution?
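Solution #2 could look roughly like this (a minimal sketch of the idea only; the controller shape and names are hypothetical, not the actual AngularJS console code):

```javascript
// Sketch of solution #2: keep only ctrl.relations and rebuild it on every
// load() call, so the Refresh-button path behaves exactly like the initial
// page load and there is no stale ctrl.hiddenRelations copy left behind.
// Names are hypothetical, not the real console controller.
function makeDiagramController() {
  const ctrl = { relations: [] };
  ctrl.load = function (nodes) {
    ctrl.relations = []; // rebuilt from scratch on each call, incl. Refresh
    nodes.forEach(function (node) {
      (node.connections || []).forEach(function (target) {
        ctrl.relations.push({ from: node.id, to: target });
      });
    });
  };
  return ctrl;
}

const ctrl = makeDiagramController();
ctrl.load([{ id: 'a', connections: ['b'] }, { id: 'b', connections: ['a'] }]);
console.log(ctrl.relations.length); // 2
// later, new nodes appear and the Refresh button calls load() again:
ctrl.load([
  { id: 'a', connections: ['b', 'c'] },
  { id: 'b', connections: ['a', 'c'] },
  { id: 'c', connections: ['a', 'b'] },
]);
console.log(ctrl.relations.length); // 6
```

Because the relation list is recomputed on every call, the cluster-connections of newly discovered nodes appear after a Refresh without needing a full page reload.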

> Broker diagram is not properly updated when new nodes become available
> --
>
> Key: ARTEMIS-4183
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-02-26-16-36-25-761.png
>
>
> Using an auto discovery cluster.
> When the number of nodes is reduced, the broker diagram is properly updated 
> when the Refresh button is used.
> When the number of nodes is enlarged, the broker diagram is _not_ properly 
> updated when the Refresh button is used. The new nodes are visible, but their 
> cluster-connections are not shown. The diagram can easily be fixed by using 
> the browser refresh button instead, or by temporarily switching tabs within 
> the Artemis console.
> The following diagram shows the effect:
> !image-2023-02-26-16-36-25-761.png! 
> left image: initial situation with 5 nodes
> middle image: 3 nodes are added, and after Refresh button is used
> right image: after page refresh
> I know the JS code from the Console fairly well. I suspect a synchronisation 
> error between the variables {{relations}} and {{hiddenRelations}}. I'll 
> investigate and try to add a PR.





[jira] [Updated] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4183:
---
Description: 
Using an auto discovery cluster.
When the number of nodes is reduced, the broker diagram is properly updated 
when the Refresh button is used.
When the number of nodes is enlarged, the broker diagram is _not_ properly 
updated when the Refresh button is used. The new nodes are visible, but their 
cluster-connections are not shown. The diagram can easily be fixed by using the 
browser refresh button instead, or by temporarily switching tabs within the 
Artemis console.

The following diagram shows the effect:
!image-2023-02-26-16-36-25-761.png! 
left image: initial situation with 5 nodes
middle image: 3 nodes are added, and after Refresh button is used
right image: after page refresh

I know the JS code from the Console fairly well. I suspect a synchronisation 
error between the variables {{relations}} and {{hiddenRelations}}. I'll 
investigate and try to add a PR.

  was:
Using an auto discovery cluster.
When the number of nodes is reduced, the broker diagram is properly updated 
when the Refresh button is used.
When the number of nodes is enlarged, the broker diagram is _not_ properly 
updated when the Refresh button is used. The new nodes are visible, but their 
cluster-connections are not shown. The diagram can easily be fixed by using the 
browser refresh button instead, or by temporarily switching tabs within the 
Artemis console.

The following diagram shows the effect:
!image-2023-02-26-16-36-25-761.png! 
left image: initial situation with 5 nodes
middle image: 3 nodes are added, and after Refresh button is used
right image: after page refresh

I know the JS code from the Console fairly well. I suspect a synchronisation 
error between the variables `relations` and `hiddenRelations`. I'll investigate 
and try to add a PR.


> Broker diagram is not properly updated when new nodes become available
> --
>
> Key: ARTEMIS-4183
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
> Attachments: image-2023-02-26-16-36-25-761.png
>
>
> Using an auto discovery cluster.
> When the number of nodes is reduced, the broker diagram is properly updated 
> when the Refresh button is used.
> When the number of nodes is enlarged, the broker diagram is _not_ properly 
> updated when the Refresh button is used. The new nodes are visible, but their 
> cluster-connections are not shown. The diagram can easily be fixed by using 
> the browser refresh button instead, or by temporarily switching tabs within 
> the Artemis console.
> The following diagram shows the effect:
> !image-2023-02-26-16-36-25-761.png! 
> left image: initial situation with 5 nodes
> middle image: 3 nodes are added, and after Refresh button is used
> right image: after page refresh
> I know the JS code from the Console fairly well. I suspect a synchronisation 
> error between the variables {{relations}} and {{hiddenRelations}}. I'll 
> investigate and try to add a PR.





[jira] [Created] (ARTEMIS-4183) Broker diagram is not properly updated when new nodes become available

2023-02-26 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4183:
--

 Summary: Broker diagram is not properly updated when new nodes 
become available
 Key: ARTEMIS-4183
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4183
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.28.0
Reporter: Erwin Dondorp
 Attachments: image-2023-02-26-16-36-25-761.png

Using an auto-discovery cluster.
When the number of nodes is reduced, the broker diagram is properly updated 
when the Refresh button is used.
When the number of nodes is increased, the broker diagram is _not_ properly 
updated when the Refresh button is used. The new nodes are visible, but their 
cluster-connections are not shown. The diagram can easily be fixed by using the 
browser refresh button instead, or by temporarily switching tabs within the 
Artemis console.

The following diagram shows the effect:
!image-2023-02-26-16-36-25-761.png! 
left image: initial situation with 5 nodes
middle image: 3 nodes are added, and after Refresh button is used
right image: after page refresh

I know the JS code from the Console fairly well. I suspect a synchronisation 
error between the variables `relations` and `hiddenRelations`. I'll investigate 
and try to add a PR.
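The suspected fix can be illustrated with a small sketch. This is not the actual Console code: the function name `syncRelations` and the `{source, target}` relation shape are hypothetical; only the variable names `relations` and `hiddenRelations` come from this issue.

```javascript
// Hypothetical sketch: after a Refresh, make sure `hiddenRelations` does not
// still contain relations that were just (re)discovered, otherwise new nodes
// show up without their cluster-connections. Object shapes are assumed.
function syncRelations(relations, hiddenRelations, discovered) {
  const sameRel = (a, b) => a.source === b.source && a.target === b.target;
  for (const rel of discovered) {
    // add cluster-connections of newly discovered nodes
    if (!relations.some(r => sameRel(r, rel))) {
      relations.push(rel);
    }
    // drop stale hidden entries so a re-discovered relation is drawn again
    const i = hiddenRelations.findIndex(r => sameRel(r, rel));
    if (i !== -1) {
      hiddenRelations.splice(i, 1);
    }
  }
  return { relations, hiddenRelations };
}
```

The point of the sketch is only that both arrays must be updated in the same refresh pass; which of the two is actually stale in the Console would be determined by the investigation.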



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4182) fill client-id for cluster connections

2023-02-25 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4182:
--

 Summary: fill client-id for cluster connections
 Key: ARTEMIS-4182
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4182
 Project: ActiveMQ Artemis
  Issue Type: Wish
  Components: Broker
Affects Versions: 2.28.0
Reporter: Erwin Dondorp
 Attachments: image-2023-02-25-13-27-08-542.png

When running Artemis in a cluster, the brokers have connections between them.
These are easily identifiable in the list of connections because the "Users" 
field is filled in with the username that was specified in the setting 
`cluster-user`.
But it is unclear where each connection goes.
!image-2023-02-25-13-27-08-542.png! 

Wish:
fill in the ClientID field of the cluster connections,
e.g. with the broker-name or from a new parameter `cluster-clientid`.
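A sketch of how the wish could look in {{broker.xml}}. The `<cluster-clientid>` element is the hypothetical parameter proposed here and does not exist in current Artemis releases; the connector and discovery-group names are placeholders.

```xml
<!-- Sketch only: <cluster-clientid> is the parameter proposed in this issue
     and does NOT exist in current Artemis releases. Connector and
     discovery-group names are placeholders. -->
<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <discovery-group-ref discovery-group-name="dg-group1"/>
   <!-- proposed: value to expose as the ClientID of the cluster connection,
        e.g. the local broker name, so the peer is identifiable -->
   <cluster-clientid>broker1</cluster-clientid>
</cluster-connection>
```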



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (ARTEMIS-4173) stacktrace for cluster connection

2023-02-17 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp closed ARTEMIS-4173.
--
Resolution: Invalid

> stacktrace for cluster connection
> -
>
> Key: ARTEMIS-4173
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4173
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
>
> setup is a first attempt to use JGROUPS with FILE_PING.
> jgroups config file was copied from 
> {{examples/features/clustered/clustered-jgroups/src/main/resources/activemq/server0/test-jgroups-file_ping.xml}}
>  with only an adjustment of the filename.
> the following stack trace was visible, after JGroups gave up and Artemis 
> continued as singleton.
> the second part of the trace says that this should not happen, but it did...
> note the reference to port number -1
> {panel:title=stack trace}
> {noformat}
> org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: 
> AMQ219024: Could not select a TransportConfiguration to create SessionFactory
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:675)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:547)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:526)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:489)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
>   at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  ~[artemis-commons-2.28.0.jar:?]
> 2023-02-17 01:32:46,279 WARN  [org.apache.activemq.artemis.core.client] 
> AMQ212007: connector.create or connectorFactory.createConnector should never 
> throw an exception, implementation is badly behaved, but we will deal with it 
> anyway.
> java.lang.IllegalArgumentException: port out of range:-1
>   at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) 
> ~[?:?]
>   at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224) ~[?:?]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:874)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:866)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:848)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1105)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1212)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1146)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1375)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:967)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:858)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> 

[jira] [Commented] (ARTEMIS-4173) stacktrace for cluster connection

2023-02-17 Thread Erwin Dondorp (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17690408#comment-17690408
 ] 

Erwin Dondorp commented on ARTEMIS-4173:


The {{clustered-jgroups}} example runs fine on my development machine.

But I also found that that example has multiple mechanisms active in its 
{{_jgroups_.xml}} file (I only want to use FILE_PING).
That makes as-is re-use of that configuration file a bit useless, and
any errors from using the example configuration are too confusing.
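For comparison, a discovery-only variant would keep just FILE_PING in the stack. The sketch below is an assumption based on general JGroups TCP usage, not a tested Artemis configuration and not taken from the example file; the shared {{location}} directory and the protocol attributes must be adapted and verified against the JGroups manual for the version in use.

```xml
<!-- Sketch of a jgroups XML that uses only FILE_PING for discovery.
     Protocol list and attributes are illustrative assumptions. -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <!-- directory must be readable/writable by every cluster member -->
    <FILE_PING location="/shared/jgroups-discovery"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <FRAG2/>
</config>
```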

> stacktrace for cluster connection
> -
>
> Key: ARTEMIS-4173
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4173
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Erwin Dondorp
>Priority: Minor
>
> setup is a first attempt to use JGROUPS with FILE_PING.
> jgroups config file was copied from 
> {{examples/features/clustered/clustered-jgroups/src/main/resources/activemq/server0/test-jgroups-file_ping.xml}}
>  with only an adjustment of the filename.
> the following stack trace was visible, after JGroups gave up and Artemis 
> continued as singleton.
> the second part of the trace says that this should not happen, but it did...
> note the reference to port number -1
> {panel:title=stack trace}
> {noformat}
> org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: 
> AMQ219024: Could not select a TransportConfiguration to create SessionFactory
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:675)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:547)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:526)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:489)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68)
>  ~[artemis-commons-2.28.0.jar:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
>   at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  ~[artemis-commons-2.28.0.jar:?]
> 2023-02-17 01:32:46,279 WARN  [org.apache.activemq.artemis.core.client] 
> AMQ212007: connector.create or connectorFactory.createConnector should never 
> throw an exception, implementation is badly behaved, but we will deal with it 
> anyway.
> java.lang.IllegalArgumentException: port out of range:-1
>   at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) 
> ~[?:?]
>   at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224) ~[?:?]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:874)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:866)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:848)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1105)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1212)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1146)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1375)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:967)
>  ~[artemis-core-client-2.28.0.jar:2.28.0]
>   at 
> 

[jira] [Updated] (ARTEMIS-4173) stacktrace for cluster connection

2023-02-16 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4173:
---
Description: 
The setup is a first attempt to use JGROUPS with FILE_PING.
The jgroups config file was copied from 
{{examples/features/clustered/clustered-jgroups/src/main/resources/activemq/server0/test-jgroups-file_ping.xml}}
 with only an adjustment of the filename.

The following stack trace was visible after JGroups gave up and Artemis 
continued as a singleton.

The second part of the trace says that this should not happen, but it did...
Note the reference to port number -1.

{panel:title=stack trace}
{noformat}
org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: AMQ219024: 
Could not select a TransportConfiguration to create SessionFactory
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:675)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:547)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:526)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:489)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68)
 ~[artemis-commons-2.28.0.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 ~[artemis-commons-2.28.0.jar:?]
2023-02-17 01:32:46,279 WARN  [org.apache.activemq.artemis.core.client] 
AMQ212007: connector.create or connectorFactory.createConnector should never 
throw an exception, implementation is badly behaved, but we will deal with it 
anyway.
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) 
~[?:?]
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224) ~[?:?]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:874)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:866)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:848)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1105)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1212)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1146)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1375)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:967)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:858)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:610)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:595)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:579)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 

[jira] [Updated] (ARTEMIS-4173) stacktrace for cluster connection

2023-02-16 Thread Erwin Dondorp (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erwin Dondorp updated ARTEMIS-4173:
---
Description: 
The setup is a first attempt to use JGROUPS with FILE_PING.
The jgroups config file was copied from 
{{examples/features/clustered/clustered-jgroups/src/main/resources/activemq/server0/test-jgroups-file_ping.xml}}
 with only an adjustment of the filename.

The following stack trace was visible after JGroups gave up and Artemis 
continued as a singleton.

The second part of the trace says that this should not happen, but it did...

{panel:title=stack trace}
{noformat}
org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: AMQ219024: 
Could not select a TransportConfiguration to create SessionFactory
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:675)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:547)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:526)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:489)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68)
 ~[artemis-commons-2.28.0.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 ~[artemis-commons-2.28.0.jar:?]
2023-02-17 01:32:46,279 WARN  [org.apache.activemq.artemis.core.client] 
AMQ212007: connector.create or connectorFactory.createConnector should never 
throw an exception, implementation is badly behaved, but we will deal with it 
anyway.
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) 
~[?:?]
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224) ~[?:?]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:874)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:866)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:848)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1105)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1212)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1146)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1375)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:967)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:858)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:610)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:595)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:579)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 

[jira] [Created] (ARTEMIS-4173) stacktrace for cluster connection

2023-02-16 Thread Erwin Dondorp (Jira)
Erwin Dondorp created ARTEMIS-4173:
--

 Summary: stacktrace for cluster connection
 Key: ARTEMIS-4173
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4173
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.28.0
Reporter: Erwin Dondorp


The setup is a first attempt to use JGROUPS with FILE_PING.
The jgroups config file was copied from 
{{examples/features/clustered/clustered-jgroups/src/main/resources/activemq/server0/test-jgroups-file_ping.xml}}
 with only an adjustment of the filename.

The following stack trace was visible after JGroups gave up and Artemis 
continued as a singleton.

The second part of the trace says that this should not happen, but it did...


{panel:title=stack trace}
org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: AMQ219024: 
Could not select a TransportConfiguration to create SessionFactory
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:675)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:547)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:526)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:489)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:57)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:32)
 ~[artemis-commons-2.28.0.jar:?]
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:68)
 ~[artemis-commons-2.28.0.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 ~[artemis-commons-2.28.0.jar:?]
2023-02-17 01:32:46,279 WARN  [org.apache.activemq.artemis.core.client] 
AMQ212007: connector.create or connectorFactory.createConnector should never 
throw an exception, implementation is badly behaved, but we will deal with it 
anyway.
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143) 
~[?:?]
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224) ~[?:?]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:874)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:866)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:848)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:1105)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1212)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1146)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1375)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:967)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:858)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:252)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:610)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:595)
 ~[artemis-core-client-2.28.0.jar:2.28.0]
at 
