[jira] [Comment Edited] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448068#comment-17448068
 ] 

Sebastian T edited comment on ARTEMIS-3587 at 11/23/21, 3:13 PM:
-

We are providing the broker as a shared service, and yes, filters are used a 
lot. The server is monitored with Dynatrace and I do not see any CPU spikes; 
maximum CPU usage is 25%, and at the time of the stack trace the CPU usage was 
at 15%.

My question is: why did we not get these warnings with Artemis 2.16? Was 
CriticalMeasure introduced later, or does filtering now have a stronger 
performance impact in 2.19 than it had in 2.16? The load on the broker is the 
same now as it was with 2.16.


was (Author: seb):
We are providing the broker as a shared service, and yes, filters are used a 
lot. The server is monitored with Dynatrace and I do not see any CPU spikes; 
maximum CPU usage is 25%.

My question is: why did we not get these warnings with Artemis 2.16? Was 
CriticalMeasure introduced later, or does filtering now have a stronger 
performance impact in 2.19 than it had in 2.16? The load on the broker is the 
same now as it was with 2.16.

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448068#comment-17448068
 ] 

Sebastian T edited comment on ARTEMIS-3587 at 11/23/21, 3:11 PM:
-

We are providing the broker as a shared service, and yes, filters are used a 
lot. The server is monitored with Dynatrace and I do not see any CPU spikes; 
maximum CPU usage is 25%.

My question is: why did we not get these warnings with Artemis 2.16? Was 
CriticalMeasure introduced later, or does filtering now have a stronger 
performance impact in 2.19 than it had in 2.16? The load on the broker is the 
same now as it was with 2.16.


was (Author: seb):
We are providing the broker as a shared service, and yes, filters are used a 
lot. My question is: why did we not get these warnings with Artemis 2.16? Was 
CriticalMeasure introduced later, or does filtering now have a stronger 
performance impact in 2.19 than it had in 2.16? The load on the broker is the 
same now as it was with 2.16.

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448068#comment-17448068
 ] 

Sebastian T commented on ARTEMIS-3587:
--

We are providing the broker as a shared service, and yes, filters are used a 
lot. My question is: why did we not get these warnings with Artemis 2.16? Was 
CriticalMeasure introduced later, or does filtering now have a stronger 
performance impact in 2.19 than it had in 2.16? The load on the broker is the 
same now as it was with 2.16.

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448008#comment-17448008
 ] 

Sebastian T commented on ARTEMIS-3587:
--

Btw, what does it mean that a queue is "expired"?

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448007#comment-17448007
 ] 

Sebastian T commented on ARTEMIS-3587:
--

[~nigrofranz] the attached log is from a later occurrence of the Critical 
Analyzer error after I enabled the tracing.

I have now grepped the log; the error happens regularly on either path 2 or 
path 4:

{{2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-06 00:34:02.151 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-06 00:37:04.968 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-06 05:38:09.274 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-06 07:05:14.936 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-06 14:07:18.483 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-07 03:38:26.588 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-07 03:54:27.697 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-08 16:13:36.934 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-08 21:47:45.493 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-09 12:37:50.197 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-09 20:12:56.317 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-11 08:01:02.224 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-11 20:16:07.506 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-12 20:27:15.128 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-13 08:09:23.094 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-13 15:59:26.511 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-15 04:51:35.870 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-15 15:46:39.835 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-15 22:37:48.983 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 0
2021-11-16 08:39:50.281 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-16 10:38:52.435 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-16 15:33:56.592 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-17 08:48:07.199 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 2
2021-11-17 16:18:10.777 WARN 3850 --- [eduled-threads)] 

[jira] [Comment Edited] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447929#comment-17447929
 ] 

Sebastian T edited comment on ARTEMIS-3587 at 11/23/21, 12:11 PM:
--

I enabled tracing of the CriticalMeasure component. I am attaching the stack 
trace generated while the error occurred. [^log_with_stacktrace.log] 


was (Author: seb):
I enabled tracing of the CricilaMeasure component. I am attaching the generated 
stacktrace while the error occured. [^log_with_stacktrace.log] 

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3587:
-
Labels: performance  (was: )

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3587:
-
Component/s: Broker

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3587:
-
Attachment: log_with_stacktrace.log

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447929#comment-17447929
 ] 

Sebastian T commented on ARTEMIS-3587:
--

I enabled tracing of the CricilaMeasure component. I am attaching the generated 
stacktrace while the error occured. [^log_with_stacktrace.log] 

> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-22 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3587:
-
Description: 
After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
message a few times a day:

2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server
 : AMQ224107: The Critical Analyzer detected slow paths on the 
broker.  It is recommended that you enable trace logs on 
org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
You should disable the trace logs when you have finished troubleshooting.

This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure; 
however, the javadoc says: "This will be called before the broker is stopped." 
The broker, however, is not stopped, so I think the javadoc and the behaviour of 
the callback are no longer in line either.

  was:
After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
message a view times a day:

2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server
 : AMQ224107: The Critical Analyzer detected slow paths on the 
broker.  It is recommended that you enable trace logs on 
org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
You should disable the trace logs when you have finished troubleshooting.

This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
however the javadoc says: "This will be called before the broker is stopped." 
The broker however is not stopped. So I think the javadoc and the behaviour of 
the callback are not in line anymore too.


> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-22 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3587:


 Summary: After upgrade: AMQ224107: The Critical Analyzer detected 
slow paths on the broker.
 Key: ARTEMIS-3587
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.19.0
Reporter: Sebastian T


After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
message a view times a day:

2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
o.a.a.a.u.c.CriticalMeasure  : Component 
org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server
 : AMQ224107: The Critical Analyzer detected slow paths on the 
broker.  It is recommended that you enable trace logs on 
org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
You should disable the trace logs when you have finished troubleshooting.

This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure; 
however, the javadoc says: "This will be called before the broker is stopped." 
The broker, however, is not stopped, so I think the javadoc and the behaviour of 
the callback are no longer in line either.
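
For illustration, a minimal sketch of such a plugin, assuming the interface 
exposes a single criticalFailure(CriticalComponent) callback as described by 
the javadoc quoted above (the class name and the logging here are made up for 
this example). Whether the broker is actually stopped afterwards depends on the 
configured critical-analyzer-policy (LOG only logs, while HALT or SHUTDOWN stop 
the broker):

{code:java}
import org.apache.activemq.artemis.core.server.plugin.ActiveMQServerCriticalPlugin;
import org.apache.activemq.artemis.utils.critical.CriticalComponent;

// Hypothetical example plugin: records which component the Critical Analyzer
// flagged instead of assuming the broker is about to be stopped.
public class CriticalFailureLoggerPlugin implements ActiveMQServerCriticalPlugin {

   @Override
   public void criticalFailure(CriticalComponent component) {
      // e.g. QueueImpl when the AMQ224107 / "expired on path N" warnings appear
      System.err.println("Critical Analyzer flagged component: "
            + component.getClass().getName());
   }
}
{code}

Such a plugin would typically be registered under the broker-plugins element in 
broker.xml.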



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-2007) Messages not redistributed to consumers with matching filters

2021-06-24 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368931#comment-17368931
 ] 

Sebastian T commented on ARTEMIS-2007:
--

That would be great and help us improve our HA concept.

> Messages not redistributed to consumers with matching filters
> -
>
> Key: ARTEMIS-2007
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2007
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: AMQP, Broker
>Affects Versions: 2.6.3
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Major
> Attachments: artemis-2007.zip
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We are experiencing the following issue:
> # We configure an Artemis cluster with ON_DEMAND message loadbalacing and 
> message redistribution enabled.
> # We then connect a single consumer to *queues.queue1* on node1 that has a 
> message filter that does NOT match a given message.
> # Then we send a message to *queues.queue1* on node1.
> # Then we connect a consumer to *queues.queue1* on node2 that has a filter 
> matching the message we sent.
> We now would expect that the message on node1 currently not having any 
> matching consumers on node1 to be forwarded or redistributed to node2 where a 
> matching consumer exists.
> However that is not happening the consumer on node2 does not receive the 
> message and in our case the message on node1 expires after some time despite 
> a matching consumer is connected to the cluster.
> In the described scenario when we disconnect the consumer on node1 (that does 
> not match the message anyway) the message is redistributed to node2 and 
> consumed by the matching consumer.
> If no consumer was connected to node1, a message is sent to node1 and only 
> then a matching consumer is connected to node2 the message is forwarded to 
> node2 as expected.
> So I guess the core problem is that message redistribution of messages on 
> node1 is not triggered when a matching consumer is connected to node2 while a 
> *any* consumer already exists on node1 no matter if it actually matches the 
> given message.
> I attached a maven test case that illustrates the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-18 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17304232#comment-17304232
 ] 

Sebastian T commented on ARTEMIS-3180:
--

IMHO if "*" or "#" currently matches parts of words (which I never tried), 
either the term "word" in the documentation must be replaced by "substring" or 
similar, or it's a bug in the implementation.
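
For illustration, a hand-rolled word-based matcher (a sketch only, not the 
broker's implementation) showing the semantics the documentation describes: 
'*' matches exactly one dot-separated word, '#' matches zero or more words, and 
neither matches part of a word:

{code:java}
// Illustrative word-based wildcard matching: '.' separates words,
// '*' matches exactly one word, '#' matches zero or more words.
public final class WildcardWords {

   public static boolean matches(String pattern, String address) {
      return matches(pattern.split("\\."), 0, address.split("\\."), 0);
   }

   private static boolean matches(String[] p, int pi, String[] a, int ai) {
      if (pi == p.length) {
         return ai == a.length;
      }
      if (p[pi].equals("#")) {
         // '#' may consume zero or more address words
         for (int k = ai; k <= a.length; k++) {
            if (matches(p, pi + 1, a, k)) {
               return true;
            }
         }
         return false;
      }
      if (ai == a.length) {
         return false;
      }
      return (p[pi].equals("*") || p[pi].equals(a[ai])) && matches(p, pi + 1, a, ai + 1);
   }

   public static void main(String[] args) {
      System.out.println(matches("topics.#.aaa.#", "topics.aaa.bbb.ccc")); // true
      System.out.println(matches("topics.#.bbb.#", "topics.aaa.bbb.ccc")); // true
      System.out.println(matches("news.*", "newsfeed"));                   // false: no substring matching
   }
}
{code}

Under these semantics both "topics.#.aaa.#" and "topics.#.bbb.#" match 
"topics.aaa.bbb.ccc", while "news.*" does not match "newsfeed".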

> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Fix For: 2.18.0
>
> Attachments: artemis-3180.zip
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.aaa.bbb.ccc" with a consumer 
> subscribed to "topics.#.aaa.#" and one with "topics.#.bbb.#" only the 
> consumer subscribed to "topics.#.aaa.#" receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17300419#comment-17300419
 ] 

Sebastian T commented on ARTEMIS-3180:
--

Thanks Gary.

> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.aaa.bbb.ccc" with a consumer 
> subscribed to "topics.#.aaa.#" and one with "topics.#.bbb.#" only the 
> consumer subscribed to "topics.#.aaa.#" receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: (was: artemis-3180.zip)

> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.aaa.bbb.ccc" with a consumer 
> subscribed to "topics.#.aaa.#" and one with "topics.#.bbb.#" only the 
> consumer subscribed to "topics.#.aaa.#" receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Description: 
While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases started 
failing.

We are using AMQP and QPid JMS to communicate to Artemis.

The issue seems to be related to subscriptions with wildcards.

When sending a message to the address "topics.aaa.bbb.ccc" with one consumer 
subscribed to "topics.#.aaa.#" and another subscribed to "topics.#.bbb.#", only 
the consumer subscribed to "topics.#.aaa.#" receives the message.

I am attaching a test case illustrating the issue.

Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
-Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
succeed.
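
For readers without the attachment, a minimal sketch along the lines of the 
test case described above (the attached test case remains authoritative; the 
broker URL and payload here are assumptions for the example):

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

// Sketch: two wildcard consumers on a matching address; with 2.16.0 both
// receive the message, with 2.17.0 only the first one does.
public class WildcardRepro {

   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new JmsConnectionFactory("amqp://localhost:5672");
      try (Connection connection = cf.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

         MessageConsumer aaa = session.createConsumer(session.createTopic("topics.#.aaa.#"));
         MessageConsumer bbb = session.createConsumer(session.createTopic("topics.#.bbb.#"));

         MessageProducer producer = session.createProducer(session.createTopic("topics.aaa.bbb.ccc"));
         producer.send(session.createTextMessage("test"));

         System.out.println("topics.#.aaa.# received: " + aaa.receive(2000));
         System.out.println("topics.#.bbb.# received: " + bbb.receive(2000));
      }
   }
}
{code}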


  was:
While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases started 
failing.

We are using AMQP and QPid JMS to communicate to Artemis.

The issue seems to be related to subscriptions with wildcards.

Sending a message to an address "topics.clientA.clientB" with a consumer 
subscribed to "topics.#.clientA.#" and one with "topics.#.clientB.#" only the 
first consumer receives the message.

I am attaching a test case illustrating the issue.

Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
-Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
succeed.



> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Attachments: artemis-3180.zip, artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.aaa.bbb.ccc" with a consumer 
> subscribed to "topics.#.aaa.#" and one with "topics.#.bbb.#" only the 
> consumer subscribed to "topics.#.aaa.#" receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: artemis-3180.zip

> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Attachments: artemis-3180.zip, artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.aaa.bbb.ccc" with a consumer 
> subscribed to "topics.#.aaa.#" and one with "topics.#.bbb.#" only the 
> consumer subscribed to "topics.#.aaa.#" receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Description: 
While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases started 
failing.

We are using AMQP and QPid JMS to communicate to Artemis.

The issue seems to be related to subscriptions with wildcards.

When sending a message to the address "topics.clientA.clientB" with one consumer 
subscribed to "topics.#.clientA.#" and another subscribed to "topics.#.clientB.#", 
only the first consumer receives the message.

I am attaching a test case illustrating the issue.

Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
-Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
succeed.


  was:
While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases started 
failing.

We are using AMQP and QPid JMS to communicate to Artemis.

The issue we are having is when two shared subscriptions are made to the same 
topic only one of the subscriptions receives the message.

I am attaching a test case.

Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
-Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
succeed.



> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Assignee: Gary Tully
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue seems to be related to subscriptions with wildcards.
> Sending a message to an address "topics.clientA.clientB" with a consumer 
> subscribed to "topics.#.clientA.#" and one with "topics.#.clientB.#" only the 
> first consumer receives the message.
> I am attaching a test case illustrating the issue.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: (was: artemis-3180.zip)

> With multiple shared subscriptions only one receives the message
> 
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: artemis-3180.zip

> With multiple shared subscriptions only one receives the message
> 
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) Consumers with wildcards addresses broken

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Summary: Consumers with wildcards addresses broken  (was: With multiple 
shared subscriptions only one receives the message)

> Consumers with wildcards addresses broken
> -
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: (was: artemis-3180.zip)

> With multiple shared subscriptions only one receives the message
> 
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: artemis-3180.zip

> With multiple shared subscriptions only one receives the message
> 
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3180:
-
Attachment: artemis-3180.zip

> With multiple shared subscriptions only one receives the message
> 
>
> Key: ARTEMIS-3180
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP, Broker
>Affects Versions: 2.17.0
> Environment: Artemis 2.17.0 
> openjdk version "1.8.0_275"
> OpenJDK Runtime Environment (build 1.8.0_275-b01)
> OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
>Reporter: Sebastian T
>Priority: Blocker
> Attachments: artemis-3180.zip
>
>
> While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases 
> started failing.
> We are using AMQP and QPid JMS to communicate to Artemis.
> The issue we are having is when two shared subscriptions are made to the same 
> topic only one of the subscriptions receives the message.
> I am attaching a test case.
> Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
> -Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
> succeed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3180) With multiple shared subscriptions only one receives the message

2021-03-12 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3180:


 Summary: With multiple shared subscriptions only one receives the 
message
 Key: ARTEMIS-3180
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3180
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP, Broker
Affects Versions: 2.17.0
 Environment: Artemis 2.17.0 

openjdk version "1.8.0_275"
OpenJDK Runtime Environment (build 1.8.0_275-b01)
OpenJDK 64-Bit Server VM (build 25.275-b01, mixed mode)
Reporter: Sebastian T


While upgrading from Artemis 2.16.0 to 2.17.0 several of our test cases started 
failing.

We are using AMQP and QPid JMS to communicate to Artemis.

The issue we are having is that when two shared subscriptions are made to the same 
topic, only one of the subscriptions receives the message.

I am attaching a test case.

Running "mvn test" executes it with Artemis 2.17.0 and will fail. "mvn test 
-Dartemis.version=2.16.0" executes the test with Artemis 2.16.0 and will 
succeed.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3121) Refactor NettyAcceptor.getPrototcols(Map) method

2021-03-10 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3121:
-
Fix Version/s: 2.18.0

> Refactor NettyAcceptor.getPrototcols(Map) method
> 
>
> Key: ARTEMIS-3121
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3121
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: Broker
>Affects Versions: 2.16.0
>Reporter: Sebastian T
>Priority: Trivial
> Fix For: 2.18.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The NettyAcceptor.getPrototcols(Map) currently tries to join the keys of the 
> given protocolManager map in a complicated and inefficient way.
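
For context, an illustrative sketch of the kind of simplification suggested 
here (not the actual patch; the method name and map type simply mirror the 
description above): the keys can be joined directly.

{code:java}
import java.util.Map;

// Illustrative only: build the comma-separated protocol list straight from
// the protocol-manager map's keys, without manual loops or string buffers.
final class ProtocolNames {

   static String getProtocols(Map<String, ?> protocolManagers) {
      return String.join(",", protocolManagers.keySet());
   }
}
{code}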



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-20 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17287675#comment-17287675
 ] 

Sebastian T commented on ARTEMIS-3117:
--

I cannot compare it with the JDK provider in our environment at the moment, but 
I doubt it would perform any better, since the expensive part is 
SSLSupport.loadKeystore(), which is used in both cases (JDK SSL and OpenSSL).

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round trip duration with a custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, there I can see that after 
> upgrading to JDK 11 the broker process spend 21% of CPU time locking while in 
> JDK it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-18 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286576#comment-17286576
 ] 

Sebastian T commented on ARTEMIS-3117:
--

I rolled out a patched Artemis version with OpenSSL context caching enabled and 
the performance degradation on Java 11 went away.
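
For reference, the general shape of that optimization (a hypothetical sketch, 
not the actual Artemis change, which caches the SSL context rather than this 
standalone keystore map): load each keystore once and reuse it instead of 
re-reading it for every new connection.

{code:java}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch: cache loaded keystores by path so repeated TLS
// connection setup does not re-read and re-parse the same file every time.
public final class KeyStoreCache {

   private static final ConcurrentMap<String, KeyStore> CACHE = new ConcurrentHashMap<>();

   public static KeyStore get(String path, String type, char[] password) {
      return CACHE.computeIfAbsent(path, p -> {
         try (InputStream in = Files.newInputStream(Paths.get(p))) {
            KeyStore ks = KeyStore.getInstance(type);
            ks.load(in, password);
            return ks;
         } catch (Exception e) {
            throw new IllegalStateException("Failed to load keystore " + p, e);
         }
      });
   }
}
{code}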

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round trip duration with a custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-18 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17286404#comment-17286404
 ] 

Sebastian T commented on ARTEMIS-3117:
--

[~jbertram] I am comparing the uncached OpenSSL provider on JDK 8 with the 
uncached OpenSSL provider on JDK 11.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T edited comment on ARTEMIS-3117 at 2/16/21, 5:54 PM:


It looks like, even though the AMQP connection count on our Artemis instance is 
stable, we get around 20 TCP connection attempts per second on the amqps port 
5671 from somewhere in our cloud environment. As far as I understand, this 
results in the same number of new SSL context initializations per second in 
NettyAcceptor. Since SSL context initialization is apparently considerably 
slower or more expensive in JDK 11 than in JDK 8, we only now (after the JDK 
switch) see this affecting overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn|tcp-ack" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}



was (Author: seb):
It looks like, despite the fact that the amqp connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I understand 
this results in the same number of initialization of a new SSL contexts per 
second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now (after 
the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T edited comment on ARTEMIS-3117 at 2/16/21, 5:52 PM:


It looks like, even though the AMQP connection count on our Artemis instance is 
stable, we get around 20 TCP connection attempts per second on the amqps port 
5671 from somewhere in our cloud environment. As far as I understand, this 
results in the same number of new SSL context initializations per second in 
NettyAcceptor. Since SSL context initialization is apparently considerably 
slower or more expensive in JDK 11 than in JDK 8, we only now (after the JDK 
switch) see this affecting overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}



was (Author: seb):
It looks like, despite the fact that the amqp connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I understand 
this results in the same amount of initialization of a new SSL context per 
second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now (after 
the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T commented on ARTEMIS-3117:
--

It looks like, even though the AMQP connection count on our Artemis instance is 
stable, we get around 20 TCP connection attempts per second on the amqps port 
5671 from somewhere in our cloud environment. As far as I understand, this 
results in the same number of new SSL context initializations per second in 
NettyAcceptor. Since SSL context initialization is apparently considerably 
slower or more expensive in JDK 11 than in JDK 8, we only now (after the JDK 
switch) see this affecting overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3121) Refactor NettyAcceptor.getPrototcols(Map) method

2021-02-16 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3121:


 Summary: Refactor NettyAcceptor.getPrototcols(Map) method
 Key: ARTEMIS-3121
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3121
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: Broker
Affects Versions: 2.16.0
Reporter: Sebastian T


The NettyAcceptor.getPrototcols(Map) method currently joins the keys of the 
given protocolManager map in a complicated and inefficient way.
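
Not the actual patch, just a sketch of the kind of simplification such a 
refactor could use, assuming the goal is a comma-separated list of the protocol 
names (the map keys):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class ProtocolNamesExample {
   public static void main(String[] args) {
      // Hypothetical stand-in for the acceptor's protocolManager map.
      Map<String, Object> protocolManagers = new LinkedHashMap<>();
      protocolManagers.put("CORE", new Object());
      protocolManagers.put("AMQP", new Object());
      protocolManagers.put("STOMP", new Object());

      // String.join over the key set replaces manual StringBuilder/iterator handling.
      String protocols = String.join(",", protocolManagers.keySet());
      System.out.println(protocols); // prints: CORE,AMQP,STOMP
   }
}
{code}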



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285274#comment-17285274
 ] 

Sebastian T commented on ARTEMIS-3117:
--

One question to make sure I understand correctly: shouldn't the SSLContext 
performance issue only have an impact when establishing new connections? The 
connection count is pretty stable on our broker.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285272#comment-17285272
 ] 

Sebastian T commented on ARTEMIS-3117:
--

[~nigrofranz] I followed your advice and used async-profiler. Here are some 
results.
{noformat}
$ sudo ./profiler.sh -e cpu -d 30 -o flat 4008
Started [cpu] profiling
--- Execution profile ---
Total samples   : 4697
unknown_Java: 66 (1.41%)
not_walkable_Java   : 14 (0.30%)
deoptimization  : 3 (0.06%)

Frame buffer usage  : 4.1374%

  ns  percent  samples  top
  --  ---  ---  ---
  5454597354   11.51%  545  sha1_implCompress
  3762140233    7.94%  370  __lock_text_start_[k]
  2332043747    4.92%  233  sun.security.provider.DigestBase.engineReset
  1763624534    3.72%  175  /tmp/libnetty_tcnative_linux_x86_6412383274641244971797.so (deleted)
  1380825243    2.91%  138  java.util.Arrays.fill
  1211143650    2.56%  121  java.util.Arrays.fill
  1122056232    2.37%  112  jbyte_disjoint_arraycopy
  1060468030    2.24%  106  sun.security.provider.SHA.implDigest
  1053677131    2.22%  104  org.apache.activemq.artemis.protocol.amqp.broker.AMQPConnectionCallback.isWritable
   974249541    2.06%   96  org.apache.activemq.artemis.utils.collections.LinkedListImpl$Iterator.canAdvance
   913508803    1.93%   90  [vdso]
   810611908    1.71%   81  sun.security.provider.ByteArrayAccess.b2iBig64
   770820693    1.63%   76  org.apache.activemq.artemis.protocol.amqp.broker.AMQPSessionCallback.isWritable
   770757093    1.63%   76  org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver
   660595358    1.39%   65  org.apache.activemq.artemis.core.server.impl.QueueImpl.handle
   570551307    1.20%   57  sun.security.provider.DigestBase.engineUpdate
   560396691    1.18%   56  java.security.MessageDigest$Delegate.engineDigest
   488147551    1.03%   48  eventfd_write_[k]
   480273955    1.01%   48  sun.security.provider.SHA.implCompressCheck


$ sudo ./profiler.sh -e lock -d 30 -o flat 4008
Started [lock] profiling
--- Execution profile ---
Total samples   : 7255

Frame buffer usage  : 0.0869%

  ns  percent  samples  top
  --  ---  ---  ---
 11699901984   92.19%   303  org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor
   899875624    7.09%  5539  java.util.concurrent.locks.ReentrantLock$NonfairSync
    90416573    0.71%  1407  org.apache.activemq.artemis.core.server.impl.QueueImpl
      190303    0.00%     4  java.lang.Class
       95851    0.00%     2  java.lang.Object


$ sudo ./profiler.sh -e cpu -d 30 -o traces 4008
Started [cpu] profiling
--- Execution profile ---
Total samples   : 4544
unknown_Java: 76 (1.67%)
not_walkable_Java   : 7 (0.15%)
deoptimization  : 5 (0.11%)

Frame buffer usage  : 4.1185%

--- 2769920933 ns (6.04%), 277 samples
  [ 0] sha1_implCompress
  [ 1] java.security.MessageDigest$Delegate.engineDigest
  [ 2] java.security.MessageDigest.digest
  [ 3] java.security.MessageDigest.digest
  [ 4] com.sun.crypto.provider.PKCS12PBECipherCore.derive
  [ 5] com.sun.crypto.provider.PKCS12PBECipherCore.derive
  [ 6] com.sun.crypto.provider.HmacPKCS12PBESHA1.engineInit
  [ 7] javax.crypto.Mac.chooseProvider
  [ 8] javax.crypto.Mac.init
  [ 9] sun.security.pkcs12.PKCS12KeyStore.lambda$engineLoad$2
  [10] sun.security.pkcs12.PKCS12KeyStore$$Lambda$617.524606891.tryOnce
  [11] sun.security.pkcs12.PKCS12KeyStore$RetryWithZero.run
  [12] sun.security.pkcs12.PKCS12KeyStore.engineLoad
  [13] sun.security.util.KeyStoreDelegator.engineLoad
  [14] java.security.KeyStore.load
  [15] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadKeystore
  [16] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadTrustManagerFactory
  [17] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.createNettyContext
  [18] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.loadOpenSslEngine
  [19] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler
  [20] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4.initChannel
  [21] io.netty.channel.ChannelInitializer.initChannel
  [22] io.netty.channel.ChannelInitializer.handlerAdded
  [23] io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded
  [24] io.netty.channel.DefaultChannelPipeline.callHandlerAdded0
  [25] io.netty.channel.DefaultChannelPipeline.access$100
  [26] io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute
  [27] io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers
  [28] io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded
  [29] io.netty.channel.AbstractChannel$AbstractUnsafe.register0
  [30] io.netty.channel.AbstractChannel$AbstractUnsafe.access$200
  [31] 

[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285190#comment-17285190
 ] 

Sebastian T commented on ARTEMIS-3117:
--

After digging through the source code, I guess that in the case of JDK SSL the 
issue can be mitigated by registering 
{{org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory}} 
via 
{{META-INF/services/org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory}}.
This, however, has no effect when using OpenSSL.
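
If I understand the SPI correctly, that registration is a standard 
{{java.util.ServiceLoader}} provider-configuration file on the broker's 
classpath, i.e. a file named 
{{META-INF/services/org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory}} 
whose content is the single line below:

{noformat}
org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory
{noformat}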

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285138#comment-17285138
 ] 

Sebastian T commented on ARTEMIS-3117:
--

This looks like the same issue to me: 
https://github.com/twitter/finagle/issues/856

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284009#comment-17284009
 ] 

Sebastian T edited comment on ARTEMIS-3117 at 2/12/21, 10:02 PM:
-

Here is an example of thread blocking that I see frequently under JDK 11:
 
{code:yaml}
-  threadName: Thread-16 (activemq-netty-threads)
   threadId: 115
   threadState: RUNNABLE
   blockedTime: 1485418
   blockedCount: 64007
   waitedTime: 77377
   waitedCount: 507941
   lockName: null
   lockOwnerId: -1
   lockOwnerName: null
   daemon: true
   inNative: false
   suspended: false
   priority: 5
   stackTrace:
   - java.util.Arrays.fill:3494
   - sun.security.provider.DigestBase.engineReset:182
   - sun.security.provider.DigestBase.engineUpdate:112
   - java.security.MessageDigest$Delegate.engineUpdate:623
   - java.security.MessageDigest.update:355
   - java.security.MessageDigest.digest:430
   - com.sun.crypto.provider.PKCS12PBECipherCore.derive:119
   - com.sun.crypto.provider.PKCS12PBECipherCore.derive:69
   - com.sun.crypto.provider.HmacPKCS12PBESHA1.engineInit:134
   - javax.crypto.Mac.chooseProvider:366
   - javax.crypto.Mac.init:465
   - sun.security.pkcs12.PKCS12KeyStore.lambda$engineLoad$2:2151
   - 
sun.security.pkcs12.PKCS12KeyStore$$Lambda$617/0x7f82725bc1b0.tryOnce:-1
   - sun.security.pkcs12.PKCS12KeyStore$RetryWithZero.run:295
   - sun.security.pkcs12.PKCS12KeyStore.engineLoad:2149
   - sun.security.util.KeyStoreDelegator.engineLoad:243
   - java.security.KeyStore.load:1479
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadKeystore:265
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadTrustManagerFactory:213
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.createNettyContext:171
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.loadOpenSslEngine:654
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler:529
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4.initChannel:415
   - io.netty.channel.ChannelInitializer.initChannel:129
   - io.netty.channel.ChannelInitializer.handlerAdded:112
   - io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded:938
   - io.netty.channel.DefaultChannelPipeline.callHandlerAdded0:609
   - io.netty.channel.DefaultChannelPipeline.access$100:46
   - 
io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute:1463
   - io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers:1115
   - io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded:650
   - io.netty.channel.AbstractChannel$AbstractUnsafe.register0:502
   - io.netty.channel.AbstractChannel$AbstractUnsafe.access$200:417
   - io.netty.channel.AbstractChannel$AbstractUnsafe$1.run:474
   - io.netty.util.concurrent.AbstractEventExecutor.safeExecute:164
   - io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks:472
   - io.netty.channel.epoll.EpollEventLoop.run:384
   - io.netty.util.concurrent.SingleThreadEventExecutor$4.run:989
   - io.netty.util.internal.ThreadExecutorMap$2.run:74
   - org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run:118
   lockedMonitors:
   -  className: java.lang.Object
  identityHashCode: 2036147318
  lockedStackDepth: 9
  lockedStackFrame: javax.crypto.Mac.chooseProvider:366
   -  className: sun.security.pkcs12.PKCS12KeyStore
  identityHashCode: 1078305195
  lockedStackDepth: 14
  lockedStackFrame: sun.security.pkcs12.PKCS12KeyStore.engineLoad:2149
   -  className: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor
  identityHashCode: 1316121135
  lockedStackDepth: 21
  lockedStackFrame: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler:529
   lockedSynchronizers: []
   lockInfo: null

-  threadName: Thread-17 (activemq-netty-threads)
   threadId: 116
   threadState: BLOCKED
   blockedTime: 1501712
   blockedCount: 87086
   waitedTime: 17546
   waitedCount: 106476
   lockName: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor@4e726a2f
   lockOwnerId: 115
   lockOwnerName: Thread-16 (activemq-netty-threads)
   daemon: true
   inNative: false
   suspended: false
   priority: 5
   stackTrace:
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor:getSslHandler:528
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4:initChannel:415
   - io.netty.channel.ChannelInitializer:initChannel:129
   - io.netty.channel.ChannelInitializer:handlerAdded:112
   - io.netty.channel.AbstractChannelHandlerContext:callHandlerAdded:938
   - io.netty.channel.DefaultChannelPipeline:callHandlerAdded0:609
   - io.netty.channel.DefaultChannelPipeline:access$100:46
   - 
io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask:execute:1463
   - 

[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284009#comment-17284009
 ] 

Sebastian T commented on ARTEMIS-3117:
--

 
{code:yaml}
-  threadName: Thread-16 (activemq-netty-threads)
   threadId: 115
   threadState: RUNNABLE
   blockedTime: 1485418
   blockedCount: 64007
   waitedTime: 77377
   waitedCount: 507941
   lockName: null
   lockOwnerId: -1
   lockOwnerName: null
   daemon: true
   inNative: false
   suspended: false
   priority: 5
   stackTrace:
   - java.util.Arrays.fill:3494
   - sun.security.provider.DigestBase.engineReset:182
   - sun.security.provider.DigestBase.engineUpdate:112
   - java.security.MessageDigest$Delegate.engineUpdate:623
   - java.security.MessageDigest.update:355
   - java.security.MessageDigest.digest:430
   - com.sun.crypto.provider.PKCS12PBECipherCore.derive:119
   - com.sun.crypto.provider.PKCS12PBECipherCore.derive:69
   - com.sun.crypto.provider.HmacPKCS12PBESHA1.engineInit:134
   - javax.crypto.Mac.chooseProvider:366
   - javax.crypto.Mac.init:465
   - sun.security.pkcs12.PKCS12KeyStore.lambda$engineLoad$2:2151
   - 
sun.security.pkcs12.PKCS12KeyStore$$Lambda$617/0x7f82725bc1b0.tryOnce:-1
   - sun.security.pkcs12.PKCS12KeyStore$RetryWithZero.run:295
   - sun.security.pkcs12.PKCS12KeyStore.engineLoad:2149
   - sun.security.util.KeyStoreDelegator.engineLoad:243
   - java.security.KeyStore.load:1479
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadKeystore:265
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadTrustManagerFactory:213
   - 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.createNettyContext:171
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.loadOpenSslEngine:654
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler:529
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4.initChannel:415
   - io.netty.channel.ChannelInitializer.initChannel:129
   - io.netty.channel.ChannelInitializer.handlerAdded:112
   - io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded:938
   - io.netty.channel.DefaultChannelPipeline.callHandlerAdded0:609
   - io.netty.channel.DefaultChannelPipeline.access$100:46
   - 
io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute:1463
   - io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers:1115
   - io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded:650
   - io.netty.channel.AbstractChannel$AbstractUnsafe.register0:502
   - io.netty.channel.AbstractChannel$AbstractUnsafe.access$200:417
   - io.netty.channel.AbstractChannel$AbstractUnsafe$1.run:474
   - io.netty.util.concurrent.AbstractEventExecutor.safeExecute:164
   - io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks:472
   - io.netty.channel.epoll.EpollEventLoop.run:384
   - io.netty.util.concurrent.SingleThreadEventExecutor$4.run:989
   - io.netty.util.internal.ThreadExecutorMap$2.run:74
   - org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run:118
   lockedMonitors:
   -  className: java.lang.Object
  identityHashCode: 2036147318
  lockedStackDepth: 9
  lockedStackFrame: javax.crypto.Mac.chooseProvider:366
   -  className: sun.security.pkcs12.PKCS12KeyStore
  identityHashCode: 1078305195
  lockedStackDepth: 14
  lockedStackFrame: sun.security.pkcs12.PKCS12KeyStore.engineLoad:2149
   -  className: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor
  identityHashCode: 1316121135
  lockedStackDepth: 21
  lockedStackFrame: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler:529
   lockedSynchronizers: []
   lockInfo: null

-  threadName: Thread-17 (activemq-netty-threads)
   threadId: 116
   threadState: BLOCKED
   blockedTime: 1501712
   blockedCount: 87086
   waitedTime: 17546
   waitedCount: 106476
   lockName: 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor@4e726a2f
   lockOwnerId: 115
   lockOwnerName: Thread-16 (activemq-netty-threads)
   daemon: true
   inNative: false
   suspended: false
   priority: 5
   stackTrace:
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor:getSslHandler:528
   - 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4:initChannel:415
   - io.netty.channel.ChannelInitializer:initChannel:129
   - io.netty.channel.ChannelInitializer:handlerAdded:112
   - io.netty.channel.AbstractChannelHandlerContext:callHandlerAdded:938
   - io.netty.channel.DefaultChannelPipeline:callHandlerAdded0:609
   - io.netty.channel.DefaultChannelPipeline:access$100:46
   - 
io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask:execute:1463
   - io.netty.channel.DefaultChannelPipeline:callHandlerAddedForAllHandlers:1115
   - io.netty.channel.DefaultChannelPipeline:invokeHandlerAddedIfNeeded:650
   - 

[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284000#comment-17284000
 ] 

Sebastian T commented on ARTEMIS-3117:
--

Yes, I can try to do this next week.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17283986#comment-17283986
 ] 

Sebastian T commented on ARTEMIS-3117:
--

Garbage collection CPU consumption is similar in both cases (JDK 8 and JDK 11), 
roughly 0.04%.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17283982#comment-17283982
 ] 

Sebastian T commented on ARTEMIS-3117:
--

Unfortunately I cannot run it without TLS in that environment.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Attachment: broker.xml

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}
>  I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Description: 
Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticeable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round-trip duration with a custom distributed 
qpid-jms based healthcheck application. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

CPU Usage of the broker process:

!image-2021-02-12-22-01-07-044.png|width=874,height=262!

 

The broker itself is also monitored via Dynatrace; there I can see that after 
upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
on JDK 8 it only spent 3.2%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png|width=1247,height=438!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png|width=1271,height=605!

!image-2021-02-12-21-47-57-301.png|width=1059,height=627!

Maybe I am misinterpreting the charts, but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} and/or in 
{{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
I currently cannot pinpoint the exact line number.

 

  was:
Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round trip duration with a custom distributed 
qpid-jms based healthcheck applications. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

CPU Usage of the broker process:

!image-2021-02-12-22-01-07-044.png|width=874,height=262!

 

The broker itself is also monitored via Dynatrace, there I can see that after 
upgrading to JDK 11 the broker process spend 22% of CPU time locking while in 
JDK it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png|width=1587,height=557!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png|width=1496,height=525!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png|width=1271,height=605!

!image-2021-02-12-21-47-57-301.png|width=1017,height=602!

Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> 

[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Description: 
Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticeable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round-trip duration with a custom distributed 
qpid-jms based healthcheck application. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

CPU Usage of the broker process:

!image-2021-02-12-22-01-07-044.png|width=874,height=262!

 

The broker itself is also monitored via Dynatrace; there I can see that after 
upgrading to JDK 11 the broker process spent 22% of CPU time on locking, while 
on JDK 8 it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png|width=1587,height=557!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png|width=1496,height=525!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png|width=1271,height=605!

!image-2021-02-12-21-47-57-301.png|width=1017,height=602!

Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 

  was:
Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round trip duration with a custom distributed 
qpid-jms based healthcheck applications. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

CPU Usage of the broker process:

!image-2021-02-12-22-01-07-044.png|width=874,height=262!

 

The broker itself is also monitored via Dynatrace, there I can see that after 
upgrading to JDK 11 the broker process spend 22% of CPU time locking while in 
JDK it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png!

!image-2021-02-12-21-47-57-301.png!

Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, there I can see that after 
> upgrading to JDK 11 the broker process spend 22% of CPU time locking while in 
> JDK it only spent 3.5%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1587,height=557!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1496,height=525!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1017,height=602!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I 
> currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Description: 
Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticeable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round-trip duration with a custom distributed 
qpid-jms-based healthcheck application. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

CPU Usage of the broker process:

!image-2021-02-12-22-01-07-044.png|width=874,height=262!

 

The broker itself is also monitored via Dynatrace, where I can see that after 
upgrading to JDK 11 the broker process spent 22% of CPU time on locking, while 
on JDK 8 it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png!

!image-2021-02-12-21-47-57-301.png!

Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 

  was:
Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticeable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round-trip duration with a custom distributed 
qpid-jms-based healthcheck application. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

The broker itself is also monitored via Dynatrace, where I can see that after 
upgrading to JDK 11 the broker process spent 22% of CPU time on locking, while 
on JDK 8 it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png!

!image-2021-02-12-21-47-57-301.png!


Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 22% of CPU time on locking, 
> while on JDK 8 it only spent 3.5%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png!
> !image-2021-02-12-21-47-57-301.png!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I 
> currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Attachment: image-2021-02-12-22-01-07-044.png

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 22% of CPU time on locking, 
> while on JDK 8 it only spent 3.5%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png!
> !image-2021-02-12-21-47-57-301.png!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I 
> currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over TLS 
via BoringSSL  (was: Amazon Linux 2, Amazon Corretto (OpenJDK 11))

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png
>
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 22% of CPU time on locking, 
> while on JDK 8 it only spent 3.5%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png!
> !image-2021-02-12-21-47-57-301.png!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I 
> currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-12 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3117:
-
Summary: Performance degradation when switching from JDK8 to JDK11  (was: 
Performance degradation when switching from JDK8 to 11)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11)
>Reporter: Sebastian T
>Priority: Major
> Attachments: image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png
>
>
> Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 22% of CPU time on locking, 
> while on JDK 8 it only spent 3.5%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png!
> !image-2021-02-12-21-47-57-301.png!
> Maybe I am misinterpreting the charts but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I 
> currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3117) Performance degradation when switching from JDK8 to 11

2021-02-12 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3117:


 Summary: Performance degradation when switching from JDK8 to 11
 Key: ARTEMIS-3117
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.16.0
 Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11)
Reporter: Sebastian T
 Attachments: image-2021-02-12-21-39-32-185.png, 
image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
image-2021-02-12-21-47-57-301.png

Since it was announced that probably Artemis 2.18.0 will require Java 11 we 
upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 and 
are seeing a noticeable performance degradation which results in higher CPU 
usage and higher latency.

We are monitoring request/reply round-trip duration with a custom distributed 
qpid-jms-based healthcheck application. Here is a graphic that shows the 
effect when we switched the JDK:

!image-2021-02-12-21-39-32-185.png!

The broker itself is also monitored via Dynatrace, where I can see that after 
upgrading to JDK 11 the broker process spent 22% of CPU time on locking, while 
on JDK 8 it only spent 3.5%.

*JDK 8:*

!image-2021-02-12-21-40-21-125.png!

 

*JDK 11:*

*!image-2021-02-12-21-44-26-271.png!*

 

*A method hotspot breakdown reveals this:*

!image-2021-02-12-21-47-02-387.png!

!image-2021-02-12-21-47-57-301.png!


Maybe I am misinterpreting the charts but the root cause seems to be somewhere 
in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}}. I currently 
cannot pinpoint the exact line number.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3053) Log Subject Name of expired client certificates

2021-01-20 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268461#comment-17268461
 ] 

Sebastian T edited comment on ARTEMIS-3053 at 1/20/21, 9:20 AM:


Yes, we are using cert-based authentication. The certificate that is expired is 
a client certificate, not the server's.

We are using an acceptor config like this:
{code:xml}
tcp://0.0.0.0:5671?needClientAuth=true;protocols=AMQP;enabledProtocols=TLSv1.2;sslEnabled=true;sslProvider=OPENSSL;keyStoreProvider=JKS;keyStorePassword=;keyStorePath=broker-keystore.jks;trustStorePassword=;trustStoreProvider=JKS;trustStorePath=broker-truststore.jks;amqpCredits=1000;tcpReceiveBufferSize=131072;connectionTtlMin=5000;connectionTtl=6;connectionTtlMax=18;amqpIdleTimeout=6;amqpLowCredits=300;saslMechanisms=EXTERNAL;batchDelay=0;enabledCipherSuites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384;directDeliver=true;tcpNoDelay=true;tcpSendBufferSize=131072{code}


was (Author: seb):
Yes, we are using cert-based authentication. The certificate that is expired is 
a client certificate, not the server's.

We are using an acceptor config like this:

{{tcp://0.0.0.0:5671?needClientAuth=true}}{{;}}{{protocols=AMQP;}}{{}}{{}}{{enabledProtocols=TLSv1.2}}{{}}{{;sslEnabled=true}}{{;}}{{sslProvider=OPENSSL;}}{{keyStoreProvider=JKS;}}{{keyStorePassword=;}}{{keyStorePath=broker-keystore.jks}}{{;}}{{trustStorePassword=}}{{}}{{;trustStoreProvider=JKS;trustStorePath=broker-truststore.jks;}}{{amqpCredits=1000;tcpReceiveBufferSize=131072;connectionTtlMin=5000;connectionTtl=6;}}{{connectionTtlMax=18;}}{{}}{{amqpIdleTimeout=6;amqpLowCredits=300;saslMechanisms=EXTERNAL;batchDelay=0;enabledCipherSuites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384;}}{{directDeliver=true;tcpNoDelay=true;}}{{tcpSendBufferSize=131072}}

> Log Subject Name of expired client certificates
> ---
>
> Key: ARTEMIS-3053
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3053
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: AMQP, Broker
>Affects Versions: 2.16.0
>Reporter: Sebastian T
>Priority: Minor
>
> We are using client authentication with our large central cloud broker 
> instance and are seeing CertificateExpiredExceptions in the logs:
> {{AMQ08: SSL handshake failed for client from /x.x.x.x:59484: 
> java.security.cert.CertificateExpiredException: NotAfter: Wed Sep 23 15:00:00 
> CEST 2020.}}
> It would be very helpful if the client certificate subject DN could be logged 
> too so we can figure out which client apps are causing this.
> The reported IP address is not helpful as the client apps are running in 
> elastic K8s/Cloud Foundry clusters.
>  
> Logging happens here 
> [https://github.com/apache/activemq-artemis/blob/bfca1c59de57168afec045dd5b889c759b3e58a1/artemis-server/src/main/java/org/apache/activemq/artemis/core/remoting/impl/netty/NettyAcceptor.java#L1012]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3053) Log Subject Name of expired client certificates

2021-01-20 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268461#comment-17268461
 ] 

Sebastian T commented on ARTEMIS-3053:
--

Yes, we are using cert-based authentication. The certificate that is expired is 
a client certificate, not the server's.

We are using an acceptor config like this:

{{tcp://0.0.0.0:5671?needClientAuth=true}}{{;}}{{protocols=AMQP;}}{{}}{{}}{{enabledProtocols=TLSv1.2}}{{}}{{;sslEnabled=true}}{{;}}{{sslProvider=OPENSSL;}}{{keyStoreProvider=JKS;}}{{keyStorePassword=;}}{{keyStorePath=broker-keystore.jks}}{{;}}{{trustStorePassword=}}{{}}{{;trustStoreProvider=JKS;trustStorePath=broker-truststore.jks;}}{{amqpCredits=1000;tcpReceiveBufferSize=131072;connectionTtlMin=5000;connectionTtl=6;}}{{connectionTtlMax=18;}}{{}}{{amqpIdleTimeout=6;amqpLowCredits=300;saslMechanisms=EXTERNAL;batchDelay=0;enabledCipherSuites=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384;}}{{directDeliver=true;tcpNoDelay=true;}}{{tcpSendBufferSize=131072}}

> Log Subject Name of expired client certificates
> ---
>
> Key: ARTEMIS-3053
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3053
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: AMQP, Broker
>Affects Versions: 2.16.0
>Reporter: Sebastian T
>Priority: Minor
>
> We are using client authentication with our large central cloud broker 
> instance and are seeing CertificateExpiredExceptions in the logs:
> {{AMQ08: SSL handshake failed for client from /x.x.x.x:59484: 
> java.security.cert.CertificateExpiredException: NotAfter: Wed Sep 23 15:00:00 
> CEST 2020.}}
> It would be very helpful if the client certificate subject DN could be logged 
> too so we can figure out which client apps are causing this.
> The reported IP address is not helpful as the client apps are running in 
> elastic K8s/Cloud Foundry clusters.
>  
> Logging happens here 
> [https://github.com/apache/activemq-artemis/blob/bfca1c59de57168afec045dd5b889c759b3e58a1/artemis-server/src/main/java/org/apache/activemq/artemis/core/remoting/impl/netty/NettyAcceptor.java#L1012]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3053) Log Subject Name of expired client certificates

2021-01-05 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3053:


 Summary: Log Subject Name of expired client certificates
 Key: ARTEMIS-3053
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3053
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: AMQP, Broker
Affects Versions: 2.16.0
Reporter: Sebastian T


We are using client authentication with our large central cloud broker instance 
and are seeing CertificateExpiredExceptions in the logs:

{{AMQ08: SSL handshake failed for client from /x.x.x.x:59484: 
java.security.cert.CertificateExpiredException: NotAfter: Wed Sep 23 15:00:00 
CEST 2020.}}

It would be very helpful if the client certificate subject DN could be logged 
too so we can figure out which client apps are causing this.

The reported IP address is not helpful as the client apps are running in 
elastic K8s/Cloud Foundry clusters.

 

Logging happens here 
[https://github.com/apache/activemq-artemis/blob/bfca1c59de57168afec045dd5b889c759b3e58a1/artemis-server/src/main/java/org/apache/activemq/artemis/core/remoting/impl/netty/NettyAcceptor.java#L1012]
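
To illustrate what we are asking for, one possible approach (just a sketch, not the actual Artemis code) is a delegating trust manager that logs the subject DN whenever validation of the client chain fails:

{code:java}
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

// Illustrative only: delegates to the real trust manager and logs the subject DN on failure.
public class LoggingTrustManager implements X509TrustManager {

    private final X509TrustManager delegate;

    public LoggingTrustManager(X509TrustManager delegate) {
        this.delegate = delegate;
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        try {
            delegate.checkClientTrusted(chain, authType);
        } catch (CertificateException e) {
            // chain[0] is the client's end-entity certificate
            System.err.println("Client certificate rejected, subject DN: "
                    + chain[0].getSubjectX500Principal().getName() + " (" + e + ")");
            throw e;
        }
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        delegate.checkServerTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }
}
{code}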

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3001) Provide address and queue cound via ActiveMQServerControl

2020-11-17 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3001:


 Summary: Provide address and queue cound via ActiveMQServerControl
 Key: ARTEMIS-3001
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3001
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: JMX
Affects Versions: 2.16.0
Reporter: Sebastian T


The accompanying PR introduces two new methods *getAddressCount* and 
*getQueueCount* to the *ActiveMQServerControl* class to retrieve the current 
number of addresses and current number of queues on the broker.

We want to monitor these numbers via our APM. Currently we have to use 
*ActiveMQServerControl#getAddressNames().size()* and 
*ActiveMQServerControl#getQueueNames().size()*, which, however, are too 
expensive for our use case.
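
For context, this is roughly how such a metric can be polled over JMX today (a sketch; the JMX URL, broker name and the ObjectNameBuilder usage are assumptions, not our monitoring code):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

// Sketch of a JMX poller; host, port and broker name are placeholders.
public class BrokerCountsProbe {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker.example.com:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName on = ObjectNameBuilder
                    .create("org.apache.activemq.artemis", "mybroker", true)
                    .getActiveMQServerObjectName();
            ActiveMQServerControl control = MBeanServerInvocationHandler
                    .newProxyInstance(mbsc, on, ActiveMQServerControl.class, false);

            // Today: materializes the full name arrays just to count them.
            int addressCount = control.getAddressNames().length;
            int queueCount = control.getQueueNames().length;
            System.out.println(addressCount + " addresses, " + queueCount + " queues");
        } finally {
            connector.close();
        }
    }
}
{code}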



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3001) Provide address and queue count via ActiveMQServerControl

2020-11-17 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-3001:
-
Summary: Provide address and queue count via ActiveMQServerControl  (was: 
Provide address and queue cound via ActiveMQServerControl)

> Provide address and queue count via ActiveMQServerControl
> -
>
> Key: ARTEMIS-3001
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3001
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: JMX
>Affects Versions: 2.16.0
>Reporter: Sebastian T
>Priority: Major
>
> The accompanying PR introduces two new methods *getAddressCount* and 
> *getQueueCount* to the *ActiveMQServerControl* class to retrieve the current 
> number of addresses and current number of queues on the broker.
> We want to monitor these numbers via our APM. Currently we have to use 
> *ActiveMQServerControl#getAddressNames().size()* and 
> *ActiveMQServerControl#getQueueNames().size()*, which, however, are too 
> expensive for our use case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2982) Improve "Browse Queues" view in web console

2020-11-10 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2982:


 Summary: Improve "Browse Queues" view in web console
 Key: ARTEMIS-2982
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2982
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Web Console
Affects Versions: 2.16.0
Reporter: Sebastian T


The accompanying PR improves the "Browse Queues" view in the following ways:
 # Remove the unused column "Queue Count"
 # When clicking on an address in the "address" column, the respective address 
is selected in the navigation tree
 # Make the "name" cell in each row clickable so it selects the respective queue 
in the navigation tree and sets the queue filter.
 # Make the "message count" cell in each row clickable so it navigates to the 
message queue browser.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2606) Artemis Admin Web Console not loading on server with many queues

2020-01-29 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17026092#comment-17026092
 ] 

Sebastian T commented on ARTEMIS-2606:
--

[~jbertram] I am currently manually patching the class file for our broker 
installation and it works as expected.

Regarding ARTEMIS-2091, I guess it could not even be implemented before. AFAIK 
custom plugins are only possible as of Hawt.io 2.8. This was implemented via 
[https://github.com/hawtio/hawtio/pull/2600]

[~tadayosi] It would be great if you could backport the fix and create a hotfix 
release. I guess migrating a plugin from Hawt.io 1 to 2 is not just a matter of 
copying the files to new locations but means reimplementing the plugin, so it 
will take some time/effort before the Artemis web console gets migrated.

> Artemis Admin Web Console not loading on server with many queues
> 
>
> Key: ARTEMIS-2606
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2606
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.11.0
> Environment: Artemis 2.11.0 on AmazonLinux 2 with Amazon Corretto JDK 
> (but also reproducable on Windows with OpenJDK)
>Reporter: Sebastian T
>Priority: Critical
>
> We have a high number of queues (10.000+) on one of our Artemis clusters. Now 
> the Artemis admin UI is not responding at all (blank screen).
> I did some testing and saw that when I access a server with 500 queues, the 
> console downloads an 8.5MB JSON file from the server to the browser; with 
> 3000 queues that JSON file is already 35MB large.
> This is the HTTP Request:
> https:///jolokia/?maxDepth=9=5=true=false
> Request Method:POST
> Request Body: 
> \{"type":"exec","mbean":"hawtio:type=security,name=RBACRegistry","operation":"list()"}
> I suspect the problem is related to the fact that Artemis creates MBean 
> objects for each address and queue and that all of this MBean information is 
> downloaded by Hawt.io via Jolokia.
> We used RabbitMQ before and had no issues with their admin UI while 
> administering 30.000+ queues.
> Any suggestions regarding temporary workarounds are appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2610) Improve ActiveMQServer.getConnectionCount()

2020-01-29 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2610:


 Summary: Improve ActiveMQServer.getConnectionCount()
 Key: ARTEMIS-2610
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2610
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.11.0
Reporter: Sebastian T


We are using ActiveMQServer.getConnectionCount() as one metric to constantly 
monitor our brokers via JMX.

{{ActiveMQServer.getConnectionCount()}} currently invokes 
{{remotingService.getConnections().size()}} to determine the connection count. 
This is unnecessarily expensive as {{remotingService.getConnections()}} is 
synchronized and returns a new {{Set}} instance with all 
connections on each invocation.

This PR introduces a new method {{RemotingService.getConnectionCount()}} which 
avoids the synchronization and temporary object creation.
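
To illustrate the difference, a simplified sketch (not the actual RemotingServiceImpl code):

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch, not the actual Artemis code.
class ConnectionRegistry {
    private final ConcurrentHashMap<Object, Object> connections = new ConcurrentHashMap<>();

    // Old pattern: copies every entry into a new Set just to count it,
    // and (in the real implementation) does so while holding a lock.
    synchronized Set<Object> getConnections() {
        return new HashSet<>(connections.values());
    }

    int connectionCountViaCopy() {
        return getConnections().size();
    }

    // New pattern: a direct, lock-free count on the underlying map.
    int getConnectionCount() {
        return connections.size();
    }
}
{code}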

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2606) Artemis Admin Web Console not loading on server with many queues

2020-01-24 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17022823#comment-17022823
 ] 

Sebastian T commented on ARTEMIS-2606:
--

I changed the priority to critical as we are seeing very high memory 
allocations because of this issue, which affect broker operation.

My PR for the Hawt.io project got merged into the 2.x stream. Maybe there is a 
way to integrate the fix for the RBACRegistry into the Artemis distribution.

> Artemis Admin Web Console not loading on server with many queues
> 
>
> Key: ARTEMIS-2606
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2606
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.11.0
> Environment: Artemis 2.11.0 on AmazonLinux 2 with Amazon Corretto JDK 
> (but also reproducable on Windows with OpenJDK)
>Reporter: Sebastian T
>Priority: Critical
>
> We have a high number of queues (10.000+) on one of our Artemis clusters. Now 
> the Artemis admin UI is not responding at all (blank screen).
> I did some testing and saw that when I access a server with 500 queues, the 
> console downloads an 8.5MB JSON file from the server to the browser; with 
> 3000 queues that JSON file is already 35MB large.
> This is the HTTP Request:
> https:///jolokia/?maxDepth=9=5=true=false
> Request Method:POST
> Request Body: 
> \{"type":"exec","mbean":"hawtio:type=security,name=RBACRegistry","operation":"list()"}
> I suspect the problem is related to the fact that Artemis creates MBean 
> objects for each address and queue and that all of this MBean information is 
> downloaded by Hawt.io via Jolokia.
> We used RabbitMQ before and had no issues with their admin UI while 
> administering 30.000+ queues.
> Any suggestions regarding temporary workarounds are appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2606) Artemis Admin Web Console not loading on server with many queues

2020-01-24 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2606:
-
Priority: Critical  (was: Major)

> Artemis Admin Web Console not loading on server with many queues
> 
>
> Key: ARTEMIS-2606
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2606
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.11.0
> Environment: Artemis 2.11.0 on AmazonLinux 2 with Amazon Corretto JDK 
> (but also reproducable on Windows with OpenJDK)
>Reporter: Sebastian T
>Priority: Critical
>
> We have a high number of queues (10.000+) on one of our Artemis clusters. Now 
> the Artemis admin UI is not responding at all (blank screen).
> I did some testing and saw that when I access a server with 500 queues, the 
> console downloads an 8.5MB JSON file from the server to the browser; with 
> 3000 queues that JSON file is already 35MB large.
> This is the HTTP Request:
> https:///jolokia/?maxDepth=9=5=true=false
> Request Method:POST
> Request Body: 
> \{"type":"exec","mbean":"hawtio:type=security,name=RBACRegistry","operation":"list()"}
> I suspect the problem is related to the fact that Artemis creates MBean 
> objects for each address and queue and that all of this MBean information is 
> downloaded by Hawt.io via Jolokia.
> We used RabbitMQ before and had no issues with their admin UI while 
> administering 30.000+ queues.
> Any suggestions regarding temporary workarounds are appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2606) Artemis Admin Web Console not loading on server with many queues

2020-01-23 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17022187#comment-17022187
 ] 

Sebastian T commented on ARTEMIS-2606:
--

I sent a PR to the Hawt.io project that solves this issue: 
[https://github.com/hawtio/hawtio/pull/2617]

Since Artemis is using a very old version of Hawt.io, the fix either needs to 
be backported to Hawt.io 1.5.x or, once it is merged and released, Artemis 
needs to be upgraded to Hawt.io 2.x.

> Artemis Admin Web Console not loading on server with many queues
> 
>
> Key: ARTEMIS-2606
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2606
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.11.0
> Environment: Artemis 2.11.0 on AmazonLinux 2 with Amazon Corretto JDK 
> (but also reproducable on Windows with OpenJDK)
>Reporter: Sebastian T
>Priority: Major
>
> We have a high number of queues (10.000+) on one of our Artemis clusters. Now 
> the Artemis admin UI is not responding at all (blank screen).
> I did some testing and saw that when I access a server with 500 queues, the 
> console downloads an 8.5MB JSON file from the server to the browser; with 
> 3000 queues that JSON file is already 35MB large.
> This is the HTTP Request:
> https:///jolokia/?maxDepth=9=5=true=false
> Request Method:POST
> Request Body: 
> \{"type":"exec","mbean":"hawtio:type=security,name=RBACRegistry","operation":"list()"}
> I suspect the problem is related to the fact that Artemis creates MBean 
> objects for each address and queue and that all of this MBean information is 
> downloaded by Hawt.io via Jolokia.
> We used RabbitMQ before and had no issues with their admin UI while 
> administering 30.000+ queues.
> Any suggestions regarding temporary workarounds are appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2606) Artemis Admin Web Console not loading on server with many queues

2020-01-23 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2606:


 Summary: Artemis Admin Web Console not loading on server with many 
queues
 Key: ARTEMIS-2606
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2606
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.11.0
 Environment: Artemis 2.11.0 on AmazonLinux 2 with Amazon Corretto JDK 
(but also reproducable on Windows with OpenJDK)
Reporter: Sebastian T


We have a high number of queues (10.000+) on one of our Artemis clusters. Now 
the Artemis admin UI is not responding at all (blank screen).

I did some testing and saw that when I access a server with 500 queues, the 
console downloads an 8.5MB JSON file from the server to the browser; with 3000 
queues that JSON file is already 35MB large.

This is the HTTP Request:
https:///jolokia/?maxDepth=9=5=true=false
Request Method:POST
Request Body: 
\{"type":"exec","mbean":"hawtio:type=security,name=RBACRegistry","operation":"list()"}

I suspect the problem is related to the fact that Artemis creates MBean objects 
for each address and queue and that all of this MBean information is downloaded 
by Hawt.io via Jolokia.

We used RabbitMQ before and had no issues with their admin UI while 
administering 30.000+ queues.

Any suggestions regarding temporary workarounds are appreciated.
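
To reproduce the size of that response outside the browser, something like the following can be used (host, port, path and credentials are placeholders; the request body is the one shown above):

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Measures the size of the console's RBACRegistry list() response.
// Host, port and path are placeholders; depending on the console security
// settings you may also need authentication and an Origin header.
public class JolokiaListSize {
    public static void main(String[] args) throws Exception {
        String body = "{\"type\":\"exec\","
                + "\"mbean\":\"hawtio:type=security,name=RBACRegistry\","
                + "\"operation\":\"list()\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://broker.example.com:8161/console/jolokia/"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<byte[]> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofByteArray());
        System.out.println("HTTP " + response.statusCode() + ", "
                + response.body().length + " bytes");
    }
}
{code}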



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-2575) Deadlock in embedded broker during unit test

2019-12-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997467#comment-16997467
 ] 

Sebastian T edited comment on ARTEMIS-2575 at 12/16/19 5:17 PM:


Based on the stacktrace it looks like a race condition between two consumers 
being closed at the exact same time on the same queue.
 Thread 12 makes it to ServerConsumerImpl line 600 and Thread 8 makes it to 
ServerConsumerImpl line 559. Then the deadlock happens because Thread 12 holds 
a lock on ManagementServiceImpl and tries to acquire a lock on the queue 
object, whereas Thread 8 holds a lock on the queue object and tries to acquire 
a lock on ManagementServiceImpl.
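
Stripped down to its essence, this is the classic lock-ordering inversion sketched below (an illustration only, not the real classes); the relevant parts of the thread dump follow.

{code:java}
// Stripped-down illustration of the lock ordering above, not the actual Artemis classes.
public class LockOrderInversion {
    static final Object managementService = new Object();
    static final Object queue = new Object();

    public static void main(String[] args) {
        // "Thread-12": ServerConsumerImpl.close() -> locks the management service,
        // then the notification handling tries to lock the queue.
        new Thread(() -> {
            synchronized (managementService) {
                sleep(100);
                synchronized (queue) { }
            }
        }, "Thread-12").start();

        // "Thread-8": QueueImpl.removeConsumer() -> locks the queue,
        // then removing the binding tries to lock the management service.
        new Thread(() -> {
            synchronized (queue) {
                sleep(100);
                synchronized (managementService) { }
            }
        }, "Thread-8").start();
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
{code}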

"Thread-12 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3814 BLOCKED on 
org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908 
 owned by "Thread-8 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3768
  at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.addRedistributor(QueueImpl.java)
  -  *blocked {color:#00875a}on 
org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908{color}*
  at 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.onNotification(PostOfficeImpl.java:415)
  -  locked java.lang.Object@5217b201
  at 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl.sendNotification(ManagementServiceImpl.java:661)
  -  locked java.lang.Object@5217b201
  -  *locked{color:#ff8b00} 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58{color}*
  *at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.close(ServerConsumerImpl.java:600)*
  -  locked 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl@7af516ef
  at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.close(ServerConsumerImpl.java:533)
  at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.closeConsumer(ServerSessionImpl.java:1610)
  at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.slowPacketHandler(ServerSessionPacketHandler.java:585)
  at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.onMessagePacket(ServerSessionPacketHandler.java:285)
  at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$$Lambda$643/1351406291.onMessage(Unknown
 Source)
  at 
org.apache.activemq.artemis.utils.actors.Actor.doTask(Actor.java:33)
  ...

 

"Thread-8 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3768 BLOCKED on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58
 
 owned by "Thread-12 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3814
  at 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl.unregisterQueue(ManagementServiceImpl.java)
  -  *blocked{color:#ff8b00} on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58{color}*
  at 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.removeBinding(PostOfficeImpl.java:763)
  -  locked 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl@7817714f
  at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.deleteQueue(QueueImpl.java:2098)
  at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2174)
  at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2122)
  at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2113)
  at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2093)
  at 
org.apache.activemq.artemis.core.server.impl.TransientQueueManagerImpl.doIt(TransientQueueManagerImpl.java:43)
  at 
org.apache.activemq.artemis.core.server.impl.TransientQueueManagerImpl$$Lambda$1186/821597250.run(Unknown
 Source)
  at 
org.apache.activemq.artemis.utils.ReferenceCounterUtil.execute(ReferenceCounterUtil.java:81)
  at 
org.apache.activemq.artemis.utils.ReferenceCounterUtil.decrement(ReferenceCounterUtil.java:71)
  at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.removeConsumer(QueueImpl.java:1310)
  -  *locked 
{color:#00875a}org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908{color}*
  at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.removeItself(ServerConsumerImpl.java:626)
  *at 

[jira] [Commented] (ARTEMIS-2575) Deadlock in embedded broker during unit test

2019-12-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16997467#comment-16997467
 ] 

Sebastian T commented on ARTEMIS-2575:
--

Based on the stacktrace it looks like a race condition between two consumers 
being closed at the exact same time on the same queue.
Thread 12 makes it to ServerConsumerImpl line 600 and Thread 8 makes it to 
ServerConsumerImpl line 559. Then the deadlock happens because Thread 12 holds 
a lock on ManagementServiceImpl and tries to acquire a lock on the queue 
object, whereas Thread 8 holds a lock on the queue object and tries to acquire 
a lock on ManagementServiceImpl.



"Thread-12 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3814 BLOCKED on 
org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908 
 owned by "Thread-8 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3768
 at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.addRedistributor(QueueImpl.java)
 -  *blocked {color:#00875a}on 
org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908{color}*
 at 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.onNotification(PostOfficeImpl.java:415)
 -  locked java.lang.Object@5217b201
 at 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl.sendNotification(ManagementServiceImpl.java:661)
 -  locked java.lang.Object@5217b201
 -  *locked{color:#ff8b00} 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58{color}*
 *at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.close(ServerConsumerImpl.java:600)*
 ** -  locked 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl@7af516ef
 at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.close(ServerConsumerImpl.java:533)
 at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.closeConsumer(ServerSessionImpl.java:1610)
 at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.slowPacketHandler(ServerSessionPacketHandler.java:585)
 at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.onMessagePacket(ServerSessionPacketHandler.java:285)
 at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$$Lambda$643/1351406291.onMessage(Unknown
 Source)
 at org.apache.activemq.artemis.utils.actors.Actor.doTask(Actor.java:33)
 ...

 

"Thread-8 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3768 BLOCKED on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58
 
 owned by "Thread-12 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3814
 at 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl.unregisterQueue(ManagementServiceImpl.java)
 -  *blocked{color:#ff8b00} on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58{color}*
 at 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl.removeBinding(PostOfficeImpl.java:763)
 -  locked 
org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl@7817714f
 at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.deleteQueue(QueueImpl.java:2098)
 at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2174)
 at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2122)
 at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2113)
 at 
org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.destroyQueue(ActiveMQServerImpl.java:2093)
 at 
org.apache.activemq.artemis.core.server.impl.TransientQueueManagerImpl.doIt(TransientQueueManagerImpl.java:43)
 at 
org.apache.activemq.artemis.core.server.impl.TransientQueueManagerImpl$$Lambda$1186/821597250.run(Unknown
 Source)
 at 
org.apache.activemq.artemis.utils.ReferenceCounterUtil.execute(ReferenceCounterUtil.java:81)
 at 
org.apache.activemq.artemis.utils.ReferenceCounterUtil.decrement(ReferenceCounterUtil.java:71)
 at 
org.apache.activemq.artemis.core.server.impl.QueueImpl.removeConsumer(QueueImpl.java:1310)
 -  *locked 
{color:#00875a}org.apache.activemq.artemis.core.server.impl.QueueImpl@1ee23908{color}*
 at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.removeItself(ServerConsumerImpl.java:626)
 *at 
org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.close(ServerConsumerImpl.java:559)*

[jira] [Created] (ARTEMIS-2575) Deadlock in embedded broker during unit test

2019-12-13 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2575:


 Summary: Deadlock in embedded broker during unit test
 Key: ARTEMIS-2575
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2575
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.1
Reporter: Sebastian T


Today our integration test build of a Spring Boot 2.2 application with an 
embedded Artemis 2.10.1 broker failed with a deadlock alert while testing with 
the CORE protocol. I could not reproduce the issue again and thus cannot 
provide a test case, but wanted to report it anyway for review. Maybe some 
conclusion regarding the underlying issue can be drawn from the stacktraces of 
the deadlocked threads.

 
{code:java}
16:55:49.169 WARN [l-threads)] ionImpl(RemotingConnectionImpl.java:210) 
AMQ212037: Connection failure to localhost/127.0.0.1:62616 has been detected: 
AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 
69 [code=CONNECTION_TIMEDOUT]
16:56:19.171 WARN [l-threads)] ionImpl(RemotingConnectionImpl.java:210) 
AMQ212037: Connection failure to localhost/127.0.0.1:62616 has been detected: 
AMQ219014: Timed out after waiting 30,000 ms for response when sending packet 
69 [code=CONNECTION_TIMEDOUT]
16:56:23.109 WARN [d-threads)] CriticalMeasure(CriticalMeasure.java:99) 
Component org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on 
path 3
16:56:23.109 WARN [d-threads)] QServerImpl(ActiveMQServerImpl.java:713) 
AMQ224081: The component 
QueueImpl[name=nonDurable.testQueue.6C9JYR47olzg62aOdBRJIq, 
postOffice=PostOfficeImpl 
[server=ActiveMQServerImpl::serverUUID=ef2586cd-1dc8-11ea-8bf6-0a5801f021fd], 
temp=true]@1ee23908 is not responsive
16:56:23.191 WARN [d-threads)] ServerImpl(ActiveMQServerImpl.java:1022) 
AMQ222199: Thread dump: 
***
Complete Thread dump 

Deadlock detected!

"Thread-18 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=4061 BLOCKED on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58
 owned by "Thread-12 
(ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@3defe5e4)"
 Id=3814
at 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl.sendNotification(ManagementServiceImpl.java:651)
-  blocked on 
org.apache.activemq.artemis.core.server.management.impl.ManagementServiceImpl@2821ce58
at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.sendSessionNotification(ServerSessionImpl.java:455)
at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.doClose(ServerSessionImpl.java:435)
-  locked 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl@4a68311e
at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl$1.done(ServerSessionImpl.java:1597)
at 
org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:189)
at 
org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:130)
at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.close(ServerSessionImpl.java:1589)
at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.slowPacketHandler(ServerSessionPacketHandler.java:566)
at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.onMessagePacket(ServerSessionPacketHandler.java:285)
at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler$$Lambda$643/1351406291.onMessage(Unknown
 Source)
at org.apache.activemq.artemis.utils.actors.Actor.doTask(Actor.java:33)
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66)
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase$$Lambda$604/1843989939.run(Unknown
 Source)
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66)
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase$$Lambda$604/1843989939.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)

Number of locked synchronizers = 1
- java.util.concurrent.ThreadPoolExecutor$Worker@5014e5b4


"Thread-12 

[jira] [Commented] (ARTEMIS-2571) Remove unneccessary synchronization in ActiveMQServerImpl

2019-12-10 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993017#comment-16993017
 ] 

Sebastian T commented on ARTEMIS-2571:
--

I am attaching a testcase to demonstrate the implications of this PR.

When you run the contained maven project with "mvn clean test", the 
Artemis2571_PerfTest will be executed against Artemis 2.10.1.

When you run it with "mvn clean test -Ppatched", the Artemis2571_PerfTest will 
be executed against Artemis 2.10.1 and the ActiveMQServerImpl.java file located 
under src/test/patch/... will be compiled and used. This file only has the 
synchronized keywords removed from the getSessions() methods.

The test establishes 300 connections to the embedded broker with 40 sessions 
each. Then 3 separate threads invoke the 
ActiveMQServerControl.listConnections() method every 5 seconds to simulate 
three open browsers with autorefresh enabled. The execution time of the 
listConnections method is measured and printed to the console.

To see if the change has an impact on hot-path performance, a producer and a 
consumer are connected to the broker and read from/write to one queue 
concurrently.

On my 8-core, 32GB test machine I see no difference in messages sent/received 
between the patched and the unpatched test. But listConnections() is executed 
50% faster in the patched version, i.e. 2 seconds per execution instead of 4 
seconds.
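
The measuring part corresponds roughly to the following sketch (an illustration, not the code in the attached project):

{code:java}
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;

// Illustration of the timing loop described above; not the attached test code.
public class ListConnectionsLoad {

    /** Starts N threads that call listConnectionsAsJSON() every 5 seconds and print the duration. */
    static void simulateBrowsers(ActiveMQServerControl serverControl, int browsers) {
        for (int i = 0; i < browsers; i++) {
            Thread t = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    long start = System.nanoTime();
                    try {
                        serverControl.listConnectionsAsJSON();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(Thread.currentThread().getName()
                            + ": listConnections took " + elapsedMs + " ms");
                    try {
                        Thread.sleep(5_000);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, "browser-" + i);
            t.setDaemon(true);
            t.start();
        }
    }
}
{code}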

> Remove unneccessary synchronization in ActiveMQServerImpl
> -
>
> Key: ARTEMIS-2571
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2571
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker, Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: artemis-test.zip
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ActiveMQServerImpl sessions field is a ConcurrentHashMap. Synchronizing 
> on the ActiveMQServerImpl object to iterate over the map is not necessary. 
> ActiveMQServerImpl#getSession, ActiveMQServerImpl#removeSession and 
> ActiveMQServerImpl#createSession also work on the sessions field without 
> synchronizing on the ActiveMQServerImpl.
> Removing the synchronized keyword from the ActiveMQServerImpl#getSessions() 
> methods improves, for example, the loading of the Connections view, 
> especially when multiple administrators are using the UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2571) Remove unneccessary synchronization in ActiveMQServerImpl

2019-12-10 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2571:
-
Attachment: artemis-test.zip

> Remove unneccessary synchronization in ActiveMQServerImpl
> -
>
> Key: ARTEMIS-2571
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2571
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker, Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: artemis-test.zip
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The ActiveMQServerImpl sessions field is a ConcurrentHashMap. Synchronizing 
> on the ActiveMQServerImpl object to iterate over the map is not necessary. 
> ActiveMQServerImpl#getSession, ActiveMQServerImpl#removeSession and 
> ActiveMQServerImpl#createSession also work on the sessions field without 
> synchronizing on the ActiveMQServerImpl.
> Removing the synchronized keyword from the ActiveMQServerImpl#getSessions() 
> methods improves, for example, the loading of the Connections view, 
> especially when multiple administrators are using the UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2571) Remove unneccessary synchronization in ActiveMQServerImpl

2019-12-10 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2571:


 Summary: Remove unneccessary synchronization in ActiveMQServerImpl
 Key: ARTEMIS-2571
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2571
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker, Web Console
Affects Versions: 2.10.1
Reporter: Sebastian T


The ActiveMQServerImpl sessions field is a ConcurrentHashMap. Synchronizing on 
the ActiveMQServerImpl object to iterate over the map is not necessary. 
ActiveMQServerImpl#getSession, ActiveMQServerImpl#removeSession and 
ActiveMQServerImpl#createSession also work on the sessions field without 
synchronizing on the ActiveMQServerImpl.

Removing the synchronized keyword from the ActiveMQServerImpl#getSessions() 
methods improves, for example, the loading of the Connections view, especially 
when multiple administrators are using the UI.
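
In essence the change boils down to the following (simplified sketch, not the actual ActiveMQServerImpl code):

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simplified sketch of the change, not the actual ActiveMQServerImpl code.
class SessionRegistry {
    private final ConcurrentMap<String, Object> sessions = new ConcurrentHashMap<>();

    // Before: serializes all callers on the server-wide monitor even though
    // a ConcurrentHashMap can be read safely without it.
    synchronized Set<Object> getSessionsSynchronized() {
        return new HashSet<>(sessions.values());
    }

    // After: the copy is still made, but without blocking other management
    // calls (and the hot path) on the server-wide monitor.
    Set<Object> getSessions() {
        return new HashSet<>(sessions.values());
    }
}
{code}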



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2570) Very slow loading of Connections view

2019-12-06 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2570:


 Summary: Very slow loading of Connections view
 Key: ARTEMIS-2570
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2570
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.10.1
Reporter: Sebastian T


The connections view of the admin UI in our production environment is loading 
extremely slowly, around 20 seconds compared to 2 seconds for the sessions 
view. We have around 6.000 sessions and 400 connections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Description: 
The accompanying PR improves the following parts of the Admin UI's message 
browser:
 # use fixed column width for columns with date/numeric/boolean values
 # move the *User ID* column to the end
 # add a separate *Validated User* column displaying the *_AMQ_VALIDATED_USER* 
string property.
 # display human-readable names in the *type* column instead of the numeric 
value. The numeric value is still accessible via a tooltip of the respective 
cell and is used for sorting.
 # The *Expires* column displays a human-friendly representation of the time 
when the message expires (or expired, in case it was not yet GCed) instead of a 
Unix timestamp.
 If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
 The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.
 # The messages are now sortable by the *Timestamp* column, as the server now 
sends the Unix timestamp instead of an already pre-rendered datetime string in 
the server's locale.
 # Add a new sortable column that displays the message's persistent size.
 # Change the background color of the table row the mouse currently hovers over 
to light yellow.

Here is screen-shot from before the changes:
 !ScreenShot_BeforePR.png!

Here is a screen-shot from after the changes:
 !ScreenShot.png!
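
The rule for the *Expires* column boils down to the following (expressed in Java for illustration; the console itself implements this client-side in JavaScript):

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Java rendering of the Expires rule described above; illustration only.
public class ExpiresFormat {

    static String formatExpires(long expirationMillis) {
        Instant expires = Instant.ofEpochMilli(expirationMillis);
        Duration untilExpiry = Duration.between(Instant.now(), expires);

        // Less than 24 hours in the future: relative "In hh:mm:ss" representation.
        if (!untilExpiry.isNegative() && untilExpiry.toHours() < 24) {
            return String.format("In %02d:%02d:%02d",
                    untilExpiry.toHours(),
                    untilExpiry.toMinutes() % 60,
                    untilExpiry.getSeconds() % 60);
        }

        // Otherwise (or already expired): absolute timestamp in the user's local timezone.
        return DateTimeFormatter.ofPattern("yyyy.MM.dd HH:mm:ss")
                .withZone(ZoneId.systemDefault())
                .format(expires);
    }
}
{code}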

  was:
The accompanying PR improves the following parts of the Admin UI's message 
browser:
 # use fixed column width for columns with date/numeric/boolean values
 # move the *User ID* column to the end
 # add a separate *Validated User* column displaying the *_AMQ_VALIDATED_USER* 
string property.
 # display human readable names in the *type* column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 # The *Expires* column displays a human friendly representation of the time 
when the message expires (or expired in case it was not yet GCed) instead of a 
unix timestamp.
 If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*.mm.dd hh:mm:ss*" in the local user's 
timezone.
 The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.
 # The messages are now sortable by the *Timestamp* column, as the server now 
sends the unix timestamp instead of an already pre-rendered datetime string in 
the server's locale
 # Add a new sortable column that displays the message's persistent size.

Here is a screen-shot from before the changes:
 !ScreenShot_BeforePR.png!

Here is a screen-shot from after the changes:
 !ScreenShot.png!


> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
>  # Change the background color of the table row the mouse currently hovers 
> over to light yellow.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Description: 
The accompanying PR improves the following parts of the Admin UI's message 
browser:
 # use fixed column width for columns with date/numeric/boolean values
 # move the *User ID* column to the end
 # add a separate *Validated User* column displaying the *_AMQ_VALIDATED_USER* 
string property.
 # display human readable names in the *type* column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 # The *Expires* column displays a human friendly representation of the time 
when the message expires (or expired in case it was not yet GCed) instead of a 
unix timestamp.
 If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
 The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.
 # The messages are now sortable by the *Timestamp* column, as the server now 
sends the unix timestamp instead of an already pre-rendered datetime string in 
the server's locale
 # Add a new sortable column that displays the message's persistent size.

Here is a screen-shot from before the changes:
 !ScreenShot_BeforePR.png!

Here is a screen-shot from after the changes:
 !ScreenShot.png!

  was:
The accompanying PR improves the following parts of the Admin UI's message 
browser:
 # use fixed column width for columns with date/numeric/boolean values
 # move the *userID* column to the end and make its width auto expand
 # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
string property in case the userID field is empty. this e.g. is the case in our 
environment where we use client certificate based authentication.
 # display human readable names in the *type* column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 # The *expires* column displays a human friendly representation of the time 
when the message expires (or expired in case it was not yet GCed) instead of a 
unix timestamp.
 If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
 The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.
 # The messages are now sortable by the *timestamp* column, as the server now 
sends the unix timestamp instead of an already pre-rendered datetime string in 
the server's locale
 # Add a new sortable column that displays the message's persistent size.

Here is a screen-shot from before the changes:
!ScreenShot_BeforePR.png!

Here is a screen-shot from after the changes:
!ScreenShot.png!


> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *User ID* column to the end
>  # add a separate *Validated User* column displaying the 
> *_AMQ_VALIDATED_USER* string property.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *Expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *Timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
>  !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
>  !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
> !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
> !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-08 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Here is a screen-shot from before the changes:
> !ScreenShot_BeforePR.png!
> Here is a screen-shot from after the changes:
> !ScreenShot.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot_BeforePR.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot_BeforePR.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot_BeforePR.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png, ScreenShot_BeforePR.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png
>
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png
>
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Description: 
The accompanying PR improves the following parts of the Admin UI's message 
browser:
 # use fixed column width for columns with date/numeric/boolean values
 # move the *userID* column to the end and make its width auto expand
 # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
string property in case the userID field is empty. this e.g. is the case in our 
environment where we use client certificate based authentication.
 # display human readable names in the *type* column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 # The *expires* column displays a human friendly representation of the time 
when the message expires (or expired in case it was not yet GCed) instead of a 
unix timestamp.
 If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
 The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.
 # The messages are now sortable by the *timestamp* column, as the server now 
sends the unix timestamp instead of an already pre-rendered datetime string in 
the server's locale
 # Add a new sortable column that displays the message's persistent size.

Attached is a screenshot illustrating the changes.

  was:
The accompanying PR improves the following parts of the Admin UI's message 
browser:

 * use fixed column width for columns with date/numeric/boolean values
 * move the userID column to the end and make its width auto expand
 * the userID field now alternatively displays the *_AMQ_VALIDATED_USER* string 
property in case the userID field is empty. this e.g. is the case in our 
environment where we use client certificate based authentication.
 * display human readable names in the type column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 * The expires column displays a human friendly representation of the time when 
the message expires (or expired in case it was not yet GCed) instead of a unix 
timestamp.
If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.

Attached is a screenshot illustrating the changes.


> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  # use fixed column width for columns with date/numeric/boolean values
>  # move the *userID* column to the end and make its width auto expand
>  # the *userID* column now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  # display human readable names in the *type* column instead of numeric 
> value. the numeric value is still accessible via a tooltip of the respective 
> cell and used for sorting
>  # The *expires* column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
>  If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
>  The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
>  # The messages are now sortable by the *timestamp* column, as the server now 
> sends the unix timestamp instead of an already pre-rendered datetime string 
> in the server's locale
>  # Add a new sortable column that displays the message's persistent size.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: (was: ScreenShot.png)

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png
>
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  * use fixed column width for columns with date/numeric/boolean values
>  * move the userID column to the end and make its width auto expand
>  * the userID field now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  * display human readable names in the type column instead of numeric value. 
> the numeric value is still accessible via a tooltip of the respective cell 
> and used for sorting
>  * The expires column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
> If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
> The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-07 Thread Sebastian T (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian T updated ARTEMIS-2541:
-
Attachment: ScreenShot.png

> Improve rendering in message browser of Admin UI
> 
>
> Key: ARTEMIS-2541
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.10.1
>Reporter: Sebastian T
>Priority: Minor
> Attachments: ScreenShot.png
>
>
> The accompanying PR improves the following parts of the Admin UI's message 
> browser:
>  * use fixed column width for columns with date/numeric/boolean values
>  * move the userID column to the end and make its width auto expand
>  * the userID field now alternatively displays the *_AMQ_VALIDATED_USER* 
> string property in case the userID field is empty. this e.g. is the case in 
> our environment where we use client certificate based authentication.
>  * display human readable names in the type column instead of numeric value. 
> the numeric value is still accessible via a tooltip of the respective cell 
> and used for sorting
>  * The expires column displays a human friendly representation of the time 
> when the message expires (or expired in case it was not yet GCed) instead of 
> a unix timestamp.
> If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
> used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
> timezone.
> The actual timestamp value with ms precision is still used when sorting the 
> columns and is accessible via a cell tooltip.
> Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2541) Improve rendering in message browser of Admin UI

2019-11-06 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2541:


 Summary: Improve rendering in message browser of Admin UI
 Key: ARTEMIS-2541
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2541
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Web Console
Affects Versions: 2.10.1
Reporter: Sebastian T
 Attachments: ScreenShot.png

The accompanying PR improves the following parts of the Admin UI's message 
browser:

 * use fixed column width for columns with date/numeric/boolean values
 * move the userID column to the end and make its width auto expand
 * the userID field now alternatively displays the *_AMQ_VALIDATED_USER* string 
property in case the userID field is empty. this e.g. is the case in our 
environment where we use client certificate based authentication.
 * display human readable names in the type column instead of numeric value. 
the numeric value is still accessible via a tooltip of the respective cell and 
used for sorting
 * The expires column displays a human friendly representation of the time when 
the message expires (or expired in case it was not yet GCed) instead of a unix 
timestamp.
If the message expires in less than 24 hours the format "*In hh:mm:ss*" is 
used, otherwise the format is "*yyyy.mm.dd hh:mm:ss*" in the local user's 
timezone.
The actual timestamp value with ms precision is still used when sorting the 
columns and is accessible via a cell tooltip.

Attached is a screenshot illustrating the changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2540) Display LargeMessage column in message browser of admin UI

2019-11-05 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-2540:


 Summary: Display LargeMessage column in message browser of admin UI
 Key: ARTEMIS-2540
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2540
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Web Console
Reporter: Sebastian T


Add a new column to the message browser view that indicates whether a message 
is treated as a large message by the broker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2422) Support Message Redistribution based on Message Filters/Selectors

2019-07-10 Thread Sebastian T (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882127#comment-16882127
 ] 

Sebastian T commented on ARTEMIS-2422:
--

[~jbertram] that is very valuable information! We could have used this 
information during the broker selection process last year.

As far as I understand, the reasons for not supporting it at the moment are 
primarily performance considerations in conjunction with lots of short-lived 
consumers. That would not be the case for us. Our consumers are long-lived, but 
sometimes there are technical reasons that result in automatic reconnection of 
client instances. So for us it would be nice if we could enable that behaviour 
at the potential expense of performance.

We can also look into creating a custom broker plugin or a patch for the server 
artifact ourselves. I checked the source code for the redistribution logic but 
currently have trouble grasping it. Hints or some guidance on how this could be 
achieved, and on which parts would need to be modified, would be more than 
welcome.
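
To make the intent concrete, here is a purely conceptual, self-contained sketch 
of the check we have in mind (plain Java, not tied to the Artemis plugin API; 
all names are made up):

{code:java}
import java.util.List;
import java.util.function.Predicate;

// Conceptual sketch only: when a filtered consumer disconnects, a message it
// matched is a candidate for redistribution if no remaining local consumer
// matches it.
class RedistributionSketch {

    record Message(String group) {}

    record Consumer(Predicate<Message> filter) {}

    static boolean shouldRedistribute(Message stranded,
                                      Consumer closedConsumer,
                                      List<Consumer> remainingLocalConsumers) {
        return closedConsumer.filter().test(stranded)
                && remainingLocalConsumers.stream()
                        .noneMatch(c -> c.filter().test(stranded));
    }

    public static void main(String[] args) {
        Consumer groupA = new Consumer(m -> "A".equals(m.group()));
        Consumer groupB = new Consumer(m -> "B".equals(m.group()));
        // A group-"A" message is stranded once the group-"A" consumer disconnects
        // and only a group-"B" consumer remains on this node -> redistribute.
        System.out.println(
                shouldRedistribute(new Message("A"), groupA, List.of(groupB)));
    }
}
{code}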

> Support Message Redistribution based on Message Filters/Selectors
> -
>
> Key: ARTEMIS-2422
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2422
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 2.9.0
>Reporter: Sebastian T
>Priority: Major
>
> We are running an active/active Artemis cluster (with ON_DEMAND message 
> load balancing and message redistribution enabled) with a TCP load balancer in 
> front. Our customers use consumers with message selectors/filters. When 
> applications have to reestablish their connection to the broker (e.g. because 
> of redeployment, infrastructure issues, etc.) it is not guaranteed that their 
> consumers end up on the same cluster node as before.
> Since message filters are only taken into consideration on first-time 
> distribution (i.e. the moment the message arrives on a broker) or when ALL 
> consumers of a queue on a particular node are removed, we sometimes end up 
> with a situation where messages are waiting on one node to be consumed 
> while the matching consumer is starving on another node.
> A related discussion from 2015: 
> [http://activemq.2283324.n4.nabble.com/artemis-cluster-don-t-redistribute-message-td4703503.html]
> We did run RabbitMQ in active/active configurations before and did not have 
> to worry about this particular issue.
> What we are looking for is an option so that when a consumer is disconnected 
> from a queue, there are messages left in the queue that were matched by this 
> consumer, and there is no other matching local consumer, redistribution of 
> these messages is triggered.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2422) Support Message Redistribution based on Message Filters/Selectors

2019-07-10 Thread Sebastian T (JIRA)
Sebastian T created ARTEMIS-2422:


 Summary: Support Message Redistribution based on Message 
Filters/Selectors
 Key: ARTEMIS-2422
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2422
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 2.9.0
Reporter: Sebastian T


We are running an active/active Artemis cluster (with ON_DEMAND message 
load balancing and message redistribution enabled) with a TCP load balancer in 
front. Our customers use consumers with message selectors/filters. When 
applications have to reestablish their connection to the broker (e.g. because of 
redeployment, infrastructure issues, etc.) it is not guaranteed that their 
consumers end up on the same cluster node as before.

Since message filters are only taken into consideration on first-time 
distribution (i.e. the moment the message arrives on a broker) or when ALL 
consumers of a queue on a particular node are removed, we sometimes end up 
with a situation where messages are waiting on one node to be consumed while the 
matching consumer is starving on another node.

A related discussion from 2015: 
[http://activemq.2283324.n4.nabble.com/artemis-cluster-don-t-redistribute-message-td4703503.html]

We did run RabbitMQ in active/active configurations before and did not have to 
worry about this particular issue.

What we are looking for is an option so that when a consumer is disconnected 
from a queue, there are messages left in the queue that were matched by this 
consumer, and there is no other matching local consumer, redistribution of these 
messages is triggered.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2285) Sorting by user column in connections view not working

2019-03-26 Thread Sebastian T (JIRA)
Sebastian T created ARTEMIS-2285:


 Summary: Sorting by user column in connections view not working
 Key: ARTEMIS-2285
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2285
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.7.0
Reporter: Sebastian T


The connections view in the web console cannot be sorted by user, despite the 
users column header being clickable and showing a sort direction indicator.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2281) Improve Addresses and Queues view in web console

2019-03-21 Thread Sebastian T (JIRA)
Sebastian T created ARTEMIS-2281:


 Summary: Improve Addresses and Queues view in web console
 Key: ARTEMIS-2281
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2281
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Web Console
Affects Versions: 2.7.0
Reporter: Sebastian T


The accompanying PR provides the following web console improvements:

Addresses View:
 * add tooltip to name cells
 * make queue count cells clickable to navigate to corresponding queues

Queues View:
 * add tooltips to name, address, filter cells
 * make address name cells clickable to navigate to corresponding address
 * make message count cells clickable to navigate to queue browser



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)