[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553329
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 17/Feb/21 02:10
Start Date: 17/Feb/21 02:10
Worklog Time Spent: 10m 
  Work Description: brudo commented on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780246783


   @Havret thanks for your comments. The issue is that hitting that first 
await on a thread that is currently inside a synchronized block anywhere up 
the call stack is already an invalid condition, one that can lead to deadlocks 
(and has been observed doing so, assuming I identified the right root cause). 
I agree that Task.Run is not the most elegant way to resolve it, but it might 
be the right solution for patching 1.8.x.
   
   A more correct way, assuming this kind of synchronization is still required 
in 2.0.x, would be to switch to an async-friendly alternative such as a 
SemaphoreSlim wherever the synchronized statement appears; then Task.Run would 
not be needed. I'd be willing to help with a change like that for 2.0.x as a 
separate PR.
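
   As an illustration, a minimal sketch of that SemaphoreSlim approach, with 
hypothetical type and method names rather than the actual NMS code:

{code:c#}
using System.Threading;
using System.Threading.Tasks;

class FailoverCoordinator
{
    // SemaphoreSlim(1, 1) is an async-friendly mutex: unlike lock/Monitor,
    // it can be awaited and has no thread affinity, so the critical section
    // may safely contain awaits.
    private readonly SemaphoreSlim _sync = new SemaphoreSlim(1, 1);

    public async Task TriggerReconnectionAttemptAsync()
    {
        await _sync.WaitAsync();
        try
        {
            // Awaiting here is fine; the continuation may resume on a
            // different thread, and Release below still works because
            // the semaphore has no owner thread.
            await ScheduleReconnectAsync();
        }
        finally
        {
            _sync.Release();
        }
    }

    private Task ScheduleReconnectAsync() => Task.CompletedTask; // placeholder
}
{code}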
   
   I don't see any locks of any kind in your Artemis-specific client, but 
putting the recovery loop in its own long-running task also provides a form of 
synchronization / coordination, as that Task will only be running on one thread 
at a time. I like that approach.
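
   A minimal sketch of that long-running-task pattern, assuming a channel-fed 
recovery loop; the names are illustrative, not the actual 
AutoRecoveringConnection API:

{code:c#}
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class RecoveryLoop
{
    private readonly Channel<Exception> _failures = Channel.CreateUnbounded<Exception>();

    public RecoveryLoop()
    {
        // A single long-running task owns all reconnect work, so attempts
        // are serialized without any lock statement.
        _ = Task.Run(async () =>
        {
            await foreach (Exception failure in _failures.Reader.ReadAllAsync())
            {
                await ReconnectAsync(failure);
            }
        });
    }

    // Called from connection event handlers; never blocks and never locks.
    public void OnConnectionFailure(Exception error) => _failures.Writer.TryWrite(error);

    private Task ReconnectAsync(Exception cause) => Task.CompletedTask; // placeholder
}
{code}

   Because only the loop ever calls ReconnectAsync, concurrent failure 
notifications collapse into sequential reconnect attempts without blocking the 
notifying threads.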



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553329)
Time Spent: 2h 20m  (was: 2h 10m)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)
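
To make the hazard concrete, here is a minimal sketch (hypothetical names, not 
the actual NMS failover code): awaiting directly inside a lock is a 
compile-time error, but calling an async method from inside the lock compiles, 
and everything up to that method's first await still runs while the monitor is 
held.

{code:c#}
using System.Threading.Tasks;

class FailoverExample
{
    private readonly object _mutex = new object();

    public void OnTransportInterrupted()
    {
        lock (_mutex)
        {
            // Awaiting directly here is compile error CS1996:
            //   await ScheduleReconnect();
            // Calling an async method instead compiles without complaint,
            // yet its code up to the first await executes on this thread,
            // while the monitor is still held.
            TriggerReconnectionAttempt(); // unawaited, as flagged in AMQNET-626
        }
    }

    private async Task TriggerReconnectionAttempt()
    {
        // This part runs inside the caller's lock; the continuation after
        // the await resumes later, outside the lock, on an arbitrary thread.
        await ScheduleReconnect();
    }

    private Task ScheduleReconnect() => Task.CompletedTask; // placeholder
}
{code}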



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Created] (AMQNET-661) Deadlock (again) in Failover Transport when reconnecting

2021-02-16 Thread Andy DeMaurice (Jira)
Andy DeMaurice created AMQNET-661:
-

 Summary: Deadlock (again) in Failover Transport when reconnecting
 Key: AMQNET-661
 URL: https://issues.apache.org/jira/browse/AMQNET-661
 Project: ActiveMQ .Net
  Issue Type: Bug
  Components: ActiveMQ
Affects Versions: 1.7.1
 Environment: AmazonMQ, not sure of the configuration (I'm a dev, not 
DevOps).

Client is .NET Core 3.1 app

Operating system is Windows Server something, not sure, but this should tell 
you:

0:068> vertarget
Windows 10 Version 14393 MP (16 procs) Free x64
Product: Server, suite: TerminalServer DataCenter SingleUserTS
10.0.14393.3986 (rs1_release.201002-1707)
Machine Name:
Debug session time: Tue Jan 12 17:02:23.000 2021 (UTC - 5:00)
System Uptime: 34 days 8:51:38.083
Process Uptime: 34 days 6:53:32.000
 Kernel time: 1 days 12:44:13.000
 User time: 14 days 4:40:26.000

Info about the exact NMS dll:

Image path: C:\Program 
Files\Meridium\ApplicationServer\policy-execution\Apache.NMS.dll
 Image name: Apache.NMS.dll
 Has CLR image header, track-debug-data flag not set
 Image was built with /Brepro flag.
 Timestamp: EA050D9E (This is a reproducible build file hash, not a timestamp)
 CheckSum: 
 ImageSize: 00016000
 File version: 1.7.1.3899
 Product version: 1.7.1.3899
 File flags: 0 (Mask 3F)
 File OS: 4 Unknown Win32
 File type: 2.0 Dll
 File date: .
 Translations: .04b0
 Information from resource tables:
 CompanyName: Apache Software Foundation, William D Cossey
 ProductName: Apache NMS Class Library for .net standard
 InternalName: Apache.NMS.dll
 OriginalFilename: Apache.NMS.dll
 ProductVersion: 1.7.1.3899
 FileVersion: 1.7.1.3899
 FileDescription: Apache.NMS
 LegalCopyright: Copyright (C) 2005-2015 Apache Software Foundation
 Comments: Apache NMS (.Net Standard Messaging Library): An abstract interface 
to Message Oriented Middleware (MOM) providers
Reporter: Andy DeMaurice
 Attachments: debuglog.txt

Similar to AMQNET-487, a deadlock. Thread 58 had a network exception, most 
likely while trying to ACK a message: “Unable to write data to the transport 
connection: An existing connection was forcibly closed by the remote host..”

And somehow, threads 58 and 68 got into a fatal deadlock.

Debug log attached: [^debuglog.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553243&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553243
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 23:03
Start Date: 16/Feb/21 23:03
Worklog Time Spent: 10m 
  Work Description: Havret edited a comment on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780173091


   I'm not sure if this solves anything. Currently you will leave the lock on 
the first `await`. With the proposed fix you will leave it instantly. In 
either case the lock doesn't work as it should. I'm not sure why 
https://issues.apache.org/jira/browse/AMQNET-626 was closed, so I reopened it. 
   
   As I said in https://github.com/apache/activemq-nms-amqp/pull/45 
   
   > I think we should revisit this part of code (mainly 
TriggerReconnectionAttempt) after we hopefully deliver the initial release. 
Calling async method without await or GetResult() is always a red flag.
   
   This was a bit too naively ported from Java and definitely requires a second 
look. A more reliable implementation might look as 
[follows](https://github.com/Havret/dotnet-activemq-artemis-client/blob/master/src/ActiveMQ.Artemis.Client/AutoRecovering/AutoRecoveringConnection.cs)
 but this wouldn't be a simple change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553243)
Time Spent: 2h 10m  (was: 2h)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553241
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 23:02
Start Date: 16/Feb/21 23:02
Worklog Time Spent: 10m 
  Work Description: Havret edited a comment on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780173091


   I'm not sure if this solves anything. Currently you will leave the lock on 
the first await. With the proposed fix you will leave it instantly. In either 
case the lock doesn't work as it should. I'm not sure why 
https://issues.apache.org/jira/browse/AMQNET-626 was closed, so I reopened it. 
   
   As I said in https://github.com/apache/activemq-nms-amqp/pull/45 
   
   > I think we should revisit this part of code (mainly 
TriggerReconnectionAttempt) after we hopefully deliver the initial release. 
Calling async method without await or GetResult() is always a red flag.
   
   This was a bit too naively ported from Java and definitely requires a second 
look. A more reliable implementation might look as 
[follows](https://github.com/Havret/dotnet-activemq-artemis-client/blob/master/src/ActiveMQ.Artemis.Client/AutoRecovering/AutoRecoveringConnection.cs)
 but this wouldn't be a simple change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553241)
Time Spent: 1h 50m  (was: 1h 40m)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553242&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553242
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 23:02
Start Date: 16/Feb/21 23:02
Worklog Time Spent: 10m 
  Work Description: Havret edited a comment on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780173091


   I'm not sure if this solves anything. Currently you will leave the lock on 
the first await. With the proposed fix you will leave it instantly. In either 
case the lock doesn't work as it should. I'm not sure why 
https://issues.apache.org/jira/browse/AMQNET-626 was closed, so I reopened it. 
   
   As I said in https://github.com/apache/activemq-nms-amqp/pull/45 
   
   > I think we should revisit this part of code (mainly 
TriggerReconnectionAttempt) after we hopefully deliver the initial release. 
Calling async method without await or GetResult() is always a red flag.
   
   This was a bit too naively ported from Java and definitely requires a second 
look. A more reliable implementation might look as 
[follows](https://github.com/Havret/dotnet-activemq-artemis-client/blob/master/src/ActiveMQ.Artemis.Client/AutoRecovering/AutoRecoveringConnection.cs)
 but this wouldn't be a simple change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553242)
Time Spent: 2h  (was: 1h 50m)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553239
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 23:01
Start Date: 16/Feb/21 23:01
Worklog Time Spent: 10m 
  Work Description: Havret edited a comment on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780173091


   I'm not sure if this solves anything. Currently you will leave the lock on 
the first await. With the proposed fix you will leave it instantly. In either 
case the lock doesn't work as it should. I'm not sure why 
https://issues.apache.org/jira/browse/AMQNET-626 was closed, so I reopened it. 
   
   As I said in https://github.com/apache/activemq-nms-amqp/pull/45 
   
   > I think we should revisit this part of code (mainly 
TriggerReconnectionAttempt) after we hopefully deliver the initial release. 
Calling async method without await or GetResult() is always a red flag.
   
   This was a bit too naively ported from Java and definitely requires a second 
look. A more reliable implementation might look as follows: 
https://github.com/Havret/dotnet-activemq-artemis-client/blob/master/src/ActiveMQ.Artemis.Client/AutoRecovering/AutoRecoveringConnection.cs



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553239)
Time Spent: 1h 40m  (was: 1.5h)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553238&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553238
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 23:01
Start Date: 16/Feb/21 23:01
Worklog Time Spent: 10m 
  Work Description: Havret commented on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780173091


   I'm not sure if this solves anything. Currently you will leave the lock on 
the first await. With the proposed fix you will leave it instantly. In either 
case the lock doesn't work as it should. I'm not sure why 
https://issues.apache.org/jira/browse/AMQNET-626 was closed, so I reopened it. 
   
   As I said in https://github.com/apache/activemq-nms-amqp/pull/45 
   
   > I think we should revisit this part of code (mainly 
TriggerReconnectionAttempt) after we hopefully deliver the initial release. 
Calling async method without await or GetResult() is always a red flag.
   
   This was a bit too naively ported from Java and definitely requires a second 
look. A more reliable implementation might look as follows: 
https://github.com/Havret/dotnet-activemq-artemis-client/blob/master/src/ActiveMQ.Artemis.Client/AutoRecovering/AutoRecoveringConnection.cs



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553238)
Time Spent: 1.5h  (was: 1h 20m)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (AMQNET-656) AMQP failover implementation fails to reconnect in some cases

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQNET-656?focusedWorklogId=553218&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553218
 ]

ASF GitHub Bot logged work on AMQNET-656:
-

Author: ASF GitHub Bot
Created on: 16/Feb/21 21:53
Start Date: 16/Feb/21 21:53
Worklog Time Spent: 10m 
  Work Description: michaelandrepearce commented on pull request #59:
URL: https://github.com/apache/activemq-nms-amqp/pull/59#issuecomment-780141406


   @lukeabsent would you mind quickly looking at this as a second pair of eyes?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553218)
Time Spent: 1h 20m  (was: 1h 10m)

> AMQP failover implementation fails to reconnect in some cases
> -
>
> Key: AMQNET-656
> URL: https://issues.apache.org/jira/browse/AMQNET-656
> Project: ActiveMQ .Net
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: AMQP-1.8.1
>Reporter: Bruce Dodson
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We recently had an issue where some of our producer instances were able to 
> reconnect immediately after the master/slave (or primary/standby) broker 
> cluster failed over, while others never reconnected.
> It appears to be related to two existing JIRAs, AMQNET-624 (addressed in 
> GitHub PR#45) and AMQNET-626 (new issue raised in the same PR, but closed 
> without any changes).
> Regarding the bug identified and fixed in AMQNET-624, part of the original 
> solution was later pulled back: instead of calling TriggerReconnectionAttempt 
> via Task.Run, it is now called directly. The second issue, AMQNET-626, raised 
> a concern about the unawaited task returned by TriggerReconnectionAttempt.
> I think perhaps calling from Task.Run may have been beneficial after all: it 
> ensured that TriggerReconnectionAttempt was running on a thread from the 
> thread pool. Otherwise, when TriggerReconnectionAttempt calls 
> ScheduleReconnect, and ScheduleReconnect does an await, that is occurring 
> from within a lock statement.
> As noted in MSDN, an await statement _cannot_ occur inside a lock 
> statement, and as far as I understand that applies anywhere in the call 
> stack. When the await is indirect, the compiler does not catch it, but it 
> can lead to failures, e.g. the task being awaited never gets scheduled.
> Invoking TriggerReconnectionAttempt from a thread pool thread (or another 
> background thread) is one way to avoid this issue, and using Task.Run() might 
> be the easiest way, even though it may also raise eyebrows. Any performance 
> overhead of Task.Run() shouldn't be a factor, since it is only invoked upon 
> losing connection, not continuously.
> The call to Task.Run() could also be moved into TriggerReconnectionAttempt, 
> like so:
> {code:java}
> // this is invoked using Task.Run, to ensure it runs on a thread pool thread
> // in case this was invoked from inside a lock statement (which it is)
> return Task.Run(async () => await 
> reconnectControl.ScheduleReconnect(Reconnect));{code}
> It does still leave the issue identified in AMQNET-626, that the result is 
> not checked, but it resolves the failover failure caused by calling await 
> inside of a lock.
> (Another way around this would be to use a SemaphoreSlim, or other 
> async-compatible synchronization mechanism instead of a lock statement. 
> However, that could have far-reaching implications, since lock statements are 
> used in many parts of the AMQP implementation.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3122) Forwarding Compressed messages to an embedded Broker throws errors

2021-02-16 Thread Tarek Hammoud (Jira)
Tarek Hammoud created ARTEMIS-3122:
--

 Summary: Forwarding Compressed messages to an embedded Broker 
throws errors
 Key: ARTEMIS-3122
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3122
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.16.0
 Environment: Linux tarek02 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 
15:29:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Tarek Hammoud
 Attachments: TestSendReciveOnEmbedded.java, local-only.xml

Hi,

A process receives a compressed message from a global broker. The message is 
simply forwarded to an embedded broker, which in turn throws a compression 
exception. I set the random byte array size to 150k, which is greater than 
the default 100k minimum compression size. Dropping it to a lower number 
(100k or below) does not exhibit the issue. Attached is a test program that 
fails every time. 

{code}
[2021-02-16 15:51:24,691 EST] INFO  TestSendReciveOnEmbedded [main] Got a connection
[2021-02-16 15:51:24,809 EST] INFO  TestSendReciveOnEmbedded [Thread-0 (ActiveMQ-client-global-threads)] Forward:ActiveMQMessage[ID:bb78fb83-7098-11eb-9dc4-8a8eef5d4c5d]:NON-PERSISTENT/ClientLargeMessageImpl[messageID=2328, durable=false, address=testerTopic,userID=bb78fb83-7098-11eb-9dc4-8a8eef5d4c5d,properties=TypedProperties[__AMQ_CID=bb76d8a0-7098-11eb-9dc4-8a8eef5d4c5d,_AMQ_LARGE_COMPRESSED=true,_AMQ_LARGE_SIZE=150100,_AMQ_ROUTING_TYPE=0]]
[2021-02-16 15:51:24,824 EST] INFO  message [Thread-1 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@47289387)] AMQ601501: User anonymous@invm:0 is consuming a message from ebf8ed65-316a-4bf9-b665-d87e9621c0b8
[2021-02-16 15:51:24,849 EST] ERROR client [Thread-1 (ActiveMQ-client-global-threads)] AMQ134003: Message Listener failed to prepare message for receipt, message=ClientLargeMessageImpl[messageID=20, durable=false, address=testerTopic,userID=bb82bf84-7098-11eb-9dc4-8a8eef5d4c5d,properties=TypedProperties[__AMQ_CID=bb146d40-7098-11eb-9dc4-8a8eef5d4c5d,_AMQ_LARGE_COMPRESSED=true,_AMQ_LARGE_SIZE=150100,_AMQ_ROUTING_TYPE=0]]
org.apache.activemq.artemis.api.core.ActiveMQLargeMessageException: AMQ219029: Error writing body of message
	at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.sendPacketToOutput(LargeMessageControllerImpl.java:1112)
	at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.setOutputStream(LargeMessageControllerImpl.java:257)
	at org.apache.activemq.artemis.core.client.impl.CompressedLargeMessageControllerImpl.setOutputStream(CompressedLargeMessageControllerImpl.java:74)
	at org.apache.activemq.artemis.core.client.impl.CompressedLargeMessageControllerImpl.saveBuffer(CompressedLargeMessageControllerImpl.java:79)
	at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkBuffer(ClientLargeMessageImpl.java:159)
	at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.checkCompletion(ClientLargeMessageImpl.java:84)
	at org.apache.activemq.artemis.jms.client.ActiveMQMessage.doBeforeReceive(ActiveMQMessage.java:801)
	at org.apache.activemq.artemis.jms.client.ActiveMQObjectMessage.doBeforeReceive(ActiveMQObjectMessage.java:100)
	at org.apache.activemq.artemis.jms.client.JMSMessageListenerWrapper.onMessage(JMSMessageListenerWrapper.java:93)
	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1030)
	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:49)
	at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1153)
	at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
	at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
	at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
Caused by: java.io.IOException: Error decompressing data
	at org.apache.activemq.artemis.utils.InflaterWriter.write(InflaterWriter.java:62)
	at java.base/java.io.OutputStream.write(OutputStream.java:157)
	at java.base/java.io.OutputStream.write(OutputStream.java:122)
	at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.sendPacketToOutput(LargeMessageControllerImpl.java:1106)
	... 17 common frames omitted
Caused by: java.util.zip.DataFormatException: incorrect header
{code}

[jira] [Commented] (AMQ-4965) Dequeue count for Topics increases for non-durable subscribers but not for durable subscribers

2021-02-16 Thread Matt Pavlovich (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-4965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285481#comment-17285481
 ] 

Matt Pavlovich commented on AMQ-4965:
-

Destination statistics have been significantly improved since 5.8.0. Please 
retest with the latest 5.16.x release and report back if there is still an area 
for improvement.

This ticket will be closed in 30 days if there is no further update.

> Dequeue count for Topics increases for non-durable subscribers but not for 
> durable subscribers
> --
>
> Key: AMQ-4965
> URL: https://issues.apache.org/jira/browse/AMQ-4965
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0
>Reporter: Abhi
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending
>
> Discussion:- 
> http://activemq.2283324.n4.nabble.com/Message-Dequeue-count-in-jconsole-0-even-after-messages-are-recieved-and-consumed-by-subscribers-tp4675875.html
> Currently, the Dequeue count metric for Topics is inconsistent between 
> durable and non-durable subscribers: it increases for non-durable subscribers 
> but not for durable subscribers. Moreover, the dequeue count on a topic is 
> not very meaningful; it could be changed so that topic dequeue counts are not 
> updated at all. 
> Also, it would be nice if this behavior were properly documented somewhere in 
> the ActiveMQ docs, as I couldn't find any information about it there. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-4965) Dequeue count for Topics increases for non-durable subscribers but not for durable subscribers

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-4965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-4965:

Labels: close-pending  (was: )

> Dequeue count for Topics increases for non-durable subscribers but not for 
> durable subscribers
> --
>
> Key: AMQ-4965
> URL: https://issues.apache.org/jira/browse/AMQ-4965
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0
>Reporter: Abhi
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending
>
> Discussion:- 
> http://activemq.2283324.n4.nabble.com/Message-Dequeue-count-in-jconsole-0-even-after-messages-are-recieved-and-consumed-by-subscribers-tp4675875.html
> Currently, the Dequeue count metric for Topics is inconsistent between 
> durable and non-durable subscribers: it increases for non-durable subscribers 
> but not for durable subscribers. Moreover, the dequeue count on a topic is 
> not very meaningful; it could be changed so that topic dequeue counts are not 
> updated at all. 
> Also, it would be nice if this behavior were properly documented somewhere in 
> the ActiveMQ docs, as I couldn't find any information about it there. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (AMQ-4965) Dequeue count for Topics increases for non-durable subscribers but not for durable subscribers

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-4965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich reassigned AMQ-4965:
---

Assignee: Matt Pavlovich

> Dequeue count for Topics increases for non-durable subscribers but not for 
> durable subscribers
> --
>
> Key: AMQ-4965
> URL: https://issues.apache.org/jira/browse/AMQ-4965
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.8.0
>Reporter: Abhi
>Assignee: Matt Pavlovich
>Priority: Major
>
> Discussion:- 
> http://activemq.2283324.n4.nabble.com/Message-Dequeue-count-in-jconsole-0-even-after-messages-are-recieved-and-consumed-by-subscribers-tp4675875.html
> Currently, the Dequeue count metric for Topics is inconsistent between 
> durable and non-durable subscribers: it increases for non-durable subscribers 
> but not for durable subscribers. Moreover, the dequeue count on a topic is 
> not very meaningful; it could be changed so that topic dequeue counts are not 
> updated at all. 
> Also, it would be nice if this behavior were properly documented somewhere in 
> the ActiveMQ docs, as I couldn't find any information about it there. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (AMQ-5151) Incorrect authorization on virtual destination (wildcard)

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich resolved AMQ-5151.
-
Resolution: Not A Problem

> Incorrect authorization on virtual destination (wildcard)
> -
>
> Key: AMQ-5151
> URL: https://issues.apache.org/jira/browse/AMQ-5151
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0, 5.9.1
>Reporter: Alexandre Pauzies
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: authorization, security, virtualDestinations, wildcard
>
> I'm trying to use authorizationPlugin with virtual destinations:
> testTopic.group1
> testTopic.group2
> This is my authorizationEntries definition:
>  admin="admins" />
>  admin="admins" />
> 
> - When group1 tries to subscribe to testTopic.group2, I get an access denied: 
> "User is not authorized to read from..."
> - Same when group2 accesses group1
> - However, if group1 subscribes to testTopic.> it will have access to 
> everything
> I tracked the issue down to DefaultAuthorizationMap, 
> getReadACLs(ActiveMQDestination destination)
> This method will combine the read ACL from the 2 sub-topic authorization 
> entries and give access to destination "testTopic.>" to anyone in group1 or 
> group2.
> Am I doing something wrong?
> Is this scenario supported by authorizationPlugin?
> Thanks,
> Alex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (AMQ-5151) Incorrect authorization on virtual destination (wildcard)

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich closed AMQ-5151.
---

Closing due to inactivity and general improvements in versions since the ticket 
was created.

> Incorrect authorization on virtual destination (wildcard)
> -
>
> Key: AMQ-5151
> URL: https://issues.apache.org/jira/browse/AMQ-5151
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.9.0, 5.9.1
>Reporter: Alexandre Pauzies
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: authorization, security, virtualDestinations, wildcard
>
> I'm trying to use authorizationPlugin with virtual destinations:
> testTopic.group1
> testTopic.group2
> This is my authorizationEntries definition:
>  admin="admins" />
>  admin="admins" />
> 
> - When group1 tries to subscribe to testTopic.group2, I get an access denied: 
> "User is not authorized to read from..."
> - Same when group2 accesses group1
> - However, if group1 subscribes to testTopic.> it will have access to 
> everything
> I tracked the issue down to DefaultAuthorizationMap, 
> getReadACLs(ActiveMQDestination destination)
> This method will combine the read ACL from the 2 sub-topic authorization 
> entries and give access to destination "testTopic.>" to anyone in group1 or 
> group2.
> Am I doing something wrong?
> Is this scenario supported by authorizationPlugin?
> Thanks,
> Alex



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (AMQ-5157) Non persistent Messages not getting expired

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich closed AMQ-5157.
---

> Non persistent Messages not getting expired
> ---
>
> Key: AMQ-5157
> URL: https://issues.apache.org/jira/browse/AMQ-5157
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Anuj Khandelwal
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending
>
> It is coming from 
> http://activemq.2283324.n4.nabble.com/Non-persistent-Messages-Not-getting-expired-even-after-expiration-time-exceeded-td4680428.html#a4680459
>  
> Problem: Non-persistent messages, if offlined to tmp storage (perhaps 
> because of an inactive durable subscriber), won't be expired until they are 
> scheduled for dispatch. 
> Test scenario: A non-persistent message is sent from the producer to a topic 
> that has an inactive durable subscriber; the message is stored in the 
> non-persistent message tmp store "activemq-data/broker/tmpstorage/". 
> The message is not deleted even after its expiration time is exceeded. 
> According to discussion on the ActiveMQ user forum, it will only be expired 
> when the message is ready to dispatch, which should not be the case.
> Ideally the broker should expire the message once its expiration time has 
> passed, irrespective of whether it is ready for dispatch.
> Thanks,
> Anuj



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (AMQ-5157) Non persistent Messages not getting expired

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich resolved AMQ-5157.
-
Resolution: Not A Problem

> Non persistent Messages not getting expired
> ---
>
> Key: AMQ-5157
> URL: https://issues.apache.org/jira/browse/AMQ-5157
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Anuj Khandelwal
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending
>
> It is coming from 
> http://activemq.2283324.n4.nabble.com/Non-persistent-Messages-Not-getting-expired-even-after-expiration-time-exceeded-td4680428.html#a4680459
>  
> Problem: Non-persistent messages, if offlined to tmp storage (perhaps 
> because of an inactive durable subscriber), won't be expired until they are 
> scheduled for dispatch. 
> Test scenario: A non-persistent message is sent from the producer to a topic 
> that has an inactive durable subscriber; the message is stored in the 
> non-persistent message tmp store "activemq-data/broker/tmpstorage/". 
> The message is not deleted even after its expiration time is exceeded. 
> According to discussion on the ActiveMQ user forum, it will only be expired 
> when the message is ready to dispatch, which should not be the case.
> Ideally the broker should expire the message once its expiration time has 
> passed, irrespective of whether it is ready for dispatch.
> Thanks,
> Anuj



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-5157) Non persistent Messages not getting expired

2021-02-16 Thread Matt Pavlovich (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-5157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17285476#comment-17285476
 ] 

Matt Pavlovich commented on AMQ-5157:
-

Closing due to inactivity and the availability of newer features.

> Non persistent Messages not getting expired
> ---
>
> Key: AMQ-5157
> URL: https://issues.apache.org/jira/browse/AMQ-5157
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 5.8.0
>Reporter: Anuj Khandelwal
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending
>
> It is coming from 
> http://activemq.2283324.n4.nabble.com/Non-persistent-Messages-Not-getting-expired-even-after-expiration-time-exceeded-td4680428.html#a4680459
>  
> Problem: Non-persistent messages, if offlined to tmp storage (perhaps 
> because of an inactive durable subscriber), won't be expired until they are 
> scheduled for dispatch. 
> Test scenario: A non-persistent message is sent from the producer to a topic 
> that has an inactive durable subscriber; the message is stored in the 
> non-persistent message tmp store "activemq-data/broker/tmpstorage/". 
> The message is not deleted even after its expiration time is exceeded. 
> According to discussion on the ActiveMQ user forum, it will only be expired 
> when the message is ready to dispatch, which should not be the case.
> Ideally the broker should expire the message once its expiration time has 
> passed, irrespective of whether it is ready for dispatch.
> Thanks,
> Anuj



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Description: 
Create an Apache ActiveMQ docker image

Ideas:
[ ] jib or jkube mvn plugin
[ ] create pre-image assembly to allow users to build customized containers
[ ] base jdk11 image for reference container

Tasks:
[Pending] Creation of Docker repository for ActiveMQ INFRA-21430
[ ] Add activemq-docker module to 5.17.x
[ ] Add dockerhub deployment to release process


  was:
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[Pending] Creation of Docker repository for ActiveMQ INFRA-21430
[ ] Add activemq-docker module to 5.17.x
[ ] Add dockerhub deployment to release process



> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> [ ] jib or jkube mvn plugin
> [ ] create pre-image assembly to allow users to build customized containers
> [ ] base jdk11 image for reference container
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=553177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553177
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 20:15
Start Date: 16/Feb/21 20:15
Worklog Time Spent: 10m 
  Work Description: jbertram commented on pull request #3456:
URL: https://github.com/apache/activemq-artemis/pull/3456#issuecomment-780091005


   You can put them into `org.apache.activemq.artemis.core.remoting.impl.ssl` 
using something like `OpenSSLContextFactory` & `CachingOpenSSLContextFactory`. 
Extending the existing versions is tempting, but I'm hesitant to recommend that 
as it may get messy. In my opinion it makes sense to have multiple 
implementations given the pluggable nature of the factory.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553177)
Time Spent: 50m  (was: 40m)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation that results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with a custom distributed 
> qpid-jms-based healthcheck application. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, while 
> on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}; 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=553174=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553174
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 20:04
Start Date: 16/Feb/21 20:04
Worklog Time Spent: 10m 
  Work Description: sebthom commented on pull request #3456:
URL: https://github.com/apache/activemq-artemis/pull/3456#issuecomment-780085438


   @jbertram If I create DefaultSSLContextFactory/CachingSSLContextFactory for 
OpenSSL, into which package should I place them, and which names should I use? 
Alternatively, I could extend the existing ContextFactories and provide an 
additional method that returns the Netty SslContext.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553174)
Time Spent: 40m  (was: 0.5h)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=553157=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553157
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 19:33
Start Date: 16/Feb/21 19:33
Worklog Time Spent: 10m 
  Work Description: jbertram edited a comment on pull request #3456:
URL: https://github.com/apache/activemq-artemis/pull/3456#issuecomment-780068689


   I see a couple of issues with this PR:
   
   1. It eliminates the use of `SSLContextFactoryProvider`.
   2. It eliminates any option to use `javax.net.ssl.SSLContext` (i.e. the 
default JDK provider). It only uses `io.netty.handler.ssl.SslContext` (i.e. the 
OpenSSL provider).
   
   IMO you should just change 
`org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor#loadOpenSslEngine`
 to use `SSLContextFactoryProvider` and then implement a caching OpenSSL SSL 
context factory like 
`org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory`.
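
   For reference, a sketch of that direction under stated assumptions: the 
provider resolves its factory via `java.util.ServiceLoader` (real JDK API), 
while the interface shape and names only mirror the classes mentioned above 
and may not match the actual Artemis signatures:

```java
// Sketch of NettyAcceptor#loadOpenSslEngine delegating to a pluggable
// provider instead of building the context inline. The factory contract
// below is an assumption for illustration.
import java.util.Map;
import java.util.ServiceLoader;
import io.netty.handler.ssl.SslContext;

interface OpenSSLContextFactory {
    SslContext getSslContext(Map<String, Object> configuration) throws Exception;
}

final class OpenSSLContextFactoryProvider {
    private static final OpenSSLContextFactory FACTORY = load();

    private static OpenSSLContextFactory load() {
        // First implementation registered under META-INF/services wins;
        // a caching factory would simply be another registered entry.
        for (OpenSSLContextFactory factory : ServiceLoader.load(OpenSSLContextFactory.class)) {
            return factory;
        }
        throw new IllegalStateException("no OpenSSLContextFactory registered");
    }

    static OpenSSLContextFactory getOpenSSLContextFactory() {
        return FACTORY;
    }
}
```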



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553157)
Time Spent: 0.5h  (was: 20m)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=553156=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553156
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 19:33
Start Date: 16/Feb/21 19:33
Worklog Time Spent: 10m 
  Work Description: jbertram commented on pull request #3456:
URL: https://github.com/apache/activemq-artemis/pull/3456#issuecomment-780068689


   I see a handful of issues with this PR:
   
   1. It eliminates the use of `SSLContextFactoryProvider`.
   2. It eliminates any option to use `javax.net.ssl.SSLContext` (i.e. the 
default JDK provider). It only uses `io.netty.handler.ssl.SslContext` (i.e. the 
OpenSSL provider).
   
   IMO you should just change 
`org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor#loadOpenSslEngine`
 to use `SSLContextFactoryProvider` and then implement a caching OpenSSL SSL 
context factory like 
`org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553156)
Time Spent: 20m  (was: 10m)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?focusedWorklogId=553137=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-553137
 ]

ASF GitHub Bot logged work on ARTEMIS-3117:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 19:16
Start Date: 16/Feb/21 19:16
Worklog Time Spent: 10m 
  Work Description: sebthom opened a new pull request #3456:
URL: https://github.com/apache/activemq-artemis/pull/3456


   Addresses the performance degradation in JDK 11 during TLS connection 
initialization.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 553137)
Remaining Estimate: 0h
Time Spent: 10m

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285419#comment-17285419
 ] 

Justin Bertram commented on ARTEMIS-3117:
-

[~seb], most users switch from the JDK provider to the OpenSSL provider due to 
the performance increase. However, given that the {{CachingSSLContextFactory}} 
doesn't work with OpenSSL, have you compared the JDK provider + 
{{CachingSSLContextFactory}} against bare OpenSSL?

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Description: 
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[Pending] Creation of Docker repository for ActiveMQ INFRA-21430
[ ] Add activemq-docker module to 5.17.x
[ ] Add dockerhub deployment to release process


  was:
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[DONE] Request creation of Docker repository for ActiveMQ INFRA-21430



> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Description: 
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[DONE] Request creation of Docker repository for ActiveMQ INFRA-21430


  was:
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[DONE] Request creation of Docker repository for ActiveMQ (



> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> Tasks:
> [DONE] Request creation of Docker repository for ActiveMQ INFRA-21430



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Description: 
Create an Apache ActiveMQ docker image

Ideas:

Tasks:
[DONE] Request creation of Docker repository for ActiveMQ (


  was:
Create an Apache ActiveMQ docker image

Ideas:



> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> Tasks:
> [DONE] Request creation of Docker repository for ActiveMQ (



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285386#comment-17285386
 ] 

Francesco Nigro edited comment on ARTEMIS-3117 at 2/16/21, 6:01 PM:


Fair point: no idea who the "somewhere in our cloud environment" is, but that 
can explain the effects fairly easily: as long as we don't use a separate 
thread in Netty to handle SSL context initialization, it would affect the 
event loop reactivity, i.e. AMQP producers/consumers, given that AMQP is 
handled on the Netty event loop in Artemis, AFAIK.


was (Author: nigrofranz):
Fair point: no idea who the "somewhere in our cloud environment" is, but that 
can explain it fairly easily: as long as we don't use a separate thread in 
Netty to handle SSL context initialization, it would affect the event loop 
reactivity, i.e. AMQP producers/consumers, given that AMQP is handled on the 
Netty event loop in Artemis, AFAIK.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285386#comment-17285386
 ] 

Francesco Nigro commented on ARTEMIS-3117:
--

Fair point: no idea who the "somewhere in our cloud environment" is, but that 
can explain it fairly easily: as long as we don't use a separate thread in 
Netty to handle SSL context initialization, it would affect the event loop 
reactivity, i.e. AMQP producers/consumers, given that AMQP is handled on the 
Netty event loop in Artemis, AFAIK.
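
A minimal sketch of that separate-thread idea, assuming Netty 4.1's 
{{SslHandler(SSLEngine, Executor)}} constructor (which runs delegated SSL 
tasks off the event loop); the class and method names here are illustrative 
only:

{noformat}
// Sketch: keep SSL handshake work off the Netty event loop by handing the
// SslHandler a dedicated executor.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import io.netty.channel.Channel;
import io.netty.handler.ssl.SslHandler;

final class OffLoopSsl {
    // Shared pool for delegated SSL tasks, sized independently of event loops.
    private static final ExecutorService SSL_TASKS = Executors.newFixedThreadPool(2);

    static void install(Channel channel, SSLContext context) {
        SSLEngine engine = context.createSSLEngine();
        engine.setUseClientMode(false); // broker side: accept incoming TLS
        channel.pipeline().addFirst("ssl", new SslHandler(engine, SSL_TASKS));
    }
}
{noformat}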

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T edited comment on ARTEMIS-3117 at 2/16/21, 5:54 PM:


It looks like, despite the fact that the AMQP connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I 
understand, this results in the same number of new SSL context initializations 
per second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now 
(after the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn|tcp-ack" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}



was (Author: seb):
It looks like, despite the fact that the AMQP connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I 
understand, this results in the same number of new SSL context initializations 
per second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now 
(after the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T edited comment on ARTEMIS-3117 at 2/16/21, 5:52 PM:


It looks like, despite the fact that the AMQP connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I 
understand, this results in the same number of new SSL context initializations 
per second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now 
(after the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}



was (Author: seb):
It looks like, despite the fact that the AMQP connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I 
understand, this results in the same amount of new SSL context initialization 
per second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now 
(after the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285376#comment-17285376
 ] 

Sebastian T commented on ARTEMIS-3117:
--

It looks like, despite the fact that the AMQP connection count on our Artemis 
instance is stable, we get around 20 TCP connection attempts per second on the 
amqps port 5671 from somewhere in our cloud environment. As far as I 
understand, this results in the same amount of new SSL context initialization 
per second in NettyAcceptor. Since SSL context initialization is apparently 
considerably slower or more expensive in JDK11 than in JDK8, we only now 
(after the JDK switch) see this affecting the overall broker performance.

{noformat}
$ sudo tcpdump "tcp[tcpflags] & (tcp-syn|tcp-ack) == (tcp-syn|tcp-ack)" | pv 
--line-mode --rate > /dev/null
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
[19.8 /s]
{noformat}


> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3121) Refactor NettyAcceptor.getPrototcols(Map) method

2021-02-16 Thread Sebastian T (Jira)
Sebastian T created ARTEMIS-3121:


 Summary: Refactor NettyAcceptor.getPrototcols(Map) method
 Key: ARTEMIS-3121
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3121
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: Broker
Affects Versions: 2.16.0
Reporter: Sebastian T


The NettyAcceptor.getPrototcols(Map) method currently joins the keys of the 
given protocolManager map in a complicated and inefficient way.
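
A minimal sketch of the straightforward alternative (method and parameter 
names assumed for illustration):

{noformat}
import java.util.Map;

final class Protocols {
    static String getProtocols(Map<String, ?> protocolManagers) {
        // String.join accepts any Iterable<? extends CharSequence>, so the
        // key set can be joined without manual builders or index bookkeeping.
        return String.join(",", protocolManagers.keySet());
    }
}
{noformat}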



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Affects Version/s: 5.17.0  (was: 5.16.2)

> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich reassigned AMQ-8149:
---

Assignee: Matt Pavlovich

> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.16.2
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-8149:

Description: 
Create an Apache ActiveMQ docker image

Ideas:


  was:Create an Apache ActiveMQ docker image


> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.16.2
>Reporter: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (AMQ-8149) Create Docker Image

2021-02-16 Thread Matt Pavlovich (Jira)
Matt Pavlovich created AMQ-8149:
---

 Summary: Create Docker Image
 Key: AMQ-8149
 URL: https://issues.apache.org/jira/browse/AMQ-8149
 Project: ActiveMQ
  Issue Type: New Feature
Affects Versions: 5.16.2
Reporter: Matt Pavlovich


Create an Apache ActiveMQ docker image



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7046) Memory leak in 5.15.5

2021-02-16 Thread Matt Pavlovich (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285313#comment-17285313
 ] 

Matt Pavlovich commented on AMQ-7046:
-

This ticket is marked to close in 30 days if no additional information is 
provided.

> Memory leak in 5.15.5
> -
>
> Key: AMQ-7046
> URL: https://issues.apache.org/jira/browse/AMQ-7046
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.5
> Environment: We have a very busy ActiveMQ broker (> 1 million 
> messages per day) running in a docker container on a CentOS VM backed by a 
> MySQL database.
>  
>Reporter: James
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending, memory-leak
> Attachments: Screen Shot 2018-08-31 at 11.23.20.png
>
>
> We have just upgraded to version 5.15.5 in production and have run into a 
> problem which has caused us to roll back to 5.15.4. 
> We are seeing a memory leak that the garbage collection cannot cope with and 
> after a few hours the container runs out of memory.
> Attached is a screen shot of our AppDynamics monitoring showing the memory 
> usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (AMQ-7046) Memory leak in 5.15.5

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich reassigned AMQ-7046:
---

Assignee: Matt Pavlovich

> Memory leak in 5.15.5
> -
>
> Key: AMQ-7046
> URL: https://issues.apache.org/jira/browse/AMQ-7046
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.5
> Environment: We have a very busy ActiveMQ broker (> 1 million 
> messages per day) running in a docker container on a CentOS VM backed by a 
> MySQL database.
>  
>Reporter: James
>Assignee: Matt Pavlovich
>Priority: Major
>  Labels: close-pending, memory-leak
> Attachments: Screen Shot 2018-08-31 at 11.23.20.png
>
>
> We have just upgraded to version 5.15.5 in production and have run into a 
> problem which has caused us to roll back to 5.15.4. 
> We are seeing a memory leak that the garbage collection cannot cope with and 
> after a few hours the container runs out of memory.
> Attached is a screen shot of our AppDynamics monitoring showing the memory 
> usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7046) Memory leak in 5.15.5

2021-02-16 Thread Matt Pavlovich (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285311#comment-17285311
 ] 

Matt Pavlovich commented on AMQ-7046:
-

The AppDynamics screenshot is not sufficient information to troubleshoot. 
Please test with the latest ActiveMQ 5.15.x or 5.16.x and report back.

When reporting, please include a link to a heap dump, the activemq.xml 
configuration file, and information about connecting clients; for example, 
which protocols and message QoS are being used.

Example:
# MQTTv3 QoS 1 with retain messages
# STOMP queues with persistent messages



> Memory leak in 5.15.5
> -
>
> Key: AMQ-7046
> URL: https://issues.apache.org/jira/browse/AMQ-7046
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.5
> Environment: We have a very busy ActiveMQ broker (> 1 million 
> messages per day) running in a docker container on a CentOS VM backed by a 
> MySQL database.
>  
>Reporter: James
>Priority: Major
>  Labels: memory-leak
> Attachments: Screen Shot 2018-08-31 at 11.23.20.png
>
>
> We have just upgraded to version 5.15.5 in production and have run into a 
> problem which has caused us to roll back to 5.15.4. 
> We are seeing a memory leak that the garbage collection cannot cope with and 
> after a few hours the container runs out of memory.
> Attached is a screen shot of our AppDynamics monitoring showing the memory 
> usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (AMQ-7046) Memory leak in 5.15.5

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich updated AMQ-7046:

Labels: close-pending memory-leak  (was: memory-leak)

> Memory leak in 5.15.5
> -
>
> Key: AMQ-7046
> URL: https://issues.apache.org/jira/browse/AMQ-7046
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.5
> Environment: We have a very busy ActiveMQ broker (> 1 million 
> messages per day) running in a docker container on a CentOS VM backed by a 
> MySQL database.
>  
>Reporter: James
>Priority: Major
>  Labels: close-pending, memory-leak
> Attachments: Screen Shot 2018-08-31 at 11.23.20.png
>
>
> We have just upgraded to version 5.15.5 in production and have run into a 
> problem which has caused us to roll back to 5.15.4. 
> We are seeing a memory leak that the garbage collection cannot cope with and 
> after a few hours the container runs out of memory.
> Attached is a screen shot of our AppDynamics monitoring showing the memory 
> usage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (AMQ-7515) Networked broker does not pass along queue consumer upon reconnect

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on AMQ-7515 started by Matt Pavlovich.
---
> Networked broker does not pass along queue consumer upon reconnect
> --
>
> Key: AMQ-7515
> URL: https://issues.apache.org/jira/browse/AMQ-7515
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.10, 5.15.13
> Environment: CentOS 7 based docker image with Java 8.
>Reporter: Kevin Goerlitz
>Assignee: Matt Pavlovich
>Priority: Major
>
> We have a hub-spoke broker network with about 40 spokes.  When restarting the 
> hub broker, the spoke brokers reconnect.  However, when some of the spoke 
> brokers reconnect, they do not pass along the consumer of one of the queues 
> to the hub.  Messages to the queue are queued up and are not delivered until 
> either the client app is restarted or the spoke broker is restarted (which 
> will cause the client app to reconnect).
> It is usually the same set of spoke brokers that do not pass on the consumer. 
>  Each spoke broker has 5-8 consumers and it is always the consumer for the 
> same app that is not passed on (the queue is different for each remote 
> system).  The spoke broker still has the consumer active.
> The brokers are stand-alone and the client apps are using the STOMP protocol.
> The spoke brokers are connected to the hub via a duplex connection initiated 
> by the spoke.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (AMQ-7515) Networked broker does not pass along queue consumer upon reconnect

2021-02-16 Thread Matt Pavlovich (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Pavlovich reassigned AMQ-7515:
---

Assignee: Matt Pavlovich

> Networked broker does not pass along queue consumer upon reconnect
> --
>
> Key: AMQ-7515
> URL: https://issues.apache.org/jira/browse/AMQ-7515
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.10, 5.15.13
> Environment: CentOS 7 based docker image with Java 8.
>Reporter: Kevin Goerlitz
>Assignee: Matt Pavlovich
>Priority: Major
>
> We have a hub-spoke broker network with about 40 spokes.  When restarting the 
> hub broker, the spoke brokers reconnect.  However, when some of the spoke 
> brokers reconnect, they do not pass along the consumer of one of the queues 
> to the hub.  Messages to the queue are queued up and are not delivered until 
> either the client app is restarted or the spoke broker is restarted (which 
> will cause the client app to reconnect).
> It is usually the same set of spoke brokers that do not pass on the consumer. 
>  Each spoke broker has 5-8 consumers and it is always the consumer for the 
> same app that is not passed on (the queue is different for each remote 
> system).  The spoke broker still has the consumer active.
> The brokers are stand-alone and the client apps are using the STOMP protocol.
> The spoke brokers are connected to the hub via a duplex connection initiated 
> by the spoke.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7515) Networked broker does not pass along queue consumer upon reconnect

2021-02-16 Thread Matt Pavlovich (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285308#comment-17285308
 ] 

Matt Pavlovich commented on AMQ-7515:
-

Please attach the activemq.xml file

> Networked broker does not pass along queue consumer upon reconnect
> --
>
> Key: AMQ-7515
> URL: https://issues.apache.org/jira/browse/AMQ-7515
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.10, 5.15.13
> Environment: CentOS 7 based docker image with Java 8.
>Reporter: Kevin Goerlitz
>Assignee: Matt Pavlovich
>Priority: Major
>
> We have a hub-spoke broker network with about 40 spokes.  When restarting the 
> hub broker, the spoke brokers reconnect.  However, when some of the spoke 
> brokers reconnect, they do not pass along the consumer of one of the queues 
> to the hub.  Messages to the queue are queued up and are not delivered until 
> either the client app is restarted or the spoke broker is restarted (which 
> will cause the client app to reconnect).
> It is usually the same set of spoke brokers that do not pass on the consumer. 
>  Each spoke broker has 5-8 consumers and it is always the consumer for the 
> same app that is not passed on (the queue is different for each remote 
> system).  The spoke broker still has the consumer active.
> The brokers are stand-alone and the client apps are using the STOMP protocol.
> The spoke brokers are connected to the hub via a duplex connection initiated 
> by the spoke.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285280#comment-17285280
 ] 

Francesco Nigro commented on ARTEMIS-3117:
--

The profiler shows the same stack trace as 
[https://github.com/twitter/finagle/issues/856#issuecomment-648460426] so the 
fix/workaround should hold too :)

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285275#comment-17285275
 ] 

Justin Bertram commented on ARTEMIS-3117:
-

bq. Shouldn't the SSLContext performance issue only have an impact when 
establishing new connections?

As far as I understand, yes. An {{SSLContext}} should only be created when a 
new connection is established.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace, where I can see that after 
> upgrading to JDK 11 the broker process spent 21% of CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285274#comment-17285274
 ] 

Sebastian T commented on ARTEMIS-3117:
--

Just to check my understanding: shouldn't the SSLContext performance issue 
only have an impact when establishing new connections? The connection count is 
pretty stable on our broker.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of its CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285272#comment-17285272
 ] 

Sebastian T commented on ARTEMIS-3117:
--

[~nigrofranz] I followed your advice and used async-profiler. Here are some 
results.
{noformat}
$ sudo ./profiler.sh -e cpu -d 30 -o flat 4008
Started [cpu] profiling
--- Execution profile ---
Total samples   : 4697
unknown_Java: 66 (1.41%)
not_walkable_Java   : 14 (0.30%)
deoptimization  : 3 (0.06%)

Frame buffer usage  : 4.1374%

  ns  percent  samples  top
  --  ---  ---  ---
  5454597354   11.51%  545  sha1_implCompress
  3762140233    7.94%  370  __lock_text_start_[k]
  2332043747    4.92%  233  sun.security.provider.DigestBase.engineReset
  1763624534    3.72%  175  /tmp/libnetty_tcnative_linux_x86_6412383274641244971797.so (deleted)
  1380825243    2.91%  138  java.util.Arrays.fill
  1211143650    2.56%  121  java.util.Arrays.fill
  1122056232    2.37%  112  jbyte_disjoint_arraycopy
  1060468030    2.24%  106  sun.security.provider.SHA.implDigest
  1053677131    2.22%  104  org.apache.activemq.artemis.protocol.amqp.broker.AMQPConnectionCallback.isWritable
   974249541    2.06%   96  org.apache.activemq.artemis.utils.collections.LinkedListImpl$Iterator.canAdvance
   913508803    1.93%   90  [vdso]
   810611908    1.71%   81  sun.security.provider.ByteArrayAccess.b2iBig64
   770820693    1.63%   76  org.apache.activemq.artemis.protocol.amqp.broker.AMQPSessionCallback.isWritable
   770757093    1.63%   76  org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver
   660595358    1.39%   65  org.apache.activemq.artemis.core.server.impl.QueueImpl.handle
   570551307    1.20%   57  sun.security.provider.DigestBase.engineUpdate
   560396691    1.18%   56  java.security.MessageDigest$Delegate.engineDigest
   488147551    1.03%   48  eventfd_write_[k]
   480273955    1.01%   48  sun.security.provider.SHA.implCompressCheck


$ sudo ./profiler.sh -e lock -d 30 -o flat 4008
Started [lock] profiling
--- Execution profile ---
Total samples   : 7255

Frame buffer usage  : 0.0869%

  ns  percent  samples  top
  --  ---  ---  ---
  11699901984   92.19%   303  org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor
    899875624    7.09%  5539  java.util.concurrent.locks.ReentrantLock$NonfairSync
     90416573    0.71%  1407  org.apache.activemq.artemis.core.server.impl.QueueImpl
       190303    0.00%     4  java.lang.Class
        95851    0.00%     2  java.lang.Object


$ sudo ./profiler.sh -e cpu -d 30 -o traces 4008
Started [cpu] profiling
--- Execution profile ---
Total samples   : 4544
unknown_Java: 76 (1.67%)
not_walkable_Java   : 7 (0.15%)
deoptimization  : 5 (0.11%)

Frame buffer usage  : 4.1185%

--- 2769920933 ns (6.04%), 277 samples
  [ 0] sha1_implCompress
  [ 1] java.security.MessageDigest$Delegate.engineDigest
  [ 2] java.security.MessageDigest.digest
  [ 3] java.security.MessageDigest.digest
  [ 4] com.sun.crypto.provider.PKCS12PBECipherCore.derive
  [ 5] com.sun.crypto.provider.PKCS12PBECipherCore.derive
  [ 6] com.sun.crypto.provider.HmacPKCS12PBESHA1.engineInit
  [ 7] javax.crypto.Mac.chooseProvider
  [ 8] javax.crypto.Mac.init
  [ 9] sun.security.pkcs12.PKCS12KeyStore.lambda$engineLoad$2
  [10] sun.security.pkcs12.PKCS12KeyStore$$Lambda$617.524606891.tryOnce
  [11] sun.security.pkcs12.PKCS12KeyStore$RetryWithZero.run
  [12] sun.security.pkcs12.PKCS12KeyStore.engineLoad
  [13] sun.security.util.KeyStoreDelegator.engineLoad
  [14] java.security.KeyStore.load
  [15] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadKeystore
  [16] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.loadTrustManagerFactory
  [17] 
org.apache.activemq.artemis.core.remoting.impl.ssl.SSLSupport.createNettyContext
  [18] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.loadOpenSslEngine
  [19] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler
  [20] 
org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor$4.initChannel
  [21] io.netty.channel.ChannelInitializer.initChannel
  [22] io.netty.channel.ChannelInitializer.handlerAdded
  [23] io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded
  [24] io.netty.channel.DefaultChannelPipeline.callHandlerAdded0
  [25] io.netty.channel.DefaultChannelPipeline.access$100
  [26] io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute
  [27] io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers
  [28] io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded
  [29] io.netty.channel.AbstractChannel$AbstractUnsafe.register0
  [30] io.netty.channel.AbstractChannel$AbstractUnsafe.access$200
  [31] 

[jira] [Commented] (ARTEMIS-3120) ActiveMQXAResourceWrapper NPE if no LocatorConfig

2021-02-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285202#comment-17285202
 ] 

ASF subversion and git services commented on ARTEMIS-3120:
--

Commit fd1ccbe13553fab16b12f8a687a58c703e55b50e in activemq-artemis's branch 
refs/heads/master from Bartosz Spyrko-Smietanko
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=fd1ccbe ]

ARTEMIS-3120 Preserve default LocatorConfig if no configuration provided in 
RecoveryConfig


> ActiveMQXAResourceWrapper NPE if no LocatorConfig
> -
>
> Key: ARTEMIS-3120
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3120
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Domenico Francesco Bruscino
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> WARN  [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic 
> Recovery) AMQ172015: Can not connect to XARecoveryConfig 
> [transportConfiguration=[TransportConfiguration(name=, 
> factory=org-apache-activemq-artemis-core-remoting-impl-invm-InVMConnectorFactory)
>  ?serverId=0], discoveryConfiguration=null, username=null, password=, 
> JNDI_NAME=java:/JmsXA] on auto-generated resource recovery: 
> ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: 
> Failed to initialise session factory]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:264)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:660)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:312)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:712)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:233)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:178)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.setThreadPools(ServerLocatorImpl.java:181)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:254)
>   ... 10 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3120) ActiveMQXAResourceWrapper NPE if no LocatorConfig

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3120?focusedWorklogId=552971=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552971
 ]

ASF GitHub Bot logged work on ARTEMIS-3120:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:44
Start Date: 16/Feb/21 13:44
Worklog Time Spent: 10m 
  Work Description: asfgit closed pull request #3454:
URL: https://github.com/apache/activemq-artemis/pull/3454


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552971)
Time Spent: 0.5h  (was: 20m)

> ActiveMQXAResourceWrapper NPE if no LocatorConfig
> -
>
> Key: ARTEMIS-3120
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3120
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Domenico Francesco Bruscino
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> WARN  [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic 
> Recovery) AMQ172015: Can not connect to XARecoveryConfig 
> [transportConfiguration=[TransportConfiguration(name=, 
> factory=org-apache-activemq-artemis-core-remoting-impl-invm-InVMConnectorFactory)
>  ?serverId=0], discoveryConfiguration=null, username=null, password=, 
> JNDI_NAME=java:/JmsXA] on auto-generated resource recovery: 
> ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: 
> Failed to initialise session factory]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:264)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:660)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:312)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:712)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:233)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:178)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.setThreadPools(ServerLocatorImpl.java:181)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:254)
>   ... 10 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285190#comment-17285190
 ] 

Sebastian T commented on ARTEMIS-3117:
--

After digging through the source code, I guess that in the case of JDK SSL the 
issue can be mitigated by registering 
{{org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory}} 
via 
{{META-INF/services/org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory}}.
 This, however, has no effect when using OpenSSL.
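
For reference, a minimal sketch of that ServiceLoader registration (assuming 
the file is packaged in a JAR on the broker's classpath):
{noformat}
# File (inside a JAR on the broker's classpath):
#   META-INF/services/org.apache.activemq.artemis.spi.core.remoting.ssl.SSLContextFactory
# Content: a single line naming the implementation to load, here the caching
# factory that reuses an SSLContext instead of rebuilding it per connection:
org.apache.activemq.artemis.core.remoting.impl.ssl.CachingSSLContextFactory
{noformat}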

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of its CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552954=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552954
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:12
Start Date: 16/Feb/21 13:12
Worklog Time Spent: 10m 
  Work Description: franz1981 edited a comment on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779826480


   Yep, and it seems that we should do something like this 
https://github.com/netty/netty/pull/9687/files#diff-27a399de973ee2c525680c87ae344f68fac760db0d7155764b0454c045e16e04R35
 to enable it in our tests (maybe!)
   
   The only problem is that maybe it just checks that no `synchronized` or other 
blocking calls are made, and it won't check whether these are contended or 
not... hmm
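
   For what it's worth, a minimal sketch of wiring BlockHound into a test 
bootstrap. Hedged assumptions: the BlockHound agent is on the test classpath, 
and something (like the linked Netty PR) registers an integration that marks 
the event loop threads as non-blocking; the class name and allow-list entry 
below are illustrative only:

{code:java}
import reactor.blockhound.BlockHound;

// Installs BlockHound before the scenario under test runs. BlockHound
// instruments known blocking JDK calls and throws when one is executed on a
// thread that an integration has marked as non-blocking.
public class BlockingDetectionBootstrap {

   public static void main(String[] args) {
      BlockHound.install(builder -> builder
         // hypothetical allow-list entry for a blocking call considered benign
         .allowBlockingCallsInside("java.util.UUID", "randomUUID"));
      // ... start the broker / replication scenario here ...
   }
}
{code}

   As noted above, this only detects that a blocking call happened on the event 
loop; it says nothing about whether a lock was actually contended.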



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552954)
Time Spent: 9h 50m  (was: 9h 40m)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552953=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552953
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:11
Start Date: 16/Feb/21 13:11
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779826480


   Yep, and it seems that we should do something like this 
https://github.com/netty/netty/pull/9687/files#diff-27a399de973ee2c525680c87ae344f68fac760db0d7155764b0454c045e16e04R35
 to enable it in our tests (maybe!)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552953)
Time Spent: 9h 40m  (was: 9.5h)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552950=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552950
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:09
Start Date: 16/Feb/21 13:09
Worklog Time Spent: 10m 
  Work Description: gtully commented on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779825632


   ok, sure, any time we can "ask the computer" for an answer we should; that 
looks like a nice tool.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552950)
Time Spent: 9.5h  (was: 9h 20m)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552946=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552946
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:03
Start Date: 16/Feb/21 13:03
Worklog Time Spent: 10m 
  Work Description: franz1981 edited a comment on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779822056


   @gtully 
   > maybe a little socket proxy 
   
   I'm more concerned about Artemis `ChannelImpl` blocking ops than about what 
Netty would do...
   In this case it may be worth trying 
https://github.com/netty/netty/pull/9687, i.e. BlockHound, to check that the 
Netty event loop won't block, wdyt?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552946)
Time Spent: 9h 20m  (was: 9h 10m)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552944=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552944
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:02
Start Date: 16/Feb/21 13:02
Worklog Time Spent: 10m 
  Work Description: franz1981 commented on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779822056


   @gtully 
   > maybe a little socket proxy 
   I'm more concerned about `ChannelImpl` blocking ops than about what Netty 
would do...
   In this case it may be worth trying 
https://github.com/netty/netty/pull/9687, i.e. BlockHound, to check that the 
Netty event loop won't block, wdyt?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552944)
Time Spent: 9h  (was: 8h 50m)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552945=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552945
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 13:02
Start Date: 16/Feb/21 13:02
Worklog Time Spent: 10m 
  Work Description: franz1981 edited a comment on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779822056


   @gtully 
   > maybe a little socket proxy 
   
   I'm more concerned about `ChannelImpl` blocking ops than about what Netty 
would do...
   In this case it may be worth trying 
https://github.com/netty/netty/pull/9687, i.e. BlockHound, to check that the 
Netty event loop won't block, wdyt?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552945)
Time Spent: 9h 10m  (was: 9h)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552942=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552942
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 12:59
Start Date: 16/Feb/21 12:59
Worklog Time Spent: 10m 
  Work Description: gtully commented on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-779820507


   The best way to be sure is to write a test; maybe a little socket proxy can 
be used to block the write side of the replication channel to force blocking at 
some point, and then ask the computer how it behaves. There is a SocketProxy in 
the 5.x test code base that could work in this case, I think.
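
   For illustration, a minimal, hypothetical sketch of such a pausable proxy 
(names invented; the real 5.x SocketProxy is more featureful). Put it between 
the brokers' replication connection and call pause() to stop draining the 
target side, so the sender's TCP buffers fill and its writes eventually block:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Forwards bytes in both directions; while paused, forwarding stops so the
// upstream socket buffers fill up, simulating a blocked replication channel.
public class PausableSocketProxy {

   private volatile boolean paused;

   public void pause()  { paused = true;  }
   public void resume() { paused = false; }

   public void start(int listenPort, String targetHost, int targetPort) throws IOException {
      ServerSocket server = new ServerSocket(listenPort);
      while (true) {
         Socket client = server.accept();
         Socket target = new Socket(targetHost, targetPort);
         pump(client.getInputStream(), target.getOutputStream()); // client -> target
         pump(target.getInputStream(), client.getOutputStream()); // target -> client
      }
   }

   private void pump(InputStream src, OutputStream dst) {
      Thread t = new Thread(() -> {
         byte[] buf = new byte[8192];
         try {
            int n;
            while ((n = src.read(buf)) != -1) {
               while (paused) {
                  Thread.sleep(10); // hold the data back while paused
               }
               dst.write(buf, 0, n);
               dst.flush();
            }
         } catch (IOException | InterruptedException ignored) {
            // connection closed or test interrupted; just stop pumping
         }
      });
      t.setDaemon(true);
      t.start();
   }
}
{code}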



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552942)
Time Spent: 8h 50m  (was: 8h 40m)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3045) ReplicationManager can batch sent replicated packets

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3045?focusedWorklogId=552922=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552922
 ]

ASF GitHub Bot logged work on ARTEMIS-3045:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 12:39
Start Date: 16/Feb/21 12:39
Worklog Time Spent: 10m 
  Work Description: franz1981 edited a comment on pull request #3392:
URL: https://github.com/apache/activemq-artemis/pull/3392#issuecomment-778040106


   The change seems OK CI-wise but, given that it's running its logic on the 
Netty event loop, we need a couple of questions to be answered:
   - what happens to the Netty channel writability with racing sends on a 
different Artemis `Channel` (e.g. `PING`)? 
   The `REPLICATION` channel isn't the only one using the underlying `Netty` 
connection, which means that a notification that the Netty channel is writable 
again can find it no longer writable by the time the replication packets are 
actually written, because a concurrent write has filled it again (a sketch 
below illustrates this)!
   - can `ChannelImpl::send` block? If yes, it can cause some trouble, because 
the cluster connection isn't the only citizen of the Netty event loop thread, 
and this can cause other connections to starve (not to mention reads of the 
same connection's responses!)
   
   @clebertsuconic @jbertram @gtully 
   I believe that answering these questions is key to being sure that this is a 
safe change...
   
   I see that the `ChannelImpl.CHANNEL_ID.REPLICATION` Artemis `Channel` is 
always created with `confWindowSize == -1`, so it seems it won't have any 
blocking behaviour caused by the `resendCache` or `responseAsyncCache`; and 
given that it cannot fail over and doesn't use `sendBlocking`, its 
`ChannelImpl::lock` isn't used. But this is still brittle and I would like to 
enforce it.
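
   To make the first question concrete, here is a minimal Netty-only sketch 
(illustrative, not Artemis code; the handler and method names are invented) of 
why writability has to be re-checked per packet when draining a pending queue 
on the event loop:

{code:java}
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;

import java.util.ArrayDeque;
import java.util.Queue;

// Drains queued packets when the channel becomes writable again. Because other
// producers share the same Netty connection, isWritable() is re-checked before
// every single write: a concurrent write may have refilled the outbound buffer
// between two packets.
public class DrainOnWritableHandler extends ChannelDuplexHandler {

   private final Queue<Object> pending = new ArrayDeque<>();

   // must only be called from the channel's event loop
   public void enqueue(Object packet) {
      pending.add(packet);
   }

   @Override
   public void channelWritabilityChanged(ChannelHandlerContext ctx) {
      while (ctx.channel().isWritable()) { // re-checked per packet, not once
         Object packet = pending.poll();
         if (packet == null) {
            break;
         }
         ctx.write(packet);
      }
      ctx.flush();
      ctx.fireChannelWritabilityChanged();
   }
}
{code}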



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552922)
Time Spent: 8h 40m  (was: 8.5h)

> ReplicationManager can batch sent replicated packets
> 
>
> Key: ARTEMIS-3045
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3045
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285155#comment-17285155
 ] 

Francesco Nigro commented on ARTEMIS-3117:
--

[~seb] I think that 
[https://github.com/twitter/finagle/issues/856#issuecomment-738814564] is a 
possible workaround for the original issue.

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of its CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3117) Performance degradation when switching from JDK8 to JDK11

2021-02-16 Thread Sebastian T (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17285138#comment-17285138
 ] 

Sebastian T commented on ARTEMIS-3117:
--

This looks like the same issue to me: 
https://github.com/twitter/finagle/issues/856

> Performance degradation when switching from JDK8 to JDK11
> -
>
> Key: ARTEMIS-3117
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3117
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.16.0
> Environment: Amazon Linux 2, Amazon Corretto (OpenJDK 11), AMQP over 
> TLS via BoringSSL
>Reporter: Sebastian T
>Priority: Major
> Attachments: broker.xml, image-2021-02-12-21-39-32-185.png, 
> image-2021-02-12-21-40-21-125.png, image-2021-02-12-21-44-26-271.png, 
> image-2021-02-12-21-46-52-006.png, image-2021-02-12-21-47-02-387.png, 
> image-2021-02-12-21-47-57-301.png, image-2021-02-12-22-01-07-044.png
>
>
> Since it was announced that Artemis 2.18.0 will probably require Java 11, we 
> upgraded the JVM of one of our broker clusters from OpenJDK 8 to OpenJDK 11 
> and are seeing a noticeable performance degradation which results in higher 
> CPU usage and higher latency.
> We are monitoring request/reply round-trip duration with custom distributed 
> qpid-jms based healthcheck applications. Here is a graphic that shows the 
> effect when we switched the JDK:
> !image-2021-02-12-21-39-32-185.png!
> CPU Usage of the broker process:
> !image-2021-02-12-22-01-07-044.png|width=874,height=262!
>  
> The broker itself is also monitored via Dynatrace; there I can see that after 
> upgrading to JDK 11 the broker process spent 21% of its CPU time on locking, 
> while on JDK 8 it only spent 3.2%.
> *JDK 8:*
> !image-2021-02-12-21-40-21-125.png|width=1247,height=438!
>  
> *JDK 11:*
> *!image-2021-02-12-21-44-26-271.png|width=1197,height=420!*
>  
> *A method hotspot breakdown reveals this:*
> !image-2021-02-12-21-47-02-387.png|width=1271,height=605!
> !image-2021-02-12-21-47-57-301.png|width=1059,height=627!
> Maybe I am misinterpreting the charts, but the root cause seems to be 
> somewhere in {{org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1}} 
> and/or in 
> {{org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptor.getSslHandler}}. 
> I currently cannot pinpoint the exact line number.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3120) ActiveMQXAResourceWrapper NPE if no LocatorConfig

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3120?focusedWorklogId=552899=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552899
 ]

ASF GitHub Bot logged work on ARTEMIS-3120:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 10:48
Start Date: 16/Feb/21 10:48
Worklog Time Spent: 10m 
  Work Description: brusdev commented on pull request #3454:
URL: https://github.com/apache/activemq-artemis/pull/3454#issuecomment-779754903


   LGTM +1, I'll remove the square brackets before merging it



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552899)
Time Spent: 20m  (was: 10m)

> ActiveMQXAResourceWrapper NPE if no LocatorConfig
> -
>
> Key: ARTEMIS-3120
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3120
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Domenico Francesco Bruscino
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> WARN  [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic 
> Recovery) AMQ172015: Can not connect to XARecoveryConfig 
> [transportConfiguration=[TransportConfiguration(name=, 
> factory=org-apache-activemq-artemis-core-remoting-impl-invm-InVMConnectorFactory)
>  ?serverId=0], discoveryConfiguration=null, username=null, password=, 
> JNDI_NAME=java:/JmsXA] on auto-generated resource recovery: 
> ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: 
> Failed to initialise session factory]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:264)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:660)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:312)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:712)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:233)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:178)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.setThreadPools(ServerLocatorImpl.java:181)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:254)
>   ... 10 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3042) Official Docker Multistage Build as well as an official Docker image.

2021-02-16 Thread John Behm (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Behm updated ARTEMIS-3042:
---
Labels: docker, dockerfile, kubernetes  (was: docker, dockerfile,)

> Official Docker Multistage Build as well as an official Docker image.
> -
>
> Key: ARTEMIS-3042
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3042
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: John Behm
>Priority: Minor
>  Labels: docker,, dockerfile,, kubernetes
>
> It would be rather convenient to get people up and running with an 
> easy-to-build, easy-to-set-up Docker image that automatically builds the 
> project from source, discards the build container, and moves the necessary 
> files over to the final container, which can simply be started.
> The current Docker image build is not really user friendly or convenient at 
> all.
>  
> https://github.com/apache/activemq-artemis/tree/master/artemis-docker
> The whole setup process of Artemis in a containerized environment is very 
> far from good.
> The hurdle of using this software is gigantic: the configuration is so 
> complex that one will not be able to do this within one month without having 
> gone through the whole documentation multiple times.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (ARTEMIS-3120) ActiveMQXAResourceWrapper NPE if no LocatorConfig

2021-02-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3120?focusedWorklogId=552874=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552874
 ]

ASF GitHub Bot logged work on ARTEMIS-3120:
---

Author: ASF GitHub Bot
Created on: 16/Feb/21 09:31
Start Date: 16/Feb/21 09:31
Worklog Time Spent: 10m 
  Work Description: spyrkob opened a new pull request #3454:
URL: https://github.com/apache/activemq-artemis/pull/3454


   provided in RecoveryConfig
   
   Issue: https://issues.apache.org/jira/browse/ARTEMIS-3120



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 552874)
Remaining Estimate: 0h
Time Spent: 10m

> ActiveMQXAResourceWrapper NPE if no LocatorConfig
> -
>
> Key: ARTEMIS-3120
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3120
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Domenico Francesco Bruscino
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> WARN  [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic 
> Recovery) AMQ172015: Can not connect to XARecoveryConfig 
> [transportConfiguration=[TransportConfiguration(name=, 
> factory=org-apache-activemq-artemis-core-remoting-impl-invm-InVMConnectorFactory)
>  ?serverId=0], discoveryConfiguration=null, username=null, password=, 
> JNDI_NAME=java:/JmsXA] on auto-generated resource recovery: 
> ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: 
> Failed to initialise session factory]
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:264)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:660)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:312)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
>   at 
> org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:712)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:233)
>   at 
> com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:178)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
>   at 
> com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.setThreadPools(ServerLocatorImpl.java:181)
>   at 
> org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:254)
>   ... 10 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3120) ActiveMQXAResourceWrapper NPE if no LocatorConfig

2021-02-16 Thread Domenico Francesco Bruscino (Jira)
Domenico Francesco Bruscino created ARTEMIS-3120:


 Summary: ActiveMQXAResourceWrapper NPE if no LocatorConfig
 Key: ARTEMIS-3120
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3120
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Domenico Francesco Bruscino


{code:java}
WARN  [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic 
Recovery) AMQ172015: Can not connect to XARecoveryConfig 
[transportConfiguration=[TransportConfiguration(name=, 
factory=org-apache-activemq-artemis-core-remoting-impl-invm-InVMConnectorFactory)
 ?serverId=0], discoveryConfiguration=null, username=null, password=, 
JNDI_NAME=java:/JmsXA] on auto-generated resource recovery: 
ActiveMQInternalErrorException[errorType=INTERNAL_ERROR message=AMQ219004: 
Failed to initialise session factory]
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:264)
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:660)
at 
org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:312)
at 
org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
at 
org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
at 
org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
at 
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:712)
at 
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:233)
at 
com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:178)
at 
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
at 
com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
Caused by: java.lang.NullPointerException
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.setThreadPools(ServerLocatorImpl.java:181)
at 
org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.initialize(ServerLocatorImpl.java:254)
... 10 more
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)