[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=866120&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866120
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 17/Jun/23 00:20
Start Date: 17/Jun/23 00:20
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on PR #4511:
URL: 
https://github.com/apache/activemq-artemis/pull/4511#issuecomment-1595508793

   tests are good.. I'm merging this




Issue Time Tracking
---

Worklog Id: (was: 866120)
Time Spent: 2.5h  (was: 2h 20m)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.
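
For orientation, here is a minimal broker.xml sketch of the kind of setup the 
description is about. The federation names and matches are invented, and the 
idea that a default-consumer-window-size of 0 on the federated address is what 
opts into the batch-on-capacity behaviour is an assumption drawn from the 
description above, not from the merged change:

{code:xml}
<!-- illustrative sketch only; names, matches, and values are assumptions -->
<federations>
   <federation name="cluster-a">
      <upstream name="cluster-b">
         <static-connectors>
            <connector-ref>cluster-b-connector</connector-ref>
         </static-connectors>
         <policy ref="orders-federation"/>
      </upstream>
      <queue-policy name="orders-federation" priority-adjustment="-1">
         <include queue-match="#" address-match="orders"/>
      </queue-policy>
   </federation>
</federations>

<address-settings>
   <address-setting match="orders">
      <!-- per the description: a window size of 0 would let the federation
           consumer manage its own credit and flow messages in batches only
           when the local queue has excess capacity (assumed wiring) -->
      <default-consumer-window-size>0</default-consumer-window-size>
   </address-setting>
</address-settings>
{code}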



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=866121&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866121
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 17/Jun/23 00:20
Start Date: 17/Jun/23 00:20
Worklog Time Spent: 10m 
  Work Description: clebertsuconic merged PR #4511:
URL: https://github.com/apache/activemq-artemis/pull/4511




Issue Time Tracking
---

Worklog Id: (was: 866121)
Time Spent: 2h 40m  (was: 2.5h)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Commented] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733683#comment-17733683
 ] 

ASF subversion and git services commented on ARTEMIS-4314:
--

Commit c6a82ff95ef94cd192fa17c515d535d42dbafb8e in activemq-artemis's branch 
refs/heads/main from Clebert Suconic
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=c6a82ff95e ]

ARTEMIS-4314 Small Tweak: using executor directly if no delay


> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=866103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866103
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 20:14
Start Date: 16/Jun/23 20:14
Worklog Time Spent: 10m 
  Work Description: clebertsuconic commented on PR #4511:
URL: 
https://github.com/apache/activemq-artemis/pull/4511#issuecomment-1595259485

   @gtully either merge it or close if you prefer not to do this please?
   
   
   Thanks




Issue Time Tracking
---

Worklog Id: (was: 866103)
Time Spent: 2h 20m  (was: 2h 10m)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=866102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866102
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 20:14
Start Date: 16/Jun/23 20:14
Worklog Time Spent: 10m 
  Work Description: clebertsuconic opened a new pull request, #4511:
URL: https://github.com/apache/activemq-artemis/pull/4511

   (no comment)




Issue Time Tracking
---

Worklog Id: (was: 866102)
Time Spent: 2h 10m  (was: 2h)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Closed] (ARTEMIS-4270) Messages get lost when using multiple consumers with topic hierarchies

2023-06-16 Thread Clebert Suconic (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clebert Suconic closed ARTEMIS-4270.

Resolution: Not A Bug

Closed as Not a Bug (the previous close as Won't Fix was a mistake).

Basically, I ran your test with the latest version and it worked.

> Messages get lost when using multiple consumers with topic hierarchies
> --
>
> Key: ARTEMIS-4270
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4270
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: JMS
>Affects Versions: 2.24.0
>Reporter: Moritz
>Priority: Major
> Attachments: topic-hierarchies-bug.zip
>
>
> There is an issue when we have the following setup:
>  * Shared durable consumer A listening to *news.#*
>  * Shared durable consumer B listening to *news.europe.#*
>  * Message M1 sent to *news.europe.sports*
>  * Message M2 sent to *news.europe*
> Expected behavior:
>  * A receives M1 and M2
>  * B receives M1 and M2
> Actual behavior:
>  * A receives M1 and M2
>  * B receives M1
> This happens when it is run with a clean Artemis, i.e. without any previous 
> data. If we run it a second time, B receives M1 and M2. When using 
> *consumer.receive()* it also works as expected.
>  
> This also affects at least version *3.0.0-SNAPSHOT*; however, I couldn't 
> select it, so I chose the second version I've tested with. The attached 
> project showcases the bug; I simply adjusted the example 
> {*}apache-artemis-3.0.0-SNAPSHOT/examples/features/standard/topic-hierarchies{*}.
> I couldn't test it with 2.29.0-SNAPSHOT since I would get exceptions 
> concerning the topic not being multicast (already with the original example).
>  
>  
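
Since the report spells out the exact message flow, here is a self-contained 
sketch of the reproduction as described. It is a sketch under assumptions: a 
broker at tcp://localhost:61616 with default wildcard routing, and invented 
subscription names.

{code:java}
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch of the reported scenario; the URL and subscription names are assumptions.
public class TopicHierarchyRepro {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (JMSContext consumers = cf.createContext();
           JMSContext producers = cf.createContext()) {
         JMSConsumer a = consumers.createSharedDurableConsumer(
               consumers.createTopic("news.#"), "subA");
         JMSConsumer b = consumers.createSharedDurableConsumer(
               consumers.createTopic("news.europe.#"), "subB");
         a.setMessageListener(m -> System.out.println("A received " + m));
         b.setMessageListener(m -> System.out.println("B received " + m));

         JMSProducer producer = producers.createProducer();
         producer.send(producers.createTopic("news.europe.sports"), "M1"); // matches both
         producer.send(producers.createTopic("news.europe"), "M2");        // B reportedly misses this

         Thread.sleep(5000); // expected: A and B each print M1 and M2
      }
   }
}
{code}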





[jira] [Reopened] (ARTEMIS-4270) Messages get lost when using multiple consumers with topic hierarchies

2023-06-16 Thread Clebert Suconic (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clebert Suconic reopened ARTEMIS-4270:
--

> Messages get lost when using multiple consumers with topic hierarchies
> --
>
> Key: ARTEMIS-4270
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4270
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: JMS
>Affects Versions: 2.24.0
>Reporter: Moritz
>Priority: Major
> Attachments: topic-hierarchies-bug.zip
>
>
> There is an issue when we have the following setup:
>  * Shared durable consumer A listening to *news.#*
>  * Shared durable consumer B listening to *news.europe.#*
>  * Message M1 sent to *news.europe.sports*
>  * Message M2 sent to *news.europe*
> Expected behavior:
>  * A receives M1 and M2
>  * B receives M1 and M2
> Actual behavior:
>  * A receives M1 and M2
>  * B receives M1
> This happens when it is run with a clean Artemis, i.e. without any previous 
> data. If we run it a second time, B receives M1 and M2. When using 
> *consumer.receive()* it also works as expected.
>  
> This also affects at least version *3.0.0-SNAPSHOT*; however, I couldn't 
> select it, so I chose the second version I've tested with. The attached 
> project showcases the bug; I simply adjusted the example 
> {*}apache-artemis-3.0.0-SNAPSHOT/examples/features/standard/topic-hierarchies{*}.
> I couldn't test it with 2.29.0-SNAPSHOT since I would get exceptions 
> concerning the topic not being multicast (already with the original example).
>  
>  





[jira] [Work logged] (ARTEMIS-4310) Smaller Container / Dockerfile based on Alpine

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4310?focusedWorklogId=866075&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866075
 ]

ASF GitHub Bot logged work on ARTEMIS-4310:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 16:47
Start Date: 16/Jun/23 16:47
Worklog Time Spent: 10m 
  Work Description: thorstenhirsch commented on PR #4504:
URL: 
https://github.com/apache/activemq-artemis/pull/4504#issuecomment-1594970508

   Yes, I just checked it: `Journal Type: ASYNCIO` (on a Linux host). I wasn't 
sure whether that really means libaio, so I also checked what Artemis says on 
my Mac: `NIO`.
   
   So, yes, it looks like Artemis works successfully with libaio in the Alpine 
Linux image.
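
As a quick cross-check (an assumption-laden sketch, not part of the PR): with 
the artemis-native jar on the classpath, the libaio probe can be run directly, 
so `ASYNCIO` in the boot log should line up with libaio loading inside the 
Alpine image.

{code:java}
import org.apache.activemq.artemis.nativo.jlibaio.LibaioContext;

// Prints whether the libaio JNI binding loads in this environment; if it does
// not, the broker falls back to the NIO journal (as seen on the Mac above).
public class CheckLibaio {
   public static void main(String[] args) {
      System.out.println(LibaioContext.isLoaded()
            ? "libaio loaded: ASYNCIO journal available"
            : "libaio not available: broker falls back to NIO");
   }
}
{code}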




Issue Time Tracking
---

Worklog Id: (was: 866075)
Time Spent: 1h  (was: 50m)

> Smaller Container / Dockerfile based on Alpine
> --
>
> Key: ARTEMIS-4310
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4310
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Thorsten Hirsch
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>






[jira] [Commented] (ARTEMIS-4270) Messages get lost when using multiple consumers with topic hierarchies

2023-06-16 Thread Clebert Suconic (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733574#comment-17733574
 ] 

Clebert Suconic commented on ARTEMIS-4270:
--

I just ran your example with 2.29.0 and everything seems to be working.

I will close this; please update the issue if you still see a problem.

> Messages get lost when using multiple consumers with topic hierarchies
> --
>
> Key: ARTEMIS-4270
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4270
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: JMS
>Affects Versions: 2.24.0
>Reporter: Moritz
>Priority: Major
> Attachments: topic-hierarchies-bug.zip
>
>
> There is an issue when we have the following setup:
>  * Shared durable consumer A listening to *news.#*
>  * Shared durable consumer B listening to *news.europe.#*
>  * Message M1 sent to *news.europe.sports*
>  * Message M2 sent to *news.europe*
> Expected behavior:
>  * A receives M1 and M2
>  * B receives M1 and M2
> Actual behavior:
>  * A receives M1 and M2
>  * B receives M1
> This happens when it is run with a clean Artemis, i.e. without any previous 
> data. If we run it a second time, B receives M1 and M2. When using 
> *consumer.receive()* it also works as expected.
>  
> This also affects at least version *3.0.0-SNAPSHOT*; however, I couldn't 
> select it, so I chose the second version I've tested with. The attached 
> project showcases the bug; I simply adjusted the example 
> {*}apache-artemis-3.0.0-SNAPSHOT/examples/features/standard/topic-hierarchies{*}.
> I couldn't test it with 2.29.0-SNAPSHOT since I would get exceptions 
> concerning the topic not being multicast (already with the original example).
>  
>  





[jira] [Closed] (ARTEMIS-4270) Messages get lost when using multiple consumers with topic hierarchies

2023-06-16 Thread Clebert Suconic (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clebert Suconic closed ARTEMIS-4270.

Resolution: Won't Fix

Please reopen with a test update if you still see an issue.

> Messages get lost when using multiple consumers with topic hierarchies
> --
>
> Key: ARTEMIS-4270
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4270
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: JMS
>Affects Versions: 2.24.0
>Reporter: Moritz
>Priority: Major
> Attachments: topic-hierarchies-bug.zip
>
>
> There is an issue when we have the following setup:
>  * Shared durable consumer A listening to *news.#*
>  * Shared durable consumer B listening to *news.europe.#*
>  * Message M1 sent to *news.europe.sports*
>  * Message M2 sent to *news.europe*
> Expected behavior:
>  * A receives M1 and M2
>  * B receives M1 and M2
> Actual behavior:
>  * A receives M1 and M2
>  * B receives M1
> This happens when it is run with a clean Artemis, i.e. without any previous 
> data. If we run it a second time B receives M1 and M2. When using 
> *consumer.receive()* it also works as expected.
>  
> This also affects at least version *3.0.0-SNAPSHOT* however I couldn't select 
> it so I chose the second version I've tested it for. The attached project 
> showcases the bug where I simply adjusted the example 
> {*}apache-artemis-3.0.0-SNAPSHOT/examples/features/standard/topic-hierarchies{*}.
> I couldn't test it with 2.29.0-SNAPSHOT since I would get exceptions 
> concerning the topic not being multicast (already with the original example).
>  
>  





[jira] [Closed] (ARTEMIS-4317) Invisible/Unreleased Openwire consumers in Artemis 2.27.x broker queues

2023-06-16 Thread Clebert Suconic (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clebert Suconic closed ARTEMIS-4317.

Resolution: Not A Bug

The user commented that this is fixed in Red Hat AMQ 7.11, which has all the 
fixes from 2.29.0.

> Invisible/Unreleased Openwire consumers in Artemis 2.27.x broker queues
> ---
>
> Key: ARTEMIS-4317
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4317
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: OpenWire
>Affects Versions: 2.27.0
>Reporter: Mohanavalli A
>Priority: Major
>
> We have a Spring Boot + Camel JMS + ActiveMQ Client based consumer connecting 
> to the Artemis 2.27.x broker and consuming messages. When there is a high 
> inflow of messages to the queue where the OpenWire consumer is connected, the 
> consumer count on the queue increases beyond the actual number of consumers 
> connected to the queue, and these invisible or unreleased consumers block the 
> actual consumers from consuming messages. The issue can be replicated on 
> Artemis 2.28.0 as well.
> There was no issue in Artemis 2.26.0.
> When the same application with a similar load is tested on Red Hat AMQ 
> 7.11.0, the issue does not occur. 





[jira] [Commented] (ARTEMIS-4317) Invisible/Unreleased Openwire consumers in Artemis 2.27.x broker queues

2023-06-16 Thread Clebert Suconic (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733571#comment-17733571
 ] 

Clebert Suconic commented on ARTEMIS-4317:
--

If this does not happen in AMQ 7.11, this issue can be closed, because 2.29.0 
has all the fixes associated with the Red Hat fork (Red Hat AMQ 7.11).

> Invisible/Unreleased Openwire consumers in Artemis 2.27.x broker queues
> ---
>
> Key: ARTEMIS-4317
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4317
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: OpenWire
>Affects Versions: 2.27.0
>Reporter: Mohanavalli A
>Priority: Major
>
> We have a Spring Boot + Camel JMS + ActiveMQ Client based consumer connecting 
> to the Artemis 2.27.x broker and consuming messages. When there is a high 
> inflow of messages to the queue where the OpenWire consumer is connected, the 
> consumer count on the queue increases beyond the actual number of consumers 
> connected to the queue, and these invisible or unreleased consumers block the 
> actual consumers from consuming messages. The issue can be replicated on 
> Artemis 2.28.0 as well.
> There was no issue in Artemis 2.26.0.
> When the same application with a similar load is tested on Red Hat AMQ 
> 7.11.0, the issue does not occur. 





[jira] [Commented] (ARTEMIS-4017) FQQN Anycast Redistribution on a symmetric cluster is redistributing to different FQQN in the same address

2023-06-16 Thread Clebert Suconic (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733568#comment-17733568
 ] 

Clebert Suconic commented on ARTEMIS-4017:
--

[~rtista] [~jbertram] this could be related to ARTEMIS-4312, fixed with this 
commit of Justin's:


{code:java}
commit 3ff8419a4b0e32bff5b43997d09db9a9acc28586
Author: Justin Bertram 
Date:   Tue Jun 13 11:28:00 2023 -0500

    ARTEMIS-4312 dupes w/redistribution and multicast

    Multiple multicast queues on the same address can lead to duplicate
    messages during redistribution in a cluster.
{code}

> FQQN Anycast Redistribution on a symmetric cluster is redistributing to 
> different FQQN in the same address
> --
>
> Key: ARTEMIS-4017
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4017
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.25.0
> Environment: The current environment I'm working with is a symmetric 
> cluster of 2 master nodes, and I intend to add a slave node for each of the 
> masters.
> The first master has the following configuration:
> {code:xml}
> <!-- The XML element tags of this broker.xml were stripped by the mail
>      archive. The surviving values are: broker name master0; directories
>      data/bindings, data/journal, data/largemessages, data/paging;
>      connectors tcp://xxx.xxx.xxx.2:61616 and tcp://xxx.xxx.xxx.4:61616;
>      acceptors
>      artemis: tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true
>      amqp: tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true
>      stomp: tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true
>      a cluster connection named master0 with ON_DEMAND message load
>      balancing and static connectors master0/master1; an ha-policy
>      referencing discovery group master0slave0discovery; address settings
>      with DLQ, ExpiryQueue, and full-policy PAGE; and ANYCAST
>      addresses/queues. -->
> {code}
> The second master holds the following configuration:
> {code:xml}
> <!-- XML element tags stripped as above. The surviving values mirror the
>      first master's configuration with broker name master1, the same
>      directories, the same connectors (tcp://xxx.xxx.xxx.2:61616,
>      tcp://xxx.xxx.xxx.4:61616), and the same artemis/amqp acceptor URIs;
>      the archived message is truncated at this point. -->

[jira] [Commented] (ARTEMIS-4141) Message flow gets stuck

2023-06-16 Thread Clebert Suconic (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733567#comment-17733567
 ] 

Clebert Suconic commented on ARTEMIS-4141:
--

Can you verify whether this is still an issue in 2.29.0? We had another fix in 
the bridge in 2.28.0... this may have been fixed already.

> Message flow gets stuck
> ---
>
> Key: ARTEMIS-4141
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4141
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.22.0, 2.27.1
>Reporter: Ivan Iliev
>Priority: Blocker
>
> Reproducible at 
> [https://github.com/iiliev2/activeqm-artemis/tree/main/sf-queue-stuck] - see 
> the readme.
> Broader discussion of the issue at 
> https://the-asf.slack.com/archives/CFL910J30/p1673891918582619.





[jira] [Resolved] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread Gary Tully (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved ARTEMIS-4314.
-
Resolution: Fixed

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Commented] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733544#comment-17733544
 ] 

ASF subversion and git services commented on ARTEMIS-4314:
--

Commit 9b5dbf42653f460a6f70e6a9c40b2c539cc09217 in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=9b5dbf4265 ]

ARTEMIS-4314 - fix typo in doc


> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Comment Edited] (ARTEMIS-4217) AMQ111005: Failed to convert message. Sending it to Dead Letter Address.

2023-06-16 Thread Justin Bertram (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733522#comment-17733522
 ] 

Justin Bertram edited comment on ARTEMIS-4217 at 6/16/23 2:53 PM:
--

[~jbertram], thanks for looking at this and sorry for the long delay...

It looks like it is hard to reproduce. I'm currently not able to reproduce 
the problem on my local test machine, but I still see the problem on other 
systems:

In this case, the problem occurred with a "large message". Could a large 
message trigger some kind of conversion? Maybe during writing/reading from the 
storage (file system)? The payload of the message is a 1 MB chunk of a zip 
file. The message also has some custom headers/properties, which are visible 
in the log below. 
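
(For what it's worth, here is a minimal send that should exercise the same 
path, assuming a broker at amqp://localhost:5672 and the queue name from the 
log; the ~1 MB payload is what should make the broker treat it as a large 
message:)

{code:java}
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

// Hedged reproduction sketch: the URL, queue name, and property are
// assumptions taken from the log above.
public class LargeAmqpSend {
   public static void main(String[] args) throws Exception {
      ConnectionFactory cf = new JmsConnectionFactory("amqp://localhost:5672");
      try (JMSContext ctx = cf.createContext()) {
         BytesMessage msg = ctx.createBytesMessage();
         msg.writeBytes(new byte[1024 * 1024]);   // ~1 MB chunk, as in the report
         msg.setLongProperty("AtmJobId", 15065L); // one of the custom properties
         ctx.createProducer().send(ctx.createQueue("MyQName"), msg);
      }
   }
}
{code}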

We only have AMQP consumers/producers on this system. I've also verified this 
in the admin console: 

!image-2023-06-16-15-59-25-689.png!!image-2023-06-16-15-59-25-721.png!

{noformat}
2023-06-16 15:44:48,301 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222150: Sending message Reference[120978405]:NON-RELIABLE:LargeServerMessage 
[messageID=120978405,durable=false,userID=f5eb19e9-0c4b-11ee-8c25-f44d30e2ecf9,priority=4,
 timestamp=0,expiration=Mon Jun 19 15:44:46 CEST 2023, durable=false, 
address=MyQName, 
properties=TypedProperties[content-length=100,JMS_AMQP_ORIGINAL_ENCODING=6,content-version=2.0,AtmCommandTaskResultId=25160,
 
filename=MyBroker_2023-06-15_2243_12.0.0.6_Logisoft.7z,content-type=multipart/byteranges,x-version=7.9.45.349,AtmCommandTaskId=-1,_AMQ_LARGE_SIZE=100,extensions=1,
 
checksumrange=F4861070CFDFC33B7D77A74420AF25DF1FAFF4F57A01D75A1E0FB03047BF38C7,NATIVE_MESSAGE_ID=ID:AMQP_NO_PREFIX:MyQName_2023-06-16T15:44:46.8290577+02:00_97089137-6edd-49cd-a81a-0b40441f6be5,Issued=2023-06-16
 15:44:46.824 
+02:00,AtmJobId=15065,content-status=200,range=600-699,1773052136,process-id=Job_15065_MonitoringDataExchange__2023-06-16T15:38:58.0607859+02:00,atmid=CHE8455,received-getrange=600-699,0]]@80507286
 to Dead Letter Address, but there is no Dead Letter Address configured for 
queue MyQName so dropping it
2023-06-16 15:44:48,338 WARN  
[org.apache.activemq.artemis.protocol.amqp.logger] AMQ111005: Failed to convert 
message. Sending it to Dead Letter Address. 
  
org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.ConversionException:
 java.nio.channels.AsynchronousCloseException
       at 
org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:318)
 ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
       at 
org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.checkAMQP(CoreAmqpConverter.java:79)
 ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
       at 
org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext.executeDelivery(ProtonServerSenderContext.java:561)
 ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
       at 
org.apache.activemq.artemis.core.server.impl.MessageReferenceImpl.run(MessageReferenceImpl.java:131)
 ~[artemis-server-2.28.0.jar:2.28.0]
       at 
io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
 ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
       at 
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
 ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
       at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
 ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) 
~[netty-transport-4.1.86.Final.jar:4.1.86.Final]
       at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
 ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
       at 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
~[netty-common-4.1.86.Final.jar:4.1.86.Final]
       at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 ~[artemis-commons-2.28.0.jar:?]
   Caused by: java.lang.RuntimeException: 
java.nio.channels.AsynchronousCloseException
       at 
org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:293)
 ~[artemis-server-2.28.0.jar:2.28.0]
       at 
org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.getBodyBufferSize(LargeServerMessageImpl.java:263)
 ~[artemis-server-2.28.0.jar:2.28.0]
       at 
org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBodyLength(CoreBytesMessageWrapper.java:98)
 ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
       at 
org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBinaryFromMessageBody(CoreBytesMessageWrapper.java:68)
 ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
  

[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=866056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-866056
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 14:44
Start Date: 16/Jun/23 14:44
Worklog Time Spent: 10m 
  Work Description: gtully merged PR #4509:
URL: https://github.com/apache/activemq-artemis/pull/4509




Issue Time Tracking
---

Worklog Id: (was: 866056)
Time Spent: 2h  (was: 1h 50m)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Commented] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733540#comment-17733540
 ] 

ASF subversion and git services commented on ARTEMIS-4314:
--

Commit a8b4ee19928fbf0a0b2bff5baf649ace97edc967 in activemq-artemis's branch 
refs/heads/main from Gary Tully
[ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=a8b4ee1992 ]

ARTEMIS-4314 support queue federation batchOnCapacity via consumerWindowSize=0
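
To make the mechanism concrete, here is a minimal, self-contained sketch of 
the batch-on-capacity credit loop the description outlines. It is not Artemis 
internals: the class and both callbacks are hypothetical stand-ins for the 
real federation consumer.

{code:java}
// Hypothetical illustration of the credit loop described above; none of these
// names exist in Artemis.
public class BatchOnCapacitySketch {

   /** stand-in for the link back to the upstream broker */
   interface RemoteLink {
      int outstandingCredit();
      void sendCredit(int credits);
   }

   private final int batchSize; // e.g. the address's defaultConsumerWindowSize

   public BatchOnCapacitySketch(int batchSize) {
      this.batchSize = batchSize;
   }

   // called whenever the local queue's depth changes
   void onLocalQueueDepthChange(long depth, long capacity, RemoteLink remote) {
      // with consumerWindowSize=0 the upstream holds messages and dispatches
      // only when explicit credit arrives, so granting credit one batch at a
      // time gates the flow on local capacity without a thundering herd
      if (depth < capacity && remote.outstandingCredit() == 0) {
         remote.sendCredit(batchSize); // flow one batch, then wait again
      }
   }
}
{code}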


> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower-priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity. Then flow a batch 
> of messages, and await again until the local queue has capacity. In this way, 
> there is no thundering-herd effect, but there is also fast migration of 
> messages once there is demand.
> The consumerWindowSize=0 is already in play for consumer.receive calls, and 
> there is already a defaultConsumerWindowSize for an address. These can be 
> combined to realise batchFederationOnCapacity semantics.





[jira] [Commented] (ARTEMIS-4217) AMQ111005: Failed to convert message. Sending it to Dead Letter Address.

2023-06-16 Thread daves (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733522#comment-17733522
 ] 

daves commented on ARTEMIS-4217:


[~jbertram]  Thanks for looking at this and sorry for the long delay...

 

It looks like it is hard to reproduce. I'm currently not able to reproduce 
the problem on my local test machine, but I still see the problem on other 
systems:

In this case, the problem occurred with a "large message". Could a large 
message trigger some kind of conversion? Maybe during writing/reading from the 
storage (file system)? The payload of the message is a 1 MB chunk of a zip 
file. The message also has some custom headers/properties, which are visible 
in the log below.

 

We only have AMQP consumers/producers on this system. I've also verified this 
in the admin console:

 

!image-2023-06-16-15-59-25-689.png!!image-2023-06-16-15-59-25-721.png!

 
 
{code:java}
2023-06-16 15:44:48,301 WARN  [org.apache.activemq.artemis.core.server] AMQ222150: Sending message Reference[120978405]:NON-RELIABLE:LargeServerMessage [messageID=120978405,durable=false,userID=f5eb19e9-0c4b-11ee-8c25-f44d30e2ecf9,priority=4, timestamp=0,expiration=Mon Jun 19 15:44:46 CEST 2023, durable=false, address=MyQName, properties=TypedProperties[content-length=100,JMS_AMQP_ORIGINAL_ENCODING=6,content-version=2.0,AtmCommandTaskResultId=25160, filename=MyBroker_2023-06-15_2243_12.0.0.6_Logisoft.7z,content-type=multipart/byteranges,x-version=7.9.45.349,AtmCommandTaskId=-1,_AMQ_LARGE_SIZE=100,extensions=1, checksumrange=F4861070CFDFC33B7D77A74420AF25DF1FAFF4F57A01D75A1E0FB03047BF38C7,NATIVE_MESSAGE_ID=ID:AMQP_NO_PREFIX:MyQName_2023-06-16T15:44:46.8290577+02:00_97089137-6edd-49cd-a81a-0b40441f6be5,Issued=2023-06-16 15:44:46.824 +02:00,AtmJobId=15065,content-status=200,range=600-699,1773052136,process-id=Job_15065_MonitoringDataExchange__2023-06-16T15:38:58.0607859+02:00,atmid=CHE8455,received-getrange=600-699,0]]@80507286 to Dead Letter Address, but there is no Dead Letter Address configured for queue MyQName so dropping it
2023-06-16 15:44:48,338 WARN  [org.apache.activemq.artemis.protocol.amqp.logger] AMQ111005: Failed to convert message. Sending it to Dead Letter Address.
org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.ConversionException: java.nio.channels.AsynchronousCloseException
    at org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:318) ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.checkAMQP(CoreAmqpConverter.java:79) ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext.executeDelivery(ProtonServerSenderContext.java:561) ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.core.server.impl.MessageReferenceImpl.run(MessageReferenceImpl.java:131) ~[artemis-server-2.28.0.jar:2.28.0]
    at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) ~[netty-transport-4.1.86.Final.jar:4.1.86.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.86.Final.jar:4.1.86.Final]
    at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) ~[artemis-commons-2.28.0.jar:?]
Caused by: java.lang.RuntimeException: java.nio.channels.AsynchronousCloseException
    at org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:293) ~[artemis-server-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.getBodyBufferSize(LargeServerMessageImpl.java:263) ~[artemis-server-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBodyLength(CoreBytesMessageWrapper.java:98) ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
    at org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBinaryFromMessageBody(CoreBytesMessageWrapper.java:68) ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
    at

[jira] [Updated] (ARTEMIS-4217) AMQ111005: Failed to convert message. Sending it to Dead Letter Address.

2023-06-16 Thread daves (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daves updated ARTEMIS-4217:
---
Attachment: image-2023-06-16-15-59-25-721.png

> AMQ111005: Failed to convert message. Sending it to Dead Letter Address.
> 
>
> Key: ARTEMIS-4217
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4217
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.28.0
> Environment: Windows Server 2022 21H2
> openjdk 19.0.1 2022-10-18
> OpenJDK Runtime Environment (build 19.0.1+10-21)
> OpenJDK 64-Bit Server VM (build 19.0.1+10-21, mixed mode, sharing)
>Reporter: daves
>Priority: Major
> Attachments: ArtemisConvertError.zip, 
> image-2023-06-16-15-59-25-689.png, image-2023-06-16-15-59-25-721.png
>
>
> Some of the AMQP messages sent by my client never arrive at the consumer. In 
> the Artemis log I found the following exception:
> {noformat}
> 2023-03-23 18:06:58,084 WARN  
> [org.apache.activemq.artemis.protocol.amqp.logger] AMQ111005: Failed to 
> convert message. Sending it to Dead Letter Address. 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.ConversionException:
>  java.nio.channels.ClosedChannelException
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:318)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.checkAMQP(CoreAmqpConverter.java:79)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext.executeDelivery(ProtonServerSenderContext.java:561)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.server.impl.MessageReferenceImpl.run(MessageReferenceImpl.java:131)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) 
> ~[netty-transport-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
> ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  ~[artemis-commons-2.28.0.jar:?] Caused by: java.lang.RuntimeException: 
> java.nio.channels.ClosedChannelException     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:293)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.getBodyBufferSize(LargeServerMessageImpl.java:263)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBodyLength(CoreBytesMessageWrapper.java:98)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBinaryFromMessageBody(CoreBytesMessageWrapper.java:68)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.createAMQPSection(CoreBytesMessageWrapper.java:78)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:106)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      ... 10 more 
> Caused by: java.nio.channels.ClosedChannelException     
>      at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:165) 
> ~[?:?]     
>      at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:416) ~[?:?]     
>      at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.size(NIOSequentialFile.java:339)
>  ~[artemis-journal-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:285)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> 

[jira] [Updated] (ARTEMIS-4217) AMQ111005: Failed to convert message. Sending it to Dead Letter Address.

2023-06-16 Thread daves (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

daves updated ARTEMIS-4217:
---
Attachment: image-2023-06-16-15-59-25-689.png

> AMQ111005: Failed to convert message. Sending it to Dead Letter Address.
> 
>
> Key: ARTEMIS-4217
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4217
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.28.0
> Environment: Windows Server 2022 21H2
> openjdk 19.0.1 2022-10-18
> OpenJDK Runtime Environment (build 19.0.1+10-21)
> OpenJDK 64-Bit Server VM (build 19.0.1+10-21, mixed mode, sharing)
>Reporter: daves
>Priority: Major
> Attachments: ArtemisConvertError.zip, 
> image-2023-06-16-15-59-25-689.png, image-2023-06-16-15-59-25-721.png
>
>
> Some of the AMQP messages sent by my client never arrive at the consumer. In 
> the Artemis log I found the following exception:
> {noformat}
> 2023-03-23 18:06:58,084 WARN  
> [org.apache.activemq.artemis.protocol.amqp.logger] AMQ111005: Failed to 
> convert message. Sending it to Dead Letter Address. 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.ConversionException:
>  java.nio.channels.ClosedChannelException
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:318)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.checkAMQP(CoreAmqpConverter.java:79)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.proton.ProtonServerSenderContext.executeDelivery(ProtonServerSenderContext.java:561)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.server.impl.MessageReferenceImpl.run(MessageReferenceImpl.java:131)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569) 
> ~[netty-transport-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>  ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
> ~[netty-common-4.1.86.Final.jar:4.1.86.Final]     
>      at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  ~[artemis-commons-2.28.0.jar:?] Caused by: java.lang.RuntimeException: 
> java.nio.channels.ClosedChannelException     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:293)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.getBodyBufferSize(LargeServerMessageImpl.java:263)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBodyLength(CoreBytesMessageWrapper.java:98)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.getBinaryFromMessageBody(CoreBytesMessageWrapper.java:68)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.coreWrapper.CoreBytesMessageWrapper.createAMQPSection(CoreBytesMessageWrapper.java:78)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.protocol.amqp.converter.CoreAmqpConverter.fromCore(CoreAmqpConverter.java:106)
>  ~[artemis-amqp-protocol-2.28.0.jar:2.28.0]     
>      ... 10 more 
> Caused by: java.nio.channels.ClosedChannelException     
>      at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:165) 
> ~[?:?]     
>      at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:416) ~[?:?]     
>      at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.size(NIOSequentialFile.java:339)
>  ~[artemis-journal-2.28.0.jar:2.28.0]     
>      at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeBody.getBodyBufferSize(LargeBody.java:285)
>  ~[artemis-server-2.28.0.jar:2.28.0]     
>      at 
> 

[jira] [Commented] (AMQ-9283) Memory leak on STOMP transport when a client unsubscribes

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17733472#comment-17733472
 ] 

ASF subversion and git services commented on AMQ-9283:
--

Commit 90f96a02992b64ba330c6e6b62e6a8fc3d4d8120 in activemq's branch 
refs/heads/main from u8675309
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=90f96a0299 ]

[AMQ-9283] Fix memory leak with STOMP unsubscribe


> Memory leak on STOMP transport when a client unsubscribes
> 
>
> Key: AMQ-9283
> URL: https://issues.apache.org/jira/browse/AMQ-9283
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: STOMP
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9283) Memory leak on STOMP transport when a client unsubscribes

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733473#comment-17733473
 ] 

ASF subversion and git services commented on AMQ-9283:
--

Commit c87726f2db2378f3d4a13b818a565ce5cce3af8a in activemq's branch 
refs/heads/activemq-5.18.x from u8675309
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=c87726f2db ]

[AMQ-9283] Fix memory leak with STOMP unsubscribe

(cherry picked from commit 90f96a02992b64ba330c6e6b62e6a8fc3d4d8120)


> Memory leak on STOMP transport when a client unsubscribes
> 
>
> Key: AMQ-9283
> URL: https://issues.apache.org/jira/browse/AMQ-9283
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: STOMP
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (AMQ-9283) Memory leak on STOMP transport when a client unsubscribes

2023-06-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved AMQ-9283.
---
Resolution: Fixed

> Memory leak on STOMP transport when a client unsubscribes
> 
>
> Key: AMQ-9283
> URL: https://issues.apache.org/jira/browse/AMQ-9283
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: STOMP
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9283) Memory leak on STOMP transport when a client unsubscribes

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733475#comment-17733475
 ] 

ASF subversion and git services commented on AMQ-9283:
--

Commit 587980fa9b2e521790c398f907dff5749e654dd3 in activemq's branch 
refs/heads/activemq-5.17.x from u8675309
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=587980fa9b ]

[AMQ-9283] Fix memory leak with STOMP unsubscribe

(cherry picked from commit 90f96a02992b64ba330c6e6b62e6a8fc3d4d8120)


> Memory leak on STOMP transport when a client unsubscribes
> 
>
> Key: AMQ-9283
> URL: https://issues.apache.org/jira/browse/AMQ-9283
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: STOMP
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=865979=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-865979
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 10:11
Start Date: 16/Jun/23 10:11
Worklog Time Spent: 10m 
  Work Description: gtully commented on code in PR #4509:
URL: https://github.com/apache/activemq-artemis/pull/4509#discussion_r1232058024


##
artemis-server/src/main/java/org/apache/activemq/artemis/core/server/Queue.java:
##
@@ -389,6 +389,8 @@ default int retryMessages(Filter filter, Integer expectedHits) throws Exception
 
boolean hasMatchingConsumer(Message message);
 

Review Comment:
   agree, I was leaving it unchanged, but calling it what it is, 
PendingMessageCount, is best. Sorted, thanks.





Issue Time Tracking
---

Worklog Id: (was: 865979)
Time Spent: 1h 50m  (was: 1h 40m)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity: flow a batch of 
> messages, then wait until the local queue again has capacity. In this way 
> there is no thundering herd effect, yet messages still migrate quickly once 
> there is demand.
> consumerWindowSize=0 is already in play for consumer.receive calls, and there 
> is already a defaultConsumerWindowSize for an address. These can be combined 
> to realise batchFederationOnCapacity semantics.
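
To make the proposed semantics concrete, the batch-on-capacity loop could be 
sketched roughly as below. This is an illustration only: QueueHandle, 
flowBatchWhenEmpty and the 50ms poll are assumed names and values, not the 
code merged for this issue.

{code:java}
// Hedged sketch of batch-federation-on-capacity; not the actual PR code.
import org.apache.activemq.artemis.api.core.ActiveMQException;
import org.apache.activemq.artemis.api.core.client.ClientConsumer;
import org.apache.activemq.artemis.api.core.client.ClientMessage;

public final class BatchOnCapacitySketch {

   interface QueueHandle {
      long getMessageCount();  // pending messages on the local queue
      int getCreditWindow();   // batch size to flow once the queue drains
   }

   // With consumerWindowSize=0 the consumer holds no standing credit: each
   // receive() asks the remote broker for one message, so this loop flows at
   // most creditWindow messages per batch and then yields until the local
   // queue has drained again.
   static void flowBatchWhenEmpty(QueueHandle handle, ClientConsumer consumer) throws ActiveMQException {
      if (handle.getMessageCount() > 0) {
         return; // local backlog remains; do not pull more from the remote
      }
      for (int i = 0; i < handle.getCreditWindow(); i++) {
         ClientMessage message = consumer.receive(50); // short poll per credit
         if (message == null) {
            break; // nothing pending on the remote side
         }
         message.acknowledge(); // local routing of the message is omitted here
      }
   }

   private BatchOnCapacitySketch() {
   }
}
{code}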



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=865975=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-865975
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 10:08
Start Date: 16/Jun/23 10:08
Worklog Time Spent: 10m 
  Work Description: gtully commented on code in PR #4509:
URL: https://github.com/apache/activemq-artemis/pull/4509#discussion_r1232055299


##
artemis-server/src/main/java/org/apache/activemq/artemis/core/server/federation/FederatedQueueConsumerImpl.java:
##
@@ -155,6 +167,67 @@ private synchronized void connect() throws Exception {
   }
}
 
+   interface QueueHandle {
+      long getMessageCount();
+      int getCreditWindow();
+   }
+
+   private QueueHandle createQueueHandle(ActiveMQServer server, ClientSession.QueueQuery queryResult) {
+      final Queue queue = server.locateQueue(queryResult.getName());
+      int creditWindow = DEFAULT_CONSUMER_WINDOW_SIZE;
+
+      final Integer defaultConsumerWindowSize = queryResult.getDefaultConsumerWindowSize();
+      if (defaultConsumerWindowSize != null) {
+         creditWindow = defaultConsumerWindowSize.intValue();
+         if (creditWindow <= 0) {
+            creditWindow = DEFAULT_CONSUMER_WINDOW_SIZE;
+            logger.trace("{} override non positive queue consumerWindowSize with {}.", this, creditWindow);
+         }
+      }
+
+      final int finalCreditWindow = creditWindow;
+      return new QueueHandle() {
+         @Override
+         public long getMessageCount() {
+            return queue.getMessageCountForRing();
+         }
+
+         @Override
+         public int getCreditWindow() {
+            return finalCreditWindow;
+         }
+      };
+   }
+
+   private void scheduleCreditOnEmpty(final int delay, final QueueHandle handle) {
+      if (handle != null) {

Review Comment:
   agree, that check was just defensive but unnecessary. The consumer can go 
stale with a pending check, but that is already covered: any failure will 
result in the session/consumer getting recreated, so all that is necessary is 
that we don't reschedule in that case.
   thanks for the feedback.
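
For illustration, the reschedule guard under discussion might look like the 
minimal sketch below; CreditCheckSketch, QueueHandle and flowCredit are 
assumed names, not the actual PR code.

{code:java}
// Hedged sketch of "don't reschedule once the consumer is gone".
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class CreditCheckSketch {

   interface QueueHandle {
      long getMessageCount();
      int getCreditWindow();
   }

   private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
   private volatile boolean closed;

   void scheduleCreditOnEmpty(long delayMillis, QueueHandle handle) {
      scheduler.schedule(() -> {
         if (closed) {
            // A failed session/consumer is recreated elsewhere and the new
            // instance schedules its own checks, so just stop rescheduling.
            return;
         }
         if (handle.getMessageCount() == 0) {
            flowCredit(handle.getCreditWindow()); // queue drained: flow a batch
         } else {
            scheduleCreditOnEmpty(delayMillis, handle); // backlog remains: poll again
         }
      }, delayMillis, TimeUnit.MILLISECONDS);
   }

   private void flowCredit(int credits) {
      // granting credit to the federation consumer is omitted in this sketch
   }

   void close() {
      closed = true;
      scheduler.shutdown();
   }
}
{code}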





Issue Time Tracking
---

Worklog Id: (was: 865975)
Time Spent: 1h 40m  (was: 1.5h)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity: flow a batch of 
> messages, then wait until the local queue again has capacity. In this way 
> there is no thundering herd effect, yet messages still migrate quickly once 
> there is demand.
> consumerWindowSize=0 is already in play for consumer.receive calls, and there 
> is already a defaultConsumerWindowSize for an address. These can be combined 
> to realise batchFederationOnCapacity semantics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (ARTEMIS-4314) Federation, support consumerWindowSize zero and federate in batches only when the local queue has excess capacity

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4314?focusedWorklogId=865973=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-865973
 ]

ASF GitHub Bot logged work on ARTEMIS-4314:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 10:05
Start Date: 16/Jun/23 10:05
Worklog Time Spent: 10m 
  Work Description: gtully commented on PR #4509:
URL: 
https://github.com/apache/activemq-artemis/pull/4509#issuecomment-1594440095

   thanks @clebertsuconic
   I have used the queue executor to sync on the metrics, great input, thanks. 
That will make it more responsive; missing a trigger would incur an 
unnecessary delay.
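
As a rough illustration of that idea (not the PR's code; queueExecutor and 
capacityCheck are assumed names), triggering the check via the queue's own 
executor could look like:

{code:java}
// Hedged illustration: run the capacity check on the queue's executor so it
// is serialized with the queue's metric updates and fires as soon as the
// queue drains, instead of waiting for the next periodic poll.
import java.util.concurrent.Executor;

final class ExecutorTriggerSketch {

   private final Executor queueExecutor;
   private final Runnable capacityCheck;

   ExecutorTriggerSketch(Executor queueExecutor, Runnable capacityCheck) {
      this.queueExecutor = queueExecutor;
      this.capacityCheck = capacityCheck;
   }

   // Called when the queue's metrics change; the executor guarantees the
   // check observes a consistent message count.
   void onQueueMetricsChanged() {
      queueExecutor.execute(capacityCheck);
   }
}
{code}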




Issue Time Tracking
---

Worklog Id: (was: 865973)
Time Spent: 1.5h  (was: 1h 20m)

> Federation, support consumerWindowSize zero and federate in batches only when 
> the local queue has excess capacity
> 
>
> Key: ARTEMIS-4314
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4314
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Federation
>Affects Versions: 2.28.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.29.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Dual queue federation, where clusters federate in both directions, can suffer 
> from message flip-flopping once the priority adjustment kicks in.
> If there is a large backlog, the lower priority federation consumer comes into 
> play once all of the local consumer credit is exhausted, and the backlog can 
> drain to the other cluster.
> If demand is low there, the process can repeat. Limiting the rate of the 
> federation consumer can help, but it is not ideal because when there is no 
> local demand, we want a high rate of migration.
>  
> A possible solution is to have the federation consumer manage its own credit 
> and only flow messages when the local queue has capacity: flow a batch of 
> messages, then wait until the local queue again has capacity. In this way 
> there is no thundering herd effect, yet messages still migrate quickly once 
> there is demand.
> consumerWindowSize=0 is already in play for consumer.receive calls, and there 
> is already a defaultConsumerWindowSize for an address. These can be combined 
> to realise batchFederationOnCapacity semantics.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-4270) Messages get lost when using multiple consumers with topic hierarchies

2023-06-16 Thread Moritz (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733433#comment-17733433
 ] 

Moritz commented on ARTEMIS-4270:
-

Any updates on this one?

> Messages get lost when using multiple consumers with topic hierarchies
> --
>
> Key: ARTEMIS-4270
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4270
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: JMS
>Affects Versions: 2.24.0
>Reporter: Moritz
>Priority: Major
> Attachments: topic-hierarchies-bug.zip
>
>
> There is an issue when we have the following setup:
>  * Shared durable consumer A listening to *news.#*
>  * Shared durable consumer B listening to *news.europe.#*
>  * Message M1 sent to *news.europe.sports*
>  * Message M2 sent to *news.europe*
> Expected behavior:
>  * A receives M1 and M2
>  * B receives M1 and M2
> Actual behavior:
>  * A receives M1 and M2
>  * B receives M1
> This happens when it is run with a clean Artemis, i.e. without any previous 
> data. If we run it a second time, B receives M1 and M2. When using 
> *consumer.receive()* it also works as expected.
>  
> This also affects at least version *3.0.0-SNAPSHOT*; however, I couldn't 
> select that version, so I chose the second version I've tested it on. The 
> attached project showcases the bug; I simply adjusted the example 
> {*}apache-artemis-3.0.0-SNAPSHOT/examples/features/standard/topic-hierarchies{*}.
> I couldn't test it with 2.29.0-SNAPSHOT since I would get exceptions 
> concerning the topic not being multicast (already with the original example).
>  
>  
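
For reference, the reported scenario maps to plain JMS 2.0 roughly as in the 
sketch below; the broker URL and subscription names are assumptions, not 
taken from the attached project.

{code:java}
// Hedged repro sketch of the reported topic-hierarchy scenario.
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public final class TopicHierarchyRepro {
   public static void main(String[] args) throws InterruptedException {
      ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (JMSContext context = cf.createContext()) {
         // Shared durable consumer A on news.#, consumer B on news.europe.#
         JMSConsumer a = context.createSharedDurableConsumer(context.createTopic("news.#"), "sub-a");
         JMSConsumer b = context.createSharedDurableConsumer(context.createTopic("news.europe.#"), "sub-b");
         a.setMessageListener(m -> System.out.println("A received " + m));
         b.setMessageListener(m -> System.out.println("B received " + m));

         // M1 to news.europe.sports, M2 to news.europe; per the report, B
         // should see both but only receives M1 on a clean broker.
         context.createProducer().send(context.createTopic("news.europe.sports"), "M1");
         context.createProducer().send(context.createTopic("news.europe"), "M2");

         Thread.sleep(2000); // give the listeners time to fire
      }
   }
}
{code}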



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733419#comment-17733419
 ] 

ASF subversion and git services commented on AMQ-9280:
--

Commit c5a88336fa1cf2d21c4ee4a5b6b36f58d23750c9 in activemq's branch 
refs/heads/main from JB Onofré
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=c5a88336fa ]

[AMQ-9280] Upgrade to commons-io 2.13.0


> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733420#comment-17733420
 ] 

ASF subversion and git services commented on AMQ-9280:
--

Commit 7ca1a0cb436813e0f212ec7548ab2c23d4ab10ca in activemq's branch 
refs/heads/main from JB Onofré
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=7ca1a0cb43 ]

Merge pull request #1033 from jbonofre/AMQ-9280

[AMQ-9280] Upgrade to commons-io 2.13.0

> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (AMQ-9283) Memory leak on STOMP transport when a client unsubscribes

2023-06-16 Thread Jira
Jean-Baptiste Onofré created AMQ-9283:
-

 Summary: Memory leak on STOMP transport when a client unsubscribes
 Key: AMQ-9283
 URL: https://issues.apache.org/jira/browse/AMQ-9283
 Project: ActiveMQ
  Issue Type: Bug
  Components: STOMP
Reporter: Jean-Baptiste Onofré
Assignee: Jean-Baptiste Onofré
 Fix For: 5.19.0, 5.17.5, 5.18.2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (AMQ-9281) Cleanup Camel dependencies

2023-06-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré updated AMQ-9281:
--
Fix Version/s: (was: 5.18.2)

> Cleanup Camel dependencies
> --
>
> Key: AMQ-9281
> URL: https://issues.apache.org/jira/browse/AMQ-9281
> Project: ActiveMQ
>  Issue Type: Task
>  Components: Camel
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0
>
>
> As we don't provide the activemq-camel component, I think we should remove all 
> references to Camel (maybe just keeping a few itests; I will check).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733422#comment-17733422
 ] 

ASF subversion and git services commented on AMQ-9280:
--

Commit 4f3f5985c96ac50097dc00f95739b6562ecbbed1 in activemq's branch 
refs/heads/activemq-5.17.x from JB Onofré
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=4f3f5985c9 ]

[AMQ-9280] Upgrade to commons-io 2.13.0

(cherry picked from commit c5a88336fa1cf2d21c4ee4a5b6b36f58d23750c9)


> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread Jira


 [ 
https://issues.apache.org/jira/browse/AMQ-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Baptiste Onofré resolved AMQ-9280.
---
Resolution: Fixed

> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17733421#comment-17733421
 ] 

ASF subversion and git services commented on AMQ-9280:
--

Commit 40a13bfa214c51bb1c43a3d04b8149f5bb5940f6 in activemq's branch 
refs/heads/activemq-5.18.x from JB Onofré
[ https://gitbox.apache.org/repos/asf?p=activemq.git;h=40a13bfa21 ]

[AMQ-9280] Upgrade to commons-io 2.13.0

(cherry picked from commit c5a88336fa1cf2d21c4ee4a5b6b36f58d23750c9)


> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-9280) Upgrade to commons-io 2.13.0

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-9280?focusedWorklogId=865967=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-865967
 ]

ASF GitHub Bot logged work on AMQ-9280:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 09:26
Start Date: 16/Jun/23 09:26
Worklog Time Spent: 10m 
  Work Description: jbonofre merged PR #1033:
URL: https://github.com/apache/activemq/pull/1033




Issue Time Tracking
---

Worklog Id: (was: 865967)
Time Spent: 20m  (was: 10m)

> Upgrade to commons-io 2.13.0
> 
>
> Key: AMQ-9280
> URL: https://issues.apache.org/jira/browse/AMQ-9280
> Project: ActiveMQ
>  Issue Type: Dependency upgrade
>Reporter: Jean-Baptiste Onofré
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.19.0, 5.17.5, 5.18.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4317) Invisible/Unreleased Openwire consumers in Artemis 2.27.x broker queues

2023-06-16 Thread Mohanavalli A (Jira)
Mohanavalli A created ARTEMIS-4317:
--

 Summary: Invisible/Unreleased Openwire consumers in Artemis 2.27.x 
broker queues
 Key: ARTEMIS-4317
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4317
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: OpenWire
Affects Versions: 2.27.0
Reporter: Mohanavalli A


We have a Spring Boot + Camel JMS + ActiveMQ Client based consumer connecting 
to the Artemis 2.27.x broker and consuming messages. When there is a high 
inflow of messages to the queue where the OpenWire consumer is connected, the 
consumer count on the queue rises above the actual number of consumers 
connected to the queue, and these invisible or unreleased consumers block the 
actual consumers from consuming messages. The issue can be replicated on 
Artemis 2.28.0 as well.
There was no issue in Artemis 2.26.0.

When the same application with a similar load is tested on Red Hat AMQ 7.11.0, 
the issue does not occur.
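
For context, the consumer side described here is roughly of this shape (a 
hedged sketch; the endpoint URI, queue name and concurrency are assumptions, 
not taken from the reporter's application):

{code:java}
// Hedged illustration of a Camel JMS consumer route of the kind described.
import org.apache.camel.builder.RouteBuilder;

public class OrdersConsumerRoute extends RouteBuilder {
   @Override
   public void configure() throws Exception {
      // Each concurrent consumer appears as one consumer on the broker queue;
      // the report is that the broker-side count climbs past this configured
      // number under a high inflow of messages.
      from("jms:queue:orders?concurrentConsumers=5")
         .log("received ${body}");
   }
}
{code}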



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4316) Example HTML does not render correctly

2023-06-16 Thread Geert Schuring (Jira)
Geert Schuring created ARTEMIS-4316:
---

 Summary: Example HTML does not render correctly
 Key: ARTEMIS-4316
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4316
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.28.0
Reporter: Geert Schuring
 Fix For: 2.29.0
 Attachments: image-2023-06-16-10-43-45-924.png

The HTML files in the examples do not render correctly, especially the code 
parts. For example, the readme in the CDI example:

apache-artemis-2.28.0/examples/features/standard/cdi/readme.html

!image-2023-06-16-10-43-45-924.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work logged] (AMQ-8049) Failed to start Apache ActiveMQ (mKahaDB / JMX)

2023-06-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-8049?focusedWorklogId=865934=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-865934
 ]

ASF GitHub Bot logged work on AMQ-8049:
---

Author: ASF GitHub Bot
Created on: 16/Jun/23 06:48
Start Date: 16/Jun/23 06:48
Worklog Time Spent: 10m 
  Work Description: lraczko commented on PR #984:
URL: https://github.com/apache/activemq/pull/984#issuecomment-1594191204

   @jbonofre - Did you have time to take a look?




Issue Time Tracking
---

Worklog Id: (was: 865934)
Time Spent: 1h  (was: 50m)

> Failed to start Apache ActiveMQ (mKahaDB / JMX)
> ---
>
> Key: AMQ-8049
> URL: https://issues.apache.org/jira/browse/AMQ-8049
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.12
> Environment: - Ubuntu 18.04.4 LTS (x86_64, server)
>  - OpenJDK 11.0.7 (64-Bit Server VM)
>  - ActiveMQ 5.15.12 (vanilla; tar.gz)
>Reporter: Guillermo Grandes
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Summary:
> {noformat}
> 2020-10-06 15:02:03,889 | ERROR | Failed to start Apache ActiveMQ (localhost, null) | org.apache.activemq.broker.BrokerService | main
> javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=localhost,service=PersistenceAdapter,instanceName=KahaDBPersistenceAdapter[/opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e_Index_/opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e]
> {noformat}
> This is the full-log (start-to-fail-and-shutdown):
> {code:none}
> 2020-10-06 15:01:58,507 | INFO  | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1@56620197: startup date [Tue Oct 06 15:01:58 CEST 2020]; root of context hierarchy | org.apache.activemq.xbean.XBeanBrokerFactory$1 | main
> 2020-10-06 15:02:02,417 | INFO  | Using Persistence Adapter: MultiKahaDBPersistenceAdapter[/opt/activemq/data/kahadb][KahaDBPersistenceAdapter[/opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e], KahaDBPersistenceAdapter[/opt/activemq/data/kahadb/topic#3a#2f#2fActiveMQ.#3e]] | org.apache.activemq.broker.BrokerService | main
> 2020-10-06 15:02:02,638 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:02,823 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:02,978 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,007 | INFO  | Stopping async queue tasks | org.apache.activemq.store.kahadb.KahaDBStore | main
> 2020-10-06 15:02:03,010 | INFO  | Stopping async topic tasks | org.apache.activemq.store.kahadb.KahaDBStore | main
> 2020-10-06 15:02:03,095 | INFO  | Stopped KahaDB | org.apache.activemq.store.kahadb.KahaDBStore | main
> 2020-10-06 15:02:03,257 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,480 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,575 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,681 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,798 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,851 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
> 2020-10-06 15:02:03,859 | INFO  | Page File: /opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e/db.data. Recovering pageFile free list due to prior unclean shutdown.. | org.apache.activemq.store.kahadb.disk.page.PageFile | KahaDB Index Free Page Recovery
> 2020-10-06 15:02:03,889 | ERROR | Failed to start Apache ActiveMQ (localhost, null) | org.apache.activemq.broker.BrokerService | main
> javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=localhost,service=PersistenceAdapter,instanceName=KahaDBPersistenceAdapter[/opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e_Index_/opt/activemq/data/kahadb/queue#3a#2f#2fActiveMQ.#3e]
> at java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)[:]
> at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1855)[:]
> at java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:955)[:]
> at 
>