[jira] [Created] (ARTEMIS-4664) autoCreatedResource can get removed while receiving batch of messages

2024-02-29 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4664:
---

 Summary: autoCreatedResource can get removed while receiving batch 
of messages
 Key: ARTEMIS-4664
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4664
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


There is a very small window where an auto created resource can get 
auto-removed while receiving a batch of new messages.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4637) Allow unordered xml conf elements for clusters and bridges

2024-02-06 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4637:
---

 Summary: Allow unordered xml conf elements for clusters and bridges
 Key: ARTEMIS-4637
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4637
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


This allows any order of xml configuration elements within cluster and 
core-bridge config blocks



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4527) Redistributor race when consumerCount reaches 0 in cluster

2023-12-06 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4527:
---

 Summary: Redistributor race when consumerCount reaches 0 in cluster
 Key: ARTEMIS-4527
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4527
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


This is a very rare bug caused by cluster notifications arriving in the wrong 
order in some very specific circumstances



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4510) Add auto-create-destination logic to diverts

2023-11-17 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4510:
---

 Summary: Add auto-create-destination logic to diverts
 Key: ARTEMIS-4510
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4510
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


This enables the use of dynamic routing decisions within the transformer by 
setting the message address. It also covers a rare problem where, if any of the 
forwarding addresses is removed at runtime (for example by auto-delete), the 
message would get silently dropped.
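
As a rough illustration of the dynamic routing mentioned above (not the actual 
patch), a divert Transformer could pick the target address per message. This 
sketch assumes the org.apache.activemq.artemis.core.server.transformer.Transformer 
interface and a hypothetical "routeTo" property set by the producer:
{code:java}
import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.core.server.transformer.Transformer;

// Illustrative sketch only: route each message to an address taken from a
// hypothetical "routeTo" property, keeping the original address otherwise.
public class DynamicRoutingTransformer implements Transformer {
   @Override
   public Message transform(Message message) {
      String target = message.getStringProperty("routeTo"); // assumed custom property
      if (target != null && !target.isEmpty()) {
         message.setAddress(target);
      }
      return message;
   }
}{code}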



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4498) Expose internal queues for management and observability

2023-11-13 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4498:
---

 Summary: Expose internal queues for management and observability
 Key: ARTEMIS-4498
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4498
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


This was enabled by default in previous versions of the broker and was quite 
good for troubleshooting and observability purposes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4460) Core client reconnect/failover loop because of incompatible versions

2023-10-16 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4460:
---

 Summary: Core client reconnect/failover loop because of 
incompatible versions
 Key: ARTEMIS-4460
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4460
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


If reconnecting or failing over to a broker that is not compatible with the 
current client version, an infinite reconnect/failover loop can occur (even if 
the max retry/reconnect/failover attempts are set to some finite value).

This can be triggered by a broker upgrade followed by a rollback, for example.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4455) Improve message redistribution balance for OFF_WITH_REDISTRIBUTION

2023-10-06 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4455:
---

 Summary: Improve message redistribution balance for 
OFF_WITH_REDISTRIBUTION
 Key: ARTEMIS-4455
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4455
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


This benefits the case where messages arrive on a clustered node without a 
local consumer but with multiple remote targets.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4450) Auto-deleted clustered destinations can cause message loss

2023-10-04 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-4450:

Summary: Auto-deleted clustered destinations can cause message loss  (was: 
auto-delete clustered destinations can cause message loss)

> Auto-deleted clustered destinations can cause message loss
> --
>
> Key: ARTEMIS-4450
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4450
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>
> If a destination has a remote binding but not a local one, certain 
> MessageLoadBalancingTypes can cause message loss, transparent to the producer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4450) auto-delete clustered destinations can cause message loss

2023-10-04 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4450:
---

 Summary: auto-delete clustered destinations can cause message loss
 Key: ARTEMIS-4450
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4450
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


If a destination has a remote binding but not a local one, certain 
MessageLoadBalancingTypes can cause message loss, transparent to the producer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4430) Generalize and extend message compression

2023-09-14 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4430:
---

 Summary: Generalize and extend message compression
 Key: ARTEMIS-4430
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4430
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


The current implementation of message compression only applies to large messages 
sent directly from a client. It would be nice if there was an option to do this 
for any type of message above a certain size. Something like the following 
client option: 
{code:java}
compressMessagesOver=n{code}
Ideally this would be available as an option for "core bridges" as well, since 
one of their use cases is sending messages over high-latency and/or 
low-bandwidth links (WAN), where compression makes a lot of sense. Diverts might 
be a good candidate for this as well, enabling message compression on behalf of 
clients that are otherwise unable to do it themselves. 

Once in place, it should work transparently for any consumer, given that 
decompression happens in the ClientConsumer interfacing with the broker rather 
than in the protocol implementation.

I have been able to get parts of this to work but have run into problems 
compressing "ServerLargeMessage" messages. I'm hoping someone with more 
intimate knowledge of the broker internals can figure something out there.

A variant or addition might be an address setting to do this compression when 
receiving the message on the broker instead.
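
For reference, a minimal sketch of how the existing large-message compression is 
switched on from a core JMS client (compressLargeMessages and minLargeMessageSize 
are existing URL parameters); the compressMessagesOver option above is only a 
proposal and does not exist yet:
{code:java}
import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class CompressedClientExample {
   public static void main(String[] args) throws JMSException {
      // Today only messages above minLargeMessageSize (here 100 KiB) are compressed;
      // the proposed compressMessagesOver=n would extend this to regular messages.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?compressLargeMessages=true&minLargeMessageSize=102400");
      try (Connection connection = cf.createConnection()) {
         connection.start();
         // ... create sessions and producers as usual; decompression is transparent
         // to consumers because it happens in the ClientConsumer.
      }
   }
}{code}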



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4325) Ability to failback after failover

2023-06-22 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-4325:

Description: 
This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
broker.

The primary use case for this is to more easily maintain a good distribution of 
consumers and producers across a broker cluster over time.

 

The intended behavior for my own purposes would be something like:

 - Ensure an even distribution across the broker cluster when first connecting 
a high throughput client.
 - When a broker becomes unavailable (network outage, patch, crash, whatever), 
move affected client workers to another broker in the cluster to maintain 
throughput.
 - When the original broker comes back, move the recently failed over resources 
to the original broker again.

  was:
This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
broker.

The primary use case for this is to more easily maintain a good distribution of 
consumers and producers across a broker cluster.

 

The intended behavior for my own purposes would be something like:

 - Ensure an even distribution across the broker cluster when first connecting 
a high throughput client.
 - When a broker becomes unavailable (network outage, patch, crash, whatever), 
move affected client workers to another broker in the cluster to maintain 
throughput.
 - When the original broker comes back, move the recently failed over resources 
to the original broker again.


> Ability to failback after failover
> --
>
> Key: ARTEMIS-4325
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4325
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Anton Roskvist
>Priority: Major
>
> This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
> broker.
> The primary use case for this is to more easily maintain a good distribution 
> of consumers and producers across a broker cluster over time.
>  
> The intended behavior for my own purposes would be something like:
>  - Ensure an even distribution across the broker cluster when first 
> connecting a high throughput client.
>  - When a broker becomes unavailable (network outage, patch, crash, 
> whatever), move affected client workers to another broker in the cluster to 
> maintain throughput.
>  - When the original broker comes back, move the recently failed over 
> resources to the original broker again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4325) Ability to failback after failover

2023-06-22 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-4325:

Description: 
This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
broker.

The primary use case for this is to more easily maintain a good distribution of 
consumers and producers across a broker cluster.

 

The intended behavior for my own purposes would be something like:

 - Ensure an even distribution across the broker cluster when first connecting 
a high throughput client.
 - When a broker becomes unavailable (network outage, patch, crash, whatever), 
move affected client workers to another broker in the cluster to maintain 
throughput.
 - When the original broker comes back, move the recently failed over resources 
to the original broker again.

  was:
This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
broker.

The primary use case for this is to more easily maintain a good distribution of 
consumers and producers across a broker cluster.

The intended behavior for my own purposes would be something like:
 
- Ensure an even distribution across the broker cluster when first connecting a 
high throughput client.
 - When a broker becomes unavailable (network outage, patch, crash, whatever), 
move affected client workers to another broker in the cluster to maintain 
throughput.
 - When the original broker comes back, move the recently failed over resources 
to the original broker again.


> Ability to failback after failover
> --
>
> Key: ARTEMIS-4325
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4325
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Anton Roskvist
>Priority: Major
>
> This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
> broker.
> The primary use case for this is to more easily maintain a good distribution 
> of consumers and producers across a broker cluster.
>  
> The intended behavior for my own purposes would be something like:
>  - Ensure an even distribution across the broker cluster when first 
> connecting a high throughput client.
>  - When a broker becomes unavailable (network outage, patch, crash, 
> whatever), move affected client workers to another broker in the cluster to 
> maintain throughput.
>  - When the original broker comes back, move the recently failed over 
> resources to the original broker again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4325) Ability to failback after failover

2023-06-22 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4325:
---

 Summary: Ability to failback after failover
 Key: ARTEMIS-4325
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4325
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Reporter: Anton Roskvist


This would be similar to the "priorityBackup" functionality in the ActiveMQ 5 
broker.

The primary use case for this is to more easily maintain a good distribution of 
consumers and producers across a broker cluster.

The intended behavior for my own purposes would be something like:
 
- Ensure an even distribution across the broker cluster when first connecting a 
high throughput client.
 - When a broker becomes unavailable (network outage, patch, crash, whatever), 
move affected client workers to another broker in the cluster to maintain 
throughput.
 - When the original broker comes back, move the recently failed over resources 
to the original broker again.
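
For comparison, the referenced ActiveMQ 5 behavior is driven entirely by the 
classic client's failover transport; priorityBackup is an existing option there, 
and an equivalent for the Artemis core client is what this issue asks for. A 
minimal sketch of the ActiveMQ 5 side:
{code:java}
import javax.jms.Connection;
import javax.jms.JMSException;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PriorityBackupExample {
   public static void main(String[] args) throws JMSException {
      // ActiveMQ 5 (OpenWire) client: prefer broker1, fail over to broker2 when
      // broker1 is down, and automatically fail back once broker1 returns.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "failover:(tcp://broker1:61616,tcp://broker2:61616)?priorityBackup=true");
      Connection connection = cf.createConnection();
      connection.start();
      // ... consumers and producers keep running across the failover/failback
      connection.close();
   }
}{code}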



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4216) Queue with autoDelete might get reaped on server start if containing only paged messages

2023-03-22 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4216:
---

 Summary: Queue with autoDelete might get reaped on server start if 
containing only paged messages
 Key: ARTEMIS-4216
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4216
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


Queue with autoDelete might get reaped on server start if containing only paged 
messages



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4215) JournalFlush might never happen when journal-sync-* is false

2023-03-22 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4215:
---

 Summary: JournalFlush might never happen when journal-sync-* is 
false
 Key: ARTEMIS-4215
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4215
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


When journal-sync-* is false, flushes to the journal will only ever happen once 
the buffer is full or when the broker is cleanly shut down, regardless of how 
much time passes. This means that if the broker is running with these settings, 
a crash/kill -9 is guaranteed to lose messages, no matter how long ago the 
messages were sent.

This started with version 2.18.0



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4186) Ability to set compressionLevel for compressLargeMessages

2023-02-28 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4186:
---

 Summary: Ability to set compressionLevel for compressLargeMessages
 Key: ARTEMIS-4186
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4186
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4185) Resending compressed message uncompressed throws exception in consumer

2023-02-27 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4185:
---

 Summary: Resending compressed message uncompressed throws 
exception in consumer
 Key: ARTEMIS-4185
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4185
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


Resending a compressed message uncompressed throws an exception in the consumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4184) Bridges with concurrency not checked/cleared properly on config reload

2023-02-27 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4184:
---

 Summary: Bridges with concurrency not checked/cleared properly on 
config reload
 Key: ARTEMIS-4184
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4184
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


Bridges with concurrency are not checked/cleared properly on config reload



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4091) Make scaleDown target more deterministic

2022-11-15 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4091:
---

 Summary: Make scaleDown target more deterministic
 Key: ARTEMIS-4091
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4091
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


This change makes scaleDown prefer the scaleDown connector as the scaleDown 
target, meaning you can more easily predict which node the scale-down will move 
messages to, especially when issued through management.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4084) Rollbacking massive amounts of messages might crash broker

2022-11-07 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4084:
---

 Summary: Rollbacking massive amounts of messages might crash broker
 Key: ARTEMIS-4084
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4084
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


The Critical Analyzer triggers, but even if it is set to LOG or disabled, the 
broker is put in such a bad state that it becomes unresponsive until restarted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4016) Bridges created by management operations are removed on restart and config reload

2022-09-26 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-4016:
---

 Summary: Bridges created by management operations are removed on 
restart and config reload
 Key: ARTEMIS-4016
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4016
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


Bridges created by management operations are removed on restart and config 
reload



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-3973) High pressure on paging can leave address stuck in paging

2022-09-06 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3973:
---

 Summary: High pressure on paging can leave address stuck in paging
 Key: ARTEMIS-3973
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3973
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


When sending a significant load through the paging store, there are occasional 
issues that cause the address to get stuck in paging, and it may end up in a 
generally inconsistent state (incorrect metrics and unreadable messages, for 
instance).

I have seen these stack traces associated with this issue:
{code:java}
WARN  [org.apache.activemq.artemis.core.server] AMQ25: Sending unexpected 
exception to the client: java.lang.IllegalStateException: Unable to insert
at 
io.netty.util.collection.IntObjectHashMap.put(IntObjectHashMap.java:141) 
[netty-common-4.1.79.Final.jar:4.1.79.Final]
at 
org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl$PageCursorInfo.internalAddACK(PageSubscriptionImpl.java:1192)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl$PageCursorInfo.addACK(PageSubscriptionImpl.java:1174)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl.processACK(PageSubscriptionImpl.java:961)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl$PageCursorTX.afterCommit(PageSubscriptionImpl.java:1260)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl.afterCommit(TransactionImpl.java:564)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl$2.done(TransactionImpl.java:305)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:195)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl.executeOnCompletion(OperationContextImpl.java:141)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.afterCompleteOperations(AbstractJournalStorageManager.java:338)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl.commit(TransactionImpl.java:296)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl.commit(TransactionImpl.java:247)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.server.impl.ServerSessionImpl.commit(ServerSessionImpl.java:1320)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.slowPacketHandler(ServerSessionPacketHandler.java:474)
 [artemis-server-2.25.0.jar:2.25.0]
at 
org.apache.activemq.artemis.core.protocol.core.ServerSessionPacketHandler.onMessagePacket(ServerSessionPacketHandler.java:298)
 [artemis-server-2.25.0.jar:2.25.0]
at org.apache.activemq.artemis.utils.actors.Actor.doTask(Actor.java:33) 
[artemis-commons-2.25.0.jar:]
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:67)
 [artemis-commons-2.25.0.jar:]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:56)
 [artemis-commons-2.25.0.jar:]
at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
 [artemis-commons-2.25.0.jar:]
at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:67)
 [artemis-commons-2.25.0.jar:]
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
 [java.base:]
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 [java.base:]
at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 [artemis-commons-2.25.0.jar:] {code}
and
{code:java}
WARN  
[org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl] 
Index 16384 out of bounds for length 16384: 
java.lang.ArrayIndexOutOfBoundsException: Index 16384 out of bounds for length 
16384
at 
io.netty.util.collection.IntObjectHashMap.indexOf(IntObjectHashMap.java:345) 
[netty-common-4.1.79.Final.jar:4.1.79.Final]
at 
io.netty.util.collection.IntObjectHashMap.get(IntObjectHashMap.java:114) 
[netty-common-4.1.79.Final.jar:4.1.79.Final]
at 
org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl$PageCursorInfo.isAck(PageSubscriptionImpl.java:1061)
 

[jira] [Created] (ARTEMIS-3960) Messages received over a bridge do not populate the duplicateCache

2022-08-29 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3960:
---

 Summary: Messages received over a bridge do not populate the 
duplicateCache
 Key: ARTEMIS-3960
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3960
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


Messages received over a bridge do not populate the duplicateCache.
If the bridge is configured with duplicate detection = true, the forwarded 
messages will not populate the duplicateCache of the receiving address.

This affects messages sent over a cluster connection as well.
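
For context, the duplicateCache in question is keyed on the _AMQ_DUPL_ID 
property; a minimal sketch of a JMS producer populating it directly (the normal 
client path, as opposed to the bridge path this issue is about):
{code:java}
import java.util.UUID;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.artemis.api.core.Message;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DuplicateDetectionExample {
   public static void main(String[] args) throws JMSException {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageProducer producer = session.createProducer(session.createQueue("TEST"));
         TextMessage message = session.createTextMessage("payload");
         // _AMQ_DUPL_ID is what the receiving address' duplicateCache is keyed on;
         // a resend carrying the same value is dropped as a duplicate.
         message.setStringProperty(Message.HDR_DUPLICATE_DETECTION_ID.toString(),
               UUID.randomUUID().toString());
         producer.send(message);
      }
   }
}{code}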



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3943) Clients can cause broker OOM when reading paged messages

2022-08-19 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17581781#comment-17581781
 ] 

Anton Roskvist commented on ARTEMIS-3943:
-

Thanks [~clebertsuconic], looking forward to 2.25.0.
I should add that overall the changes to paging seem really nice!

> Clients can cause broker OOM when reading paged messages
> 
>
> Key: ARTEMIS-3943
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3943
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.24.0
>Reporter: Anton Roskvist
>Priority: Major
>
> Starting with version 2.24.0 of the broker, a single client can cause the 
> broker to OOM and terminate if an address is paging and holds enough messages 
> to collectively fill the broker heap.
> To reproduce:
> {code:java}
> $ apache-artemis-2.24.0/bin/artemis create broker
> $ broker/bin/artemis-service start
> $ broker/bin/artemis producer \
>   --destination TEST \
>   --text-size 10 \
>   --message-count 10{code}
> Optional:
> Tweak "message-count" + "text-size" above together with the "-Xmx"-property 
> in artemis.profile to be able to trigger it faster
> Kill broker with:
> {code:java}
> $ broker/bin/artemis consumer \
>    --destination TEST \
>    --message-count 1 \
>    --url "(tcp://localhost:61616)?consumerWindowSize=-1" \
>    --sleep 6{code}
> Not saying this is demonstrating proper usage of the client, but this was the 
> easiest way I could think of to reproduce the problem.
> Running the same procedure against an older version of the broker results in 
> no such issue



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3943) Clients can cause broker OOM when reading paged messages

2022-08-18 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17581401#comment-17581401
 ] 

Anton Roskvist commented on ARTEMIS-3943:
-

Not from my testing at least... running the test described in this ticket 
against versions < 2.24.0, the client happily processes one message a minute (or 
whatever the sleep timer is configured to), whereas on 2.24.0 the broker hits an 
OOM after roughly 5 seconds of the client running.

This is thrown in the log:
https://pastebin.mozilla.org/3c20JZvG/raw

> Clients can cause broker OOM when reading paged messages
> 
>
> Key: ARTEMIS-3943
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3943
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.24.0
>Reporter: Anton Roskvist
>Priority: Major
>
> Starting with version 2.24.0 of the broker, a single client can cause the 
> broker to OOM and terminate if an address is paging and holds enough messages 
> to collectively fill the broker heap.
> To reproduce:
> {code:java}
> $ apache-artemis-2.24.0/bin/artemis create broker
> $ broker/bin/artemis-service start
> $ broker/bin/artemis producer \
>   --destination TEST \
>   --text-size 10 \
>   --message-count 10{code}
> Optional:
> Tweak "message-count" + "text-size" above together with the "-Xmx"-property 
> in artemis.profile to be able to trigger it faster
> Kill broker with:
> {code:java}
> $ broker/bin/artemis consumer \
>    --destination TEST \
>    --message-count 1 \
>    --url "(tcp://localhost:61616)?consumerWindowSize=-1" \
>    --sleep 6{code}
> Not saying this is demonstrating proper usage of the client, but this was the 
> easiest way I could think of to reproduce the problem.
> Running the same procedure against an older version of the broker results in 
> no such issue



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3943) Clients can cause broker OOM when reading paged messages

2022-08-18 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17581394#comment-17581394
 ] 

Anton Roskvist commented on ARTEMIS-3943:
-

Hi,

Fortunately I have not been affected by this "in the wild" but found it during 
some unrelated testing... the client in this case is misconfigured/broken (on 
purpose), but still... a single client should not be able to bring down the 
entire broker that easily, especially considering it's not doing anything "too 
out there" to begin with?

I would imagine the same thing could happen for properly configured clients as 
well, say from multiple addresses and consumers in aggregate when recovering 
from global-paging for instance?

Regardless, if this ticket is a duplicate and already addressed, then it's all 
good! I think it would make sense to make "paging-flow-control" the default, 
since that sounds "safer" and more closely resembles how previous versions of 
the broker behave, right?

Br,

Anton

> Clients can cause broker OOM when reading paged messages
> 
>
> Key: ARTEMIS-3943
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3943
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.24.0
>Reporter: Anton Roskvist
>Priority: Major
>
> Starting with version 2.24.0 of the broker, a single client can cause the 
> broker to OOM and terminate if an address is paging and holds enough messages 
> to collectively fill the broker heap.
> To reproduce:
> {code:java}
> $ apache-artemis-2.24.0/bin/artemis create broker
> $ broker/bin/artemis-service start
> $ broker/bin/artemis producer \
>   --destination TEST \
>   --text-size 10 \
>   --message-count 10{code}
> Optional:
> Tweak "message-count" + "text-size" above together with the "-Xmx"-property 
> in artemis.profile to be able to trigger it faster
> Kill broker with:
> {code:java}
> $ broker/bin/artemis consumer \
>    --destination TEST \
>    --message-count 1 \
>    --url "(tcp://localhost:61616)?consumerWindowSize=-1" \
>    --sleep 6{code}
> Not saying this is demonstrating proper usage of the client, but this was the 
> easiest way I could think of to reproduce the problem.
> Running the same procedure against an older version of the broker results in 
> no such issue



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-3943) Clients can cause broker OOM when reading paged messages

2022-08-17 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3943:
---

 Summary: Clients can cause broker OOM when reading paged messages
 Key: ARTEMIS-3943
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3943
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.24.0
Reporter: Anton Roskvist


Starting with version 2.24.0 of the broker, a single client can cause the 
broker to OOM and terminate if an address is paging and holds enough messages 
to collectively fill the broker heap.

To reproduce:
{code:java}
$ apache-artemis-2.24.0/bin/artemis create broker
$ broker/bin/artemis-service start
$ broker/bin/artemis producer \
  --destination TEST \
  --text-size 10 \
  --message-count 10{code}
Optional:
Tweak "message-count" + "text-size" above together with the "-Xmx"-property in 
artemis.profile to be able to trigger it faster

Kill broker with:
{code:java}
$ broker/bin/artemis consumer \
   --destination TEST \
   --message-count 1 \
   --url "(tcp://localhost:61616)?consumerWindowSize=-1" \
   --sleep 6{code}

Not saying this is demonstrating proper usage of the client, but this was the 
easiest way I could think of to reproduce the problem.
Running the same procedure against an older version of the broker results in no 
such issue
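
The same reproduction expressed as a plain JMS client, a sketch assuming the 
core client's consumerWindowSize URL parameter: -1 disables client-side flow 
control, so the broker keeps depaging into the consumer's buffer while the 
consumer stalls.
{code:java}
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SlowConsumerRepro {
   public static void main(String[] args) throws JMSException, InterruptedException {
      // consumerWindowSize=-1 => unbounded client buffer, so the broker streams
      // paged messages to this consumer regardless of how slowly they are consumed.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?consumerWindowSize=-1");
      try (Connection connection = cf.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         MessageConsumer consumer = session.createConsumer(session.createQueue("TEST"));
         consumer.receive();           // take one message...
         Thread.sleep(60_000);         // ...then stall, like the CLI consumer's --sleep
      }
   }
}{code}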



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-3933) ScaleDown NPE on DLA resources with multiple destinations

2022-08-12 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3933:

Description: If using auto created dead letter resources and multiple 
destinations are placed under a common DLA address then scaleDown will fail if 
the address is paging and leave the broker in a "half dead" state. No message 
loss.  (was: If using auto created dead letter resources and multiple 
destinations are placed under a common DLA address then scaleDown will fail and 
leave broker in a "half dead" state. No message loss.)

> ScaleDown NPE on DLA resources with multiple destinations
> -
>
> Key: ARTEMIS-3933
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3933
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>
> If using auto created dead letter resources and multiple destinations are 
> placed under a common DLA address then scaleDown will fail if the address is 
> paging and leave the broker in a "half dead" state. No message loss.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-3933) ScaleDown NPE on DLA resources with multiple destinations

2022-08-11 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3933:
---

 Summary: ScaleDown NPE on DLA resources with multiple destinations
 Key: ARTEMIS-3933
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3933
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


If using auto-created dead letter resources and multiple destinations are 
placed under a common DLA address, then scaleDown will fail and leave the broker 
in a "half dead" state. No message loss.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3439) CLI commands leave empty management addresses around

2022-08-08 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1757#comment-1757
 ] 

Anton Roskvist commented on ARTEMIS-3439:
-

Agreed, I cannot see this behavior on recent versions either, thanks

> CLI commands leave empty management addresses around
> 
>
> Key: ARTEMIS-3439
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3439
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.18.0
>Reporter: Anton Roskvist
>Priority: Minor
>
> *This might affect more addresses than I have yet to notice* but starting 
> with 2.18.0 each CLI command issued to the broker leaves "empty" addresses 
> behind.
> Steps to reproduce:
>  Create a broker
>  Run any command (I'd recommend "bin/artemis address show" a few times for a 
> nice visual)
> Does not seem to cause any serious issues but gives address listings and the 
> web console a cluttered look.
> Addresses are cleared by a restart



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-3439) CLI commands leave empty management addresses around

2022-06-16 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17554999#comment-17554999
 ] 

Anton Roskvist commented on ARTEMIS-3439:
-

Thanks, but I'm aware of that. My thought was that, to someone new to the 
broker, this default behavior makes the console and some of the CLI tools look 
very messy. I agree with not having "auto-delete = true" as the default 
configuration, but perhaps that should not apply to internal and management 
addresses?

> CLI commands leave empty management addresses around
> 
>
> Key: ARTEMIS-3439
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3439
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.18.0
>Reporter: Anton Roskvist
>Priority: Minor
>
> *This might affect more addresses than I have yet to notice* but starting 
> with 2.18.0 each CLI command issued to the broker leaves "empty" addresses 
> behind.
> Steps to reproduce:
>  Create a broker
>  Run any command (I'd recommend "bin/artemis address show" a few times for a 
> nice visual)
> Does not seem to cause any serious issues but gives address listings and the 
> web console a cluttered look.
> Addresses are cleared by a restart



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (ARTEMIS-3840) Core bridges with concurrency > 1 will get removed on configuration reload

2022-05-18 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3840:
---

 Summary: Core bridges with concurrency > 1 will get removed on 
configuration reload
 Key: ARTEMIS-3840
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3840
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


If a core bridge is set up with concurrency > 1, then any configuration reload 
(even the one adding the bridge) will cause it to be removed again right away.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (ARTEMIS-3834) Include paged messages when running "Send messages to DLA"-job

2022-05-16 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3834:
---

 Summary: Include paged messages when running "Send messages to 
DLA"-job
 Key: ARTEMIS-3834
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3834
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


Currently this operation does not include paged messages so it has to be run 
multiple times



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (ARTEMIS-3827) OpenWire - Anonymous producers risk losing their sent messages

2022-05-12 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3827:
---

 Summary: OpenWire - Anonymous producers risk losing their sent 
messages
 Key: ARTEMIS-3827
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3827
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


If an anonymous producer sends messages to a destination that is not 
auto-created (address + queue) then message sends will get acked/committed by 
the broker without getting stored.
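
The "anonymous producer" here is the standard JMS pattern where the destination 
is supplied per send rather than when the producer is created; a minimal sketch 
with the OpenWire (ActiveMQ 5) client:
{code:java}
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class AnonymousProducerExample {
   public static void main(String[] args) throws JMSException {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      Connection connection = cf.createConnection();
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      MessageProducer producer = session.createProducer(null); // anonymous: no fixed destination
      Destination target = session.createQueue("NOT_YET_CREATED");
      // With this bug, the send is acked even though the destination is never
      // auto-created, so the message is silently lost.
      producer.send(target, session.createTextMessage("payload"));
      connection.close();
   }
}{code}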



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (ARTEMIS-3805) Change default Bridge Producer Window Size to 1MB

2022-05-05 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17532227#comment-17532227
 ] 

Anton Roskvist commented on ARTEMIS-3805:
-

Hello [~clebertsuconic] 

This seems to introduce an issue for me in a testing environment... messages 
get "stuck" on $.artemis.internal...-queues. Setting the previous default 
(producer-window-size=-1) resolves the issue and message flow resumes.

It might be because some of the messages being forwarded in my case exceed the 
1MiB window size, and I guess smaller messages sent after those get stuck 
"behind" the large ones.

Again, setting the previous default solves it for me, but if this is expected 
behavior I guess it should be mentioned in the patch notes once 2.22.0 is 
released?

> Change default Bridge Producer Window Size to 1MB
> -
>
> Key: ARTEMIS-3805
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3805
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Clebert Suconic
>Assignee: Clebert Suconic
>Priority: Major
> Fix For: 2.22.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default producer value for a Bridge (clustered or not) is -1, meaning 
> unlimited.
> We had seen scenarios where the target and source runs out of memory when 
> running on slow networking or disk.
> I have looked into changing the implementation to back pressure the 
> networking, however values are added into an Executor (through an Actor), and 
> we the alternate back pressure would mean to add a new value to this 
> executor, binding it towards the network.
> Which is a whole circle round back to the same problem... managing credits.
> instead of adding a new value, I will change the default value for Bridges 
> which would have the same impact.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (ARTEMIS-3780) OpenWire messageConverter throws exceptions for all user properties

2022-04-19 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17524245#comment-17524245
 ] 

Anton Roskvist commented on ARTEMIS-3780:
-

Converting to String beforehand to avoid the exception.
This shows some performance increase when running a producer/consumer pair 
sending and receiving 100k messages 5 times, alternating between setting 8 
custom headers and none.



*Before*:
8 user properties: 
*Average: 64.236*
Each run: 68.30, 60.74, 62.86, 65.39, 63.89

0 user properties: 
*Average: 54.862*
Each run: 53.55, 52.55, 56.24, 61.14, 50.83

*After*:
8 user properties: 
*Average: 52.944*
Each run: 49.48, 57.12, 53.84, 51.66, 52.62

0 user properties: 
*Average: 52.1*
Each run: 50.45, 52.28, 50.23, 51.69, 55.85
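
A rough sketch of the conversion described above; it assumes the OpenWire user 
property values arrive as org.fusesource.hawtbuf.UTF8Buffer instances and simply 
normalizes them to String before they reach the core message, so the type 
converter never has to throw:
{code:java}
import java.util.HashMap;
import java.util.Map;

import org.fusesource.hawtbuf.UTF8Buffer;

public class PropertyConversionSketch {
   // Illustrative only, not the actual patch: pre-convert UTF8Buffer values to
   // String so setting them on the core message never hits an exception path.
   static Map<String, Object> normalize(Map<String, Object> openWireProperties) {
      Map<String, Object> converted = new HashMap<>(openWireProperties.size());
      openWireProperties.forEach((key, value) ->
            converted.put(key, value instanceof UTF8Buffer ? value.toString() : value));
      return converted;
   }
}{code}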

> OpenWire messageConverter throws exceptions for all user properties
> ---
>
> Key: ARTEMIS-3780
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3780
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Anton Roskvist
>Priority: Major
>
> OpenWire messageConverter throws exceptions for all user defined properties 
> on messages since they arrive with the type "UTF8Buffer" when running 
> "putMsgProperties"



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3780) OpenWire messageConverter throws exceptions for all user properties

2022-04-19 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3780:

Description: OpenWire messageConverter throws exceptions for all user 
defined properties on messages since they arrive with the type "UTF8Buffer" 
when running "putMsgProperties"  (was: OpenWire messageConverter throws 
exceptions for all user defined properties on messages since they arrive with 
the type "UTF8Buffer" which is currently not handled in the TypeConverter.)

> OpenWire messageConverter throws exceptions for all user properties
> ---
>
> Key: ARTEMIS-3780
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3780
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Anton Roskvist
>Priority: Major
>
> OpenWire messageConverter throws exceptions for all user defined properties 
> on messages since they arrive with the type "UTF8Buffer" when running 
> "putMsgProperties"



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3780) OpenWire messageConverter throws exceptions for all user properties

2022-04-19 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3780:
---

 Summary: OpenWire messageConverter throws exceptions for all user 
properties
 Key: ARTEMIS-3780
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3780
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


OpenWire messageConverter throws exceptions for all user defined properties on 
messages since they arrive with the type "UTF8Buffer" which is currently not 
handled in the TypeConverter.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3771) Rework destination handling for the OpenWire-protocol

2022-04-08 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3771:

Description: 
Rework destination handling for the OpenWire-protocol

 

Mainly improves things for brokers with many destinations and for handling 
clients not reusing JMS resources, but should be a nice overall improvement for 
OpenWire.

Changes are:
 - openWireDestinationCache is now global, also taking over the role of the 
connection destination cache
 - Avoid calling BindingQuery, since that method is expensive when dealing with 
a large number of destinations
 - openwireDestinationCacheSize no longer has to be a power of 2

  was:
Rework destination handling for the OpenWire-protocol

 

Mainly improves things for brokers with many destinations and for handling 
clients not reusing JMS resources, but should be a nice overall improvement for 
OpenWire.

Changes include:
openWireDestinationCache now global, also taking over the role for the 
connection destination cache
Avoid calling BindingQuery since that method is expensive when dealing with a 
large number of destinations


> Rework destination handling for the OpenWire-protocol
> -
>
> Key: ARTEMIS-3771
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3771
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Anton Roskvist
>Priority: Major
>
> Rework destination handling for the OpenWire-protocol
>  
> Mainly improves things for brokers with many destinations and for handling 
> clients not reusing JMS resources, but should be a nice overall improvement 
> for OpenWire.
> Changes are:
> openWireDestinationCache now global, also taking over the role for the 
> connection destination cache
> Avoid calling BindingQuery since that method is expensive when dealing with a 
> large number of destinations
> openwireDestinationCacheSize no longer has to be a power of 2



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3771) Rework destination handling for the OpenWire-protocol

2022-04-08 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3771:
---

 Summary: Rework destination handling for the OpenWire-protocol
 Key: ARTEMIS-3771
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3771
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


Rework destination handling for the OpenWire-protocol

 

Mainly improves things for brokers with many destinations and for handling 
clients not reusing JMS resources, but should be a nice overall improvement for 
OpenWire.

Changes include:
 - openWireDestinationCache is now global, also taking over the role of the 
connection destination cache
 - Avoid calling BindingQuery, since that method is expensive when dealing with 
a large number of destinations



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3733) Destination cache size too small for OpenWire clients

2022-03-22 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3733:
---

 Summary: Destination cache size too small for OpenWire clients
 Key: ARTEMIS-3733
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3733
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Anton Roskvist


For brokers dealing with OpenWire clients and a large number of destinations, 
the default destination cache is fixed at 16 destinations, leading to a lot of 
overhead when looking up destinations if there are many of them.
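
Not the broker's actual implementation, just a sketch of the kind of bounded LRU 
lookup cache being discussed, with the fixed size of 16 made a constructor 
parameter:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// An access-ordered LinkedHashMap works as a tiny LRU cache: once the capacity
// is exceeded, the least recently used destination entry is evicted.
public class DestinationLruCache<K, V> extends LinkedHashMap<K, V> {
   private final int capacity;

   public DestinationLruCache(int capacity) { // 16 today; larger helps with many destinations
      super(capacity, 0.75f, true);           // accessOrder = true => LRU ordering
      this.capacity = capacity;
   }

   @Override
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      return size() > capacity;
   }
}{code}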



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3698) Avoid byte[] property values when converting from OpenWire

2022-03-18 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509042#comment-17509042
 ] 

Anton Roskvist commented on ARTEMIS-3698:
-

Thanks for confirming that, I should be all good then.

 

Br,

Anton

> Avoid byte[] property values when converting from OpenWire
> --
>
> Key: ARTEMIS-3698
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3698
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here's some example code using the Qpid JMS client:
> {code:java}
> session.createConsumer(queue).setMessageListener(message -> {
> try {  
> Map headers = new TreeMap<>();
> Enumeration en = (Enumeration) 
> message.getPropertyNames();
> while (en.hasMoreElements()) {
> String name = en.nextElement();
> headers.put(name, message.getStringProperty(name));
> }
> System.out.println(headers);
> } catch (Exception e) {
> e.printStackTrace();
> }
> });
> {code}
> If an OpenWire JMS client sends messages to this queue the following 
> exception is thrown:
> {code:java}
> javax.jms.MessageFormatException: Property __HDR_MESSAGE_ID was a 
> org.apache.qpid.proton.amqp.Binary and cannot be read as a java.lang.String
>   at 
> org.apache.qpid.jms.message.JmsMessagePropertySupport.convertPropertyTo(JmsMessagePropertySupport.java:47)
>   at 
> org.apache.qpid.jms.message.JmsMessage.getStringProperty(JmsMessage.java:393)
>   at com.mycompany.camel.AMQClient2.lambda$main$0(AMQClient2.java:34)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer.deliverNextPending(JmsMessageConsumer.java:749)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer.access$100(JmsMessageConsumer.java:58)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer$MessageDeliverTask.run(JmsMessageConsumer.java:808)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.qpid.jms.util.QpidJMSThreadFactory$1.run(QpidJMSThreadFactory.java:86)
>   at java.lang.Thread.run(Thread.java:748){code}
> Despite the fact section 3.5.4 of the JMS 2 spec notes that all supported 
> properties should be convertible to {{java.lang.String}} it's important to 
> note that the broker supports much more than just JMS. Even the supported 
> wire protocols used by JMS clients support much more than just JMS. 
> Therefore, there are going to be instances where certain conversions are not 
> possible. The JMS API has support for dealing with these instances (e.g. via 
> the {{MessageFormatException}}) and clients should be written to deal with 
> them. In fact, it is _critical_ for a consumer to validate a message's data 
> and protect itself from unexpected circumstances.
> That said, it would be nice to avoid {{byte[]}} property values to improve 
> the user experience. Therefore, I will update the broker to eliminate 
> {{byte[]}} values when converting between properties known to be used by the 
> OpenWire JMS client and the broker's core message format _where possible_. 
> This will mitigate the instances of {{MessageFormatException}} observed in 
> the code in the description, but it will not eliminate all potential 
> instances of {{MessageFormatException}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3698) Avoid byte[] property values when converting from OpenWire

2022-03-18 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509004#comment-17509004
 ] 

Anton Roskvist commented on ARTEMIS-3698:
-

Hi [~jbertram] 

I'm trying out this PR in a testing environment and I am seeing a lot of these 
warnings now popping up:


{code:java}
2022-03-18 17:10:53,639 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222302: Failed to deal with property __HDR_PRODUCER_ID when converting 
message from core to OpenWire: Cannot cast [B to 
org.apache.activemq.artemis.api.core.SimpleString
2022-03-18 17:10:53,641 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222302: Failed to deal with property __HDR_MESSAGE_ID when converting 
message from core to OpenWire: Cannot cast [B to 
org.apache.activemq.artemis.api.core.SimpleString
2022-03-18 17:10:53,641 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222302: Failed to deal with property __HDR_PRODUCER_ID when converting 
message from core to OpenWire: Cannot cast [B to 
org.apache.activemq.artemis.api.core.SimpleString
2022-03-18 17:10:53,642 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222302: Failed to deal with property __HDR_MESSAGE_ID when converting 
message from core to OpenWire: Cannot cast [B to 
org.apache.activemq.artemis.api.core.SimpleString {code}
I'm running on the same journal as the previous broker, meaning that some 
messages were received before this change. Could that be the problem? Can the 
warning be safely ignored?

Br,

Anton

> Avoid byte[] property values when converting from OpenWire
> --
>
> Key: ARTEMIS-3698
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3698
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Here's some example code using the Qpid JMS client:
> {code:java}
> session.createConsumer(queue).setMessageListener(message -> {
> try {  
> Map headers = new TreeMap<>();
> Enumeration en = (Enumeration) 
> message.getPropertyNames();
> while (en.hasMoreElements()) {
> String name = en.nextElement();
> headers.put(name, message.getStringProperty(name));
> }
> System.out.println(headers);
> } catch (Exception e) {
> e.printStackTrace();
> }
> });
> {code}
> If an OpenWire JMS client sends messages to this queue the following 
> exception is thrown:
> {code:java}
> javax.jms.MessageFormatException: Property __HDR_MESSAGE_ID was a 
> org.apache.qpid.proton.amqp.Binary and cannot be read as a java.lang.String
>   at 
> org.apache.qpid.jms.message.JmsMessagePropertySupport.convertPropertyTo(JmsMessagePropertySupport.java:47)
>   at 
> org.apache.qpid.jms.message.JmsMessage.getStringProperty(JmsMessage.java:393)
>   at com.mycompany.camel.AMQClient2.lambda$main$0(AMQClient2.java:34)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer.deliverNextPending(JmsMessageConsumer.java:749)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer.access$100(JmsMessageConsumer.java:58)
>   at 
> org.apache.qpid.jms.JmsMessageConsumer$MessageDeliverTask.run(JmsMessageConsumer.java:808)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> org.apache.qpid.jms.util.QpidJMSThreadFactory$1.run(QpidJMSThreadFactory.java:86)
>   at java.lang.Thread.run(Thread.java:748){code}
> Despite the fact section 3.5.4 of the JMS 2 spec notes that all supported 
> properties should be convertible to {{java.lang.String}} it's important to 
> note that the broker supports much more than just JMS. Even the supported 
> wire protocols used by JMS clients support much more than just JMS. 
> Therefore, there are going to be instances where certain conversions are not 
> possible. The JMS API has support for dealing with these instances (e.g. via 
> the {{MessageFormatException}}) and clients should be written to deal with 
> them. In fact, it is _critical_ for a consumer to validate a message's data 
> and protect itself from unexpected circumstances.
> That said, it would be nice to avoid {{byte[]}} property values to improve 
> the user experience. Therefore, I will update the broker to eliminate 
> {{byte[]}} values when converting between properties known to be used by the 
> OpenWire JMS client and the broker's core message format _where possible_. 
> This will mitigate the instances of {{MessageFormatException}} observed in 
> the code in the description, but it will not eliminate all potential 
> instances of {{MessageFormatException}}.



--
This message was sent by Atlassian Jira

[jira] [Reopened] (ARTEMIS-2934) ARTEMIS-2226 causes excessive notifications to be sent for Spring XA clients

2022-02-09 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist reopened ARTEMIS-2934:
-

> ARTEMIS-2226 causes excessive notifications to be sent for Spring XA clients
> ---
>
> Key: ARTEMIS-2934
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2934
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Anton Roskvist
>Priority: Minor
>
> Hi,
> The fix in https://issues.apache.org/jira/browse/ARTEMIS-2226 causes 
> excessive notifications to be sent for clients running XA transaction through 
> the Spring framework.
> The notifications sent are SESSION_CREATED and SESSION_CLOSED.
> I strongly suspect this is because Spring DMLC cannot cache consumers 
> properly when running XA, causing it to create and remove a new session for 
> each message processed.
> Now I am not arguing that this is not bad practice, because it is, but lots of 
> applications run on top of this logic. I also suspect this might affect more 
> clients, just not as pronounced.
>  
> I have been able to prove the aforementioned patch is what causes the issue 
> by removing:
> sendSessionNotification(CoreNotificationType.SESSION_CREATED);
> and
> sendSessionNotification(CoreNotificationType.SESSION_CLOSED);
> from ServerSessionImpl.java (they were added in the patch)
> Now I do not fully understand the intent of the original patch but I think it 
> should be made conditional, that is, send those notifications only for MQTT 
> sessions or something similar.
>  
> In the environment I am testing this on the difference is huge as I have a 
> lot of independent applications all running Spring+XA. About 40% of all 
> messages getting sent and received are notifications.
>  
> Br,
> Anton
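A minimal sketch of the client-side pattern being described (names and wiring are assumptions for illustration, not broker code): with an external/XA transaction manager the Spring DMLC effectively runs without consumer caching, so each processed message opens and closes a session, and each of those now triggers a SESSION_CREATED/SESSION_CLOSED notification pair:

{code:java}
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.transaction.PlatformTransactionManager;

// Hedged sketch: an externally managed (JTA/XA) transaction forces the DMLC
// to give up session/consumer caching, so every received message creates and
// closes a session on the broker.
public class XaListenerSetup {
   public static DefaultMessageListenerContainer build(ConnectionFactory xaCapableCf,
                                                       PlatformTransactionManager jtaTm,
                                                       MessageListener listener) {
      DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
      dmlc.setConnectionFactory(xaCapableCf);     // assumed XA-capable factory
      dmlc.setTransactionManager(jtaTm);          // external TM -> caching is disabled
      dmlc.setCacheLevel(DefaultMessageListenerContainer.CACHE_NONE);
      dmlc.setDestinationName("TEST");            // assumed queue name
      dmlc.setMessageListener(listener);          // assumed application listener
      dmlc.afterPropertiesSet();
      return dmlc;
   }
}
{code}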



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3647) rolledbackMessageRefs can grow until OOM with OpenWire clients

2022-02-07 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3647:

Summary: rolledbackMessageRefs can grow until OOM with OpenWire clients  
(was: rolledbackMessageRefs can grow until OOM for OpenWire clients)

> rolledbackMessageRefs can grow until OOM with OpenWire clients
> --
>
> Key: ARTEMIS-3647
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3647
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {color:#1d1c1d}In my use case I have quite a few long lived OpenWire 
> consumers. I noticed that over time the heap usage increases. Looking through 
> a heap dump, I found that memory is held in "rolledbackMessageRefs". In one 
> case holding as much as 1.6GB of data with 0 messages on queue. 
> Disconnecting the consumer and then reconnecting released the memory.
> Clients are running Spring with transactions. The clients affected by this 
> have some small issue receiving messages such that some of them are retried a 
> couple of times before getting processed properly.
> I suspect that "rolledbackMessageRefs"{color} are not getting cleared with 
> the message ref once it's finally processed for some reason.
> {-}{color:#1d1c1d}I have not found a way to reproduce this yet and it happens 
> over several days.
> {color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
> standalone Artemis broker with "out-of-the-box"-configuration and using these 
> tools:{color} -- [https://github.com/erik-wramner/JmsTools]  (AmqJmsConsumer 
> and optionally AmqJmsProducer)
> 1. Start the broker
> 2. Send 100k messages to "queue://TEST"
> {code:java}
> # java -jar JmsTools/shaded-jars/AmqJmsProducer.jar -url 
> "tcp://localhost:61616" -user USER -pw PASSWORD -queue TEST -count 
> 100000{code}
> 3. Receive one more message than produced and do a rollback on 30% of them 
> (unrealistic, but means this can be done in minutes instead of days. Receive 
> one more to ensure consumer stays live)
> {code:java}
> # java -jar JmsTools/shaded-jars/AmqJmsConsumer.jar -url 
> "tcp://localhost:61616?jms.prefetchPolicy.all=100=true"
>  -user USER -pw PASSWORD -queue TEST -count 100001 -rollback 30{code}
> 4. Wait until no more messages are left on "queue://TEST" (a few might be on 
> DLQ but that's okay)
> 5. Get a heap dump with the consumer still connected
> {code:java}
> # jmap -dump:format=b,file=dump.hprof Artemis_PID{code}
> 6. Running "Leak suspects" with MAT will show a (relatively) large amount of 
> memory held by {color:#1d1c1d}"rolledbackMessageRefs"{color} for the consumer 
> connected to queue://TEST
> The consumer is run with "jms.nonBlockingRedelivery=true" to speed things up, 
> though it should not be strictly needed.
> As an added bonus this also shows that the prefetch limit 
> "jms.prefetchPolicy.all=100" is not respected while messages are in the 
> redelivery process, which can easily be seen in the console's 
> "Attributes"-section for the queue. This is also true for the default 
> prefetch value of 1000.
> Br,
> Anton
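For readers who do not want to pull in the JmsTools jars, a rough sketch of what the consumer step amounts to (this is not the tool's code; URL options, credentials, counts and the 30% ratio are copied from the steps above, everything else is an assumption):

{code:java}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

// Hedged sketch only: a transacted OpenWire consumer that rolls back ~30% of
// deliveries, which is what keeps feeding the broker-side rolledbackMessageRefs.
public class RollbackConsumer {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.all=100&jms.nonBlockingRedelivery=true");
      Connection connection = cf.createConnection("USER", "PASSWORD");
      try {
         connection.start();
         Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
         MessageConsumer consumer = session.createConsumer(session.createQueue("TEST"));
         for (int i = 0; i < 100_001; i++) {
            Message m = consumer.receive(30_000);
            if (m == null) {
               break;                      // queue drained
            }
            if (Math.random() < 0.30) {
               session.rollback();         // message goes through the redelivery path
            } else {
               session.commit();
            }
         }
      } finally {
         connection.close();
      }
   }
}
{code}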



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (AMQ-6763) Thread hangs on setXid

2022-01-24 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17481032#comment-17481032
 ] 

Anton Roskvist commented on AMQ-6763:
-

I think you are right in that it's a client issue, though I have also seen it 
happen with Apache Camel as the client (ActiveMQ component).
I actually had a chance to revisit this a few weeks back and found that for my 
case at least (with Artemis as the broker) the issue has been solved and I 
believe it's part of this change:
https://issues.apache.org/jira/browse/ARTEMIS-2870

Perhaps their test or fix might shed light on why the issue occurs with 
ActiveMQ as well?

Sorry for not updating about this earlier.

Br,

Anton

> Thread hangs on setXid
> --
>
> Key: AMQ-6763
> URL: https://issues.apache.org/jira/browse/AMQ-6763
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: XA
>Affects Versions: 5.14.5
>Reporter: Jakub
>Assignee: Jean-Baptiste Onofré
>Priority: Minor
> Fix For: 5.17.0, 5.15.16, 5.16.4
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I've noticed issues with distributed transactions (XA) on karaf when using 
> ActiveMQ with JDBC storage (postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' connections appearing on 
> the database side (in a schema other than the one used by ActiveMQ). After 
> debugging it seems that the reason transactions are hanging is ActiveMQ and 
> the org.apache.activemq.transport.FutureResponse.getResult method, which 
> waits forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>

[jira] [Updated] (ARTEMIS-3647) rolledbackMessageRefs can grow until OOM for OpenWire clients

2022-01-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3647:

Description: 
{color:#1d1c1d}In my use case I have quite a few long lived OpenWire consumers. 
I noticed that over time the heap usage increases. Looking through a heap dump, 
I found that memory is held in "rolledbackMessageRefs". In one case holding as 
much as 1.6GB of data with 0 messages on queue. 

Disconnecting the consumer and then reconnecting released the memory.

Clients are running Spring with transactions. The clients affected by this have 
some small issue receiving messages such that some of them are retried a couple 
of times before getting processed properly.

I suspect that "rolledbackMessageRefs"{color} are not getting cleared with the 
message ref once it's finally processed for some reason.

{-}{color:#1d1c1d}I have not found a way to reproduce this yet and it happens 
over several days.
{color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
standalone Artemis broker with "out-of-the-box"-configuration and using these 
tools:{color} -- [https://github.com/erik-wramner/JmsTools]  (AmqJmsConsumer 
and optionally AmqJmsProducer)

1. Start the broker
2. Send 100k messages to "queue://TEST"
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsProducer.jar -url 
"tcp://localhost:61616" -user USER -pw PASSWORD -queue TEST -count 10{code}
3. Receive one more message than produced and do a rollback on 30% of them 
(unrealistic, but means this can be done in minutes instead of days. Receive 
one more to ensure consumer stays live)
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsConsumer.jar -url 
"tcp://localhost:61616?jms.prefetchPolicy.all=100=true"
 -user USER -pw PASSWORD -queue TEST -count 100001 -rollback 30{code}
4. Wait until no more messages are left on "queue://TEST" (a few might be on 
DLQ but that's okay)
5. Get a heap dump with the consumer still connected
{code:java}
# jmap -dump:format=b,file=dump.hprof Artemis_PID{code}
6. Running "Leak suspects" with MAT will show a (relatively) large amount of 
memory held by {color:#1d1c1d}"rolledbackMessageRefs"{color} for the consumer 
connected to queue://TEST

The consumer is run with "jms.nonBlockingRedelivery=true" to speed things up, 
though it should not be strictly needed.

As an added bonus this also shows that the prefetch limit 
"jms.prefetchPolicy.all=100" is not respected while messages are in the 
redelivery process, which can easily be seen in the console's 
"Attributes"-section for the queue. This is also true for the default prefetch 
value of 1000.

Br,

Anton

  was:
{color:#1d1c1d}In my use case I have quite a few long lived OpenWire consumers. 
I noticed that over time the heap usage increases. Looking through a heap dump, 
I found that memory is held in "rolledbackMessageRefs". In one case holding as 
much as 1.6GB of data with 0 messages on queue. 

Disconnecting the consumer and then reconnecting released the memory.

Clients are running Spring with transactions. The clients affected by this have 
some small issue receiving messages such that some of them are retried a couple 
of times before getting processed properly.

I suspect that "rolledbackMessageRefs"{color} are not getting cleared with the 
message ref once it's finally processed for some reason.

 \{-}{color:#1d1c1d}I have not found a way to reproduce this yet and it happens 
over several days.

{color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
standalone Artemis broker with "out-of-the-box"-configuration and using these 
tools:
{color} -{color:#1d1c1d}{color}- [https://github.com/erik-wramner/JmsTools]  
(AmqJmsConsumer and optionally AmqJmsProducer)

1. Start the broker
2. Send 100k messages to "queue://TEST"
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsProducer.jar -url 
"tcp://localhost:61616" -user USER -pw PASSWORD -queue TEST -count 10{code}
3. Receive all but one messages and do a rollback on 30% of them (unrealistic, 
but means this can be done in minutes instead of days)
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsConsumer.jar -url 
"tcp://localhost:61616?jms.prefetchPolicy.all=100=true"
 -user USER -pw PASSWORD -queue TEST -count 9 -rollback 30{code}
4. Wait until no more messages are left on "queue://TEST" (a few might be on 
DLQ but that's okay)
5. Get a heap dump with the consumer still connected
{code:java}
# jmap -dump:format=b,file=dump.hprof Artemis_PID{code}
6. Running "Leak suspects" with MAT will show a (relatively) large amount of 
memory held by {color:#1d1c1d}"rolledbackMessageRefs"{color} for the consumer 
connected to queue://TEST

The consumer is run with "jms.nonBlockingRedelivery=true" to speed things up, 
though it should not be strictly needed.

As an added bonus this also shows that the prefetch limit 
"jms.prefetchPolicy.all=100" is not respected 

[jira] [Updated] (ARTEMIS-3646) OpenWire clients leave incorrect queue metrics when messages are sent to DLQ

2022-01-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3646:

Description: 
{color:#1d1c1d}Messages getting sent to DLQ from an OpenWire client leave 
incorrect queue metrics behind. {color}{color:#1d1c1d}"DeliveringSize"
"DurableDeliveringSize" 
"DurablePersistentSize"
"PersistentSize"

All metrics above end up with negative values, even if the consumers are later 
disconnected or if the messages are retried and successfully consumed.

I am able to reproduce this easily with an Artemis broker running 
"out-of-the-box"-config and using the following clients:
[https://github.com/erik-wramner/JmsTools]   (AmqJmsConsumer.jar and 
AmqJmsProducer.jar)

Example: 
{color}
{code:java}
# java -jar shaded-jars/AmqJmsProducer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 100{code}
{code:java}
# java -jar shaded-jars/AmqJmsConsumer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 1 -rollback 100 -t 10{code}
{color:#1d1c1d}
Using Artemis clients from the same tools results in no such issue. I am seeing 
the issue with other OpenWire clients also

Br,{color}

{color:#1d1c1d}Anton{color}

  was:
{color:#1d1c1d}Messages getting sent to DLQ from an OpenWire client leave 
incorrect queue metrics behind. 
{color}{color:#1d1c1d}"DeliveringSize"
"DurableDeliveringSize" 
"DurablePersistentSize"
"PersistentSize"

All metrics above end up with negative values, even if the consumers are later 
disconnected or if the messages are retried and successfully consumed.

I am able to reproduce this easily with an Artemis broker running 
"out-of-the-box"-config and using the following clients:
[https://github.com/erik-wramner/JmsTools]   (AmqJmsConsumer.jar and 
AmqJmsProducer.jar)

example: 
# java -jar shaded-jars/AmqJmsProducer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 100

# java -jar shaded-jars/AmqJmsConsumer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 1 -rollback 100 -t 10

Using Artemis clients from the same tools results in no such issue. I am seeing 
the issue with other OpenWire clients also

Br,
{color}

{color:#1d1c1d}Anton{color}


> OpenWire clients leave incorrect queue metrics when messages are sent to DLQ
> 
>
> Key: ARTEMIS-3646
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3646
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>
> {color:#1d1c1d}Messages getting sent to DLQ from an OpenWire client leave 
> incorrect queue metrics behind. {color}{color:#1d1c1d}"DeliveringSize"
> "DurableDeliveringSize" 
> "DurablePersistentSize"
> "PersistentSize"
> All metrics above end up with negative values, even if the consumers are 
> later disconnected or if the messages are retried and successfully consumed.
> I am able to reproduce this easily with an Artemis broker running 
> "out-of-the-box"-config and using the following clients:
> [https://github.com/erik-wramner/JmsTools]   (AmqJmsConsumer.jar and 
> AmqJmsProducer.jar)
> Example: 
> {color}
> {code:java}
> # java -jar shaded-jars/AmqJmsProducer.jar -url "tcp://localhost:61616" -user 
> USER -pw PASSWORD -queue TEST -count 100{code}
> {code:java}
> # java -jar shaded-jars/AmqJmsConsumer.jar -url "tcp://localhost:61616" -user 
> USER -pw PASSWORD -queue TEST -count 1 -rollback 100 -t 10{code}
> {color:#1d1c1d}
> Using Artemis clients from the same tools results in no such issue. I am 
> seeing the issue with other OpenWire clients also
> Br,{color}
> {color:#1d1c1d}Anton{color}
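One way to watch the drift is to read the queue's size attributes over JMX while the reproduction runs. A hedged sketch follows; the JMX service URL, the broker name ("0.0.0.0" is what a CLI-created broker typically uses) and the ObjectName layout are assumptions about a stock 2.x broker with remote JMX enabled, not something taken from the issue:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Hedged sketch: print the queue metrics that go negative in this scenario.
public class QueueMetricsCheck {
   public static void main(String[] args) throws Exception {
      JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi"); // assumed JMX endpoint
      JMXConnector connector = JMXConnectorFactory.connect(url);
      try {
         MBeanServerConnection mbs = connector.getMBeanServerConnection();
         ObjectName queue = new ObjectName(
               "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,"
               + "address=\"TEST\",subcomponent=queues,routing-type=\"anycast\",queue=\"TEST\"");
         System.out.println("DeliveringSize = " + mbs.getAttribute(queue, "DeliveringSize"));
         System.out.println("PersistentSize = " + mbs.getAttribute(queue, "PersistentSize"));
      } finally {
         connector.close();
      }
   }
}
{code}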



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3647) rolledbackMessageRefs can grow until OOM for OpenWire clients

2022-01-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3647:

Summary: rolledbackMessageRefs can grow until OOM for OpenWire clients  
(was: rolledbackMessageRefs can grow until OOM for OpenWre clients)

> rolledbackMessageRefs can grow until OOM for OpenWire clients
> -
>
> Key: ARTEMIS-3647
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3647
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>
> {color:#1d1c1d}In my use case I have quite a few long lived OpenWire 
> consumers. I noticed that over time the heap usage increases. Looking through 
> a heap dump, I found that memory is held in "rolledbackMessageRefs". In one 
> case holding as much as 1.6GB of data with 0 messages on queue. 
> Disconnecting the consumer and then reconnecting released the memory.
> Clients are running Spring with transactions. The clients affected by this 
> have some small issue receiving messages such that some of them are retried a 
> couple of times before getting processed properly.
> I suspect that "rolledbackMessageRefs"{color} are not getting cleared with 
> the message ref once it's finally processed for some reason.
>  \{-}{color:#1d1c1d}I have not found a way to reproduce this yet and it 
> happens over several days.
> {color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
> standalone Artemis broker with "out-of-the-box"-configuration and using these 
> tools:
> {color} -{color:#1d1c1d}{color}- [https://github.com/erik-wramner/JmsTools]  
> (AmqJmsConsumer and optionally AmqJmsProducer)
> 1. Start the broker
> 2. Send 100k messages to "queue://TEST"
> {code:java}
> # java -jar JmsTools/shaded-jars/AmqJmsProducer.jar -url 
> "tcp://localhost:61616" -user USER -pw PASSWORD -queue TEST -count 
> 100000{code}
> 3. Receive all but one messages and do a rollback on 30% of them 
> (unrealistic, but means this can be done in minutes instead of days)
> {code:java}
> # java -jar JmsTools/shaded-jars/AmqJmsConsumer.jar -url 
> "tcp://localhost:61616?jms.prefetchPolicy.all=100=true"
>  -user USER -pw PASSWORD -queue TEST -count 9 -rollback 30{code}
> 4. Wait until no more messages are left on "queue://TEST" (a few might be on 
> DLQ but that's okay)
> 5. Get a heap dump with the consumer still connected
> {code:java}
> # jmap -dump:format=b,file=dump.hprof Artemis_PID{code}
> 6. Running "Leak suspects" with MAT will show a (relatively) large amount of 
> memory held by {color:#1d1c1d}"rolledbackMessageRefs"{color} for the consumer 
> connected to queue://TEST
> The consumer is run with "jms.nonBlockingRedelivery=true" to speed things up, 
> though it should not be strictly needed.
> As an added bonus this also shows that the prefetch limit 
> "jms.prefetchPolicy.all=100" is not respected while messages are in the 
> redelivery process, which can easily be seen in the console's 
> "Attributes"-section for the queue. This is also true for the default 
> prefetch value of 1000.
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3647) rolledbackMessageRefs can grow until OOM for OpenWre clients

2022-01-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3647:

Description: 
{color:#1d1c1d}In my use case I have quite a few long lived OpenWire consumers. 
I noticed that over time the heap usage increases. Looking through a heap dump, 
I found that memory is held in "rolledbackMessageRefs". In one case holding as 
much as 1.6GB of data with 0 messages on queue. 

Disconnecting the consumer and then reconnecting released the memory.

Clients are running Spring with transactions. The clients affected by this have 
some small issue receiving messages such that some of them are retried a couple 
of times before getting processed properly.

I suspect that "rolledbackMessageRefs"{color} are not getting cleared with the 
message ref once it's finally processed for some reason.

 \{-}{color:#1d1c1d}I have not found a way to reproduce this yet and it happens 
over several days.

{color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
standalone Artemis broker with "out-of-the-box"-configuration and using these 
tools:
{color} -{color:#1d1c1d}{color}- [https://github.com/erik-wramner/JmsTools]  
(AmqJmsConsumer and optionally AmqJmsProducer)

1. Start the broker
2. Send 100k messages to "queue://TEST"
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsProducer.jar -url 
"tcp://localhost:61616" -user USER -pw PASSWORD -queue TEST -count 10{code}
3. Receive all but one messages and do a rollback on 30% of them (unrealistic, 
but means this can be done in minutes instead of days)
{code:java}
# java -jar JmsTools/shaded-jars/AmqJmsConsumer.jar -url 
"tcp://localhost:61616?jms.prefetchPolicy.all=100=true"
 -user USER -pw PASSWORD -queue TEST -count 9 -rollback 30{code}
4. Wait until no more messages are left on "queue://TEST" (a few might be on 
DLQ but that's okay)
5. Get a heap dump with the consumer still connected
{code:java}
# jmap -dump:format=b,file=dump.hprof Artemis_PID{code}
6. Running "Leak suspects" with MAT will show a (relatively) large amount of 
memory held by {color:#1d1c1d}"rolledbackMessageRefs"{color} for the consumer 
connected to queue://TEST

The consumer is run with "jms.nonBlockingRedelivery=true" to speed things up, 
though it should not be strictly needed.

As an added bonus this also shows that the prefetch limit 
"jms.prefetchPolicy.all=100" is not respected while messages are in the 
redelivery process, which can easily be seen in the console's 
"Attributes"-section for the queue. This is also true for the default prefetch 
value of 1000.

Br,

Anton

  was:
{color:#1d1c1d}In my use case I have quite a few long lived OpenWire consumers. 
I noticed that over time the heap usage increases. Looking through a heap dump, 
I found that memory is held in "rolledbackMessageRefs". In one case holding as 
much as 1.6GB of data with 0 messages on queue. 

Disconnecting the consumer and then reconnecting released the memory.

Clients are running Spring with transactions. The clients affected by this have 
some small issue receiving messages such that some of them are retried a couple 
of times before getting processed properly.

I suspect that "{color:#1d1c1d}rolledbackMessageRefs"{color} are not getting 
cleared with the message ref once it's finally processed for some reason.

{color:#1d1c1d} I have not found a way to reproduce this yet and it happens 
over several days.{color}
{color}


> rolledbackMessageRefs can grow until OOM for OpenWre clients
> 
>
> Key: ARTEMIS-3647
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3647
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>
> {color:#1d1c1d}In my use case I have quite a few long lived OpenWire 
> consumers. I noticed that over time the heap usage increases. Looking through 
> a heap dump, I found that memory is held in "rolledbackMessageRefs". In one 
> case holding as much as 1.6GB of data with 0 messages on queue. 
> Disconnecting the consumer and then reconnecting released the memory.
> Clients are running Spring with transactions. The clients affected by this 
> have some small issue receiving messages such that some of them are retried a 
> couple of times before getting processed properly.
> I suspect that "rolledbackMessageRefs"{color} are not getting cleared with 
> the message ref once it's finally processed for some reason.
>  \{-}{color:#1d1c1d}I have not found a way to reproduce this yet and it 
> happens over several days.
> {color}{-}{color:#1d1c1d}UPDATE: I can easily reproduce this by setting up a 
> standalone Artemis broker with "out-of-the-box"-configuration and using these 
> tools:
> {color} -{color:#1d1c1d}{color}- [https://github.com/erik-wramner/JmsTools]  
> 

[jira] [Created] (ARTEMIS-3647) rolledbackMessageRefs can grow until OOM for OpenWre clients

2022-01-14 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3647:
---

 Summary: rolledbackMessageRefs can grow until OOM for OpenWre 
clients
 Key: ARTEMIS-3647
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3647
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


{color:#1d1c1d}In my use case I have quite a few long lived OpenWire consumers. 
I noticed that over time the heap usage increases. Looking through a heap dump, 
I found that memory is held in "rolledbackMessageRefs". In one case holding as 
much as 1.6GB of data with 0 messages on queue. 

Disconnecting the consumer and then reconnecting released the memory.

Clients are running Spring with transactions. The clients affected by this have 
some small issue receiving messages such that some of them are retried a couple 
of times before getting processed properly.

I suspect that "{color:#1d1c1d}rolledbackMessageRefs"{color} are not getting 
cleared with the message ref once it's finally processed for some reason.

{color:#1d1c1d} I have not found a way to reproduce this yet and it happens 
over several days.{color}
{color}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3646) OpenWire clients leave incorrect queue metrics when messages are sent to DLQ

2022-01-14 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3646:
---

 Summary: OpenWire clients leave incorrect queue metrics when 
messages are sent to DLQ
 Key: ARTEMIS-3646
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3646
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


{color:#1d1c1d}Messages getting sent to DLQ from an OpenWire client leave 
incorrect queue metrics behind. 
{color}{color:#1d1c1d}"DeliveringSize"
"DurableDeliveringSize" 
"DurablePersistentSize"
"PersistentSize"

All metrics above end up with negative values, even if the consumers are later 
disconnected or if the messages are retried and successfully consumed.

I am able to reproduce this easily with an Artemis broker running 
"out-of-the-box"-config and using the following clients:
[https://github.com/erik-wramner/JmsTools]   (AmqJmsConsumer.jar and 
AmqJmsProducer.jar)

example: 
# java -jar shaded-jars/AmqJmsProducer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 100

# java -jar shaded-jars/AmqJmsConsumer.jar -url "tcp://localhost:61616" -user 
USER -pw PASSWORD -queue TEST -count 1 -rollback 100 -t 10

Using Artemis clients from the same tools results in no such issue. I am seeing 
the issue with other OpenWire clients also

Br,
{color}

{color:#1d1c1d}Anton{color}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3608) OFF_WITH_REDISTRIBUTION - no redistribution for non persistent Multicast messages

2021-12-15 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3608:
---

 Summary: OFF_WITH_REDISTRIBUTION - no redistribution for non 
persistent Multicast messages
 Key: ARTEMIS-3608
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3608
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


When running a cluster with OFF_WITH_REDISTRIBUTION load-balancing semantics, 
non-persistent messages sent to a broker without a directly connected consumer 
result in dropped messages even if a remote one is present.

This might be expected behavior, but I think it's wrong. With 
OFF_WITH_REDISTRIBUTION set, I would expect messages to reach a corresponding 
consumer regardless of where it is connected in the cluster.
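A hedged sketch of the publishing side of this scenario (node addresses and the topic name are made up): a NON_PERSISTENT publish on a node that only has a remote subscriber, which is exactly the case where the messages are dropped instead of redistributed:

{code:java}
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Hedged sketch: publish NON_PERSISTENT messages to node A while the only
// subscriber for the topic is connected to node B.
public class NonPersistentPublish {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://nodeA:61616");
      Connection connection = cf.createConnection(); // assumes no authentication required
      try {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         Topic topic = session.createTopic("example.topic");
         MessageProducer producer = session.createProducer(topic);
         producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
         producer.send(session.createTextMessage("hello"));
      } finally {
         connection.close();
      }
   }
}
{code}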



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3557) ARTEMIS-1925 fix does not handle redistribution to "old" consumers

2021-11-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3557:

Description: 
OFF_WITH_REDISTRIBUTION does not handle this scenario:

If a destination and consumer exist on one node in a cluster and a producer 
shows up on another node messages will not get redistributed until the old 
consumer disconnects and reconnects.

  was:
OFF_WITH_REDISTRIBUTION does not handle two scenarios:

1. non-durable Multicast where the subscriber is on a separate node from the 
publisher. Messages get dropped.

2. If a destination and consumer exist on one node in a cluster and a producer 
shows up on another node messages will not get redistributed until the old 
consumer disconnects and reconnects.


> ARTEMIS-1925 fix does not handle redistribution to "old" consumers
> --
>
> Key: ARTEMIS-3557
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3557
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> OFF_WITH_REDISTRIBUTION does not handle this scenario:
> If a destination and consumer exist on one node in a cluster and a producer 
> shows up on another node messages will not get redistributed until the old 
> consumer disconnects and reconnects.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3557) ARTEMIS-1925 fix does not handle redistribution to "old" consumers

2021-11-17 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3557:

Summary: ARTEMIS-1925 fix does not handle redistribution to "old" consumers 
 (was: ARTEMIS-1925 fix does not handle Multicast in cluster and redistribution 
to "old" consumers)

> ARTEMIS-1925 fix does not handle redistribution to "old" consumers
> --
>
> Key: ARTEMIS-3557
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3557
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> OFF_WITH_REDISTRIBUTION does not handle two scenarios:
> 1. non-durable Multicast where the subscriber is on a separate node from the 
> publisher. Messages get dropped.
> 2. If a destination and consumer exist on one node in a cluster and a 
> producer shows up on another node messages will not get redistributed until 
> the old consumer disconnects and reconnects.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3557) ARTEMIS-1925 fix does not handle Multicast in cluster and redistribution to "old" consumers

2021-11-15 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443639#comment-17443639
 ] 

Anton Roskvist commented on ARTEMIS-3557:
-

Hi,

It might very well be by design, but at least I find the behavior a bit 
unintuitive. When connecting a client to a cluster, you should not have to know 
where other clients are connected in order to receive messages, regardless of 
whether they are multicast or not. That might just be me though; I'll accept that 
possibility :). I did add a condition on #matchBinding that solves this 
specifically for OFF_WITH_REDISTRIBUTION.

For 2 I do believe my PR solves that scenario, at least from what I can tell. 
The addition to PostOfficeImpl adds a redistributor when setting up a local 
binding (if there are remote consumers set up for it), as well as when a new 
remote consumer is added. This operation could probably be done better (with 
fewer steps) but I have yet to figure out how.

Br,

Anton

> ARTEMIS-1925 fix does not handle Multicast in cluster and redistribution to 
> "old" consumers
> ---
>
> Key: ARTEMIS-3557
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3557
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> OFF_WITH_REDISTRIBUTION does not handle two scenarios:
> 1. non-durable Multicast where the subscriber is on a separate node from the 
> publisher. Messages get dropped.
> 2. If a destination and consumer exist on one node in a cluster and a 
> producer shows up on another node messages will not get redistributed until 
> the old consumer disconnects and reconnects.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3557) ARTEMIS-1925 fix does not handle Multicast in cluster and redistribution to "old" consumers

2021-11-08 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3557:
---

 Summary: ARTEMIS-1925 fix does not handle Multicast in cluster and 
redistribution to "old" consumers
 Key: ARTEMIS-3557
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3557
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist


OFF_WITH_REDISTRIBUTION does not handle two scenarios:

1. non-durable Multicast where the subscriber is on a separate node from the 
publisher. Messages get dropped.

2. If a destination and consumer exist on one node in a cluster and a producer 
shows up on another node messages will not get redistributed until the old 
consumer disconnects and reconnects.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-1925) Allow message redistribution even with OFF message-load-balancing semantics

2021-11-01 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17436865#comment-17436865
 ] 

Anton Roskvist commented on ARTEMIS-1925:
-

Excellent, sounds good [~gtully].

So I agree that auto-created queue resources can be considered out of scope for 
this, but support down the line would be greatly appreciated, at least for my 
use case, where I'm working with a large number of destinations (1-2k) in a 
dynamic environment, meaning that clients might add a queue unannounced. I will 
see if I can create test cases for both scenarios I mentioned, but again, given 
my limited experience I might not be able to deliver on that. Should it be 
handled as a separate issue or here if I do find something?

For Multicast/topic behavior I would expect it to work with redistribution 
enabled but not with OFF I guess, but that is clearly a matter of 
interpretation. Seems strange though that cluster communication for multicast 
would be dependent on initial distribution (or ON_DEMAND), right?

> Allow message redistribution even with OFF message-load-balancing semantics
> ---
>
> Key: ARTEMIS-1925
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1925
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.20.0
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Currently if the {{message-load-balancing}} is {{STRICT}} or {{OFF}} then 
> message redistribution is disabled.  Message redistribution should be 
> controlled only by the {{redistribution-delay}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-1925) Allow message redistribution even with OFF message-load-balancing semantics

2021-11-01 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17436734#comment-17436734
 ] 

Anton Roskvist commented on ARTEMIS-1925:
-

[~gtully]  I have looked further into this and I have tried to make some small 
changes. By removing the "OFF_WITH_DISTRIBUTION" condition on 
RemoteQueueBinding I can get close to what I think the intended behavior is, 
but two issues remain. One small and one potentially nasty one.

The small issue is that if a queue with a consumer exists on node A but not on 
B, and B suddenly gets messages and the queue gets auto-created, then 
redistribution is not started. Messages pile up on node B. If the consumer 
detaches and reattaches, redistribution happens. I saw this issue with a 
previous attempt at achieving the same functionality in the broker 
(redistribution but no initial distribution) and it seems to be because a 
redistributor is only added when a consumer is created AND you have a local 
binding for that queue.

The bigger issue might be related and comes from Multicast/topics in a similar 
scenario. As it is now (with your change and also the changes I've made), if a 
publisher and subscriber for the same topic are on different nodes, all 
messages are silently dropped (for non-durable messages).

These are the changes I've made: 
[https://github.com/AntonRoskvist/activemq-artemis/commit/eddfd5aae29b3746167ffbdf877cde2a3fa25227]

Please note that while it does work for the smaller issue it does not do 
anything for the topics. Also the change in postoffice should probably be 
conditional (only add redistributor if some other node in the cluster has a 
consumer) but I have yet to figure out how to poll for binding info on 
clustered nodes.

Hope any of this is helpful. I will keep looking into it but I have pretty 
limited coding experience...

Br,

Anton

> Allow message redistribution even with OFF message-load-balancing semantics
> ---
>
> Key: ARTEMIS-1925
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1925
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.20.0
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> Currently if the {{message-load-balancing}} is {{STRICT}} or {{OFF}} then 
> message redistribution is disabled.  Message redistribution should be 
> controlled only by the {{redistribution-delay}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-1925) Allow message redistribution even with OFF message-load-balancing semantics

2021-10-29 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17436145#comment-17436145
 ] 

Anton Roskvist commented on ARTEMIS-1925:
-

[~gtully]  Oh, okay. If the message gets stored and then forwarded in both 
cases, then my suggestion would add little to no benefit. Thanks for clearing 
that up, and for looking into this "OFF_WITH_REDISTRIBUTION" feature.

I have run a local build of your most recent change and it does indeed look a 
whole lot better. There is no initial distribution happening.
The issue I'm now having is that no redistribution is happening either, so now 
the load balancing behavior closely resembles that of "OFF" instead.

Br,

Anton

> Allow message redistribution even with OFF message-load-balancing semantics
> ---
>
> Key: ARTEMIS-1925
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1925
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.20.0
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> Currently if the {{message-load-balancing}} is {{STRICT}} or {{OFF}} then 
> message redistribution is disabled.  Message redistribution should be 
> controlled only by the {{redistribution-delay}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-1925) Allow message redistribution even with OFF message-load-balancing semantics

2021-10-28 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435558#comment-17435558
 ] 

Anton Roskvist commented on ARTEMIS-1925:
-

You got it, no problem... I've got an environment running with some 3k msgs/s 
in just "forwards" as overhead due to the initial distribution that I'd like to 
get rid off, so let me know any way I can help.
  
As for the side note, no not STRICT-type either...
To follow the example above (3 broker cluster, one common queue, consumers 
connected to two brokers). With ON_DEMAND , if a producer sends to either of 
those brokers, then messages will get equally distributed between the brokers 
with consumers. (This has the overhead that every other message sent to a 
broker that has a local consumer will still get distributed to the other node)

I propose a solution where that is true for targeting the broker without 
consumers, but if sending to one of the brokers with a local consumer, then 
100% of the messages end up on just that broker.

redistribution can then forward messages internally if needed, like if a 
consumer disconnects.

The benefit over plain OFF_WITH_REDISTRIBUTION would be fewer disk operations, 
since the broker without consumers would not _need_ to redistribute the messages 
sent to it (because of initial distribution) but would still be able to if the 
need should arise. Hope that made it more clear and not worse.

Br,

Anton

> Allow message redistribution even with OFF message-load-balancing semantics
> ---
>
> Key: ARTEMIS-1925
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1925
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.19.0
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> Currently if the {{message-load-balancing}} is {{STRICT}} or {{OFF}} then 
> message redistribution is disabled.  Message redistribution should be 
> controlled only by the {{redistribution-delay}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-1925) Allow message redistribution even with OFF message-load-balancing semantics

2021-10-28 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17435417#comment-17435417
 ] 

Anton Roskvist commented on ARTEMIS-1925:
-

[~gtully] This feature does not seem to be working properly, though I'm 
hesitant to open a ticket for it in case I have misunderstood anything...
From my testing though I can't really see a difference in message flow between 
setting "ON_DEMAND" or "OFF_WITH_REDISTRIBUTION". I still see the initial 
distribution of messages even though there are locally connected consumers on 
all brokers in the cluster. To get rid of those I can still set "OFF" but then 
I get the issue of stuck messages on some low volume queues with single 
consumers. Have I misunderstood the intent behind the feature or is there 
something else going on? I am running with mostly "openwire" clients if that 
might have anything to do with it... 

As a side note I think that there should be one more option for load balancing, 
and that is to do initial load balancing, but only for queues that do not 
have a local consumer but remote ones, meaning that in a cluster of 3 brokers, 
where incoming load is already evenly distributed and 2 brokers have local 
consumers, then 2/3 of messages get directly delivered and the third gets this 
"Initial load balancing"... with added redistribution capabilities for if a 
consumer goes down or moves. Does that make sense?

Br,

Anton

> Allow message redistribution even with OFF message-load-balancing semantics
> ---
>
> Key: ARTEMIS-1925
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1925
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.19.0
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> Currently if the {{message-load-balancing}} is {{STRICT}} or {{OFF}} then 
> message redistribution is disabled.  Message redistribution should be 
> controlled only by the {{redistribution-delay}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-6763) Thread hangs on setXid

2021-09-30 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422924#comment-17422924
 ] 

Anton Roskvist commented on AMQ-6763:
-

[~mbettiol] ...but your threads are locked according to your previous post, or 
did I misunderstand you?

Regardless, since you can reliably reproduce this in a test setting, try to 
increase these values there. I really think that will help.

Br,

Anton

> Thread hangs on setXid
> --
>
> Key: AMQ-6763
> URL: https://issues.apache.org/jira/browse/AMQ-6763
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: XA
>Affects Versions: 5.14.5
>Reporter: Jakub
>Assignee: Jean-Baptiste Onofré
>Priority: Minor
> Fix For: 5.17.0, 5.15.16, 5.16.4
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I've noticed issues with distributed transactions (XA) on karaf when using 
> ActiveMQ with JDBC storage (postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' connections appearing on 
> the database side (in a schema other than the one used by ActiveMQ). After 
> debugging it seems that the reason transactions are hanging is ActiveMQ and 
> the org.apache.activemq.transport.FutureResponse.getResult method, which 
> waits forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000765f532c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at 

[jira] [Commented] (AMQ-6763) Thread hangs on setXid

2021-09-30 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422635#comment-17422635
 ] 

Anton Roskvist commented on AMQ-6763:
-

[~mbettiol] I might actually know what the issue is there as I ran into similar 
things when setting up the test scenario I posted earlier... it is very similar 
to the issue described here but if I'm right it's caused by a deadlock in the 
wildfly/jboss workmanager.

Try locating the jca config block in your standalone.xml and look for the 
workmanager thread config. Increase short and long lived "max-threads" and you 
should be good to go.

[~jbonofre] I don't know if this helps or not, but like I described earlier in 
my scenario the broker is ActiveMQ Artemis. This issue seems to have been 
resolved by: [https://github.com/apache/activemq-artemis/pull/3498]

Hope any of this helps
Br,
Anton

> Thread hangs on setXid
> --
>
> Key: AMQ-6763
> URL: https://issues.apache.org/jira/browse/AMQ-6763
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: XA
>Affects Versions: 5.14.5
>Reporter: Jakub
>Assignee: Jean-Baptiste Onofré
>Priority: Minor
> Fix For: 5.17.0, 5.15.16, 5.16.4
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I've noticed issues with distributed transactions (XA) on karaf when using 
> ActiveMQ with JDBC storage (postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' connections appearing on 
> the database side (in a schema other than the one used by ActiveMQ). After 
> debugging it seems that the reason transactions are hanging is ActiveMQ and 
> the org.apache.activemq.transport.FutureResponse.getResult method, which 
> waits forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> 

[jira] [Created] (ARTEMIS-3501) Corrupted message in journal can inhibit broker from starting

2021-09-28 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3501:
---

 Summary: Corrupted message in journal can inhibit broker from 
starting
 Key: ARTEMIS-3501
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3501
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Anton Roskvist
 Attachments: artemis.log

In my case this was caused by a single bad message that seems to have come from 
a broker restart, i.e. the broker was working, then it was stopped, and upon 
starting back up it was unable to do so, instead throwing some troubling 
exceptions in the log (attached).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3439) CLI commands leave empty management addresses around

2021-08-24 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3439:

Description: 
*This might affect more addresses than I have yet noticed*, but starting with 
2.18.0 each CLI command issued to the broker leaves "empty" addresses behind.

Steps to reproduce:
 Create a broker
 Run any command (I'd recommend "bin/artemis address show" a few times for a 
nice visual)

Does not seem to cause any serious issues but gives address listings and the 
web console a cluttered look.
Addresses are cleared by a restart

  was:
*This might affect more addresses than I have yet noticed*, but starting with 
2.18.0 each CLI command issued to the broker leaves "empty" addresses behind.

Steps to reproduce:
Create a broker
Run any command (I'd recommend "bin/artemis address show" a few times for a 
nice visual)

Does not seem to cause any serious issues but gives address listings and the 
web console a cluttered look.


> CLI commands leave empty management addresses around
> 
>
> Key: ARTEMIS-3439
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3439
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.18.0
>Reporter: Anton Roskvist
>Priority: Minor
>
> *This might affect more addresses that I have yet to notice* but starting 
> with 2.18.0 each CLI command issued to the broker leaves "empty" addresses 
> behind.
> Steps to reproduce:
>  Create a broker
>  Run any command (I'd recommend "bin/artemis address show" a few times for a 
> nice visual)
> Does not seem to cause any serious issues but gives address listings and the 
> web console a cluttered look.
> Addresses are cleared by a restart



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3439) CLI commands leave empty management addresses around

2021-08-24 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3439:
---

 Summary: CLI commands leave empty management addresses around
 Key: ARTEMIS-3439
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3439
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.18.0
Reporter: Anton Roskvist


*This might affect more addresses that I have yet to notice* but starting with 
2.18.0 each CLI command issued to the broker leaves "empty" addresses behind.

Steps to reproduce:
Create a broker
Run any command (I'd recommend "bin/artemis address show" a few times for a 
nice visual)

Does not seem to cause any serious issues but gives address listings and the 
web console a cluttered look.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2021-08-10 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17396593#comment-17396593
 ] 

Anton Roskvist commented on AMQ-7470:
-

I did, yes, and also timeouts for the inactivityMonitor and every setting I 
could think of in the TCP transport that is even remotely related to 
connections, sockets and so on.

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a different schema than the one used by ActiveMQ). After debugging, 
> it seems that the reason the transactions hang is ActiveMQ and its 
> org.apache.activemq.transport.FutureResponse.getResult method, which waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000765f532c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> 

[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2021-08-10 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17396581#comment-17396581
 ] 

Anton Roskvist commented on AMQ-7470:
-

Hi,

I did try those options, as well as "jms.connectResponseTimeout" and pretty 
much every other timeout-related transport and connection setting I could find, 
and none of them helped in my case.

Br,

Anton
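
For reference, a minimal sketch of how such client-side timeouts are typically applied, assuming a Spring-configured XA connection factory; the host, port and values are placeholders, soTimeout is a TCP transport option, and jms.connectResponseTimeout is the URI form of the option mentioned above. Whether any of these actually avoids the hang is exactly what is in question in this issue.

{code:xml}
<!-- Illustrative only: values and bean wiring are assumptions, not the reporter's config -->
<bean id="xaConnectionFactory" class="org.apache.activemq.ActiveMQXAConnectionFactory">
  <!-- soTimeout bounds blocking socket reads;
       jms.connectResponseTimeout bounds waits on broker responses -->
  <property name="brokerURL"
            value="failover:(tcp://broker-host:61616?soTimeout=30000)?jms.connectResponseTimeout=30000"/>
</bean>
{code}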

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a different schema than the one used by ActiveMQ). After debugging, 
> it seems that the reason the transactions hang is ActiveMQ and its 
> org.apache.activemq.transport.FutureResponse.getResult method, which waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000765f532c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 

[jira] [Updated] (ARTEMIS-3313) DLA messages disappear when running retry or export/import

2021-06-16 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Summary: DLA messages disappear when running retry or export/import  (was: 
DLA messages disapear when running retry or export/import)

> DLA messages disappear when running retry or export/import
> --
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> DLA messages sometimes disappear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>   - configuring "auto-create-dead-letter-resources"
>   - sending a message to an ANYCAST queue
>   - sending that message to DLA
>   - run the "export" tool
>   - Get a fresh/empty journal 
>   - import the data export
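
For context, the first step of the reproducer above refers to the address-settings in broker.xml that create dead-letter resources automatically. A minimal sketch, assuming a catch-all match and a dead-letter address simply named DLA; the match pattern and names are placeholders, not taken from this issue:

{code:xml}
<address-settings>
   <address-setting match="#">
      <dead-letter-address>DLA</dead-letter-address>
      <!-- with this enabled the broker creates a dedicated dead-letter queue
           per original address under the DLA address -->
      <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources>
   </address-setting>
</address-settings>
{code}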



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLA messages disapear when running retry or export/import

2021-06-16 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Summary: DLA messages disapear when running retry or export/import  (was: 
DLA messages disapear when running export/import)

> DLA messages disapear when running retry or export/import
> -
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> DLA messages sometimes disappear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>   - configuring "auto-create-dead-letter-resources"
>   - sending a message to an ANYCAST queue
>   - sending that message to DLA
>   - run the "export" tool
>   - Get a fresh/empty journal 
>   - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLA messages disapear when running export/import

2021-06-16 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Description: 
DLA messages sometimes disappear when running retry or export/import

This only seems to happen on dead letter queues where dead-letter resources are 
created automatically.

When running debug logging the following shows up for every message during 
import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

 

This seems to be reproducible by simply:
  - configuring "auto-create-dead-letter-resources"
  - sending a message to an ANYCAST queue
  - sending that message to DLA
  - run the "export" tool
  - Get a fresh/empty journal 
  - import the data export

  was:
DLQ messages disapear when running export/import

This only seem to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

 

This seems to be reproducible by simply:
  - configuring "auto-create-dead-letter-resources"
  - sending a message to an ANYCAST queue
  - sending that message to DLA
  - run the "export" tool
  - Get a fresh/empty journal 
  - import the data export


> DLA messages disapear when running export/import
> 
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> DLA messages sometimes disappear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>   - configuring "auto-create-dead-letter-resources"
>   - sending a message to an ANYCAST queue
>   - sending that message to DLA
>   - run the "export" tool
>   - Get a fresh/empty journal 
>   - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLA messages disapear when running export/import

2021-06-15 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Description: 
DLQ messages disapear when running export/import

This only seem to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

 

This seems to be reproducible by simply:
  - configuring "auto-create-dead-letter-resources"
  - sending a message to an ANYCAST queue
  - sending that message to DLA
  - run the "export" tool
  - Get a fresh/empty journal 
  - import the data export

  was:
DLQ messages disapear when running retry or export/import

This only seem to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
retry or import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

 

This seems to be reproducible by simply:
 - configuring "auto-create-dead-letter-resources"
 - sending a message to an ANYCAST queue
 - sending that message to DLA
 - run the "export" tool
 - Get a fresh/empty journal 
 - import the data export


> DLA messages disapear when running export/import
> 
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> DLQ messages disapear when running export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>   - configuring "auto-create-dead-letter-resources"
>   - sending a message to an ANYCAST queue
>   - sending that message to DLA
>   - run the "export" tool
>   - Get a fresh/empty journal 
>   - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLA messages disapear when running export/import

2021-06-15 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Summary: DLA messages disapear when running export/import  (was: DLA 
messages disapear when running retry or export/import)

> DLA messages disapear when running export/import
> 
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> DLQ messages disapear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> retry or import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>  - configuring "auto-create-dead-letter-resources"
>  - sending a message to an ANYCAST queue
>  - sending that message to DLA
>  - run the "export" tool
>  - Get a fresh/empty journal 
>  - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (ARTEMIS-3336) Queue browsing no longer works in console

2021-06-07 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist closed ARTEMIS-3336.
---
Resolution: Invalid

> Queue browsing no longer works in console
> -
>
> Key: ARTEMIS-3336
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3336
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.18.0
>Reporter: Anton Roskvist
>Priority: Major
>
> Browsing messages through the console no longer works against current builds 
> of the broker.
> It returns messages like:
> java.lang.IndexOutOfBoundsException : Error reading in simpleString, 
> length=340131896 is greater than readableBytes=17
> It might be caused by this:
> https://issues.apache.org/jira/browse/ARTEMIS-3141
> Way to reproduce:
> Build a broker from the current main branch 
> send a message to a queue
> browse() the queue from the console



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3336) Queue browsing no longer works in console

2021-06-07 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17358844#comment-17358844
 ] 

Anton Roskvist commented on ARTEMIS-3336:
-

Hi Justin,

My apologies; after building against 3913c17c8de942ee7be8dbe74c5ac395f5a30a79 I 
no longer have any issues browsing messages (even using the same broker 
instance as earlier today). I had been seeing the issue for about a week, but 
my last build was from the end of last week.

Br,

Anton

> Queue browsing no longer works in console
> -
>
> Key: ARTEMIS-3336
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3336
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.18.0
>Reporter: Anton Roskvist
>Priority: Major
>
> Browsing messages through the console no longer works against current builds 
> of the broker.
> It returns messages like:
> java.lang.IndexOutOfBoundsException : Error reading in simpleString, 
> length=340131896 is greater than readableBytes=17
> It might be caused by this:
> https://issues.apache.org/jira/browse/ARTEMIS-3141
> Way to reproduce:
> Build a broker from the current main branch 
> send a message to a queue
> browse() the queue from the console



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3336) Queue browsing no longer works in console

2021-06-07 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3336:
---

 Summary: Queue browsing no longer works in console
 Key: ARTEMIS-3336
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3336
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.18.0
Reporter: Anton Roskvist


Browsing messages through the console no longer works against current builds of 
the broker.
It returns messages like:
java.lang.IndexOutOfBoundsException : Error reading in simpleString, 
length=340131896 is greater than readableBytes=17
It might be caused by this:
https://issues.apache.org/jira/browse/ARTEMIS-3141

Way to reproduce:
Build a broker from the current main branch 
send a message to a queue
browse() the queue from the console



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLA messages disapear when running retry or export/import

2021-05-31 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Summary: DLA messages disapear when running retry or export/import  (was: 
DLQ messages disaperaring when running retry or export/import)

> DLA messages disapear when running retry or export/import
> -
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>
> DLQ messages disapear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> retry or import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>  - configuring "auto-create-dead-letter-resources"
>  - sending a message to an ANYCAST queue
>  - sending that message to DLA
>  - run the "export" tool
>  - Get a fresh/empty journal 
>  - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3313) DLQ messages disaperaring when running retry or export/import

2021-05-31 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3313:

Description: 
DLQ messages disapear when running retry or export/import

This only seem to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
retry or import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

 

This seems to be reproducible by simply:
 - configuring "auto-create-dead-letter-resources"
 - sending a message to an ANYCAST queue
 - sending that message to DLA
 - run the "export" tool
 - Get a fresh/empty journal 
 - import the data export

  was:
DLQ messages disapear when running retry or export/import

This only seem to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
retry or import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

It usually does not happen and I have yet to be able to reproduce it, but 
reusing the message journal from backups give the same results every time.


> DLQ messages disaperaring when running retry or export/import
> -
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>
> DLQ messages disapear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> retry or import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
>  
> This seems to be reproducible by simply:
>  - configuring "auto-create-dead-letter-resources"
>  - sending a message to an ANYCAST queue
>  - sending that message to DLA
>  - run the "export" tool
>  - Get a fresh/empty journal 
>  - import the data export



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3313) DLQ messages disaperaring when running retry or export/import

2021-05-31 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354455#comment-17354455
 ] 

Anton Roskvist commented on ARTEMIS-3313:
-

Hi, I got some more time to look into this issue and I think I have found the 
underlying problem...

The original messages on their respective "auto-created" DLA are located on 
MULTICAST queues but retain their old _AMQ_ROUTING_TYPE saying they are ANYCAST 
messages (which is correct for their original queue).

This works well within their current journal, but when exporting and importing 
said messages into a new journal they cannot be routed properly, since the DLA 
address only has MULTICAST queues (I don't know why they get discarded instead 
of ending up on a new DLA, though).

I have been able to verify this by changing _AMQ_ROUTING_TYPE from 1 to 0 in 
the exported data-file. After that the messages can be imported and even 
"retried" successfully, meaning they end up on their original ANYCAST queue.

They still have _AMQ_ROUTING_TYPE set to the incorrect value after that, so I 
don't know whether it can cause any additional issues.
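
For readers following along, a rough illustration of the edit described above. The exact XML layout produced by the data export tool is not shown in this issue, so the fragment below is only a guess at its shape; the property name _AMQ_ROUTING_TYPE and the values 1 (ANYCAST) and 0 (MULTICAST) come from the comment itself.

{code:xml}
<!-- hypothetical fragment of an exported message, before the edit:
     routed as ANYCAST (1), which has no matching binding on the MULTICAST DLA -->
<property name="_AMQ_ROUTING_TYPE" value="1"/>

<!-- after the edit: routed as MULTICAST (0), so the import can deliver it
     to the auto-created dead-letter queue -->
<property name="_AMQ_ROUTING_TYPE" value="0"/>
{code}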

> DLQ messages disaperaring when running retry or export/import
> -
>
> Key: ARTEMIS-3313
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
>
> DLQ messages disapear when running retry or export/import
> This only seem to happen on dead letter queues where dead-letter resources 
> are created automatically
> When running debug logging the following shows up for every message during 
> retry or import:
> DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] 
> Message CoreMessage[MESSAGE] is not going anywhere as it didn't have a 
> binding on address:QUEUENAME
> All other queue information gets imported though, down to the dead letter 
> filter. Just not the messages.
> It usually does not happen and I have yet to be able to reproduce it, but 
> reusing the message journal from backups give the same results every time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3313) DLQ messages disaperaring when running retry or export/import

2021-05-21 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3313:
---

 Summary: DLQ messages disaperaring when running retry or 
export/import
 Key: ARTEMIS-3313
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3313
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.17.0, 2.16.0
Reporter: Anton Roskvist


DLQ messages disappear when running retry or export/import

This only seems to happen on dead letter queues where dead-letter resources are 
created automatically

When running debug logging the following shows up for every message during 
retry or import:

DEBUG [org.apache.activemq.artemis.core.postoffice.impl.PostOfficeImpl] Message 
CoreMessage[MESSAGE] is not going anywhere as it didn't have a binding on 
address:QUEUENAME

All other queue information gets imported though, down to the dead letter 
filter. Just not the messages.

It usually does not happen and I have yet to be able to reproduce it, but 
reusing the message journal from backups gives the same results every time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-05-18 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17346872#comment-17346872
 ] 

Anton Roskvist commented on ARTEMIS-3272:
-

Okay, thanks Clebert. 

I will keep running with datasync=true then. I did not notice any performance 
penalty from that anyway, which is why the configuration was originally set. I 
guess something else has changed in my setup, or in some release of the broker, 
since originally switching it to false.

> Remaining AIO issue from ARTEMIS-3084
> -
>
> Key: ARTEMIS-3272
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
> Attachments: broker.xml, formatted_stack.txt, stack, stack.txt
>
>
> Hi,
> Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
> broker shutdown:
> {quote}
> WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
> shutting down the server. 
> file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
>  message=Timeout on close: java.io.IOException: Timeout on close
> {quote}
> They were not that frequent, maybe once per week or so. I read that this was 
> a known issue that was supposed to be resolved in artemis-2.17.0.
> After upgrading I see this WARN message instead, followed by a thread dump:
> {quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
> still has pending IO before closing it
> {quote}
> This does not appear to cause any issues as far as I can tell but it is 
> printed several times a day so the log is completely cluttered (from the 
> thread dump). 
> I am attaching one of the thread dumps from my logs here: [^stack]
> In this setup I am running a static broker cluster of 5 brokers, each 
> processing messages at about 40-50 Mbps
> Clients are primarily using OpenWire
> ext4 is used as the filesystem
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-05-17 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17346034#comment-17346034
 ] 

Anton Roskvist commented on ARTEMIS-3272:
-

Hi [~clebertsuconic] ,
Any idea what might be wrong here? If possible I'd like to be able to run this 
cluster with journal-datasync = false, or at least get an understanding of what 
is actually going on and if this poses a risk for data loss or journal 
corruption. Just let me know if there's anything I can do to help figure this 
out...

> Remaining AIO issue from ARTEMIS-3084
> -
>
> Key: ARTEMIS-3272
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
> Attachments: broker.xml, formatted_stack.txt, stack, stack.txt
>
>
> Hi,
> Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
> broker shutdown:
> {quote}
> WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
> shutting down the server. 
> file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
>  message=Timeout on close: java.io.IOException: Timeout on close
> {quote}
> They were not that frequent, maybe once per week or so. I read that this was 
> a known issue that was supposed to be resolved in artemis-2.17.0.
> After upgrading I see this WARN message instead, followed by a thread dump:
> {quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
> still has pending IO before closing it
> {quote}
> This does not appear to cause any issues as far as I can tell but it is 
> printed several times a day so the log is completely cluttered (from the 
> thread dump). 
> I am attaching one of the thread dumps from my logs here: [^stack]
> In this setup I am running a static broker cluster of 5 brokers, each 
> processing messages at about 40-50 Mbps
> Clients are primarily using OpenWire
> ext4 is used as the filesystem
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-05-03 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17338199#comment-17338199
 ] 

Anton Roskvist commented on ARTEMIS-3272:
-

I have been running over the weekend with journal-datasync set to true and I 
have yet to see this error message printed, so I would say that setting 
"solved" it.

Any idea what's going on here?
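
For reference, a minimal sketch of where that setting lives, assuming a stock broker.xml; everything except the journal-datasync element is omitted:

{code:xml}
<core xmlns="urn:activemq:core">
   <!-- true is the default and syncs journal writes to storage;
        false was in use when the WARN messages appeared -->
   <journal-datasync>true</journal-datasync>
</core>
{code}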

> Remaining AIO issue from ARTEMIS-3084
> -
>
> Key: ARTEMIS-3272
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
> Attachments: broker.xml, formatted_stack.txt, stack, stack.txt
>
>
> Hi,
> Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
> broker shutdown:
> {quote}
> WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
> shutting down the server. 
> file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
>  message=Timeout on close: java.io.IOException: Timeout on close
> {quote}
> They were not that frequent, maybe once per week or so. I read that this was 
> a known issue that was supposed to be resolved in artemis-2.17.0.
> After upgrading I see this WARN message instead, followed by a thread dump:
> {quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
> still has pending IO before closing it
> {quote}
> This does not appear to cause any issues as far as I can tell but it is 
> printed several times a day so the log is completely cluttered (from the 
> thread dump). 
> I am attaching one of the thread dumps from my logs here: [^stack]
> In this setup I am running a static broker cluster of 5 brokers, each 
> processing messages at about 40-50 Mbps
> Clients are primarily using OpenWire
> ext4 is used as the filesystem
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-04-30 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17337580#comment-17337580
 ] 

Anton Roskvist commented on ARTEMIS-3272:
-

I have attached the files you requested. I don't really know what happened to 
the formatting in the original file, but I made these with Windows line 
endings; hope they look better.

Yes, I can sort of reproduce it... it happens several times a day in a testing 
environment. So not on demand, but reliably.

> Remaining AIO issue from ARTEMIS-3084
> -
>
> Key: ARTEMIS-3272
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
> Attachments: broker.xml, formatted_stack.txt, stack, stack.txt
>
>
> Hi,
> Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
> broker shutdown:
> {quote}
> WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
> shutting down the server. 
> file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
>  message=Timeout on close: java.io.IOException: Timeout on close
> {quote}
> They were not that frequent, maybe once per week or so. I read that this was 
> a known issue that was supposed to be resolved in artemis-2.17.0.
> After upgrading I see this WARN message instead, followed by a thread dump:
> {quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
> still has pending IO before closing it
> {quote}
> This does not appear to cause any issues as far as I can tell but it is 
> printed several times a day so the log is completely cluttered (from the 
> thread dump). 
> I am attaching one of the thread dumps from my logs here: [^stack]
> In this setup I am running a static broker cluster of 5 brokers, each 
> processing messages at about 40-50 Mbps
> Clients are primarily using OpenWire
> ext4 is used as the filesystem
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-04-30 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3272:

Attachment: stack.txt
formatted_stack.txt
broker.xml

> Remaining AIO issue from ARTEMIS-3084
> -
>
> Key: ARTEMIS-3272
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0, 2.17.0
>Reporter: Anton Roskvist
>Priority: Major
> Attachments: broker.xml, formatted_stack.txt, stack, stack.txt
>
>
> Hi,
> Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
> broker shutdown:
> {quote}
> WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
> shutting down the server. 
> file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
>  message=Timeout on close: java.io.IOException: Timeout on close
> {quote}
> They were not that frequent, maybe once per week or so. I read that this was 
> a known issue that was supposed to be resolved in artemis-2.17.0.
> After upgrading I see this WARN message instead, followed by a thread dump:
> {quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
> still has pending IO before closing it
> {quote}
> This does not appear to cause any issues as far as I can tell but it is 
> printed several times a day so the log is completely cluttered (from the 
> thread dump). 
> I am attaching one of the thread dumps from my logs here: [^stack]
> In this setup I am running a static broker cluster of 5 brokers, each 
> processing messages at about 40-50 Mbps
> Clients are primarily using OpenWire
> ext4 is used as the filesystem
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3272) Remaining AIO issue from ARTEMIS-3084

2021-04-30 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3272:
---

 Summary: Remaining AIO issue from ARTEMIS-3084
 Key: ARTEMIS-3272
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3272
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.17.0, 2.16.0
Reporter: Anton Roskvist
 Attachments: stack

Hi,

Starting with artemis-2.16.0 I was getting these WARN messages, followed by a 
broker shutdown:
{quote}
WARN [org.apache.activemq.artemis.core.server] AMQ222010: Critical IO Error, 
shutting down the server. 
file=AIOSequentialFile:/path/to/artemis-2.16.0/data/journal/activemq-data-2665706.amq,
 message=Timeout on close: java.io.IOException: Timeout on close
{quote}
They were not that frequent, maybe once per week or so. I read that this was a 
known issue that was supposed to be resolved in artemis-2.17.0.

After upgrading I see this WARN message instead, followed by a thread dump:
{quote}WARN [org.apache.activemq.artemis.journal] File activemq-data-49.amq 
still has pending IO before closing it
{quote}
This does not appear to cause any issues as far as I can tell but it is printed 
several times a day so the log is completely cluttered (from the thread dump). 

I am attaching one of the thread dumps from my logs here: [^stack]

In this setup I am running a static broker cluster of 5 brokers, each 
processing messages at about 40-50 Mbps
Clients are primarily using OpenWire
ext4 is used as the filesystem

Br,

Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3198) Add ability to increase concurrency on core bridges

2021-03-22 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306350#comment-17306350
 ] 

Anton Roskvist commented on ARTEMIS-3198:
-

Not from what I could work out anyway, or how would you set that up? Setting 
multiple "connector-ref" towards the same broker?

> Add ability to increase concurrency on core bridges
> ---
>
> Key: ARTEMIS-3198
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3198
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Anton Roskvist
>Priority: Minor
>
> Add ability to increase concurrency on core bridges. This is useful for 
> deploying bridges over high latency networks when the message volume is high. 
> More concurrency allows for increased throughput.
>  
> I have run some tests locally, sending 20k messages across a WAN link. Using 
> the default setting I was able to move all messages from point A to point B 
> in 2m 5s 31ms.
> Adding another bridge, with identical parameters besides the name, the same 
> 20k messages were moved in 1m 6s 14ms.
> Adding a third: 33s 19ms.
> So this is a pretty much linear increase in throughput based on the number of 
> bridges configured for the same destination. This works, but if multiple 
> queues and destinations are involved the config file gets quite messy. 
> Therefore I propose the addition of a concurrency property for these bridges, 
> which basically spawns N bridges behind the scenes instead, keeping the 
> config file nice and tidy and the messages flying.
> Test summary:
> 20k messages (identical across runs), point A to point B over WAN:
> 1 bridge : 2m 5s 31ms
> 2 bridges: 1m 6s 14ms
> 3 bridges:   33s 19ms
> Br,
> Anton
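
A sketch of how the proposed property might look in broker.xml. The bridge name, queue, forwarding address and connector-ref are placeholders, and the concurrency element is the new setting suggested in this issue, not something the broker ships with at the time of writing:

{code:xml}
<bridges>
   <bridge name="wan-bridge">
      <queue-name>ORDERS</queue-name>
      <forwarding-address>ORDERS</forwarding-address>
      <!-- proposed: spawn three identical bridges behind the scenes -->
      <concurrency>3</concurrency>
      <static-connectors>
         <connector-ref>remote-broker</connector-ref>
      </static-connectors>
   </bridge>
</bridges>
{code}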



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-3198) Add ability to increase concurrency on core bridges

2021-03-22 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated ARTEMIS-3198:

Description: 
Add ability to increase concurrency on core bridges. This is useful for 
deploying bridges over high latency networks when the message volume is high. 
More concurrency allows for increased throughput.

 

I have run some tests locally, sending 20k messages across a WAN link. Using 
the default setting I was able to move all messages from point A to point B in 
2m 5s 31ms.
Adding another bridge, with identical parameters besides the name, the same 20k 
messages were moved in 1m 6s 14ms.
Adding a third: 33s 19ms.

So this is a pretty much linear increase in throughput based on the number of 
bridges configured for the same destination. This works, but if multiple queues 
and destinations are involved the config file gets quite messy. Therefore I 
propose the addition of a concurrency property for these bridges, which 
basically spawns N bridges behind the scenes instead, keeping the config file 
nice and tidy and the messages flying.

Test summary:
20k messages (identical across runs), point A to point B over WAN:


1 bridge : 2m 5s 31ms
2 bridges: 1m 6s 14ms
3 bridges:   33s 19ms


Br,

Anton

  was:Add ability to increase concurrency on core bridges. This is useful for 
deploying bridges over high latency networks when the message volume is high. 
More concurrency allows for increased throughput


> Add ability to increase concurrency on core bridges
> ---
>
> Key: ARTEMIS-3198
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3198
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Anton Roskvist
>Priority: Minor
>
> Add ability to increase concurrency on core bridges. This is useful for 
> deploying bridges over high latency networks when the message volume is high. 
> More concurrency allows for increased throughput.
>  
> I have run some tests locally, sending 20k messages across a WAN link. Using 
> the default setting I was able to move all messages from point A to point B 
> in: 2m5s31ms
> Adding another bridge, with identical parameters besides the name, the same 
> 20k messages where moved in: 1min6s14ms
> Adding a third means: 33s.19ms
> So this is pretty much linear increase in throughput based on the number of 
> bridges configured for the same destination. This works, but if multiple 
> queues and destinations are involved the config file gets quite messy. 
> Therefor I propose the addition of a concurrency property for these bridges, 
> which basically spawns N amount of bridges behind the scenes instead, keeping 
> the config file nice and tidy and the messages flying.
> Test summary:
> 20k messages (identical across runs), point A to point B over WAN:
> 1 bridge : 2m 5s 31ms
> 2 bridges: 1m 6s 14ms
> 3 bridges:   33s 19ms
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-3198) Add ability to increase concurrency on core bridges

2021-03-22 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-3198:
---

 Summary: Add ability to increase concurrency on core bridges
 Key: ARTEMIS-3198
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3198
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Reporter: Anton Roskvist


Add ability to increase concurrency on core bridges. This is useful for 
deploying bridges over high latency networks when the message volume is high. 
More concurrency allows for increased throughput



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2934) ARTEMIS-2226 causes excessive notificaions to be sent for Spring XA clients

2021-02-12 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17283590#comment-17283590
 ] 

Anton Roskvist commented on ARTEMIS-2934:
-

Okay sure, that makes sense. But then maybe some functionality could be added 
to add/remove message filters on sent or received notifications as you can do 
on regular queues?

You can already set the notification address (for example 
activemq.notifications), so why not also be able to set filters for it, such as 
somefilter1,somefilter2, on notifications sent and received? A rough sketch of 
the idea follows.

That could surely be useful for more scenarios outside of what is discussed 
here as well?
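
The XML in the original example did not survive the mail archive, so this is only a hedged reconstruction of the idea. The management-notification-address element exists in broker.xml today; the two filter elements are hypothetical names illustrating the proposal, not real configuration:

{code:xml}
<!-- existing: where management notifications are published -->
<management-notification-address>activemq.notifications</management-notification-address>

<!-- hypothetical, proposal only: filter which notifications are sent/received -->
<notification-sent-filter>somefilter1,somefilter2</notification-sent-filter>
<notification-received-filter>somefilter1,somefilter2</notification-received-filter>
{code}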

Br,

Anton

> ARTEMIS-2226 causes excessive notificaions to be sent for Spring XA clients
> ---
>
> Key: ARTEMIS-2934
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2934
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Anton Roskvist
>Priority: Minor
>
> Hi,
> The fix in https://issues.apache.org/jira/browse/ARTEMIS-2226 causes 
> excessive notifications to be sent for clients running XA transaction through 
> the Spring framework.
> The notifications sent are SESSION_CREATED and SESSION_CLOSED.
> I strongly suspect this is because Spring DMLC cannot cache consumers 
> properly when running XA, causing it to create and remove a new session for 
> each message processed.
> Now I am not arguing that is not bad practice, because it is, but lots of 
> applications run on top of this logic. I also suspect this might affect more 
> but not be as pronounced.
>  
> I have been able to prove the aforementioned patch is what causes the issue 
> by removing:
> sendSessionNotification(CoreNotificationType.SESSION_CREATED);
> and
> sendSessionNotification(CoreNotificationType.SESSION_CLOSED);
> from ServerSessionImpl.java (they where added in the patch)
> Now I do not fully understand the intent of the original patch but I think it 
> should be made conditional, that is, send those notifications only for MQTT 
> session or something similar.
>  
> In the environment I am testing this on the difference is huge as I have a 
> lot of independent applications all running Spring+XA. About 40% of all 
> messages getting sent and received are notifications.
>  
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2934) ARTEMIS-2226 causes excessive notificaions to be sent for Spring XA clients

2021-02-11 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17283316#comment-17283316
 ] 

Anton Roskvist commented on ARTEMIS-2934:
-

Sure,

I have several brokers running in active-active clustering mode. I am not 
consuming any of the notifications but through monitoring I can see that the 
send/receive rate for the activemq.notifications topic is _very_ high for all 
brokers.

Turning this off (by just commenting out the sendSessionNotification calls 
mentioned in the post above) I can very clearly see that this comes at a 
considerable overhead, amounting to a difference of roughly 10-40% CPU load 
depending on the number of connected brokers in the cluster.

Again, this is for clients running on top of Spring DMLC with XA transactions, 
i.e. creating and destroying resources for each and every message processed. A 
fringe use case, but it is very clear that the PR mentioned in this issue 
introduced this behavior.

Br,

Anton

> ARTEMIS-2226 causes excessive notificaions to be sent for Spring XA clients
> ---
>
> Key: ARTEMIS-2934
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2934
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Anton Roskvist
>Priority: Minor
>
> Hi,
> The fix in https://issues.apache.org/jira/browse/ARTEMIS-2226 causes 
> excessive notifications to be sent for clients running XA transaction through 
> the Spring framework.
> The notifications sent are SESSION_CREATED and SESSION_CLOSED.
> I strongly suspect this is because Spring DMLC cannot cache consumers 
> properly when running XA, causing it to create and remove a new session for 
> each message processed.
> Now I am not arguing that is not bad practice, because it is, but lots of 
> applications run on top of this logic. I also suspect this might affect more 
> but not be as pronounced.
>  
> I have been able to prove the aforementioned patch is what causes the issue 
> by removing:
> sendSessionNotification(CoreNotificationType.SESSION_CREATED);
> and
> sendSessionNotification(CoreNotificationType.SESSION_CLOSED);
> from ServerSessionImpl.java (they where added in the patch)
> Now I do not fully understand the intent of the original patch but I think it 
> should be made conditional, that is, send those notifications only for MQTT 
> session or something similar.
>  
> In the environment I am testing this on the difference is huge as I have a 
> lot of independent applications all running Spring+XA. About 40% of all 
> messages getting sent and received are notifications.
>  
> Br,
> Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-7453) Duplex networkConnector failure with mKahaDB after inactive destinations deleted

2021-01-14 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17264723#comment-17264723
 ] 

Anton Roskvist commented on AMQ-7453:
-

Well, for my part I have different settings for some different use cases, but 
it would be something like this:



Broker-1
(XML configuration not preserved in the archive)

Broker-2
(XML configuration not preserved in the archive)

This is the simplex setup that shows the same issue I mentioned. I see the issue 
regardless of which networkConnector configuration options I use, though; the 
common denominator seems to be mKahaDB.

 

Br,

Anton

> Duplex networkConnector failure with mKahaDB after inactive destinations 
> deleted
> 
>
> Key: AMQ-7453
> URL: https://issues.apache.org/jira/browse/AMQ-7453
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, KahaDB
>Affects Versions: 5.15.12
> Environment: Windows 10 Home 64-bit
> Java 1.8.0_45-b14
> ActiveMQ 5.15.12
>Reporter: Anthony Kocherov
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Attachments: bad-activemq-restart.debug.log, bad-activemq.debug.log, 
> bad-remote-activemq.xml, bad-remote-wrapper.log, good-remote-activemq.xml, 
> good-remote-wrapper.log, local-activemq.xml
>
>
> Cannot re-establish duplex network connection on remote broker with following 
> error:
> {noformat}
> INFO | jvm 1 | 2020/03/21 14:41:32 | ERROR | Failed to create responder end 
> of duplex network bridge Q-bridge@ID:LHC-59471-1584794492297-0:1
> INFO | jvm 1 | 2020/03/21 14:41:32 | java.lang.IllegalStateException: 
> PageFile is not loaded
> INFO | jvm 1 | 2020/03/21 14:41:32 | at 
> org.apache.activemq.store.kahadb.disk.page.PageFile.assertLoaded(PageFile.java:906)[activemq-kahadb-store-5.15.12.jar:5.15.12]
> ...{noformat}
> Remote broker is configured to [delete inactive 
> destinations|https://activemq.apache.org/delete-inactive-destinations.html] 
> and uses mKahaDB persistence adapters for different destinations (as 
> described here: [Automatic Per Destination Persistence 
> Adapter|https://activemq.apache.org/kahadb]).
> Same setup, but single kahaDB persistence adapter on remote broker is not 
> causing the issue.
> See attached files for detailed configuration and logs (configuration allows 
> to run both brokers on same PC):
>  * local broker config: [^local-activemq.xml]
>  * remote broker *bad* config and log (delete inactive dest. + mKahaDB + 
> perDestination="true"): [^bad-remote-activemq.xml] , [^bad-remote-wrapper.log]
>  * remote broker *good* config and log (delete inactive dest. + kahaDB): 
> [^good-remote-activemq.xml] , [^good-remote-wrapper.log]
>  
> *Use case*
> Simulate network connection loss and then re-establish duplex communication 
> after remote broker destinations were purged due to inactivity:
> 1. clean installation of 
> [apache-activemq-5.15.12-bin.zip|https://activemq.apache.org/components/classic/download/]
>  2. start remote broker
>  3. start local broker
>  4. destination queue created automatically on remote (active consumer from 
> local broker is also shown correctly in web-console)
>  5. stop local broker
>  6. wait for a while until destination is deleted on remote due to inactivity
>  7. start local broker again
> Steps 6.-7. can be repeated multiple times with the same result. However, if 
> required queue is created through web-console on remote broker, duplex bridge 
> establishes successfully, but as soon as destination is purged, problem 
> repeats: [^bad-activemq.debug.log]
> 8. Problem disappears if remote broker is restarted, but comes back whenever 
> inactive destinations are purged once again: [^bad-activemq-restart.debug.log]
>  
> *Some observations*
> The main difference I see in logs (good vs bad situation), that in bad 
> situation following messages appear after inactive destinations deleted:
> {noformat}
> INFO | jvm 1 | 2020/03/22 11:51:20 | INFO | Stopping async queue tasks
> INFO | jvm 1 | 2020/03/22 11:51:20 | INFO | Stopping async topic tasks
> INFO | jvm 1 | 2020/03/22 11:51:20 | INFO | Stopped KahaDB{noformat}
> Since this moment local broker cannot establish duplex connection any more, 
> and it doesn't matter which destinations have been purged – with the same 
> name (App.Data) or any other. Also it doesn't matter whether local broker 
> already had any successful communication with the remote. As soon as these 
> messages appear, broker cannot "create responder end of duplex network 
> bridge" because of "PageFile is not loaded".
> When I try to do the same with single kahadb instance, these messages do not 
> appear and no such problem.
>  
> *Links*
> Original discussion here: 
> [http://activemq.2283324.n4.nabble.com/Duplex-networkConnector-error-with-mKahaDB-after-inactive-destinations-deleted-td4755827.html]



--
This message was sent 

[jira] [Created] (ARTEMIS-2934) ARTEMIS-2226 causes excessive notifications to be sent for Spring XA clients

2020-10-07 Thread Anton Roskvist (Jira)
Anton Roskvist created ARTEMIS-2934:
---

 Summary: ARTEMIS-2226 causes excessive notifications to be sent for 
Spring XA clients
 Key: ARTEMIS-2934
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2934
 Project: ActiveMQ Artemis
  Issue Type: New Feature
Reporter: Anton Roskvist


Hi,

The fix in https://issues.apache.org/jira/browse/ARTEMIS-2226 causes excessive 
notifications to be sent for clients running XA transactions through the Spring 
framework.

The notifications sent are SESSION_CREATED and SESSION_CLOSED.

I strongly suspect this is because Spring DMLC cannot cache consumers properly 
when running XA, causing it to create and remove a new session for each message 
processed.

Now I am not arguing that this isn't bad practice, because it is, but lots of 
applications run on top of this logic. I also suspect this might affect more 
use cases, just not as pronounced.

 

I have been able to prove the aforementioned patch is what causes the issue by 
removing:
sendSessionNotification(CoreNotificationType.SESSION_CREATED);
and
sendSessionNotification(CoreNotificationType.SESSION_CLOSED);

from ServerSessionImpl.java (they were added in the patch).

Now I do not fully understand the intent of the original patch, but I think it 
should be made conditional, that is, send those notifications only for MQTT 
sessions or something similar.
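A rough sketch of what such a condition could look like; this is purely illustrative and not the actual ServerSessionImpl code, and the remotingConnection.getProtocolName() lookup is an assumption made for the example:

{code:java}
// Hypothetical illustration only - not the real ServerSessionImpl code.
private void sendSessionCreatedNotificationIfWanted() throws Exception {
   // Only emit the notification for protocols that actually rely on it
   // (MQTT, per the motivation of ARTEMIS-2226), instead of for every
   // short-lived core session created by Spring DMLC + XA clients.
   if ("MQTT".equals(remotingConnection.getProtocolName())) {
      sendSessionNotification(CoreNotificationType.SESSION_CREATED);
   }
}
{code}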

 

In the environment I am testing this on, the difference is huge, as I have a lot 
of independent applications all running Spring+XA. About 40% of all messages 
getting sent and received are notifications.

 

Br,

Anton



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-10-05 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208009#comment-17208009
 ] 

Anton Roskvist edited comment on AMQ-7470 at 10/5/20, 11:58 AM:


Hi,

 

I think I've found a way to reproduce this issue, at least it looks _really_ 
similar. It's a bit clunky, where I set up a bunch of actual running instances, 
run messages through them and randomly stop and start the brokers.
 I have automated the setup process and a way to break everything. I will 
attach an archive with everything needed to trigger the issue here, so you 
should be able to try it out with just a few commands on a Linux machine with 
Java being the only prereq. Hope it helps!

ps. Because of Arjuna this has to run in java 8 (on Wildfly at least). Check 
out the README for instructions of how to run everything

 

Br,

Anton[^setXid_bug.tar.gz]


was (Author: antonroskvist):
Hi,

 

I think I've found a way to reproduce this issue, at least it looks _really_ 
similar. It's a bit clunky, where I set up a bunch of actual running instances, 
run messages through them and randomly stop and start the brokers.
 I have automated the setup process and a way to break everything. I will 
attach an archive with everything needed to trigger the issue here, so you 
should be able to try it out with just a few commands on a Linux machine with 
Java being the only prereq. Hope it helps!

ps. Because of Arjuna this has to run in java 8 (or Wildfly at least). Check 
out the README for instructions of how to run everything

 

Br,

Anton[^setXid_bug.tar.gz]

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> 

[jira] [Comment Edited] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-10-05 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208009#comment-17208009
 ] 

Anton Roskvist edited comment on AMQ-7470 at 10/5/20, 11:26 AM:


Hi,

 

I think I've found a way to reproduce this issue, at least it looks _really_ 
similar. It's a bit clunky, where I set up a bunch of actual running instances, 
run messages through them and randomly stop and start the brokers.
 I have automated the setup process and a way to break everything. I will 
attach an archive with everything needed to trigger the issue here, so you 
should be able to try it out with just a few commands on a Linux machine with 
Java being the only prereq. Hope it helps!

ps. Because of Arjuna this has to run in java 8 (or Wildfly at least). Check 
out the README for instructions of how to run everything

 

Br,

Anton[^setXid_bug.tar.gz]


was (Author: antonroskvist):
Hi,

 

I think I've found a way to reproduce this issue, at least it looks _really_ 
similar. It's a bit clunky, where I set up a bunch of actual running instances, 
run messages through them and randomly stop and start the brokers.
I have automated the setup process and a way to break everything. I will attach 
an archive with everything needed to trigger the issue here, so you should be 
able to try it out with just a few commands on a Linux machine with Java being 
the only prereq. Hope it helps!

ps. Because of Arjuna this has to run in java 8 (or Wildfly at least). Check 
out the README for instructions of how to run everything

 

Br,

Anton[^setXid_bug.tar.gz]

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> 

[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-10-05 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17208009#comment-17208009
 ] 

Anton Roskvist commented on AMQ-7470:
-

Hi,

 

I think I've found a way to reproduce this issue, at least it looks _really_ 
similar. It's a bit clunky, where I set up a bunch of actual running instances, 
run messages through them and randomly stop and start the brokers.
I have automated the setup process and a way to break everything. I will attach 
an archive with everything needed to trigger the issue here, so you should be 
able to try it out with just a few commands on a Linux machine with Java being 
the only prereq. Hope it helps!

ps. Because of Arjuna this has to run in java 8 (or Wildfly at least). Check 
out the README for instructions of how to run everything

 

Br,

Anton[^setXid_bug.tar.gz]

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> 

[jira] [Updated] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-10-05 Thread Anton Roskvist (Jira)


 [ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Roskvist updated AMQ-7470:

Attachment: setXid_bug.tar.gz

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
> Attachments: setXid_bug.tar.gz
>
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000765f532c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> 

[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-09-11 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17194257#comment-17194257
 ] 

Anton Roskvist commented on AMQ-7470:
-

Oh yeah, I just remembered one more thing that might be relevant to this... If 
I remove the failover brokers from the connection URL I am not seeing the issue, 
meaning that if the JBoss clients go from:
failover:(broker1,broker2)?option1

to just:

failover:(broker1)?option1

then broker restarts do not seem to trigger the issue, at least not as often, as 
far as I have noticed.

Of course I cannot run with such a configuration, but for testing purposes it 
might reveal something relevant to this issue.
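For illustration, the two client configurations side by side as plain OpenWire connection factories; the broker hosts/ports and the randomize=false option are placeholders standing in for "option1", not the actual settings used here:

{code:java}
import org.apache.activemq.ActiveMQXAConnectionFactory;

public class FailoverUrlComparison {
    public static void main(String[] args) {
        // Failover across two brokers: broker restarts can leave the client
        // parked in FutureResponse.getResult() as described in this issue.
        ActiveMQXAConnectionFactory withFailover = new ActiveMQXAConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)?randomize=false");

        // Single-broker "failover" used only for testing: the hang was not observed.
        ActiveMQXAConnectionFactory singleBroker = new ActiveMQXAConnectionFactory(
                "failover:(tcp://broker1:61616)?randomize=false");

        System.out.println(withFailover.getBrokerURL());
        System.out.println(singleBroker.getBrokerURL());
    }
}
{code}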

 

Br,

Anton

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at 
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> . custom service
> {code}
> {code}
> "DefaultMessageListenerContainer-3" #13199 prio=5 os_prio=0 
> tid=0x7fb8687e6800 nid=0x3954 waiting on condition [0x7fb7b0b98000]
>java.lang.Thread.State: WAITING (parking)
> at 

[jira] [Commented] (AMQ-7470) ActiveMQ producer thread hangs on setXid

2020-09-08 Thread Anton Roskvist (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17192215#comment-17192215
 ] 

Anton Roskvist commented on AMQ-7470:
-

[~AndreasBaumgart] Well, yes and no, but mostly no :(. I can reproduce it very 
easily in a larger TEST environment, but can't for the life of me get it to 
trigger in any local environment. I even tried replicating large parts of the 
environment in local containers and running there, but no luck.

Anyway, everything runs on Linux hosts. There are clients running different 
software stacks as well that have no issue at all. The ones that do have issues 
are JBoss applications using XA, EJB, MDBs, Oracle DB and external Artemis 
brokers. The lock happens in the JBoss application (client) and it does not 
seem to be happening more or less often depending on load.

It is a failover or possibly failback between brokers that causes it, and it 
happens regardless of the broker shutdown being forceful (kill -9) or graceful. 
It does not happen every time, but two or three broker restarts are guaranteed 
to trigger it.

 

Br,

Anton

> ActiveMQ producer thread hangs on setXid
> 
>
> Key: AMQ-7470
> URL: https://issues.apache.org/jira/browse/AMQ-7470
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: AMQP, Broker, JMS client
>Affects Versions: 5.15.6
>Reporter: Rajesh Pote
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> I've noticed issues with distributed transactions (XA) on Karaf when using 
> ActiveMQ with JDBC storage (Postgres). After some time (it isn't 
> deterministic) I've observed 'idle in transaction' sessions on the database 
> side (in a schema other than the one used by ActiveMQ). After debugging, it 
> seems that the reason the transactions hang is ActiveMQ: the 
> org.apache.activemq.transport.FutureResponse.getResult method waits 
> forever for a response.
> {code}
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000768585aa8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
> at 
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
> at 
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
> at 
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
> at 
> org.apache.activemq.TransactionContext.setXid(TransactionContext.java:751)
> at 
> org.apache.activemq.TransactionContext.invokeBeforeEnd(TransactionContext.java:424)
> at 
> org.apache.activemq.TransactionContext.end(TransactionContext.java:408)
> at 
> org.apache.geronimo.transaction.manager.WrapperNamedXAResource.end(WrapperNamedXAResource.java:61)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:588)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.endResources(TransactionImpl.java:567)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.beforePrepare(TransactionImpl.java:414)
> at 
> org.apache.geronimo.transaction.manager.TransactionImpl.commit(TransactionImpl.java:262)
> at 
> org.apache.geronimo.transaction.manager.TransactionManagerImpl.commit(TransactionManagerImpl.java:252)
> at 
> org.springframework.transaction.jta.JtaTransactionManager.doCommit(JtaTransactionManager.java:1020)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761)
> at 
> org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730)
> at 
> org.apache.aries.transaction.internal.AriesPlatformTransactionManager.commit(AriesPlatformTransactionManager.java:75)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:484)
> at 
> org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:291)
> at 
> org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at 
> 
