[jira] [Commented] (ARTEMIS-1307) Improve performance of OrderedExecutor

2024-04-11 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836346#comment-17836346
 ] 

Francesco Nigro commented on ARTEMIS-1307:
--

I was young(er) and more naive and just wanted to make "everything" faster :) 

> Improve performance of OrderedExecutor
> --
>
> Key: ARTEMIS-1307
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1307
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.2.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> The current ordered executor uses ConcurrentLinkedQueue, which:
> - has an expensive queue::size operation (i.e. O(n))
> - has its node instances scattered across the heap
> There are faster and cheaper alternatives in specialized libraries (e.g. 
> JCTools) that could be used instead, both to be friendlier to the GC and 
> to provide more throughput when CPU bound.
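A minimal sketch of the idea, assuming JCTools' {{MpscUnboundedArrayQueue}} as the task queue (the class below is illustrative, not the actual Artemis {{OrderedExecutor}}): tasks are drained on a delegate executor one drain at a time, which preserves ordering, while the chunked MPSC queue avoids ConcurrentLinkedQueue's per-element node allocation and derives its size from indices.

{code:java}
import java.util.Queue;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

import org.jctools.queues.MpscUnboundedArrayQueue;

// Illustrative ordered executor on top of a JCTools MPSC queue (not Artemis code).
public final class JcToolsOrderedExecutor implements Executor {

   private static final int IDLE = 0, RUNNING = 1;

   // Chunked array queue: friendlier to the GC than per-node linked queues,
   // and size() is derived from producer/consumer indices instead of traversal.
   private final Queue<Runnable> tasks = new MpscUnboundedArrayQueue<>(1024);
   private final AtomicInteger state = new AtomicInteger(IDLE);
   private final Executor delegate;

   public JcToolsOrderedExecutor(Executor delegate) {
      this.delegate = delegate;
   }

   @Override
   public void execute(Runnable command) {
      tasks.offer(command);
      // only one drain loop may run at a time: this is what preserves ordering
      if (state.compareAndSet(IDLE, RUNNING)) {
         delegate.execute(this::drain);
      }
   }

   private void drain() {
      do {
         Runnable task;
         while ((task = tasks.poll()) != null) {
            task.run();
         }
         state.set(IDLE);
         // re-check: a producer may have offered between the last poll() and set(IDLE)
      } while (!tasks.isEmpty() && state.compareAndSet(IDLE, RUNNING));
   }

   public int pendingTasks() {
      return tasks.size(); // cheap compared to ConcurrentLinkedQueue::size
   }
}
{code}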



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (ARTEMIS-1307) Improve performance of OrderedExecutor

2024-04-11 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836341#comment-17836341
 ] 

Francesco Nigro edited comment on ARTEMIS-1307 at 4/11/24 8:00 PM:
---

Hi [~jbertram] I can if there's still a need for it: I mean, if it proves 
to be a costly component which would benefit from it.

I remember that most of the operations in Artemis (apart from OpenWire) used to 
run on the Netty event loops: if that still holds, it probably won't matter.
If it has changed instead (or there are other performance-sensitive paths which 
run on the executors), I suppose you need to profile a use case and make sure 
that the bottleneck is on the OrderedExecutor's tasks.


was (Author: nigrofranz):
Hi [~jbertram] I can if there's still a need for it: I mean, if it proves 
to be a costly component which would benefit from it.

I remember that most of the operations in Artemis (apart from OpenWire) used to 
run on the Netty event loops: if that still holds, it probably won't matter.
If it has changed since then, I suppose you need to profile to make sure that 
the bottleneck is on the OrderedExecutor's tasks.

> Improve performance of OrderedExecutor
> --
>
> Key: ARTEMIS-1307
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1307
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.2.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> The current ordered executor uses ConcurrentLinkedQueue, which:
> - has an expensive queue::size operation (i.e. O(n))
> - has its node instances scattered across the heap
> There are faster and cheaper alternatives in specialized libraries (e.g. 
> JCTools) that could be used instead, both to be friendlier to the GC and 
> to provide more throughput when CPU bound.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (ARTEMIS-1307) Improve performance of OrderedExecutor

2024-04-11 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836341#comment-17836341
 ] 

Francesco Nigro commented on ARTEMIS-1307:
--

Hi [~jbertram] I can if there's still a need for it: I mean, if it proves 
to be a costly component which would benefit from it.

I remember that most of the operations in Artemis (apart from OpenWire) used to 
run on the Netty event loops: if that still holds, it probably won't matter.
If it has changed since then, I suppose you need to profile to make sure that 
the bottleneck is on the OrderedExecutor's tasks.

> Improve performance of OrderedExecutor
> --
>
> Key: ARTEMIS-1307
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1307
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.2.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> The current ordered executor uses ConcurrentLinkedQueue, which:
> - has an expensive queue::size operation (i.e. O(n))
> - has its node instances scattered across the heap
> There are faster and cheaper alternatives in specialized libraries (e.g. 
> JCTools) that could be used instead, both to be friendlier to the GC and 
> to provide more throughput when CPU bound.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-3703) Block clients until coordinated sequence is advanced after backup drop

2022-03-07 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3703:
-
Description: 
With the pluggable quorum vote, if the backup connection drops, the live can 
serve clients un-replicated before having incremented the activation sequence: 
this can lead to inconsistency if the un-replicated live crashes *before* 
incrementing it, allowing the replica to become live, but with some data missing.
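A hedged sketch of the ordering this change asks for (the interfaces below are hypothetical, not the Artemis HA API): when the backup connection drops, client traffic is held back until the coordinated activation sequence has been advanced, so the un-replicated live never acknowledges work that a stale-sequence backup could later claim to own.

{code:java}
// Illustrative only: names and interfaces are not the actual Artemis classes.
final class BackupDropHandlerSketch {

   interface CoordinatedSequence {
      void advance() throws Exception; // claim + commit of the new activation sequence
   }

   interface ClientGate {
      void block();
      void unblock();
   }

   static void onBackupConnectionDropped(ClientGate clients, CoordinatedSequence sequence) throws Exception {
      clients.block();      // stop serving clients while still un-replicated
      sequence.advance();   // advance the coordinated sequence first; on failure clients stay blocked
      clients.unblock();    // only then resume un-replicated service
   }
}
{code}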

 

  was:
With the pluggable quorum vote, if the backup connection drops, the live can 
serve clients un-replicated before having incremented the activation sequence: 
this can lead to inconsistency if the un-replicated live crashes *before* 
incrementing it.

A backup with that same activation sequence can start and become live even if 
the just-crashed live's data is different.

 

 


> Block clients until coordinated sequence is advanced after backup drop
> --
>
> Key: ARTEMIS-3703
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3703
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Domenico Francesco Bruscino
>Priority: Major
>
> With the pluggable quorum vote, if the backup connection drops, the live can 
> serve clients un-replicated before having incremented the activation sequence: 
> this can lead to inconsistency if the un-replicated live crashes *before* 
> incrementing it, allowing the replica to become live, but with some data missing.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3703) Block clients until coordinated sequence is advanced after backup drop

2022-03-07 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3703:


 Summary: Block clients until coordinated sequence is advanced 
after backup drop
 Key: ARTEMIS-3703
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3703
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro
Assignee: Domenico Francesco Bruscino


With the pluggable quorum vote, if the backup connection drops, the live can 
serve clients un-replicated before having incremented the activation sequence: 
this can lead to inconsistency if the un-replicated live crashes *before* 
incrementing it.

A backup with that same activation sequence can start and become live even if 
the just-crashed live's data is different.

 

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3679) Brokers shutdown after daylight saving fall back

2022-02-10 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3679 started by Francesco Nigro.

> Brokers shutdown after daylight saving fall back
> 
>
> Key: ARTEMIS-3679
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3679
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.20.0
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is what happened on a two-broker JDBC shared-store setup after the 
> daylight saving change. This also caused the backup to shut down with the same 
> critical IO error.
> {code:java}
> 2021-10-31 01:58:44,002 WARN 
> [org.apache.activemq.artemis.core.server.impl.jdbc.JdbcLeaseLock] [LIVE] 
> d5b17659-c4f6-4847-bfb2-6c5f209a0fb9 query currentTimestamp = 2021-10-31 
> 01:58:43.27 on database should happen AFTER 2021-10-31 01:58:44.0 on 
> broker 2021-10-31 02:59:00,217 WARN [org.apache.activemq.artemis.core.server] 
> AMQ222010: Critical IO Error, shutting down the server. file=NULL, 
> message=Lost NodeManager lock: java.io.IOException: lost lock at 
> org.apache.activemq.artemis.core.server.impl.SharedStoreLiveActivation.lambda$registerActiveLockListener$0(SharedStoreLiveActivation.java:123)
>  [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.core.server.NodeManager.lambda$notifyLostLock$0(NodeManager.java:143)
>  [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> java.base/java.lang.Iterable.forEach(Iterable.java:75) [java.base:] at 
> org.apache.activemq.artemis.core.server.NodeManager.notifyLostLock(NodeManager.java:141)
>  [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.notifyLostLock(JdbcNodeManager.java:154)
>  [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.core.server.impl.jdbc.ActiveMQScheduledLeaseLock.run(ActiveMQScheduledLeaseLock.java:114)
>  [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.runForExecutor(ActiveMQScheduledComponent.java:313)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.lambda$bookedRunForScheduler$2(ActiveMQScheduledComponent.java:320)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [java.base:] at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [java.base:] at 
> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
>  [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] 
> {code}
> The reason seems related to the TIMESTAMP type used for 
> HOLDER_EXPIRATION_TIME: given that it doesn't contain any time zone 
> information, comparisons against TIMESTAMP WITH TIME ZONE values (i.e. 
> CURRENT_TIMESTAMP query results) won't work as expected.
> In addition, CURRENT_TIMESTAMP values, when converted into the Java world, 
> report UTC time values, while TIMESTAMP ones are adjusted depending on the 
> actual time zone (and are sensitive to daylight saving time adjustments as well!).
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3679) Brokers shutdown after daylight saving fall back

2022-02-10 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3679:
-
Description: 
This is what happened on a two-broker JDBC shared-store setup after the 
daylight saving change. This also caused the backup to shut down with the same 
critical IO error.
{code:java}
2021-10-31 01:58:44,002 WARN 
[org.apache.activemq.artemis.core.server.impl.jdbc.JdbcLeaseLock] [LIVE] 
d5b17659-c4f6-4847-bfb2-6c5f209a0fb9 query currentTimestamp = 2021-10-31 
01:58:43.27 on database should happen AFTER 2021-10-31 01:58:44.0 on broker 
2021-10-31 02:59:00,217 WARN [org.apache.activemq.artemis.core.server] 
AMQ222010: Critical IO Error, shutting down the server. file=NULL, message=Lost 
NodeManager lock: java.io.IOException: lost lock at 
org.apache.activemq.artemis.core.server.impl.SharedStoreLiveActivation.lambda$registerActiveLockListener$0(SharedStoreLiveActivation.java:123)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.NodeManager.lambda$notifyLostLock$0(NodeManager.java:143)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
java.base/java.lang.Iterable.forEach(Iterable.java:75) [java.base:] at 
org.apache.activemq.artemis.core.server.NodeManager.notifyLostLock(NodeManager.java:141)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.notifyLostLock(JdbcNodeManager.java:154)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.impl.jdbc.ActiveMQScheduledLeaseLock.run(ActiveMQScheduledLeaseLock.java:114)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.runForExecutor(ActiveMQScheduledComponent.java:313)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.lambda$bookedRunForScheduler$2(ActiveMQScheduledComponent.java:320)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 [java.base:] at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 [java.base:] at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] 
{code}
The reason seems related to the TIMESTAMP type used for HOLDER_EXPIRATION_TIME: 
given that it doesn't contain any time zone information, comparisons against 
TIMESTAMP WITH TIME ZONE values (i.e. CURRENT_TIMESTAMP query results) won't 
work as expected.

In addition, CURRENT_TIMESTAMP values, when converted into the Java world, 
report UTC time values, while TIMESTAMP ones are adjusted depending on the 
actual time zone (and are sensitive to daylight saving time adjustments as well!).
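A small sketch of one way to keep the two values comparable, assuming plain JDBC (the table name below is illustrative; only the HOLDER_EXPIRATION_TIME column is taken from the description above): reading both the database clock and the lease expiration with the same UTC Calendar prevents the driver from applying the JVM's DST-sensitive default zone to either of them.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.Calendar;
import java.util.TimeZone;

// Illustrative lease-expiration check, not the JdbcLeaseLock implementation.
final class LeaseExpirationCheckSketch {

   static boolean leaseExpired(Connection connection) throws Exception {
      // use one UTC calendar for both columns so no local-zone/DST adjustment is applied
      final Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
      try (PreparedStatement statement = connection.prepareStatement(
              "SELECT CURRENT_TIMESTAMP, HOLDER_EXPIRATION_TIME FROM NODE_MANAGER_LOCKS");
           ResultSet rs = statement.executeQuery()) {
         if (!rs.next()) {
            return true; // no lease row: treat it as expired (illustrative choice)
         }
         final Timestamp dbNow = rs.getTimestamp(1, utc);      // database clock, read as UTC
         final Timestamp expiresAt = rs.getTimestamp(2, utc);  // lease expiration, read as UTC
         return !dbNow.before(expiresAt);
      }
   }
}
{code}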

 

 

  was:
This is what happened on a two-broker JDBC shared-store setup after the 
daylight saving change. This also caused the backup to shut down with the same 
critical IO error.

 
{code:java}
2021-10-31 01:58:44,002 WARN 
[org.apache.activemq.artemis.core.server.impl.jdbc.JdbcLeaseLock] [LIVE] 
d5b17659-c4f6-4847-bfb2-6c5f209a0fb9 query currentTimestamp = 2021-10-31 
01:58:43.27 on database should happen AFTER 2021-10-31 01:58:44.0 on broker 
2021-10-31 02:59:00,217 WARN [org.apache.activemq.artemis.core.server] 
AMQ222010: Critical IO Error, shutting down the server. file=NULL, message=Lost 
NodeManager lock: java.io.IOException: lost lock at 
org.apache.activemq.artemis.core.server.impl.SharedStoreLiveActivation.lambda$registerActiveLockListener$0(SharedStoreLiveActivation.java:123)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.NodeManager.lambda$notifyLostLock$0(NodeManager.java:143)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
java.base/java.lang.Iterable.forEach(Iterable.java:75) [java.base:] at 
org.apache.activemq.artemis.core.server.NodeManager.notifyLostLock(NodeManager.java:141)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.notifyLostLock(JdbcNodeManager.java:154)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 

[jira] [Created] (ARTEMIS-3679) Brokers shutdown after daylight saving fall back

2022-02-10 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3679:


 Summary: Brokers shutdown after daylight saving fall back
 Key: ARTEMIS-3679
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3679
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.20.0
Reporter: Francesco Nigro
Assignee: Francesco Nigro


This is what happened on a two-broker JDBC shared-store setup after the 
daylight saving change. This also caused the backup to shut down with the same 
critical IO error.

 
{code:java}
2021-10-31 01:58:44,002 WARN 
[org.apache.activemq.artemis.core.server.impl.jdbc.JdbcLeaseLock] [LIVE] 
d5b17659-c4f6-4847-bfb2-6c5f209a0fb9 query currentTimestamp = 2021-10-31 
01:58:43.27 on database should happen AFTER 2021-10-31 01:58:44.0 on broker 
2021-10-31 02:59:00,217 WARN [org.apache.activemq.artemis.core.server] 
AMQ222010: Critical IO Error, shutting down the server. file=NULL, message=Lost 
NodeManager lock: java.io.IOException: lost lock at 
org.apache.activemq.artemis.core.server.impl.SharedStoreLiveActivation.lambda$registerActiveLockListener$0(SharedStoreLiveActivation.java:123)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.NodeManager.lambda$notifyLostLock$0(NodeManager.java:143)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
java.base/java.lang.Iterable.forEach(Iterable.java:75) [java.base:] at 
org.apache.activemq.artemis.core.server.NodeManager.notifyLostLock(NodeManager.java:141)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.impl.jdbc.JdbcNodeManager.notifyLostLock(JdbcNodeManager.java:154)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.impl.jdbc.ActiveMQScheduledLeaseLock.run(ActiveMQScheduledLeaseLock.java:114)
 [artemis-server-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.runForExecutor(ActiveMQScheduledComponent.java:313)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.core.server.ActiveMQScheduledComponent.lambda$bookedRunForScheduler$2(ActiveMQScheduledComponent.java:320)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 [java.base:] at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 [java.base:] at 
org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
 [artemis-commons-2.16.0.redhat-00012.jar:2.16.0.redhat-00012] 
{code}
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (ARTEMIS-3659) JMS shared subscriptions create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed ARTEMIS-3659.

Resolution: Not A Bug

https://issues.apache.org/jira/browse/QPIDJMS-220 explains why this is not a bug

> JMS shared subscriptions create additional unnecessary core Queues if 
> clientID is set
> -
>
> Key: ARTEMIS-3659
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> When a user creates a shared subscription and its consumers, if the consumers 
> use separate connections with different ClientIDs, the consumers act as 
> separate (un-shared) subscribers, i.e. each of them receives all messages from 
> the "shared" subscription in multicast.
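A hedged JMS 2.0 sketch of the scenario (connection-factory setup and names below are illustrative): two consumers created with the same shared subscription name, but on connections with different client IDs, end up on two distinct subscriptions, so each receives every message instead of sharing the load.

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

// Illustrative only: shows how the client ID becomes part of the subscription identity.
final class SharedSubscriptionSketch {

   static MessageConsumer sharedConsumer(ConnectionFactory cf, String clientId,
                                         String topicName, String subscriptionName) throws JMSException {
      Connection connection = cf.createConnection();
      if (clientId != null) {
         connection.setClientID(clientId); // the subscription is scoped to this client ID
      }
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Topic topic = session.createTopic(topicName);
      // calling this with clientId "A" and with clientId "B" creates two separate
      // subscriptions (two core queues), each receiving all messages
      return session.createSharedConsumer(topic, subscriptionName);
   }
}
{code}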



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3659) JMS shared subscriptions create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3659:
-
Description: When a user creates a shared subscription and its consumers, if 
the consumers use separate connections with different ClientIDs, the consumers 
act as separate (un-shared) subscribers, i.e. each of them receives all 
messages from the "shared" subscription in multicast.  (was: When a user 
creates a shared subscription and its consumers, if the consumers use separate 
connections with different ClientIDs, the consumers act as un-shared 
subscriptions, meaning that each consumer receives the messages from the 
"shared" subscription in multicast.)

> JMS shared subscriptions create additional unnecessary core Queues if 
> clientID is set
> -
>
> Key: ARTEMIS-3659
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> When a user creates a shared subscription and its consumers, if the consumers 
> use separate connections with different ClientIDs, the consumers act as 
> separate (un-shared) subscribers, i.e. each of them receives all messages from 
> the "shared" subscription in multicast.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3659) JMS shared subscriptions create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3659:
-
Description: When a user creates a shared subscription and its consumers, if 
the consumers use separate connections with different ClientIDs, the consumers 
act as un-shared subscriptions, meaning that each consumer receives the 
messages from the "shared" subscription in multicast.

> JMS shared subscriptions create additional unnecessary core Queues if 
> clientID is set
> -
>
> Key: ARTEMIS-3659
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> When a user creates a shared subscription and its consumers, if the consumers 
> use separate connections with different ClientIDs, the consumers act as 
> un-shared subscriptions, meaning that each consumer receives the messages 
> from the "shared" subscription in multicast.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3659) JMS shared subscription create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3659:


 Summary: JMS shared subscription create additional unnecessary 
core Queues if clientID is set
 Key: ARTEMIS-3659
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (ARTEMIS-3659) JMS shared subscription create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-3659:


Assignee: Francesco Nigro

> JMS shared subscription create additional unnecessary core Queues if clientID 
> is set
> 
>
> Key: ARTEMIS-3659
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3659) JMS shared subscriptions create additional unnecessary core Queues if clientID is set

2022-01-28 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3659:
-
Summary: JMS shared subscriptions create additional unnecessary core Queues 
if clientID is set  (was: JMS shared subscription create additional unnecessary 
core Queues if clientID is set)

> JMS shared subscriptions create additional unnecessary core Queues if 
> clientID is set
> -
>
> Key: ARTEMIS-3659
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3659
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Journal blocking delete/update record with no sync

2022-01-18 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
introduced blocking logic when checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before that change, the journal used {{checkKnownRecordID}} to check for record 
presence, which avoids blocking in the happy (and most common) path: 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}

There are 3 possible solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking in the common & happy path
# introduce a smaller semantic change that doesn't report any error when no sync and no callback are specified
# introduce a bigger semantic change that doesn't report any error due to a missing record ID on delete/update

Just as a side note, the {{try}} version of the same method already takes care 
of ignoring the existence of the record to delete, but the change proposed here 
would give the same semantic guarantees when the user explicitly declares NO 
interest in the outcome of the operation, i.e. no IO callbacks and no sync.
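A small, self-contained sketch of option 2 above (the class and method names are hypothetical, not the Artemis Journal API): the existence check is skipped only when the caller asked for no sync and passed no callback, i.e. explicitly declared no interest in the outcome of the delete.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative semantics only, not the Artemis journal implementation.
final class DeleteSemanticsSketch {

   interface Callback {
      void done(boolean deleted);
   }

   private final ConcurrentMap<Long, byte[]> records = new ConcurrentHashMap<>();

   void delete(long id, boolean sync, Callback callback) {
      final boolean fireAndForget = !sync && callback == null;
      if (!fireAndForget && !records.containsKey(id)) {
         // strict path: the caller cares about the outcome, so report the missing record
         throw new IllegalStateException("Cannot find record " + id);
      }
      final byte[] removed = records.remove(id); // may be null in the fire-and-forget case
      if (callback != null) {
         callback.done(removed != null);
      }
   }
}
{code}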

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
introduced blocking logic when checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before that change, the journal used {{checkKnownRecordID}} to check for record 
presence, which avoids blocking in the happy (and most common) path: 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}

There are 3 possible solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking in the common & happy path
# introduce a smaller semantic change that doesn't report any error when no sync and no callback are specified
# introduce a bigger semantic change that doesn't report any error due to a missing record ID on delete/update
 


> Journal blocking delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> introduced blocking logic when checking for a record's presence on both 
> delete and update operations, regardless of the configured {{sync}} parameter.
> Before that change, the journal used {{checkKnownRecordID}} to 
> check for record presence 

[jira] [Updated] (ARTEMIS-3620) Journal blocking delete/update record with no sync

2022-01-18 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
introduced blocking logic when checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before that change, the journal used {{checkKnownRecordID}} to check for record 
presence, which avoids blocking in the happy (and most common) path: 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}

There are 3 possible solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking in the common & happy path
# introduce a smaller semantic change that doesn't report any error when no sync and no callback are specified
# introduce a bigger semantic change that doesn't report any error due to a missing record ID on delete/update

Just as a side note, the {{try}} version of the same method already takes care 
of ignoring the existence of the record to delete, but the change proposed here 
would give the same semantic guarantees when the user explicitly declares NO 
interest in the outcome of the operation, i.e. no IO callbacks and no sync.

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
introduced blocking logic when checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before that change, the journal used {{checkKnownRecordID}} to check for record 
presence, which avoids blocking in the happy (and most common) path: 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}

There are 3 possible solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking in the common & happy path
# introduce a smaller semantic change that doesn't report any error when no sync and no callback are specified
# introduce a bigger semantic change that doesn't report any error due to a missing record ID on delete/update

Just as a side note, the {{try}} version of the same method already takes care 
of ignoring the existence of the record to delete, but the change proposed here 
would give the same semantic guarantees when the user explicitly declares NO 
interest in the outcome of the operation, i.e. no IO callbacks and no sync.


> Journal blocking delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> 

[jira] [Updated] (ARTEMIS-3651) Simplify batch-delay with Netty's FlushConsolidationHandler

2022-01-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3651:
-
Description: 
{{batchDelay}} is an ancient acceptors/connectors configuration parameter that 
aims to batch writes while sending messages through Netty by using a separate 
thread, looping over all active connections, to flush pending batches in the 
background.
This uses additional threads (to perform the flush) and complicates our 
internal APIs to deal with the batching logic (see the {{flush}} and 
{{batched}} parameters of {{NettyConnection::write}}).

https://netty.io/4.1/api/io/netty/handler/flush/FlushConsolidationHandler.html 
configured with {{consolidateWhenNoReadInProgress == true}} works in the same 
way, while simplifying the Artemis code base and APIs, i.e. much less code to 
be maintained.

The same could be used on cluster connections by default (see ARTEMIS-3045 for 
more info) or could be installed when these are used as replication channel(s).
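A minimal sketch of how such a handler could be installed, assuming Netty 4.1's {{io.netty.handler.flush.FlushConsolidationHandler}} (the initializer class and handler name below are illustrative, not the Artemis acceptor/connector wiring):

{code:java}
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.flush.FlushConsolidationHandler;

// Illustrative pipeline setup: consolidate explicit flushes, including when no
// read is in progress, instead of running a dedicated background flushing thread.
public class FlushConsolidationInitializer extends ChannelInitializer<SocketChannel> {

   @Override
   protected void initChannel(SocketChannel ch) {
      ch.pipeline().addFirst("flush-consolidation",
         new FlushConsolidationHandler(
            256,    // flush after at most this many consecutive flushes (value is illustrative)
            true)); // consolidateWhenNoReadInProgress
      // ... the protocol handlers would follow here ...
   }
}
{code}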

  was:
{{batchDelay}} is an ancient acceptors/connectors configuration parameter that 
aims to force batching while sending messages through Netty, and uses a 
separate thread, looping over all active connections, to flush pending batches 
in the background.

https://netty.io/4.1/api/io/netty/handler/flush/FlushConsolidationHandler.html 
configured with {{consolidateWhenNoReadInProgress == true}} works in the same 
way, while simplifying the Artemis code base, i.e. much less code to be maintained.


> Simplify batch-delay with Netty's FlushConsolidationHandler
> ---
>
> Key: ARTEMIS-3651
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3651
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> {{batchDelay}} is an ancient acceptors/connectors configuration parameter 
> that aims to batch writes while sending messages through Netty by using a 
> separate thread, looping over all active connections, to flush pending 
> batches in the background.
> This uses additional threads (to perform the flush) and complicates our 
> internal APIs to deal with the batching logic (see the {{flush}} and 
> {{batched}} parameters of {{NettyConnection::write}}).
> https://netty.io/4.1/api/io/netty/handler/flush/FlushConsolidationHandler.html
>  configured with {{consolidateWhenNoReadInProgress == true}} works in the 
> same way, while simplifying the Artemis code base and APIs, i.e. much less 
> code to be maintained.
> The same could be used on cluster connections by default (see ARTEMIS-3045 
> for more info) or could be installed when these are used as replication 
> channel(s).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3651) Simplify batch-delay with Netty's FlushConsolidationHandler

2022-01-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3651:
-
Summary: Simplify batch-delay with Netty's FlushConsolidationHandler  (was: 
Replace batch-delay current logic, replacing it with Netty's 
FlushConsolidationHandler)

> Simplify batch-delay with Netty's FlushConsolidationHandler
> ---
>
> Key: ARTEMIS-3651
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3651
> Project: ActiveMQ Artemis
>  Issue Type: Wish
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> {{batchDelay}} is an ancient acceptors/connectors configuration parameter 
> that aims to force batching while sending messages through Netty, and uses a 
> separate thread, looping over all active connections, to flush pending 
> batches in the background.
> https://netty.io/4.1/api/io/netty/handler/flush/FlushConsolidationHandler.html
>  configured with {{consolidateWhenNoReadInProgress == true}} works in the 
> same way, while simplifying the Artemis code base, i.e. much less code to be 
> maintained.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3651) Replace batch-delay current logic, replacing it with Netty's FlushConsolidationHandler

2022-01-17 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3651:


 Summary: Replace batch-delay current logic, replacing it with 
Netty's FlushConsolidationHandler
 Key: ARTEMIS-3651
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3651
 Project: ActiveMQ Artemis
  Issue Type: Wish
Reporter: Francesco Nigro
Assignee: Francesco Nigro


{{batchDelay}} is an ancient acceptors/connectors configuration parameter that 
aims to force batching while sending messages through Netty, and uses a 
separate thread, looping over all active connections, to flush pending batches 
in the background.

https://netty.io/4.1/api/io/netty/handler/flush/FlushConsolidationHandler.html 
configured with {{consolidateWhenNoReadInProgress == true}} works in the same 
way, while simplifying the Artemis code base, i.e. much less code to be maintained.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3430) Activation Sequence Auto-Repair

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3430:
-
Description: 
This can be seen both as a bug and as an improvement over the existing 
self-heal behaviour of the activation sequence introduced by 
https://issues.apache.org/jira/browse/ARTEMIS-3340.

In short, the existing protocol to increase the activation sequence while 
un-replicated is:
# remote i -> -(i + 1) i.e. remote CLAIM
# local i -> (i + 1) i.e. local commit
# remote -(i + 1) -> (i + 1) i.e. remote COMMIT

This protocol has been designed to let witness brokers recognize that their 
data is no longer up-to-date, and to save them from throwing it away if it is 
still valuable, during a partial failure while increasing the activation 
sequence.

In the current version, self-repairing is allowed only if the live broker has 
performed 2. but not 3., i.e. the local activation sequence is updated but the 
coordinated one isn't committed yet.
If the failing broker is restarted it can "fix" the coordinated sequence and 
move on to become live again, but if 2. fails (or just never happens), the 
coordinated activation sequence cannot be fixed without admin intervention, 
after inspecting *all* local activation sequences.

The reason why other brokers cannot "fix" the sequence is that the local 
sequence of the failed broker is unknown, and simply rolling back the claimed 
one (to the previous or to the right committed value) could make the failed 
broker believe it has up-to-date data too, causing journal misalignments.

The solution to this can be to fix the claimed sequence by moving it to the 
right commit value while forbidding other brokers from running un-replicated 
using it.
This is achieved by further increasing it *after* it is repaired: it would 
prematurely age the other in-sync brokers (including the failed one), but it 
allows auto-repair without admin intervention.
The sole drawback of this strategy is that a further failure of the repairing 
broker while increasing the sequence gives it exclusive responsibility to 
auto-repair (again, on restart), because no other broker can have a 
high-enough local sequence.
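A self-contained sketch of the 3-step CLAIM / local commit / remote COMMIT protocol described above (the interfaces and method names are hypothetical, not the Artemis distributed-primitive API):

{code:java}
import java.util.function.LongConsumer;

// Illustrative only: models the coordinated activation-sequence increment.
final class ActivationSequenceSketch {

   interface CoordinatedLong {
      long get();
      boolean compareAndSet(long expected, long value);
   }

   /** Increments the activation sequence i while un-replicated; returns the committed value. */
   static long activateUnreplicated(CoordinatedLong remote, long i, LongConsumer persistLocally) {
      // 1. remote CLAIM: i -> -(i + 1), marks that an increment is in flight
      if (!remote.compareAndSet(i, -(i + 1))) {
         throw new IllegalStateException("lost the claim, coordinated sequence is " + remote.get());
      }
      // 2. local commit: i -> (i + 1), persisted in the broker's own storage
      final long committed = i + 1;
      persistLocally.accept(committed);
      // 3. remote COMMIT: -(i + 1) -> (i + 1); a crash between 2. and 3. is the case
      //    the existing self-repair logic can already fix on restart
      remote.compareAndSet(-(i + 1), committed);
      return committed;
   }
}
{code}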

  was:
This can be seen both as a bug and as an improvement over the existing 
self-heal behaviour of the activation sequence introduced by 
https://issues.apache.org/jira/browse/ARTEMIS-3340.

In short, the existing protocol to increase the activation sequence while 
un-replicated is:
# remote i -> -(i + 1) i.e. remote CLAIM
# local i -> (i + 1) i.e. local commit
# remote -(i + 1) -> (i + 1) i.e. remote COMMIT

This protocol has been designed to let witness brokers recognize that their 
data is no longer up-to-date, and to save them from throwing it away if it is 
still valuable, during a partial failure while increasing the activation 
sequence.

In the current version, self-repairing is allowed only if the live broker has 
performed 2. but not 3., i.e. the local activation sequence is updated but the 
coordinated one isn't committed yet.
If the failing broker is restarted it can "fix" the coordinated sequence and 
move on to become live again, but if 2. fails (or just never happens), the 
coordinated activation sequence cannot be fixed without admin intervention, 
after inspecting *all* local activation sequences.

The reason why other brokers cannot "fix" the sequence is that the local 
sequence of the failed broker is unknown, and simply rolling back the claimed 
one (to the previous or to the right committed value) could make the failed 
broker believe it has up-to-date data too, causing journal misalignments.

The solution to this can be to fix the claimed sequence by moving it to the 
right commit value while forbidding the other broker from running un-replicated 
using it.
This is achieved by further increasing it *after* it is repaired: it would 
prematurely age the other in-sync brokers (including the failed one), but it 
allows auto-repair without admin intervention.
The sole drawback of this strategy is that a further failure of the repairing 
broker while increasing the sequence gives it exclusive responsibility to 
auto-repair (again, on restart), because no other broker can have a 
high-enough local sequence.


> Activation Sequence Auto-Repair
> ---
>
> Key: ARTEMIS-3430
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3430
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.19.0
>
>
> This can be seen both as a bug and as an improvement over the existing 
> self-heal behaviour of the activation sequence introduced by 
> https://issues.apache.org/jira/browse/ARTEMIS-3340.
> In short, the existing protocol to increase the activation sequence while 
> un-replicated is:
> # remote i -> -(i + 1) i.e. remote CLAIM 
> # local i -> (i + 1) i.e. local commit
> # remote -(i + 

[jira] [Updated] (ARTEMIS-3430) Activation Sequence Auto-Repair

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3430:
-
Description: 
This can be seen both as a bug and as an improvement over the existing 
self-heal behaviour of the activation sequence introduced by 
https://issues.apache.org/jira/browse/ARTEMIS-3340.

In short, the existing protocol to increase the activation sequence while 
un-replicated is:
# remote i -> -(i + 1) i.e. remote CLAIM
# local i -> (i + 1) i.e. local commit
# remote -(i + 1) -> (i + 1) i.e. remote COMMIT

This protocol has been designed to let witness brokers recognize that their 
data is no longer up-to-date, and to save them from throwing it away if it is 
still valuable, during a partial failure while increasing the activation 
sequence.

In the current version, self-repairing is allowed only if the live broker has 
performed 2. but not 3., i.e. the local activation sequence is updated but the 
coordinated one isn't committed yet.
If the failing broker is restarted it can "fix" the coordinated sequence and 
move on to become live again, but if 2. fails (or just never happens), the 
coordinated activation sequence cannot be fixed without admin intervention, 
after inspecting *all* local activation sequences.

The reason why other brokers cannot "fix" the sequence is that the local 
sequence of the failed broker is unknown, and simply rolling back the claimed 
one (to the previous or to the right committed value) could make the failed 
broker believe it has up-to-date data too, causing journal misalignments.

The solution to this can be to fix the claimed sequence by moving it to the 
right commit value while forbidding the other broker from running un-replicated 
using it.
This is achieved by further increasing it *after* it is repaired: it would 
prematurely age the other in-sync brokers (including the failed one), but it 
allows auto-repair without admin intervention.
The sole drawback of this strategy is that a further failure of the repairing 
broker while increasing the sequence gives it exclusive responsibility to 
auto-repair (again, on restart), because no other broker can have a 
high-enough local sequence.

  was:
This can be seen both as a bug and as an improvement over the existing 
self-heal behaviour of the activation sequence introduced by 
https://issues.apache.org/jira/browse/ARTEMIS-3340.

In short, the existing protocol to increase the activation sequence while 
un-replicated is:
# remote i -> -(i + 1) i.e. remote CLAIM
# local i -> (i + 1) i.e. local commit
# remote -(i + 1) -> (i + 1) i.e. remote COMMIT

This protocol has been designed to let witness brokers recognize that their 
data is no longer up-to-date, and to save them from throwing it away if it is 
still valuable, during a partial failure while increasing the activation 
sequence.

In the current version, self-repairing is allowed only if the live broker has 
performed 2. but not 3., i.e. the local activation sequence is updated but the 
coordinated one isn't committed yet.
If the failing broker is restarted it can "fix" the coordinated sequence and 
move on to become live again, but if 2. fails (or just never happens), the 
coordinated activation sequence cannot be fixed without admin intervention, 
after inspecting *all* local activation sequences.

The reason why other brokers cannot "fix" the sequence is that the local 
sequence of the failed broker is unknown, and simply rolling back the claimed 
one (to the previous or to the right committed value) could make the failed 
broker believe it has up-to-date data too, causing journal misalignments.

The solution to this can be to fix the claimed sequence by moving it to the 
right commit value while forbidding any broker from running un-replicated 
using it.
This is achieved by further increasing it *after* it is repaired: it would 
prematurely age the other in-sync brokers (including the failed one), but it 
allows auto-repair without admin intervention.
The sole drawback of this strategy is that a further failure of the repairing 
broker while increasing the sequence gives it exclusive responsibility to 
auto-repair (again, on restart), because no other broker can have a 
high-enough local sequence.


> Activation Sequence Auto-Repair
> ---
>
> Key: ARTEMIS-3430
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3430
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Fix For: 2.19.0
>
>
> This can be seen both as a bug and as an improvement over the existing 
> self-heal behaviour of the activation sequence introduced by 
> https://issues.apache.org/jira/browse/ARTEMIS-3340.
> In short, the existing protocol to increase the activation sequence while 
> un-replicated is:
> # remote i -> -(i + 1) i.e. remote CLAIM 
> # local i -> (i + 1) i.e. local commit
> # remote -(i + 1) 

[jira] [Updated] (ARTEMIS-3643) Improve Pluggable Quorum HA if backup fail to read its journal

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3643:
-
Description: 
During failover, the backup makes sure to acquire ownership of the latest 
activation sequence value, but if right after that it fails to load the journal 
(because of OOM) it crashes while being the only broker able to serve clients.

It would be better to move the activation sequence change/commit to after the 
journal is loaded and before the acceptors are opened for clients.
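A hedged sketch of the activation ordering proposed above (the interfaces and names are hypothetical, not the Artemis activation code): the journal is loaded first, and the coordinated sequence is committed only right before the acceptors open, so a failed journal load leaves the sequence untouched and either broker can still become live after a restart.

{code:java}
// Illustrative ordering only, not the actual Artemis failover/activation flow.
final class FailoverActivationSketch {

   interface Journal {
      void load() throws Exception;
   }

   interface CoordinatedSequence {
      void claimAndCommit() throws Exception;
   }

   interface Acceptors {
      void open();
   }

   static void activateBackup(Journal journal, CoordinatedSequence sequence, Acceptors acceptors) throws Exception {
      journal.load();            // may fail (e.g. OOM) without having touched the sequence
      sequence.claimAndCommit(); // take ownership only once the data is known to be loadable
      acceptors.open();          // finally start serving clients
   }
}
{code}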

  was:
During failover, the backup makes sure to acquire ownership of the latest 
activation sequence value, but if, right after, it fails to load the journal 
(because of OOM) it crashes while being the only broker able to serve clients.

It would be better to move the activation sequence change/commit to after the 
journal is loaded and before the acceptors are opened for clients.


> Improve Pluggable Quorum HA if backup fail to read its journal
> --
>
> Key: ARTEMIS-3643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3643
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> During failover, the backup makes sure to acquire ownership of the latest 
> activation sequence value, but if right after that it fails to load the 
> journal (because of OOM) it crashes while being the only broker able to serve 
> clients.
> It would be better to move the activation sequence change/commit to after the 
> journal is loaded and before the acceptors are opened for clients.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3643) Improve Pluggable Quorum HA if backup fail to read its journal

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3643:
-
Summary: Improve Pluggable Quorum HA if backup fail to read its journal  
(was: Increase Pluggable Quorum HA if backup fail to read its journal)

> Improve Pluggable Quorum HA if backup fail to read its journal
> --
>
> Key: ARTEMIS-3643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3643
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> During failover, the backup makes sure to acquire ownership of the latest 
> activation sequence value, but if, right after, it fails to load the journal 
> (because of OOM) it crashes while being the only broker able to serve 
> clients.
> It would be better to move the activation sequence change/commit to after the 
> journal is loaded and before the acceptors are opened for clients.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3643) Increase Pluggable Quorum HA if backup fail to read its journal

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3643:
-
Description: 
During failover, the backup makes sure to increase the activation sequence 
value, but if it fails to load the journal (because of OOM) and crashes, its 
prior live won't be able to become live again if restarted.

It would be better to move the activation sequence change to after the journal 
is loaded and before the acceptors are opened for clients: this would ensure 
that both brokers, in case of a failed journal load, are able to become live if 
restarted.

  was:
During failover, the backup makes sure to acquire ownership of the latest 
activation sequence value, but if, right after, it fails to load the journal 
(because of OOM) it crashes while being the only broker able to serve clients.

It would be better to move the activation sequence change to after the journal 
is loaded and before the acceptors are opened for clients.


> Increase Pluggable Quorum HA if backup fail to read its journal
> ---
>
> Key: ARTEMIS-3643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> During failover, the backup makes sure to increase the activation sequence 
> value, but if it fails to load the journal (because of OOM) and crashes, its 
> prior live won't be able to become live again if restarted. 
> It would be better to move the activation sequence change to after the journal 
> is loaded and before the acceptors are opened for clients: this would ensure 
> that both brokers, in case of a failed journal load, are able to become live 
> if restarted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3643) Increase Pluggable Quorum HA if backup fail to read its journal

2022-01-13 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3643:
-
Issue Type: Improvement  (was: Bug)

> Increase Pluggable Quorum HA if backup fail to read its journal
> ---
>
> Key: ARTEMIS-3643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3643
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> During failover, the backup makes sure to increase the activation sequence 
> value, but if it fails to load the journal (because of OOM) and crashes, its 
> prior live won't be able to become live again if restarted. 
> It would be better to move the activation sequence change to after the journal 
> is loaded and before the acceptors are opened for clients: this would ensure 
> that both brokers, in case of a failed journal load, are able to become live 
> if restarted.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3643) Increase Pluggable Quorum HA if backup fail to read its journal

2022-01-13 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3643:


 Summary: Increase Pluggable Quorum HA if backup fail to read its 
journal
 Key: ARTEMIS-3643
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3643
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


During failover, the backup makes sure to acquire ownership of the latest 
activation sequence value, but if, right after, it fails to load the journal 
(because of OOM) it crashes while being the only broker able to serve clients.

It would be better to move the activation sequence change to after the journal 
is loaded and before the acceptors are opened for clients.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-23 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{ClientConsumerImpl}}, responsible for calling the JMS 
{{MessageListener::onMessage}}, installs/restores the listener thread's context 
ClassLoader through a secured action regardless of whether any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costly as (or costlier than) the 
user/application code handling the received message (see 
[https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).

The {{SecurityManager}} will be removed in the future (see 
[https://openjdk.java.net/jeps/411]) but until then it would be nice to reduce 
this cost, at least when no {{SecurityManager}} is installed.

This is a flamegraph showing the listener stack trace:
!noSecurityManager.png|width=920,height=398!

As the image shows (in violet), the cost of {{AccessController::doPrivileged}} 
when no {{SecurityManager}} is installed is 14 samples, while:
 - handling the message costs 3 samples
 - acknowledging it costs 19 samples

TLDR: {{AccessController::doPrivileged}} costs ~5 times as much as handling the 
message and nearly as much as acking it back
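A minimal sketch of the kind of guard that avoids the privileged call when no 
{{SecurityManager}} is installed (illustrative only, not the actual 
{{ClientConsumerImpl}} change):

{code:java}
import java.security.AccessController;
import java.security.PrivilegedAction;

final class ContextClassLoaderSwitcher {

   // Swap the thread's context ClassLoader and return the previous one; pay for
   // AccessController::doPrivileged only when a SecurityManager is present.
   static ClassLoader swap(final Thread thread, final ClassLoader newLoader) {
      if (System.getSecurityManager() == null) {
         final ClassLoader previous = thread.getContextClassLoader();
         thread.setContextClassLoader(newLoader);
         return previous;
      }
      return AccessController.doPrivileged((PrivilegedAction<ClassLoader>) () -> {
         final ClassLoader previous = thread.getContextClassLoader();
         thread.setContextClassLoader(newLoader);
         return previous;
      });
   }
}
{code}

The same guard would wrap the restore after {{onMessage}} returns, so the common 
no-{{SecurityManager}} case pays only two plain ClassLoader accesses per delivery.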

  was:
Currently {{ClientConsumerImpl}}, responsible for calling the JMS 
{{MessageListener::onMessage}}, installs/restores the listener thread's context 
ClassLoader through a secured action regardless of whether any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costly as (or costlier than) the 
user/application code handling the received message (see 
[https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).

The {{SecurityManager}} will be removed in the future (see 
[https://openjdk.java.net/jeps/411]) but until then it would be nice to reduce 
this cost, at least when no {{SecurityManager}} is installed.

This is a flamegraph showing the listener stack trace:
!noSecurityManager.png|width=920,height=398!

As the image shows (in violet), the cost of {{AccessController::doPrivileged}} 
when no {{SecurityManager}} is installed is 14 samples, while:
 - handling the message costs 3 samples
 - acknowledging it costs 19 samples

TLDR: {{AccessController::doPrivileged}} costs ~5 times as much as handling the 
message and nearly as much as acking it back


> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently {{ClientConsumerImpl}}, responsible for calling the JMS 
> {{MessageListener::onMessage}}, installs/restores the listener thread's 
> context ClassLoader through a secured action regardless of whether any 
> security manager is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costly as (or costlier than) the 
> user/application code handling the received message (see 
> [https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).
> The {{SecurityManager}} will be removed in the future (see 
> [https://openjdk.java.net/jeps/411]) but until then it would be nice to 
> reduce this cost, at least when no {{SecurityManager}} is installed.
> This is a flamegraph showing the listener stack trace:
> !noSecurityManager.png|width=920,height=398!
> As the image shows (in violet), the cost of {{AccessController::doPrivileged}} 
> when no {{SecurityManager}} is installed is 14 samples, while:
>  - handling the message costs 3 samples
>  - acknowledging it costs 19 samples
> TLDR: {{AccessController::doPrivileged}} costs ~5 times as much as handling 
> the message and nearly as much as acking it back



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Journal blocking delete/update record with no sync

2021-12-23 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on both delete 
and update operations, regardless the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy (and most common) 
path: 
{code:java}
private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
   if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
      return true;
   }

   final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

   // retry on the append thread. maybe the appender thread is not keeping up.
   appendExecutor.execute(new Runnable() {
      @Override
      public void run() {
         journalLock.readLock().lock();
         try {
            known.set(records.containsKey(id)
               || pendingRecords.contains(id)
               || (compactor != null && compactor.containsRecord(id)));
         } finally {
            journalLock.readLock().unlock();
         }
      }
   });

   if (!known.get()) {
      if (strict) {
         throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
      }
      return false;
   } else {
      return true;
   }
}
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking in the common & happy 
path
# introduce a smaller semantic change that doesn't report any error when no 
sync and no callback are specified (see the sketch below)
# introduce a bigger semantic change that doesn't report any error for a 
missing record ID to delete/update
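A rough sketch of option 2 (the method names below are hypothetical, not the 
real {{JournalImpl}} API): keep the delete fire-and-forget when the caller asked 
for no sync and passed no callback.

{code:java}
// Hypothetical sketch only: tryAppendDelete/appendDeleteBlocking are
// illustrative names, not the actual journal methods.
public void appendDeleteRecord(final long id, final boolean sync, final IOCompletion callback) throws Exception {
   if (!sync && callback == null) {
      // option 2: no sync and no callback requested, so do not block the caller
      // on the record-presence check; just enqueue the delete and return
      // (tryAppendDelete is assumed to handle its own errors)
      appendExecutor.execute(() -> tryAppendDelete(id));
      return;
   }
   // otherwise keep the stricter blocking behaviour introduced by ARTEMIS-3327
   appendDeleteBlocking(id, sync, callback);
}
{code}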
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced blocking logic while checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence, which avoids blocking in the happy (and most common) 
path: 
{code:java}
private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
   if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
      return true;
   }

   final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

   // retry on the append thread. maybe the appender thread is not keeping up.
   appendExecutor.execute(new Runnable() {
      @Override
      public void run() {
         journalLock.readLock().lock();
         try {
            known.set(records.containsKey(id)
               || pendingRecords.contains(id)
               || (compactor != null && compactor.containsRecord(id)));
         } finally {
            journalLock.readLock().unlock();
         }
      }
   });

   if (!known.get()) {
      if (strict) {
         throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
      }
      return false;
   } else {
      return true;
   }
}
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking with no sync
# introduce a smaller semantic change that doesn't report any error when no 
sync and no callback are specified 
# introduce a bigger semantic change that doesn't report any error for a 
missing record ID to delete/update
 


> Journal blocking delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced blocking logic while checking for a record's presence on both 
> delete and update operations, regardless of the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence, which avoids blocking in the happy (and most 
> common) path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>  

[jira] [Updated] (ARTEMIS-3620) Journal blocking delete/update record with no sync

2021-12-23 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Summary: Journal blocking delete/update record with no sync   (was: Journal 
is blocking caller thread on delete/update record with no sync )

> Journal blocking delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on both 
> delete and update operations, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy (and most 
> common) path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
> } finally {
>journalLock.readLock().unlock();
> }
>  }
>   });
>   if (!known.get()) {
>  if (strict) {
> throw new IllegalStateException("Cannot find add info " + id + " 
> on compactor or current records");
>  }
>  return false;
>   } else {
>  return true;
>   }
>}
> {code}
> There are 3 solutions to this issue:
> # reintroduce {{checkKnownRecordID}} and save blocking with no sync
> # introduce a smaller semantic change that don't report any error with no 
> sync and no callback specified 
> # introduce a bigger semantic change that don't report any error due to 
> missing record ID to delete/update
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete/update record with no sync

2021-12-23 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced blocking logic while checking for a record's presence on both delete 
and update operations, regardless of the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence, which avoids blocking in the happy (and most common) 
path: 
{code:java}
private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
   if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
      return true;
   }

   final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

   // retry on the append thread. maybe the appender thread is not keeping up.
   appendExecutor.execute(new Runnable() {
      @Override
      public void run() {
         journalLock.readLock().lock();
         try {
            known.set(records.containsKey(id)
               || pendingRecords.contains(id)
               || (compactor != null && compactor.containsRecord(id)));
         } finally {
            journalLock.readLock().unlock();
         }
      }
   });

   if (!known.get()) {
      if (strict) {
         throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
      }
      return false;
   } else {
      return true;
   }
}
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking with no sync (as 
sketched below)
# introduce a smaller semantic change that doesn't report any error when no 
sync and no callback are specified 
# introduce a bigger semantic change that doesn't report any error for a 
missing record ID to delete/update
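A rough sketch of option 1 (hypothetical method names, not the actual 
{{JournalImpl}} code): guard the delete with {{checkKnownRecordID}}, which 
returns immediately in the common case where the record is already tracked, so 
a {{sync == false}} caller does not block.

{code:java}
// Hypothetical sketch only: enqueueDelete is an illustrative name, not the
// real journal method.
public void appendDeleteRecord(final long id, final boolean sync) throws Exception {
   // happy path: if the record is already tracked, checkKnownRecordID returns
   // without waiting on the append executor
   if (!checkKnownRecordID(id, false)) {
      // unknown record and non-strict check: nothing to delete
      return;
   }
   // enqueue the delete; with sync == false the caller does not wait for it
   enqueueDelete(id, sync);
}
{code}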
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced blocking logic while checking for a record's presence on delete, 
regardless of the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence, which avoids blocking in the happy (and most common) 
path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking with no sync
# introduce a smaller semantic change that doesn't report any error when no 
sync and no callback are specified 
# introduce a bigger semantic change that doesn't report any error for a 
missing record ID to delete
 


> Journal is blocking caller thread on delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on both 
> delete and update operations, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy (and most 
> common) path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final 

[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete/update record with no sync

2021-12-23 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Summary: Journal is blocking caller thread on delete/update record with no 
sync   (was: Journal is blocking caller thread on delete record with no sync )

> Journal is blocking caller thread on delete/update record with no sync 
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy (and most 
> common) path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
> } finally {
>journalLock.readLock().unlock();
> }
>  }
>   });
>   if (!known.get()) {
>  if (strict) {
> throw new IllegalStateException("Cannot find add info " + id + " 
> on compactor or current records");
>  }
>  return false;
>   } else {
>  return true;
>   }
>}
> {code}
> There are 3 solutions to this issue:
> # reintroduce {{checkKnownRecordID}} and save blocking with no sync
> # introduce a smaller semantic change that don't report any error with no 
> sync and no callback specified 
> # introduce a bigger semantic change that don't report any error due to 
> missing record ID to delete
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced blocking logic while checking for a record's presence on delete, 
regardless of the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence, which avoids blocking in the happy (and most common) 
path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and avoid blocking with no sync
# introduce a smaller semantic change that doesn't report any error when no 
sync and no callback are specified 
# introduce a bigger semantic change that doesn't report any error for a 
missing record ID to delete
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete, 
regardless the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and save blocking with no sync
# introduce a smaller semantic change that don't report any error with no sync 
and no callback specified 
# introduce a bigger semantic change that don't report any error due to missing 
record ID to delete
 


> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy (and most 
> common) path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the 

[jira] [Work started] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3620 started by Francesco Nigro.

> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
> } finally {
>journalLock.readLock().unlock();
> }
>  }
>   });
>   if (!known.get()) {
>  if (strict) {
> throw new IllegalStateException("Cannot find add info " + id + " 
> on compactor or current records");
>  }
>  return false;
>   } else {
>  return true;
>   }
>}
> {code}
> There are 3 solutions to this issue:
> # reintroduce {{checkKnownRecordID}} and save blocking with no sync
> # introduce a smaller semantic change that don't report any error with no 
> sync and no callback specified 
> # introduce a bigger semantic change that don't report any error due to 
> missing record ID to delete
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete, 
regardless the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

There are 3 solutions to this issue:
# reintroduce {{checkKnownRecordID}} and save blocking with no sync
# introduce a smaller semantic change that don't report any error with no sync 
and no callback specified 
# introduce a bigger semantic change that don't report any error due to missing 
record ID to delete
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete, 
regardless the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 


> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> 

[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete, 
regardless the configured {{sync}} parameter.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 


> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete, regardless the configured {{sync}} parameter.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || 

[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking logic while checking for record's presence on delete.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 

  was:
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking check for record's presence on delete.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 


> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking logic while checking for record's presence on 
> delete.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
>   

[jira] [Updated] (ARTEMIS-3620) Journal is blocking caller thread on delete record with no sync

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Issue Type: Bug  (was: Improvement)

> Journal is blocking caller thread on delete record with no sync 
> 
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking check for record's presence on delete.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
> } finally {
>journalLock.readLock().unlock();
> }
>  }
>   });
>   if (!known.get()) {
>  if (strict) {
> throw new IllegalStateException("Cannot find add info " + id + " 
> on compactor or current records");
>  }
>  return false;
>   } else {
>  return true;
>   }
>}
> {code}
> 2 solutions to this issue (that will likely impact other methods with sync == 
> false that was using {{checkKnownRecordID}}) are:
> # reintroduce {{checkKnownRecordID}}
> # introduce a semantic change that won't check for record presence
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, has 
introduced a blocking check for record's presence on delete.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence that can save blocking in the happy path: 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence
 

  was:
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to happen.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence by 
using an additional map to track known records.

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 


> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://github.com/apache/activemq-artemis/pull/3605, part of ARTEMIS-3327, 
> has introduced a blocking check for record's presence on delete.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence that can save blocking in the happy path: 
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {

[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to happen.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence by 
using an additional map to track known records.

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 

  was:
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to succeed.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence by 
using an additional map to track known records.

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 


> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
> blocking check for presence of a record while deleting it;  this check will 
> be always performed, causing a caller that has specified `sync == false` to 
> block awaiting the check (and the operation) to happen.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence (without blocking in the happy path): 
> 

[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to succeed.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence, if 
possible, by using an additional map to track known records.
2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 

  was:
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced 
checking of presence of a record while deleting it;  this check will be always 
performed, causing a caller that has specified `sync == false` to block await 
the check (and the operation) to succeed making a not sync delete record to 
slow down.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking) 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence, if 
possible, by using an additional map to track known records.
2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 


> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
> blocking check for presence of a record while deleting it;  this check will 
> be always performed, causing a caller that has specified `sync == false` to 
> block awaiting the check (and the operation) to succeed.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record 

[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to succeed.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence by 
using an additional map to track known records.

2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 

  was:
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
blocking check for presence of a record while deleting it;  this check will be 
always performed, causing a caller that has specified `sync == false` to block 
awaiting the check (and the operation) to succeed.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking in the happy path): 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence, if 
possible, by using an additional map to track known records.
2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 


> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
> blocking check for presence of a record while deleting it;  this check will 
> be always performed, causing a caller that has specified `sync == false` to 
> block awaiting the check (and the operation) to succeed.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence (without blocking in the 

[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
check for the presence of a record while deleting it; this check is always 
performed, causing a caller that has specified `sync == false` to block until 
the check (and the operation) succeeds, slowing down non-sync delete records.
Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking): 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}
 
This method made it possible to check a record's presence without blocking, 
where possible, by using an additional map to track known records.
Two possible solutions to this issue (which will likely also affect other 
methods with sync == false that were using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that skips the record presence check when sync == 
false and no callback is specified
 

  was:
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced 
checking of presence of a record while deleting it;  this check will be always 
performed, causing a caller that has specified `sync == false` to block await 
the check (and the operation) to succeed making a not sync delete record to 
slow down.

Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking)

 
{code:java}
  private boolean checkKnownRecordID(final long id, boolean strict) throws 
Exception {
  if (records.containsKey(id) || pendingRecords.contains(id) || (compactor 
!= null && compactor.containsRecord(id))) {
 return true;
  }

  final SimpleFuture known = new SimpleFutureImpl<>();

  // retry on the append thread. maybe the appender thread is not keeping 
up.
  appendExecutor.execute(new Runnable() {
 @Override
 public void run() {
journalLock.readLock().lock();
try {

   known.set(records.containsKey(id)
  || pendingRecords.contains(id)
  || (compactor != null && compactor.containsRecord(id)));
} finally {
   journalLock.readLock().unlock();
}
 }
  });

  if (!known.get()) {
 if (strict) {
throw new IllegalStateException("Cannot find add info " + id + " on 
compactor or current records");
 }
 return false;
  } else {
 return true;
  }
   }
{code}
 
This method was useful to check in a non-blocking way record's presence, if 
possible, by using an additional map to track known records.
2 solutions to this issue (that will likely impact other methods with sync == 
false that was using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that won't check for record presence in case of 
sync == false and no callback specified
 


> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/activemq-artemis/pull/3647] has re-introduced 
> checking of presence of a record while deleting it;  this check will be 
> always performed, causing a caller that has specified `sync == false` to 
> block await the check (and the operation) to succeed making a not sync delete 
> record to slow down.
> Before the mentioned change, the journal was 

[jira] [Updated] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3620:
-
Description: 
[https://github.com/apache/activemq-artemis/pull/3647] has re-introduced a 
check for the presence of a record while deleting it; this check is always 
performed, causing a caller that has specified `sync == false` to block until 
the check (and the operation) succeeds, slowing down non-sync delete records.

Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
check for record presence (without blocking):

 
{code:java}
   private boolean checkKnownRecordID(final long id, boolean strict) throws Exception {
      if (records.containsKey(id) || pendingRecords.contains(id) || (compactor != null && compactor.containsRecord(id))) {
         return true;
      }

      final SimpleFuture<Boolean> known = new SimpleFutureImpl<>();

      // retry on the append thread. maybe the appender thread is not keeping up.
      appendExecutor.execute(new Runnable() {
         @Override
         public void run() {
            journalLock.readLock().lock();
            try {
               known.set(records.containsKey(id)
                  || pendingRecords.contains(id)
                  || (compactor != null && compactor.containsRecord(id)));
            } finally {
               journalLock.readLock().unlock();
            }
         }
      });

      if (!known.get()) {
         if (strict) {
            throw new IllegalStateException("Cannot find add info " + id + " on compactor or current records");
         }
         return false;
      } else {
         return true;
      }
   }
{code}
 
This method made it possible to check a record's presence without blocking, 
where possible, by using an additional map to track known records.
Two possible solutions to this issue (which will likely also affect other 
methods with sync == false that were using {{checkKnownRecordID}}) are:
# reintroduce {{checkKnownRecordID}}
# introduce a semantic change that skips the record presence check when sync == 
false and no callback is specified
 

> Appending delete records can save waiting it to happen if sync is false
> ---
>
> Key: ARTEMIS-3620
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/activemq-artemis/pull/3647] has re-introduced 
> checking of presence of a record while deleting it;  this check will be 
> always performed, causing a caller that has specified `sync == false` to 
> block await the check (and the operation) to succeed making a not sync delete 
> record to slow down.
> Before the mentioned change, the journal was using {{checkKnownRecordID}} to 
> check for record presence (without blocking)
>  
> {code:java}
>   private boolean checkKnownRecordID(final long id, boolean strict) throws 
> Exception {
>   if (records.containsKey(id) || pendingRecords.contains(id) || 
> (compactor != null && compactor.containsRecord(id))) {
>  return true;
>   }
>   final SimpleFuture known = new SimpleFutureImpl<>();
>   // retry on the append thread. maybe the appender thread is not keeping 
> up.
>   appendExecutor.execute(new Runnable() {
>  @Override
>  public void run() {
> journalLock.readLock().lock();
> try {
>known.set(records.containsKey(id)
>   || pendingRecords.contains(id)
>   || (compactor != null && compactor.containsRecord(id)));
> } finally {
>journalLock.readLock().unlock();
> }
>  }
>   });
>   if (!known.get()) {
>  if (strict) {
> throw new IllegalStateException("Cannot find add info " + id + " 
> on compactor or current records");
>  }
>  return false;
>   } else {
>  return true;
>   }
>}
> {code}
>  
> This method was useful to check in a non-blocking way record's presence, if 
> possible, by using an additional map to track known records.
> 2 solutions to this issue (that will likely impact other methods with sync == 
> false that was using {{checkKnownRecordID}}) are:
> # reintroduce {{checkKnownRecordID}}
> # introduce a semantic change that won't check for record presence in case of 
> sync == false and no callback specified
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3620) Appending delete records can save waiting it to happen if sync is false

2021-12-22 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3620:


 Summary: Appending delete records can save waiting it to happen if 
sync is false
 Key: ARTEMIS-3620
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3620
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17462644#comment-17462644
 ] 

Francesco Nigro commented on ARTEMIS-3618:
--

With the fix applied, any cost related to the secured action just disappears.

> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently {{{}ClientConsumerImpl{}}}, responsible of calling JMS 
> {{{}MessageListener::onMessage{}}}, is installing/restoring the listener 
> thread's context ClassLoader by using a secured action regardless any 
> security manager is installed.
> This secured action (using {{{}AccessController::doPrivileged{}}}) is very 
> heavyweight and can often be as costy (or more) then the user/application 
> code handling the received message (see 
> [https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).
> The {{SecurityManager}} will be removed in the future (see 
> [https://openjdk.java.net/jeps/411]) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.
> This is a flamegraph showing the listener stack trace:
> !noSecurityManager.png|width=920,height=398!
> As the image shows, in violet, here's the cost of 
> {{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 
> 14 samples
>  - handling the message costs 3 samples
>  - acknowledge it costs 19 samples
> TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message 
> and nearly the same as acking back it



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{ClientConsumerImpl}}, responsible for calling the JMS 
{{MessageListener::onMessage}}, installs/restores the listener thread's context 
ClassLoader using a secured action regardless of whether any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costly as (or costlier than) the 
user/application code handling the received message (see 
[https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).

The {{SecurityManager}} will be removed in the future (see 
[https://openjdk.java.net/jeps/411]) but until then it would be nice to reduce 
that cost, at least when no {{SecurityManager}} is installed.

This is a flamegraph showing the listener stack trace:
!noSecurityManager.png|width=920,height=398!

As the image shows (in violet), the cost of {{AccessController::doPrivileged}} 
when no {{SecurityManager}} is installed is 14 samples:
 - handling the message costs 3 samples
 - acknowledging it costs 19 samples

TLDR: {{AccessController::doPrivileged}} costs ~5x as much as handling the 
message and nearly as much as acking it back.
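
A minimal sketch of the kind of change this implies (not necessarily the actual 
patch; the {{ClassLoaderSwapper}} name is illustrative): pay for 
{{AccessController::doPrivileged}} only when a {{SecurityManager}} is actually 
installed, and swap the context ClassLoader directly otherwise.
{code:java}
import java.security.AccessController;
import java.security.PrivilegedAction;

// Illustrative sketch only; not the actual Artemis fix.
final class ClassLoaderSwapper {

   static ClassLoader swapContextClassLoader(final Thread thread, final ClassLoader newLoader) {
      if (System.getSecurityManager() == null) {
         // fast path: no SecurityManager installed, swap directly
         final ClassLoader previous = thread.getContextClassLoader();
         thread.setContextClassLoader(newLoader);
         return previous;
      }
      // slow path: keep the secured action when a SecurityManager is present
      return AccessController.doPrivileged((PrivilegedAction<ClassLoader>) () -> {
         final ClassLoader previous = thread.getContextClassLoader();
         thread.setContextClassLoader(newLoader);
         return previous;
      });
   }
}
{code}
The fast path is what removes the violet samples from the flamegraph when no 
{{SecurityManager}} is installed; the privileged block is kept only for the 
(soon to be deprecated) secured case.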

  was:
Currently {{{}ClientConsumerImpl{}}}, responsible of calling JMS 
{{{}MessageListener::onMessage{}}}, is installing/restoring the listener 
thread's context ClassLoader by using a secured action regardless any security 
manager is installed.
This secured action (using {{{}AccessController::doPrivileged{}}}) is very 
heavyweight and can often be as costy (or more) then the user/application code 
handling the received message (see 
[https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).

The {{SecurityManager}} will be removed in the future (see 
[https://openjdk.java.net/jeps/411]) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.

This is a flamegraph showing the listener stack trace:
!noSecurityManager.png|width=980,height=424!

As the image shows, in violet, here's the cost of 
{{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 14 
samples
 - handling the message costs 3 samples
 - acknowledge it costs 19 samples

TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message and 
nearly the same as acking back it


> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>
> Currently {{{}ClientConsumerImpl{}}}, responsible of calling JMS 
> {{{}MessageListener::onMessage{}}}, is installing/restoring the listener 
> thread's context ClassLoader by using a secured action regardless any 
> security manager is installed.
> This secured action (using {{{}AccessController::doPrivileged{}}}) is very 
> heavyweight and can often be as costy (or more) then the user/application 
> code handling the received message (see 
> [https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).
> The {{SecurityManager}} will be removed in the future (see 
> [https://openjdk.java.net/jeps/411]) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.
> This is a flamegraph showing the listener stack trace:
> !noSecurityManager.png|width=920,height=398!
> As the image shows, in violet, here's the cost of 
> {{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 
> 14 samples
>  - handling the message costs 3 samples
>  - acknowledge it costs 19 samples
> TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message 
> and nearly the same as acking back it



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{{}ClientConsumerImpl{}}}, responsible of calling JMS 
{{{}MessageListener::onMessage{}}}, is installing/restoring the listener 
thread's context ClassLoader by using a secured action regardless any security 
manager is installed.
This secured action (using {{{}AccessController::doPrivileged{}}}) is very 
heavyweight and can often be as costy (or more) then the user/application code 
handling the received message (see 
[https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).

The {{SecurityManager}} will be removed in the future (see 
[https://openjdk.java.net/jeps/411]) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.

This is a flamegraph showing the listener stack trace:
!noSecurityManager.png|width=980,height=424!

As the image shows, in violet, here's the cost of 
{{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 14 
samples
 - handling the message costs 3 samples
 - acknowledge it costs 19 samples

TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message and 
nearly the same as acking back it

  was:
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy (or more) then the user/application code 
handling the received message (see 
https://bugs.openjdk.java.net/browse/JDK-8062162 for more info).

The {{SecurityManager}} will be removed in the future (see 
https://openjdk.java.net/jeps/411) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.


This is a flamegraph showing the listener stack trace:
 !noSecurityManager.png! 

As the image shows, in violet, here's the cost of 
{{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 14 
samples
- handling the message costs 3 samples
- acknowledge it costs 19 samples

TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message and 
nearly the same as acking back it


> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>
> Currently {{{}ClientConsumerImpl{}}}, responsible of calling JMS 
> {{{}MessageListener::onMessage{}}}, is installing/restoring the listener 
> thread's context ClassLoader by using a secured action regardless any 
> security manager is installed.
> This secured action (using {{{}AccessController::doPrivileged{}}}) is very 
> heavyweight and can often be as costy (or more) then the user/application 
> code handling the received message (see 
> [https://bugs.openjdk.java.net/browse/JDK-8062162] for more info).
> The {{SecurityManager}} will be removed in the future (see 
> [https://openjdk.java.net/jeps/411]) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.
> This is a flamegraph showing the listener stack trace:
> !noSecurityManager.png|width=980,height=424!
> As the image shows, in violet, here's the cost of 
> {{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 
> 14 samples
>  - handling the message costs 3 samples
>  - acknowledge it costs 19 samples
> TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message 
> and nearly the same as acking back it



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy (or more) then the user/application code 
handling the received message (see 
https://bugs.openjdk.java.net/browse/JDK-8062162 for more info).

The {{SecurityManager}} will be removed in the future (see 
https://openjdk.java.net/jeps/411) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.


This is a flamegraph showing the listener stack trace:
 !noSecurityManager.png! 

As the image shows, in violet, here's the cost of 
{{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 14 
samples
- handling the message costs 3 samples
- acknowledge it costs 19 samples

TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message and 
nearly the same as acking back it

  was:
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy as the user/application code handling the 
received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 for more 
info).

The {{SecurityManager}} will be removed in the future (see 
https://openjdk.java.net/jeps/411) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.


> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy (or more) then the user/application 
> code handling the received message (see 
> https://bugs.openjdk.java.net/browse/JDK-8062162 for more info).
> The {{SecurityManager}} will be removed in the future (see 
> https://openjdk.java.net/jeps/411) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.
> This is a flamegraph showing the listener stack trace:
>  !noSecurityManager.png! 
> As the image shows, in violet, here's the cost of 
> {{AccessController::doPrivileged}} is no {{SecurityManager}} is installed ie 
> 14 samples
> - handling the message costs 3 samples
> - acknowledge it costs 19 samples
> TLDR {{AccessController::doPrivileged}} cost ~5 times handling the message 
> and nearly the same as acking back it



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Attachment: noSecurityManager.png

> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
> Attachments: noSecurityManager.png
>
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy as the user/application code handling 
> the received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 
> for more info).
> The {{SecurityManager}} will be removed in the future (see 
> https://openjdk.java.net/jeps/411) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy as the user/application code handling the 
received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 for more 
info).

The {{SecurityManager}} will be removed in the future (see 
https://openjdk.java.net/jeps/411) but until that moment would be nice to 
reduce such cost at least if no {{SecurityManager}} is installed.

  was:
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy as the user/application code handling the 
received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 for more 
info)


> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy as the user/application code handling 
> the received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 
> for more info).
> The {{SecurityManager}} will be removed in the future (see 
> https://openjdk.java.net/jeps/411) but until that moment would be nice to 
> reduce such cost at least if no {{SecurityManager}} is installed.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Summary: Faster Artemis CORE client MessageListener::onMessage without 
SecurityManager  (was: Artemis CORE client can skip a secured action on 
MessageListener's hot path)

> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy as the user/application code handling 
> the received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 
> for more info)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Faster Artemis CORE client MessageListener::onMessage without SecurityManager

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Issue Type: Improvement  (was: Bug)

> Faster Artemis CORE client MessageListener::onMessage without SecurityManager
> -
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy as the user/application code handling 
> the received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 
> for more info)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3618) Artemis CORE client can skip a secured action on MessageListener's hot path

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3618:
-
Description: 
Currently {{ClientConsumerImpl}}, responsible of calling JMS 
{{MessageListener::onMessage}}, is installing/restoring the listener thread's 
context ClassLoader by using a secured action regardless any security manager 
is installed.
This secured action (using {{AccessController::doPrivileged}}) is very 
heavyweight and can often be as costy as the user/application code handling the 
received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 for more 
info)

> Artemis CORE client can skip a secured action on MessageListener's hot path
> ---
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently {{ClientConsumerImpl}}, responsible of calling JMS 
> {{MessageListener::onMessage}}, is installing/restoring the listener thread's 
> context ClassLoader by using a secured action regardless any security manager 
> is installed.
> This secured action (using {{AccessController::doPrivileged}}) is very 
> heavyweight and can often be as costy as the user/application code handling 
> the received message (see https://bugs.openjdk.java.net/browse/JDK-8062162 
> for more info)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3618) Artemis CORE client can skip a secured action on MessageListener's hot path

2021-12-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3618 started by Francesco Nigro.

> Artemis CORE client can skip a secured action on MessageListener's hot path
> ---
>
> Key: ARTEMIS-3618
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3618) Artemis CORE client can skip a secured action on MessageListener's hot path

2021-12-20 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3618:


 Summary: Artemis CORE client can skip a secured action on 
MessageListener's hot path
 Key: ARTEMIS-3618
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3618
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3615) Artemis CORE client create a new Netty Event Loop group for each connection

2021-12-18 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3615:
-
Description: 
Currently Artemis's Core client creates a whole new Netty event loop group, 
sized by default at 3*available cores, although it serves just a single 
transport connection, which requires no more than a single thread.

Ideally event loop group(s) (and threads) should be shared by the same 
connection factory, allowing many connections to be handled on them (with an 
N:M ratio), or each connection should have its own dedicated single-threaded 
event loop (which is really a waste if a client box has more connections open 
than available cores).
The most relevant problem of the current approach is that it wastes native 
resources, creating useless native threads and selector(s) that will never be 
used. In addition, setting `nioRemotingThreads` just won't have any effect.
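
A minimal sketch of the sharing idea (not the {{NettyConnector}} code; the 
class and method names are illustrative): one event loop group, sized 
independently of the connection count, is reused for every outbound connection 
created by the same factory.
{code:java}
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

// Illustrative sketch only; not the actual NettyConnector code.
final class SharedLoopConnector {

   // one group (and its selector/threads) shared by every connection of this factory
   private final EventLoopGroup sharedGroup = new NioEventLoopGroup(1);

   Bootstrap newConnection(String host, int port) {
      return new Bootstrap()
         .group(sharedGroup)                  // reuse the shared threads/selectors
         .channel(NioSocketChannel.class)
         .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
               // protocol handlers would be added here
            }
         })
         .remoteAddress(host, port);
   }

   void shutdown() {
      sharedGroup.shutdownGracefully();
   }
}
{code}
A caller would then invoke connect() on the returned Bootstrap; the key point 
is that opening more connections no longer creates more threads or selectors.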

  was:
Currently Artemis's Core client is creating a whole new Netty event loop group 
sized by default using 3*available cores, although it would serve just a single 
transport connection, that requires just a single thread to be served.

Ideally event loop group(s) (and threads) should be shared on the same 
connection factory, allowing many connections to be handled on them (with N:M 
ratio) or each connection should have its own dedicated single threaded event 
loop (that seems a waste really, if a client box have more connections opened 
then available cores).
The most relevant problem of the current approach is that is going to waste 
native resources while both creating useless native threads and selector(s) 
that won't ever be used. In addition, setting `nioRemotingThreads`just won't 
have any effect.


> Artemis CORE client create a new Netty Event Loop group for each connection
> ---
>
> Key: ARTEMIS-3615
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3615
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently Artemis's Core client is creating a whole new Netty event loop 
> group sized by default using 3*available cores, although it serves just a 
> single transport connection, requiring no more then a single thread.
> Ideally event loop group(s) (and threads) should be shared on the same 
> connection factory, allowing many connections to be handled on them (with N:M 
> ratio) or each connection should have its own dedicated single threaded event 
> loop (that seems a waste really, if a client box have more connections opened 
> then available cores).
> The most relevant problem of the current approach is that is going to waste 
> native resources while both creating useless native threads and selector(s) 
> that won't ever be used. In addition, setting `nioRemotingThreads`just won't 
> have any effect.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3615) Artemis CORE client create a new Netty Event Loop group for each connection

2021-12-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3615:
-
Description: 
Currently Artemis's Core client is creating a whole new Netty event loop group 
sized by default using 3*available cores, although it would serve just a single 
transport connection, that requires just a single thread to be served.

Ideally event loop group(s) (and threads) should be shared on the same 
connection factory, allowing many connections to be handled on them (with N:M 
ratio) or each connection should have its own dedicated single threaded event 
loop (that seems a waste really, if a client box have more connections opened 
then available cores).
The most relevant problem of the current approach is that is going to waste 
native resources while both creating useless native threads and selector(s) 
that won't ever be used. In addition, setting `nioRemotingThreads`just won't 
have any effect.

  was:
Currently Artemis's Core client is creating a whole new Netty event loop group 
sized by default using 3*available cores, although it would serve just a single 
transport connection, that requires just a single thread to be served.

Ideally event loop group(s) (and threads) should be shared on the same 
connection factory, allowing many connections to be handled on them (with N:M 
ratio) or each connection should have its own dedicated single threaded event 
loop (that seems a waste really, if a client box have more connections opened 
then available cores).



> Artemis CORE client create a new Netty Event Loop group for each connection
> ---
>
> Key: ARTEMIS-3615
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3615
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently Artemis's Core client is creating a whole new Netty event loop 
> group sized by default using 3*available cores, although it would serve just 
> a single transport connection, that requires just a single thread to be 
> served.
> Ideally event loop group(s) (and threads) should be shared on the same 
> connection factory, allowing many connections to be handled on them (with N:M 
> ratio) or each connection should have its own dedicated single threaded event 
> loop (that seems a waste really, if a client box have more connections opened 
> then available cores).
> The most relevant problem of the current approach is that is going to waste 
> native resources while both creating useless native threads and selector(s) 
> that won't ever be used. In addition, setting `nioRemotingThreads`just won't 
> have any effect.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3615) Artemis CORE client create a new Netty Event Loop group for each connection

2021-12-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3615:
-
Description: 
Currently Artemis's Core client is creating a whole new Netty event loop group 
sized by default using 3*available cores, although it would serve just a single 
transport connection, that requires just a single thread to be served.

Ideally event loop group(s) (and threads) should be shared on the same 
connection factory, allowing many connections to be handled on them (with N:M 
ratio) or each connection should have its own dedicated single threaded event 
loop (that seems a waste really, if a client box have more connections opened 
then available cores).


  was:
Currently Artemis's Core client is creating a whole new Netty event loop group 
(sized by default using 3*available cores) although it would serve just a 
single transport connection, that would just use a single thread (and core).
Ideally event loop group(s) (and threads) should be shared on the same 
connection factory or NettyConnector should just use a single thread for each 
transport connection (that seems a waste really, if a client box have more 
connections opened then available cores).



> Artemis CORE client create a new Netty Event Loop group for each connection
> ---
>
> Key: ARTEMIS-3615
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3615
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently Artemis's Core client is creating a whole new Netty event loop 
> group sized by default using 3*available cores, although it would serve just 
> a single transport connection, that requires just a single thread to be 
> served.
> Ideally event loop group(s) (and threads) should be shared on the same 
> connection factory, allowing many connections to be handled on them (with N:M 
> ratio) or each connection should have its own dedicated single threaded event 
> loop (that seems a waste really, if a client box have more connections opened 
> then available cores).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3615) Artemis CORE client create a new Netty Event Loop group for each connection

2021-12-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3615:
-
Description: 
Currently Artemis's Core client is creating a whole new Netty event loop group 
(sized by default using 3*available cores) although it would serve just a 
single transport connection, that would just use a single thread (and core).
Ideally event loop group(s) (and threads) should be shared on the same 
connection factory or NettyConnector should just use a single thread for each 
transport connection (that seems a waste really, if a client box have more 
connections opened then available cores).


  was:
Currently Artemis's Core client is creating a whole new Netty event loop group 
(sized by default using 3*available cores) although it would serve just a 
single transport connection, that would just use a single thread (and core).
Ideally event loop group(s) (and threads) should be shared on the same 
connection factory or NettyConnector should just use a single thread for each 
transport connection (that seems a waste really, if a client box have more 
connections opened then available cores),


> Artemis CORE client create a new Netty Event Loop group for each connection
> ---
>
> Key: ARTEMIS-3615
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3615
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Currently Artemis's Core client is creating a whole new Netty event loop 
> group (sized by default using 3*available cores) although it would serve just 
> a single transport connection, that would just use a single thread (and core).
> Ideally event loop group(s) (and threads) should be shared on the same 
> connection factory or NettyConnector should just use a single thread for each 
> transport connection (that seems a waste really, if a client box have more 
> connections opened then available cores).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3615) Artemis CORE client create a new Netty Event Loop group for each connection

2021-12-17 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3615:


 Summary: Artemis CORE client create a new Netty Event Loop group 
for each connection
 Key: ARTEMIS-3615
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3615
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


Currently Artemis's Core client is creating a whole new Netty event loop group 
(sized by default using 3*available cores) although it would serve just a 
single transport connection, that would just use a single thread (and core).
Ideally event loop group(s) (and threads) should be shared on the same 
connection factory or NettyConnector should just use a single thread for each 
transport connection (that seems a waste really, if a client box have more 
connections opened then available cores),



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3083) Set a default producer-window-size on cluster connection

2021-12-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3083:
-
Description: 
The current producer-window-size configuration for cluster connections is -1, 
i.e. unbounded: it means that in case of an intermittently slow network or 
scarce CPU resources on the receiving cluster node (due to GC activity, OOM or 
other reasons) both brokers risk going OOM:
 * the sender one, because of the resend cache on the channel of the cluster 
connection (still present because the confirmation window size defaults to 1 
MB): unbounded granted credits mean that it could grow without bound while 
holding the clustered packets awaiting a response or confirmation from the 
other node (which could be busy/overloaded and unable to answer anything back)
 * the receiver one, due to the Actor abstraction on the cluster connection: 
the sender would try to send as many packets as it can, regardless of the 
receiver's ability to consume them, and those would accumulate in the actor 
mailbox (which is unbounded) instead of the TCP buffer (which is bounded)

  was:
The current producer-window-size configuration for cluster connection is -1 ie 
unbounded: it means that in case of an intermittent slow network or scarce CPU 
resource on the receiving cluster node (due to GC activity, OOM or other 
reasons) both brokers risk to go OOM:
 * the sender one because of the resend cache on the channel of the cluster 
connection (still present because of confirmation window size defaulted to 1 
MB): unbounded granted credits means that it could grow unbounded while 
containing the clustered packets awaiting to get response or confirmation from 
the other node (that could be busy/overloaded and unable to answer anything 
back)
 * the receiver one due to the Actor abstraction on the cluster connection: the 
sender would try to send as much packets it can, regardless the ability of the 
receiver to consume them


> Set a default producer-window-size on cluster connection
> 
>
> Key: ARTEMIS-3083
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3083
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The current producer-window-size configuration for cluster connection is -1 
> ie unbounded: it means that in case of an intermittent slow network or scarce 
> CPU resource on the receiving cluster node (due to GC activity, OOM or other 
> reasons) both brokers risk to go OOM:
>  * the sender one because of the resend cache on the channel of the cluster 
> connection (still present because of confirmation window size defaulted to 1 
> MB): unbounded granted credits means that it could grow unbounded while 
> containing the clustered packets awaiting to get response or confirmation 
> from the other node (that could be busy/overloaded and unable to answer 
> anything back)
>  * the receiver one due to the Actor abstraction on the cluster connection: 
> the sender would try to send as much packets it can, regardless the ability 
> of the receiver to consume them, that would accumulated on the actor mailbox 
> (that's unbounded) instead of the TCP buffer (that's bounded)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3083) Set a default producer-window-size on cluster connection

2021-12-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3083:
-
Description: 
The current producer-window-size configuration for cluster connection is -1 ie 
unbounded: it means that in case of an intermittent slow network or scarce CPU 
resource on the receiving cluster node (due to GC activity, OOM or other 
reasons) both brokers risk to go OOM:
 * the sender one because of the resend cache on the channel of the cluster 
connection (still present because of confirmation window size defaulted to 1 
MB): unbounded granted credits means that it could grow unbounded while 
containing the clustered packets awaiting to get response or confirmation from 
the other node (that could be busy/overloaded and unable to answer anything 
back)
 * the receiver one due to the Actor abstraction on the cluster connection: the 
sender would try to send as much packets it can, regardless the ability of the 
receiver to consume them

  was:
The current producer-window-size configuration for cluster connection is -1 ie 
unbounded: it means that in case of an intermittent slow network or scarce CPU 
resource on the receiving cluster node (due to GC activity, OOM or other 
reasons) both brokers risk to go OOM:
 * the sender one because of the resend cache on the channel of the cluster 
connection: having unbounded granted credits means that it could grow unbounded 
while containing the clustered packets awaiting to get response
 * the receiver one because of the Actor abstraction on the cluster connection: 
because the sender would try to send as much packets it can, regardless the 
ability of the receiver to consume them


> Set a default producer-window-size on cluster connection
> 
>
> Key: ARTEMIS-3083
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3083
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The current producer-window-size configuration for cluster connection is -1 
> ie unbounded: it means that in case of an intermittent slow network or scarce 
> CPU resource on the receiving cluster node (due to GC activity, OOM or other 
> reasons) both brokers risk to go OOM:
>  * the sender one because of the resend cache on the channel of the cluster 
> connection (still present because of confirmation window size defaulted to 1 
> MB): unbounded granted credits means that it could grow unbounded while 
> containing the clustered packets awaiting to get response or confirmation 
> from the other node (that could be busy/overloaded and unable to answer 
> anything back)
>  * the receiver one due to the Actor abstraction on the cluster connection: 
> the sender would try to send as much packets it can, regardless the ability 
> of the receiver to consume them



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3610) Artemis's Core JMS 2 CompletionListener with persistent messages should work by default

2021-12-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3610:
-
Description: 
The JMS 2 spec allows non-persistent sends to get the CompletionListener's 
callback invoked without any response coming from the broker, but persistent 
ones should use the CompletionListener relying on the broker's responses.

Right now, if users don't configure confirmationWindowSize (which is -1 by 
default), they won't get *any* meaningful CompletionListener behaviour for 
either persistent or non-persistent messages: we should provide a default 
configuration of confirmationWindowSize, or just allow the CompletionListener 
to work without configuring one, so that persistent messages work as per the 
JMS 2 spec.
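
For context, a minimal JMS 2 async-send sketch: today the broker confirmations 
that feed the CompletionListener only flow if confirmationWindowSize is set 
explicitly, e.g. on the connection URL as below (the broker address and queue 
name are assumptions for the example).
{code:java}
import javax.jms.CompletionListener;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.JMSContext;
import javax.jms.Message;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class AsyncSendExample {
   public static void main(String[] args) {
      // without confirmationWindowSize >= 0 the send acknowledgements behind
      // the CompletionListener are never requested from the broker
      ConnectionFactory cf = new ActiveMQConnectionFactory(
         "tcp://localhost:61616?confirmationWindowSize=1048576");
      try (JMSContext context = cf.createContext()) {
         context.createProducer()
            .setDeliveryMode(DeliveryMode.PERSISTENT)
            .setAsync(new CompletionListener() {
               @Override
               public void onCompletion(Message message) {
                  System.out.println("broker confirmed: " + message);
               }

               @Override
               public void onException(Message message, Exception e) {
                  e.printStackTrace();
               }
            })
            .send(context.createQueue("exampleQueue"), "hello");
      }
   }
}
{code}
The proposal is exactly to make this behave sensibly for persistent sends even 
when the URL parameter is left at its default.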

  was:
JMS 2 spec allow non-persistent messages sent to get CompletionListener's 
callback called without any response coming from the broker, but persistent 
ones should block OR reliably use the CompletionListener relying on broker's 
responses.

Right now if users won't configure confirmationWindowSize (that's -1 by 
default), they won't get *any* meaningful behaviour of CompletionListener both 
for persistent and non-persistent messages: we should provide a default 
configuration of confirmationWindowSize or just allow CompletionListener to 
work without configuring any, in order to let persistent messages to work as by 
JMS 2 spec.


> Artemis's Core JMS 2 CompletionListener  with persistent messages should work 
> by default
> 
>
> Key: ARTEMIS-3610
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3610
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JMS 2 spec allow non-persistent messages sent to get CompletionListener's 
> callback called without any response coming from the broker, but persistent 
> ones should use CompletionListener relying on broker's responses.
> Right now if users won't configure confirmationWindowSize (that's -1 by 
> default), they won't get *any* meaningful behaviour of CompletionListener 
> both for persistent and non-persistent messages: we should provide a default 
> configuration of confirmationWindowSize or just allow CompletionListener to 
> work without configuring any, in order to let persistent messages to work as 
> by JMS 2 spec.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3610) Artemis's Core JMS 2 CompletionListener with persistent messages should work by default

2021-12-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3610:
-
Description: 
JMS 2 spec allow non-persistent messages sent to get CompletionListener's 
callback called without any response coming from the broker, but persistent 
ones should block OR reliably use the CompletionListener relying on broker's 
responses.

Right now if users won't configure confirmationWindowSize (that's -1 by 
default), they won't get *any* meaningful behaviour of CompletionListener both 
for persistent and non-persistent messages: we should provide a default 
configuration of confirmationWindowSize or just allow CompletionListener to 
work without configuring any, in order to let persistent messages to work as by 
JMS 2 spec.

  was:
JMS 2 spec allow non-persistent messages sent to get CompletionListener's 
callback called without any response coming from the broker, but persistent 
ones should block OR reliably use the CompletionListener relying on broker's 
responses.

Right now if users won't configure confirmationWindowSize (that's -1 by 
default), they won't get *any* meaningful behaviour of CompletionListener both 
for persistent and non-persistent messages: we should provide a default 
configuration of confirmationWindowSize or just allow CompletionListener to 
work without configuring any, in order to let persistent messages to work as 
JMS 2 spec suggest re CompletionListener.


> Artemis's Core JMS 2 CompletionListener  with persistent messages should work 
> by default
> 
>
> Key: ARTEMIS-3610
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3610
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> The JMS 2 spec allows non-persistent sends to have their CompletionListener's 
> callback called without any response coming from the broker, but persistent 
> ones should block OR reliably use the CompletionListener relying on the 
> broker's responses.
> Right now, if users don't configure confirmationWindowSize (which is -1 by 
> default), they won't get *any* meaningful CompletionListener behaviour for 
> either persistent or non-persistent messages: we should provide a default 
> configuration of confirmationWindowSize, or just allow CompletionListener to 
> work without configuring one, so that persistent messages work as per the 
> JMS 2 spec.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3610) Artemis's Core JMS 2 CompletionListener with persistent messages should work by default

2021-12-15 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3610:


 Summary: Artemis's Core JMS 2 CompletionListener  with persistent 
messages should work by default
 Key: ARTEMIS-3610
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3610
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


The JMS 2 spec allows non-persistent sends to have their CompletionListener's 
callback called without any response coming from the broker, but persistent 
ones should block OR reliably use the CompletionListener relying on the 
broker's responses.

Right now, if users don't configure confirmationWindowSize (which is -1 by 
default), they won't get *any* meaningful CompletionListener behaviour for 
either persistent or non-persistent messages: we should provide a default 
configuration of confirmationWindowSize, or just allow CompletionListener to 
work without configuring one, so that persistent messages work as the 
JMS 2 spec suggests for CompletionListener.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3609) Artemis's Core JMS 2 CompletionListener shouldn't be called within Netty thread

2021-12-15 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3609:


 Summary: Artemis's Core JMS 2 CompletionListener shouldn't be 
called within Netty thread
 Key: ARTEMIS-3609
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3609
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro


As this stack trace shows

{code:java}
at org.apache.activemq.artemis.cli.commands.messages.perf.SkeletalProducerLoadGenerator.onCompletion(SkeletalProducerLoadGenerator.java:142)
at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer$CompletionListenerWrapper.sendAcknowledged(ActiveMQMessageProducer.java:542)
at org.apache.activemq.artemis.core.client.impl.SendAcknowledgementHandlerWrapper.sendAcknowledged(SendAcknowledgementHandlerWrapper.java:43)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext$2.callSendAck(ActiveMQSessionContext.java:233)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQSessionContext$2.handleResponse(ActiveMQSessionContext.java:221)
at org.apache.activemq.artemis.core.protocol.core.impl.ResponseCache.handleResponse(ResponseCache.java:56)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handleAsyncResponse(ChannelImpl.java:754)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:810)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:426)
at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:394)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1247)
at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:73)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
{code}
the CompletionListener callbacks are called from within the Netty event loop. 
That's not a good idea, because users could block there, causing the client to 
break and stop responding.
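
A minimal sketch of the general remedy (not necessarily the exact fix adopted 
for this issue): wrap the user listener so that the callback is handed off to a 
dedicated executor and can never stall the I/O thread. The class name and the 
executor choice are illustrative only:

{code:java}
import java.util.concurrent.ExecutorService;

import javax.jms.CompletionListener;
import javax.jms.Message;

// Illustrative wrapper: completion events arriving on the Netty event loop are
// re-dispatched to a separate executor before any user code runs.
public final class OffloadingCompletionListener implements CompletionListener {

   private final CompletionListener delegate;
   private final ExecutorService callbackExecutor;

   public OffloadingCompletionListener(CompletionListener delegate, ExecutorService callbackExecutor) {
      this.delegate = delegate;
      this.callbackExecutor = callbackExecutor;
   }

   @Override
   public void onCompletion(Message message) {
      // never run user code on the I/O thread: queue it instead
      callbackExecutor.execute(() -> delegate.onCompletion(message));
   }

   @Override
   public void onException(Message message, Exception exception) {
      callbackExecutor.execute(() -> delegate.onException(message, exception));
   }
}
{code}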




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448211#comment-17448211
 ] 

Francesco Nigro commented on ARTEMIS-3587:
--

I see. So it still means you can run async-profiler with -e lock, or do lock
profiling with Java Flight Recorder. This would help to spot any highly
contended lock, or a long wait to acquire one, that could cause this stall to
happen.
Is CPU usage meant in the Linux top sense, i.e. can it go beyond 100%?

Do the GC logs look fine too?
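
For instance, a throwaway sketch of lock profiling through Java Flight 
Recorder's programmatic API (assuming JDK 11+; the event names, threshold and 
duration below are just reasonable defaults, nothing Artemis-specific):

{code:java}
import java.nio.file.Path;
import java.time.Duration;

import jdk.jfr.Recording;

// Record contended monitor events for one minute and dump them to a file
// that can then be inspected with JDK Mission Control.
public class LockProfile {
   public static void main(String[] args) throws Exception {
      try (Recording recording = new Recording()) {
         recording.enable("jdk.JavaMonitorEnter").withThreshold(Duration.ofMillis(10));
         recording.enable("jdk.JavaMonitorWait").withThreshold(Duration.ofMillis(10));
         recording.start();
         Thread.sleep(Duration.ofMinutes(1).toMillis());
         recording.dump(Path.of("lock-profile.jfr"));
      }
   }
}
{code}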




> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448024#comment-17448024
 ] 

Francesco Nigro edited comment on ARTEMIS-3587 at 11/23/21, 2:10 PM:
-

> Btw. what does it mean, that a queue is "expired"?

Not the queue but the critical-measure path, i.e. it means that a specific code 
path has stalled for more than 2 minutes.

My previous answers still hold, especially in relation to filters/selectors: 
were there evident CPU usage issues (e.g. CPU time for one core approaching 100% 
usage due to complex filter logic)?








was (Author: nigrofranz):
> Btw. what does it mean, that a queue is "expired"?

not the queue but the critical measure path ie it means that a specific code 
path has stalled for more then 2 minutes

My previous answers still hold, especially in related to filters/selectors: 
there were evident CPU usage issues (eg CPU time for one core approaching 100% 
usage due to complex filter logic)?







> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17448024#comment-17448024
 ] 

Francesco Nigro commented on ARTEMIS-3587:
--

> Btw. what does it mean, that a queue is "expired"?

Not the queue but the critical-measure path, i.e. it means that a specific code 
path has stalled for more than 2 minutes.

My previous answers still hold, especially in relation to filters/selectors: 
were there evident CPU usage issues (e.g. CPU time for one core approaching 100% 
usage due to complex filter logic)?







> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3587) After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the broker.

2021-11-23 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447990#comment-17447990
 ] 

Francesco Nigro commented on ARTEMIS-3587:
--

I see that the logs and the JIRA description don't agree on which path has 
expired, i.e.

expired on path 4 (description) vs expired on path 2 (log)

They could be (and seem to be) 2 separate issues: is the HW running the broker 
the same? Have you tried with a more recent version?
Consider that having a stack trace that agrees with the critical analyzer path 
is key to spotting what the issue is here, otherwise we can only guess...
Indeed my first guess, looking at the log stack trace, is that there must be some 
heavy filter logic in place; has anything related to it changed recently?



> After upgrade: AMQ224107: The Critical Analyzer detected slow paths on the 
> broker.
> --
>
> Key: ARTEMIS-3587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3587
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.19.0
>Reporter: Sebastian T
>Priority: Major
>  Labels: performance
> Attachments: log_with_stacktrace.log
>
>
> After upgrading from Artemis 2.16.0 to 2.19.0 we now receive the following 
> message a few times a day:
> 2021-11-05 05:32:53.641 WARN 3850 --- [eduled-threads)] 
> o.a.a.a.u.c.CriticalMeasure  : Component 
> org.apache.activemq.artemis.core.server.impl.QueueImpl is expired on path 4
> 2021-11-05 05:32:53.642 INFO 3850 --- [eduled-threads)] o.a.a.a.c.server  
>: AMQ224107: The Critical Analyzer detected slow paths on 
> the broker.  It is recommended that you enable trace logs on 
> org.apache.activemq.artemis.utils.critical while you troubleshoot this issue. 
> You should disable the trace logs when you have finished troubleshooting.
> This message is also logged via ActiveMQServerCriticalPlugin#criticalFailure 
> however the javadoc says: "This will be called before the broker is stopped." 
> The broker however is not stopped. So I think the javadoc and the behaviour 
> of the callback are not in line anymore too.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3522) Implement performance tools to evaluate throughput and Response Under Load performance of Artemis

2021-11-17 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3522 started by Francesco Nigro.

> Implement performance tools to evaluate throughput and Response Under Load 
> performance of Artemis
> -
>
> Key: ARTEMIS-3522
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3522
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker, JMS
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are many performance benchmarks around, e.g. [SoftwareMill 
> MqPerf|https://softwaremill.com/mqperf/], that could be used to test the 
> performance of Artemis in specific scenarios, but none is both simple and easy 
> to compose with ad-hoc env setup scripts to perform a wide range of 
> different performance tests against the broker.
> This JIRA aims to provide CLI commands that could be used as building blocks 
> to perform:
> * all-out throughput tests
> * responsiveness-under-load tests (with no [coordinated 
> omission|http://highscalability.com/blog/2015/10/5/your-load-generator-is-probably-lying-to-you-take-the-red-pi.html]),
>  i.e. fixed-throughput (per producer) load
> * scalability tests
> The effort of this JIRA should produce CLI commands similar to [Apache Pulsar 
> Perf|https://pulsar.apache.org/docs/en/performance-pulsar-perf/] that could 
> be composed to create complete performance benchmark pipelines (e.g. using 
> [qDup|https://github.com/Hyperfoil/qDup] and 
> [Horreum|https://github.com/Hyperfoil/Horreum] on a CI/CD) or used as-is 
> by users to quickly check the performance of the broker.
> Requirements:
> * support the AMQP and Core protocols
> * cross-JVM, with microsecond time-measurement granularity
> * support a parsable output format
> * suitable to perform scale tests
> The last requirement can be achieved by using the MessageListeners and async 
> producers available in [JMS 
> 2|https://javaee.github.io/jms-spec/pages/JMS20FinalRelease], although both 
> [qpid JMS|https://github.com/apache/qpid-jms] and the Artemis Core protocol 
> block the producer caller thread, i.e. the former in 
> [jmsConnection::send|https://github.com/apache/qpid-jms/blob/1622de679c3c6763db54e9ac506ef2412fbc4481/qpid-jms-client/src/main/java/org/apache/qpid/jms/JmsConnection.java#L773],
>  awaiting Netty threads to unblock it in 
> [AmqpFixedProducer::doSend|https://github.com/apache/qpid-jms/blob/1622de679c3c6763db54e9ac506ef2412fbc4481/qpid-jms-client/src/main/java/org/apache/qpid/jms/provider/amqp/AmqpFixedProducer.java#L169],
>  while the latter in 
> [ClientProducerImpl::sendRegularMessage|https://github.com/apache/activemq-artemis/blob/e364961c8f035613f3ce4e3bdb3430a17efb0ffd/artemis-core-client/src/main/java/org/apache/activemq/artemis/core/client/impl/ClientProducerImpl.java#L284-L294].
> This seems odd because [JMS 2's 
> CompletionListener|https://docs.oracle.com/javaee/7/api/javax/jms/CompletionListener.html]
>  should spare any send operation from ever blocking, with the user taking care 
> (despite it being tedious and error-prone) to track the number of in-flight 
> messages and limit it accordingly, as sketched below (e.g. [Reactive Messaging's 
> Emitter|https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2/emitter/emitter.html#emitter-overflow]
>  abstracts this with its overflow policies to avoid blocking the caller thread).
> If both JMS 2 implementations cannot be made non-blocking then there are just 2 
> options:
> # using the blocking variant: it means that scalability tests require using 
> machines with high core counts
> # using [Reactive 
> Messaging|https://github.com/eclipse/microprofile-reactive-messaging], but 
> losing the ability to use local transactions (and maybe other JMS features)
> With the first option the number of producer threads can easily be much higher 
> than the available cores, causing the load generator to benchmark the OS's (or 
> the runtime's) ability to context-switch threads instead of the broker. That's 
> why a non-blocking approach should be preferred.
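
The in-flight tracking mentioned above could look roughly like this (plain 
JMS 2 API; the class name and the fixed limit are illustrative, not part of the 
proposed CLI commands):

{code:java}
import java.util.concurrent.Semaphore;

import javax.jms.CompletionListener;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;

// Illustrative in-flight limiter: the semaphore caps the number of outstanding
// async sends, so the producer backs off instead of queueing unbounded work.
public final class BoundedAsyncSender {

   private final Semaphore inFlight;

   public BoundedAsyncSender(int maxInFlight) {
      this.inFlight = new Semaphore(maxInFlight);
   }

   public void send(JMSContext context, Queue queue, String body) throws InterruptedException {
      inFlight.acquire(); // blocks only when the in-flight budget is exhausted
      context.createProducer()
            .setAsync(new CompletionListener() {
               @Override
               public void onCompletion(Message message) {
                  inFlight.release();
               }

               @Override
               public void onException(Message message, Exception exception) {
                  inFlight.release();
               }
            })
            .send(queue, body);
   }
}
{code}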



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3577) Save Core msg re-encoding due to msg copy

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3577:
-
Description: 
ARTEMIS-3021 introduced a check on encoding validity while computing the memory 
estimate.
It means that if a message is modified after being copied and encoded, it will 
be re-encoded while computing the memory estimation: this is happening while 
moving messages, because the destination address is added after the check for 
large messages, and that is causing a stealth message encoding.

  was:
ARTEMIS-3021 has introduced a more precise Core message memory estimation, but 
checking the encoded size after copied (to detect if msg should be treated as 
large) cause the msg to be encoded: any subsequent change on the msg eg 
changing its address, would cause the msg to be re-encoded.

This unnecessary re-encoding could be saved by performing address modification 
before checking the encoded size.


> Save Core msg re-encoding due to msg copy
> -
>
> Key: ARTEMIS-3577
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3577
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> ARTEMIS-3021 introduced a check on encoding validity while computing the 
> memory estimate.
> It means that if a message is modified after being copied and encoded, it 
> will be re-encoded while computing the memory estimation: this is happening 
> while moving messages, because the destination address is added after the 
> check for large messages, and that is causing a stealth message encoding.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3578) Save SimpleString duplication and long[] allocation while moving Core messages

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3578 started by Francesco Nigro.

> Save SimpleString duplication and long[] allocation while moving Core messages
> --
>
> Key: ARTEMIS-3578
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> After performing a message copy as shown in ARTEMIS-3572, each of the copied 
> messages references the original address using a fresh SimpleString copy.
> The vararg long[] parameter is unused and has been removed to save an 
> unnecessary long[] allocation, using a single Long parameter instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3577) Save Core msg re-encoding due to msg copy

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3577 started by Francesco Nigro.

> Save Core msg re-encoding due to msg copy
> -
>
> Key: ARTEMIS-3577
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3577
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> ARTEMIS-3021 has introduced a more precise Core message memory estimation, 
> but checking the encoded size after the copy (to detect whether the msg should 
> be treated as large) causes the msg to be encoded: any subsequent change to 
> the msg, e.g. changing its address, would cause the msg to be re-encoded.
> This unnecessary re-encoding could be avoided by performing the address 
> modification before checking the encoded size.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work started] (ARTEMIS-3021) OOM due to wrong CORE clustered message memory estimation

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-3021 started by Francesco Nigro.

> OOM due to wrong CORE clustered message memory estimation
> -
>
> Key: ARTEMIS-3021
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3021
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> This is affecting clustered Core messages (persistent or not).
> The process that causes the wrong estimation is:
>  # add route information to the message
>  # get the memory estimation for paging (i.e. the address size estimation) 
> without accounting for the new route information
>  # get the message persist size for the durable append on the journal / to 
> update queue statistics, triggering a re-encoding
>  # the re-encoding (can) enlarge the message buffer to the next power-of-2 
> capacity
> The 2 fixes are:
>  * getting a correct memory estimation of the message (including the added 
> route information)
>  * avoiding the excessive buffer growth caused by Netty's default 
> ByteBuf::ensureWritable strategy
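
As a rough standalone illustration of that last point (using Netty's Unpooled 
allocator directly; the exact growth policy depends on the Netty version and 
allocator, so treat the numbers as indicative):

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

// Shows how ensureWritable can jump the backing capacity to the next power of 2,
// which is why a memory estimate taken before the re-encoding can be far off.
public class BufferGrowthDemo {
   public static void main(String[] args) {
      ByteBuf buf = Unpooled.buffer(1000);   // initial capacity: 1000 bytes
      buf.writerIndex(buf.capacity());       // pretend the encoded message fills it
      buf.ensureWritable(100);               // ask for just 100 more bytes...
      System.out.println(buf.capacity());    // ...capacity typically grows to 2048
      buf.release();
   }
}
{code}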



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (ARTEMIS-3577) Save Core msg re-encoding due to msg copy

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro reassigned ARTEMIS-3577:


   Assignee: Francesco Nigro
Description: 
ARTEMIS-3021 has introduced a more precise Core message memory estimation, but 
checking the encoded size after the copy (to detect whether the msg should be 
treated as large) causes the msg to be encoded: any subsequent change to the 
msg, e.g. changing its address, would cause the msg to be re-encoded.

This unnecessary re-encoding could be avoided by performing the address 
modification before checking the encoded size.

> Save Core msg re-encoding due to msg copy
> -
>
> Key: ARTEMIS-3577
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3577
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Minor
>
> ARTEMIS-3021 has introduced a more precise Core message memory estimation, 
> but checking the encoded size after the copy (to detect whether the msg should 
> be treated as large) causes the msg to be encoded: any subsequent change to 
> the msg, e.g. changing its address, would cause the msg to be re-encoded.
> This unnecessary re-encoding could be avoided by performing the address 
> modification before checking the encoded size.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3578) Save SimpleString duplication and long[] allocation while moving Core messages

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3578:
-
Description: 
After performing a message copy as shown in ARTEMIS-3572, each of the copied 
messages references the original address using a fresh SimpleString copy.
The vararg long[] parameter is unused and has been removed to save an 
unnecessary long[] allocation, using a single Long parameter instead.


  was:After performing a message copy as shown in ARTEMIS-3572, the copied 
messages 


> Save SimpleString duplication and long[] allocation while moving Core messages
> --
>
> Key: ARTEMIS-3578
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> After performing a message copy as shown in ARTEMIS-3572, each of the copied 
> messages references the original address using a fresh SimpleString copy.
> The vararg long[] parameter is unused and has been removed to save an 
> unnecessary long[] allocation, using a single Long parameter instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3578) Save SimpleString duplication and long[] allocation while moving Core messages

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3578:
-
Summary: Save SimpleString duplication and long[] allocation while moving 
Core messages  (was: Save SimpleString duplication and long[] allocation while 
moving messages)

> Save SimpleString duplication and long[] allocation while moving Core messages
> --
>
> Key: ARTEMIS-3578
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> After performing a message copy as shown in ARTEMIS-3572, the copied messages 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3578) Save SimpleString duplication and long[] allocation while moving messages

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3578:
-
Description: After performing a message copy as shown in ARTEMIS-3572, the 
copied messages 

> Save SimpleString duplication and long[] allocation while moving messages
> -
>
> Key: ARTEMIS-3578
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> After performing a message copy as shown in ARTEMIS-3572, the copied messages 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3578) Save SimpleString duplication and long[] allocation while moving messages

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3578:
-
Summary: Save SimpleString duplication and long[] allocation while moving 
messages  (was: Save SimpleString duplication and long[] allocations while 
moving messages)

> Save SimpleString duplication and long[] allocation while moving messages
> -
>
> Key: ARTEMIS-3578
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3578) Save SimpleString duplication and long[] allocations while moving messages

2021-11-16 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3578:


 Summary: Save SimpleString duplication and long[] allocations 
while moving messages
 Key: ARTEMIS-3578
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3578
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro
Assignee: Francesco Nigro






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3577) Save Core msg re-encoding due to msg copy

2021-11-16 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3577:
-
Issue Type: Bug  (was: Improvement)
  Priority: Minor  (was: Major)

> Save Core msg re-encoding due to msg copy
> -
>
> Key: ARTEMIS-3577
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3577
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3577) Save Core msg re-encoding due to msg copy

2021-11-16 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3577:


 Summary: Save Core msg re-encoding due to msg copy
 Key: ARTEMIS-3577
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3577
 Project: ActiveMQ Artemis
  Issue Type: Improvement
Reporter: Francesco Nigro






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3575) Wrong address size estimation on broker restart

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3575:
-
Description: 
Steps to reproduce:

Using the GUI in the console:
* Create a new multicast address named "mytest"
* Select the address and create a durable multicast queue named "mytest"
* Use the artemis CLI to produce messages. For example like this:
artemis producer --user admin --password admin --url tcp://localhost:61616 
--destination topic://mytest --message-count 1000 --message-size 40960 
--threads 4
Note the reported address memory used in the console: in the example above it 
is 160.26MB
* restart the broker
* the reported address memory is now below 1 MB

The error seems to be due to the paging store owner not being correctly set on 
the message while loading it, preventing its memory estimation from being 
accounted in the address size.

> Wrong address size estimation on broker restart
> ---
>
> Key: ARTEMIS-3575
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3575
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Gary Tully
>Priority: Major
>
> Steps to reproduce:
> Using the GUI in the console:
> * Create a new multicast address named "mytest"
> * Select the address and create a durable multicast queue named "mytest"
> * Use the artemis CLI to produce messages. For example like this:
> artemis producer --user admin --password admin --url tcp://localhost:61616 
> --destination topic://mytest --message-count 1000 --message-size 40960 
> --threads 4
> Note the reported address memory used in the console: in the example above it 
> is 160.26MB
> * restart the broker
> * the reported address memory is now below 1 MB
> The error seems to be due to the paging store owner not being correctly set 
> on the message while loading it, preventing its memory estimation from being 
> accounted in the address size.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (ARTEMIS-3575) Wrong address size estimation on broker restart

2021-11-15 Thread Francesco Nigro (Jira)
Francesco Nigro created ARTEMIS-3575:


 Summary: Wrong address size estimation on broker restart
 Key: ARTEMIS-3575
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3575
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Gary Tully






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443938#comment-17443938
 ] 

Francesco Nigro edited comment on ARTEMIS-3572 at 11/16/21, 6:03 AM:
-

Yep, it is accurate. You can try the branch referenced in the PR I've sent and 
linked to the other JIRA, to check if it works for your use case.


was (Author: nigrofranz):
Yep, is it accurate. You can try the branch refereced in the PR I've sent and 
linked to the other JIRA, to check if it works for your use case

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: image-2021-11-15-13-21-48-545.png, 
> image-2021-11-15-13-22-04-106.png, msg-post.png, msg-pre.png, post.png, 
> pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443938#comment-17443938
 ] 

Francesco Nigro edited comment on ARTEMIS-3572 at 11/15/21, 4:40 PM:
-

Yep, it is accurate. You can try the branch referenced in the PR I've sent and 
linked to the other JIRA, to check if it works for your use case.


was (Author: nigrofranz):
Yep, is it accurate. You can try the branch refereced in the PR I've send and 
linked to the other JIRA, to check if it works for your use case

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: image-2021-11-15-13-21-48-545.png, 
> image-2021-11-15-13-22-04-106.png, msg-post.png, msg-pre.png, post.png, 
> pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443938#comment-17443938
 ] 

Francesco Nigro commented on ARTEMIS-3572:
--

Yep, it is accurate. You can try the branch referenced in the PR I've sent and 
linked to the other JIRA, to check if it works for your use case.

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: image-2021-11-15-13-21-48-545.png, 
> image-2021-11-15-13-22-04-106.png, msg-post.png, msg-pre.png, post.png, 
> pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443813#comment-17443813
 ] 

Francesco Nigro commented on ARTEMIS-3572:
--

And...I was wrong :D 

[~Daniel.Claesen] https://issues.apache.org/jira/browse/ARTEMIS-3021 (already 
opened by me) should fix this issue :P

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: image-2021-11-15-13-21-48-545.png, 
> image-2021-11-15-13-22-04-106.png, msg-post.png, msg-pre.png, post.png, 
> pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443792#comment-17443792
 ] 

Francesco Nigro edited comment on ARTEMIS-3572 at 11/15/21, 12:22 PM:
--

 !pre.png! 

 !post.png! 

 !image-2021-11-15-13-21-48-545.png! 

 !image-2021-11-15-13-22-04-106.png! 



As said in the previous comments (there are other issues about this opened by 
me in the past), the additional memory footprint is due to heap usage from the 
additional properties (which are not cached either): just looking at the 
TypedProperties, i.e. the properties field, helps to see it.

I'm going to send a PR to improve it, but as said, the reason is known.

The properties map before moving is 368 bytes, while after it is 696 bytes (!!!)

The address size tries to account for memory heap occupation as well.



was (Author: nigrofranz):
 !pre.png! 

 !post.png! 

 !msg-pre.png! 

 !msg-post.png! 



As said in the previous comments (there are other opened issue re this by me, 
in the past), the additional memory footprint is due to heap usage due to the 
additional properties (that are not cached too): just looking at the 
TypedProperties ie properties field, help to see it.

I'm going to send a PR to improve it, but as said, the reason is known

properties map before moving is 368 bytes, while after, is 696 bytes (!!!)

Address size try to account memory heap occupation as well


> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: image-2021-11-15-13-21-48-545.png, 
> image-2021-11-15-13-22-04-106.png, msg-post.png, msg-pre.png, post.png, 
> pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443792#comment-17443792
 ] 

Francesco Nigro edited comment on ARTEMIS-3572 at 11/15/21, 12:20 PM:
--

 !pre.png! 

 !post.png! 

 !msg-pre.png! 

 !msg-post.png! 



As said in the previous comments (there are other issues about this opened by 
me in the past), the additional memory footprint is due to heap usage from the 
additional properties (which are not cached either): just looking at the 
TypedProperties, i.e. the properties field, helps to see it.

I'm going to send a PR to improve it, but as said, the reason is known.

The properties map before moving is 368 bytes, while after it is 696 bytes (!!!)

The address size tries to account for memory heap occupation as well.



was (Author: nigrofranz):
 !pre.png! 

 !post.png! 

 !msg-pre.png! 

 !msg-post.png! 

As said in the previous comments (there are other opened issue re this by me, 
in the past), the additional memory footprint is due to heap usage due to the 
additional properties (that are not cached too): just looking at the 
TypedProperties ie properties field, help to see it.

I'm going to send a PR to improve it, but as said, the reason is known



> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: msg-post.png, msg-pre.png, post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443792#comment-17443792
 ] 

Francesco Nigro commented on ARTEMIS-3572:
--

 !pre.png! 

 !post.png! 

 !msg-pre.png! 

 !msg-post.png! 

As said in the previous comments (there are other issues about this opened by 
me in the past), the additional memory footprint is due to heap usage from the 
additional properties (which are not cached either): just looking at the 
TypedProperties, i.e. the properties field, helps to see it.

I'm going to send a PR to improve it, but as said, the reason is known.



> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: msg-post.png, msg-pre.png, post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3572:
-
Attachment: msg-post.png

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: msg-post.png, msg-pre.png, post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3572:
-
Attachment: msg-pre.png

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: msg-pre.png, post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create a Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3572:
-
Attachment: post.png

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create a Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3572:
-
Attachment: psot.png

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: post.png, pre.png, psot.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create a Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-3572:
-
Attachment: pre.png

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
> Attachments: pre.png
>
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create a Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (ARTEMIS-3572) Address Memory Used increases by 60% when messages are moved to another queue

2021-11-15 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443690#comment-17443690
 ] 

Francesco Nigro commented on ARTEMIS-3572:
--

The size is different because we track the origin queue/address by adding extra 
fields to the moved messages: each moved message therefore has a larger 
footprint, and that footprint is correctly accounted for in the destination 
address size.
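
As an illustration only (this sketch is an assumption added for clarity, not
part of the original comment), the extra origin fields can be seen by browsing
a moved message. The _AMQ_ORIG_* property names below are the annotations
Artemis typically stamps on moved or dead-lettered messages; the exact set may
vary by version and by how the move was triggered.

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.QueueBrowser;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class InspectMovedMessage {
       public static void main(String[] args) throws Exception {
          ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
          try (JMSContext ctx = cf.createContext("admin", "admin")) {
             // Browse the destination queue without consuming the messages.
             QueueBrowser browser = ctx.createBrowser(ctx.createQueue("DLQ"));
             java.util.Enumeration<?> e = browser.getEnumeration();
             if (e.hasMoreElements()) {
                Message m = (Message) e.nextElement();
                // These origin annotations are the "extra fields" that enlarge the message.
                System.out.println("orig address: " + m.getStringProperty("_AMQ_ORIG_ADDRESS"));
                System.out.println("orig queue  : " + m.getStringProperty("_AMQ_ORIG_QUEUE"));
                System.out.println("orig msg id : " + m.getObjectProperty("_AMQ_ORIG_MESSAGE_ID"));
             }
          }
       }
    }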

> Address Memory Used increases by 60% when messages are moved to another queue
> -
>
> Key: ARTEMIS-3572
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3572
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.16.0
>Reporter: Daniel Claesén
>Priority: Major
>
> When I manually move messages from one durable queue to another the address 
> memory used increases by 60%. I expect the address memory used to remain the 
> same as before the messages were moved.
> The same thing happens when the broker moves messages automatically from a 
> queue to the DLQ.
> Steps to reproduce:
>  * Using the GUI in the console:
>  ** Create a new multicast address named "mytest"
>  ** Select the address and create a durable multicast queue named "mytest"
>  * Use the artemis CLI to produce messages. For example like this:
>  ** artemis producer --user admin --password admin --url 
> tcp://localhost:61616 --destination topic://mytest --message-count 1000 
> --message-size 40960 --threads 4
>  * Note the reported address memory used in the console
>  ** In the example above it is 160.26MB
>  * Use the GUI in the console to move messages. For example with the 
> following operation:
>  ** moveMessages(String, String)
>  *** Keep the filter param empty and enter "DLQ" in the otherQueueName param.
>  * The reported address memory used in the console is now 60% higher
>  ** In my example the reported size was 256.43MB
>  
> I have a Red Hat ticket 
> ([https://access.redhat.com/support/cases/#/case/03076511]) about this and it 
> was suggested that I should create a Jira ticket to discuss it further with 
> developers. It was mentioned that management calls themselves use memory and 
> that this could be causing the issue, but I don't see why management calls 
> would use address memory.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (ARTEMIS-3030) Journal lock evaluation fails when NFS is temporarily disconnected

2021-10-20 Thread Francesco Nigro (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-3030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro resolved ARTEMIS-3030.
--
Resolution: Information Provided

> Journal lock evaluation fails when NFS is temporarily disconnected
> --
>
> Key: ARTEMIS-3030
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3030
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.16.0
>Reporter: Apache Dev
>Assignee: Francesco Nigro
>Priority: Blocker
>
> Same scenario as ARTEMIS-2421.
> If the network between the Live Broker (B1) and the NFS Server is disconnected 
> (for example by rejecting its TCP packets with iptables), the following happens 
> after the lock lease timeout:
>  * The Backup server (B2) becomes Live
>  * When the NFS connectivity of B1 is restored, B1 remains Live
> So both brokers are live.
> The issue seems to be caused by {{java.nio.channels.FileLock#isValid}} used in 
> {{org.apache.activemq.artemis.core.server.impl.FileLockNodeManager#isLiveLockLost}}, 
> because it always returns true, even if in the meantime the lock was lost and 
> taken by B2.
> Do you suggest using specific NFS mount options?
> Or should the lock evaluation be replaced with a more reliable mechanism? We 
> notice that {{FileLock#isValid}} returns a cached value (true) even when NFS 
> connectivity is down, so it would be better to use a validation mechanism that 
> forces a query to the NFS server.
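
To make the last point concrete, here is a minimal sketch (an assumption added
for illustration, not the actual FileLockNodeManager code) of a check that does
not rely on FileLock#isValid() alone but forces real I/O against the locked
file, so that a lost NFS connection surfaces as an IOException:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;

    public final class LeaseCheck {

       private LeaseCheck() {
       }

       /** Returns true only if the lock is still valid locally AND the file is reachable. */
       static boolean lockStillUsable(FileChannel channel, FileLock lock) {
          if (!lock.isValid()) {
             return false;          // released or channel closed locally
          }
          try {
             // Force real I/O: read one byte at position 0 so the request goes to the
             // NFS server (subject to client caching/mount options such as noac/actimeo).
             ByteBuffer probe = ByteBuffer.allocate(1);
             channel.read(probe, 0);
             return true;
          } catch (IOException ioe) {
             return false;          // server unreachable: treat the lease as lost
          }
       }
    }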



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-3030) Journal lock evaluation fails when NFS is temporarily disconnected

2021-10-20 Thread Francesco Nigro (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-3030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17431222#comment-17431222
 ] 

Francesco Nigro commented on ARTEMIS-3030:
--

Closing as explained on 
https://issues.apache.org/jira/browse/ARTEMIS-3030?focusedCommentId=17418751=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17418751

> Journal lock evaluation fails when NFS is temporarily disconnected
> --
>
> Key: ARTEMIS-3030
> URL: https://issues.apache.org/jira/browse/ARTEMIS-3030
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.16.0
>Reporter: Apache Dev
>Priority: Blocker
>
> Same scenario as ARTEMIS-2421.
> If the network between the Live Broker (B1) and the NFS Server is disconnected 
> (for example by rejecting its TCP packets with iptables), the following happens 
> after the lock lease timeout:
>  * The Backup server (B2) becomes Live
>  * When the NFS connectivity of B1 is restored, B1 remains Live
> So both brokers are live.
> The issue seems to be caused by {{java.nio.channels.FileLock#isValid}} used in 
> {{org.apache.activemq.artemis.core.server.impl.FileLockNodeManager#isLiveLockLost}}, 
> because it always returns true, even if in the meantime the lock was lost and 
> taken by B2.
> Do you suggest using specific NFS mount options?
> Or should the lock evaluation be replaced with a more reliable mechanism? We 
> notice that {{FileLock#isValid}} returns a cached value (true) even when NFS 
> connectivity is down, so it would be better to use a validation mechanism that 
> forces a query to the NFS server.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

