[jira] [Commented] (ARTEMIS-2135) NPE in AMQPMessage getPriority

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659986#comment-16659986
 ] 

ASF GitHub Bot commented on ARTEMIS-2135:
-

GitHub user gaohoward opened a pull request:

https://github.com/apache/activemq-artemis/pull/2386

ARTEMIS-2135 Test multiple core consumers receiving amqp messages

This test verifies the issue fixed by the commit:
48e0fc8f42346d96bc809593a150e05a586787ee (ARTEMIS-2135)
(note the upstream fix is ARTEMIS-2096)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gaohoward/activemq-artemis d_26x_2034fix2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/2386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2386


commit 07355b1fa6f1d20a50f130df48dd8fe006932e54
Author: Howard Gao 
Date:   2018-10-19T13:40:33Z

ARTEMIS-2135 Test multiple core consumers receiving amqp messages

This test verifies the issue fixed by the commit:
48e0fc8f42346d96bc809593a150e05a586787ee (ARTEMIS-2135)
(note the upstream fix is ARTEMIS-2096)




> NPE in AMQPMessage getPriority
> --
>
> Key: ARTEMIS-2135
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2135
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Affects Versions: 2.6.3
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.6.4
>
>
> getHeader may return null if AMQPCoreConverter is used from multiple 
> threads.
>  
>  
> This doesn't affect master currently as that was fixed at ARTEMIS-2096.
>  
> I am adding a test in master, and applying a fix in 2.6.x only.
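The race described above is the classic case of a lazily decoded field being reset by another thread mid-read. A minimal sketch of the defensive pattern, with hypothetical names (this is not the actual Artemis code):

```java
// Sketch of a null-safe getPriority for a lazily decoded message header.
// All names here are hypothetical, not the actual Artemis implementation.
public class PrioritySketch {
    static final byte DEFAULT_PRIORITY = 4; // the AMQP default priority

    static class Header {
        Byte priority; // may be unset in the encoded message
        Header(Byte p) { priority = p; }
    }

    volatile Header header; // another thread may reset this to null

    byte getPriority() {
        Header h = header;           // read the field once into a local
        if (h == null || h.priority == null) {
            return DEFAULT_PRIORITY; // instead of an NPE on h.priority
        }
        return h.priority;
    }

    public static void main(String[] args) {
        PrioritySketch m = new PrioritySketch();
        System.out.println(m.getPriority()); // 4: header not decoded yet
        m.header = new Header((byte) 9);
        System.out.println(m.getPriority()); // 9
    }
}
```

Reading the volatile field once into a local is what makes the check-then-use safe even if a concurrent conversion nulls the field between the two statements.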



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-2096) AMQP: Refactoring AMQPMessage abstraction for better consistency and performance

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659984#comment-16659984
 ] 

ASF GitHub Bot commented on ARTEMIS-2096:
-

Github user gaohoward commented on the issue:

https://github.com/apache/activemq-artemis/pull/2383
  
@clebertsuconic I'll send a PR to 2.6.x but not cherry-pick from this, 
because on 2.6.x the fix is a different commit, not the one I mentioned here. 


> AMQP: Refactoring AMQPMessage abstraction for better consistency and 
> performance
> 
>
> Key: ARTEMIS-2096
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2096
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.6.3
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 2.7.0
>
>
> The AMQPMessage abstraction used to wrap the AMQP message section has some 
> inconsistencies in how it manages the underlying data and the decoded AMQP 
> section obtained from the Proton-J codec, as well as issues with state being 
> maintained in the presence of changes made to the message through the 
> public-facing Message APIs.
> A refactoring of the AMQPMessage class to better utilize the Proton-J codec 
> to manage the message data, and how it is parsed and re-encoded on change, 
> is needed to ensure that no corrupt messages are sent and that we are not 
> decoding and encoding sections of the message we do not intend to read or 
> change on the server (we can currently decode message bodies or footers in a 
> few cases where we intend not to).





[jira] [Assigned] (ARTEMIS-1919) artemis-core-client TLS SNI and verifyHost operation are not independent

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram reassigned ARTEMIS-1919:
---

Assignee: Justin Bertram

> artemis-core-client TLS SNI and verifyHost operation are not independent
> 
>
> Key: ARTEMIS-1919
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1919
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
> Environment: Fedora 27
> OpenJDK 1.8.0_171
> Artemis master i.e. 2.7.0-SNAPSHOT build
> OCP 3.9 running the default haproxy implementation
>Reporter: Roddie Kieley
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>
> When connecting to the broker with the core client (via ./bin/artemis 
> producer) through an haproxy configured for TLS passthrough that requires 
> SNI, it is observed that the SNI information is not sent unless verifyHost 
> is true, even if sniHost is set on the URI.
> With sniHost specified at the haproxy waypoint: if verifyHost=false, haproxy 
> routes the traffic to the no-SNI backend; if verifyHost=true, haproxy passes 
> it to the tcp backend and the traffic reaches the broker, at which point the 
> connectivity fails.
> As a point of comparison, the Qpid JMS client over AMQP with 
> verifyHost=false works without problem.
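In JSSE these are two independent knobs on SSLParameters, which is presumably what the fix decouples; a hedged sketch (the sniHost/verifyHost names mirror the URI options above, the rest is illustrative):

```java
import javax.net.ssl.SNIHostName;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;
import java.util.Collections;

public class SniSketch {
    // Configure SNI and host verification independently on an engine.
    static SSLEngine configure(SSLEngine engine, String sniHost, boolean verifyHost) {
        SSLParameters params = engine.getSSLParameters();
        // Always send SNI when sniHost is set, regardless of verifyHost.
        params.setServerNames(Collections.singletonList(new SNIHostName(sniHost)));
        // Only enable hostname verification when verifyHost is true.
        params.setEndpointIdentificationAlgorithm(verifyHost ? "HTTPS" : null);
        engine.setSSLParameters(params);
        return engine;
    }

    static SSLEngine newConfiguredEngine(String sniHost, boolean verifyHost) {
        try {
            return configure(SSLContext.getDefault().createSSLEngine(), sniHost, verifyHost);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        SSLEngine engine = newConfiguredEngine("broker.example.com", false);
        // SNI is present even though verification is off.
        System.out.println(engine.getSSLParameters().getServerNames());
    }
}
```

The point of the sketch is that `setServerNames` and `setEndpointIdentificationAlgorithm` are separate settings, so sending SNI need not be conditioned on host verification.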





[jira] [Commented] (ARTEMIS-1929) Race condition in STOMP with multiple, identical durable subscribers

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659973#comment-16659973
 ] 

ASF GitHub Bot commented on ARTEMIS-1929:
-

GitHub user jbertram opened a pull request:

https://github.com/apache/activemq-artemis/pull/2385

ARTEMIS-1929 race in STOMP identical durable subs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jbertram/activemq-artemis ARTEMIS-1929

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/2385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2385


commit bc07456ec1cf495a820b77b9458d515afbe690d9
Author: Justin Bertram 
Date:   2018-09-25T21:41:27Z

ARTEMIS-1929 race in STOMP identical durable subs




> Race condition in STOMP with multiple, identical durable subscribers
> ---
>
> Key: ARTEMIS-1929
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1929
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Lionel Cons
>Assignee: Justin Bertram
>Priority: Major
> Attachments: ARTEMIS-1929.log
>
>
> When testing STOMP durable subscriptions, I got errors when having more than 
> one consumer on the same "queue" (i.e. with the same topic, {{client-id}} and 
> {{durable-subscription-name}}).
> Artemis reports:
> {code}
> 2018-06-13 11:27:45,548 WARN  
> [org.apache.activemq.artemis.core.protocol.stomp] AMQ222069: Sent ERROR frame 
> to STOMP client /127.0.0.1:45694: AMQ339016 Error creating subscription 
> 2836570-5b20e391-7fb0-a81b-1
> {code}
> Here is my setup:
>  * two STOMP consumers (different connections)
>  * using the same topic name, {{client-id}} and {{durable-subscription-name}} 
> triplet
> I got this when trying to reproduce what we use with ActiveMQ 5 virtual 
> queues, that is:
>  * producers sending to a topic
>  * different virtual queues attached (e.g. "test" and "prod")
>  * multiple concurrent consumers on each queue to split the consuming load
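A race of this shape is typically a non-atomic check-then-create of the shared subscription queue. The general form of the fix can be sketched with plain JDK types (the key and queue names below are illustrative, not the broker's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SubscriptionSketch {
    // Keyed by client-id + durable-subscription-name, as in the report above.
    static final Map<String, String> durableQueues = new ConcurrentHashMap<>();

    // Racy variant (the bug): two connections can both see "absent" and both
    // try to create, and the second creation fails with "already exists":
    //   if (!durableQueues.containsKey(key)) durableQueues.put(key, newQueue(key));

    // Safe variant: computeIfAbsent makes the check-then-create atomic, so
    // identical subscribers end up sharing the one queue.
    static String subscribe(String key) {
        return durableQueues.computeIfAbsent(key, SubscriptionSketch::newQueue);
    }

    static String newQueue(String key) {
        return "queue-for-" + key;
    }

    public static void main(String[] args) {
        String a = subscribe("clientA.durable1");
        String b = subscribe("clientA.durable1"); // second identical subscriber
        System.out.println(a.equals(b)); // both consumers share one queue
    }
}
```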





[jira] [Resolved] (ARTEMIS-1954) Eliminate all possible usage of JMSServerManager

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-1954.
-
   Resolution: Fixed
Fix Version/s: 2.7.0

> Eliminate all possible usage of JMSServerManager
> 
>
> Key: ARTEMIS-1954
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1954
> Project: ActiveMQ Artemis
>  Issue Type: Task
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0
>
>
> Since Artemis 2.0 all JMS-specific broker management classes, interfaces, and 
> methods have been deprecated in favor of their more general counterparts.  
> Internal components and tests which use JMSServerManager should be changed.





[jira] [Updated] (ARTEMIS-2051) Add trace logging for JDBC

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-2051:

Fix Version/s: 2.7.0

> Add trace logging for JDBC
> --
>
> Key: ARTEMIS-2051
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2051
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.3
>
>






[jira] [Reopened] (ARTEMIS-2051) Add trace logging for JDBC

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram reopened ARTEMIS-2051:
-

> Add trace logging for JDBC
> --
>
> Key: ARTEMIS-2051
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2051
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.3
>
>






[jira] [Closed] (ARTEMIS-2051) Add trace logging for JDBC

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram closed ARTEMIS-2051.
---
Resolution: Fixed

> Add trace logging for JDBC
> --
>
> Key: ARTEMIS-2051
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2051
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.3
>
>






[jira] [Resolved] (ARTEMIS-2076) Can't change queue filter between restarts

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2076.
-
   Resolution: Fixed
Fix Version/s: 2.7.0

> Can't change queue filter between restarts
> --
>
> Key: ARTEMIS-2076
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2076
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0
>
>






[jira] [Resolved] (ARTEMIS-2087) Support masked passwords in management.xml

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2087.
-
   Resolution: Fixed
Fix Version/s: 2.6.4
   2.7.0

> Support masked passwords in management.xml
> --
>
> Key: ARTEMIS-2087
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2087
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>






[jira] [Resolved] (ARTEMIS-2106) Failures during broker start are not clearly logged

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2106.
-
Resolution: Fixed

> Failures during broker start are not clearly logged
> ---
>
> Key: ARTEMIS-2106
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2106
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>






[jira] [Resolved] (ARTEMIS-2107) Clarify identity for authn failures in notification

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2107.
-
   Resolution: Fixed
Fix Version/s: 2.6.4
   2.7.0

> Clarify identity for authn failures in notification
> ---
>
> Key: ARTEMIS-2107
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2107
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>






[jira] [Commented] (ARTEMIS-1884) ActiveMQ5-like message-level authorization

2018-10-22 Thread Justin Bertram (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659925#comment-16659925
 ] 

Justin Bertram commented on ARTEMIS-1884:
-

I'm not aware of any effort currently to implement this feature.

Also, the inability of a remoting interceptor to acquire the subject from the 
connection object is not a bug. Implementing such behavior would be a new 
feature.

> ActiveMQ5-like message-level authorization
> --
>
> Key: ARTEMIS-1884
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1884
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Johan Stenberg
>Priority: Major
>
> In ActiveMQ 5 we have 
> [MessageAuthorizationPolicy|https://activemq.apache.org/maven/apidocs/org/apache/activemq/security/MessageAuthorizationPolicy.html].
> We would need a similar feature in Artemis to be able to migrate to it.
> Artemis has the 
> [ActiveMQServerPlugin#beforeDeliver|https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/plugin/ActiveMQServerPlugin.java#L502]
> callback method, where we could probably prevent message delivery by 
> throwing an exception, but using exceptions for flow control feels like a 
> very wrong thing to do here.
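The boolean-callback style of ActiveMQ 5's MessageAuthorizationPolicy avoids exceptions-as-flow-control; a rough sketch of what such a hook could look like (the interface and all names here are hypothetical, not an existing Artemis API):

```java
public class MessagePolicySketch {
    // Hypothetical policy hook, modeled on ActiveMQ 5's
    // MessageAuthorizationPolicy; this interface does not exist in Artemis.
    interface MessageAuthorizationPolicy {
        boolean isAllowedToConsume(String subjectName, String address);
    }

    // Example policy: only "admin" may read from restricted.* addresses.
    static final MessageAuthorizationPolicy POLICY =
        (subject, address) -> !address.startsWith("restricted.") || "admin".equals(subject);

    // A delivery hook would consult the policy and silently skip the message
    // for unauthorized consumers instead of throwing an exception.
    static boolean deliver(String subject, String address) {
        return POLICY.isAllowedToConsume(subject, address);
    }

    public static void main(String[] args) {
        System.out.println(deliver("alice", "orders"));           // true
        System.out.println(deliver("alice", "restricted.audit")); // false
        System.out.println(deliver("admin", "restricted.audit")); // true
    }
}
```

Returning a boolean lets the broker treat "not authorized" as a normal, expected outcome rather than an error path.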





[jira] [Commented] (ARTEMIS-2096) AMQP: Refactoring AMQPMessage abstraction for better consistency and performance

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659905#comment-16659905
 ] 

ASF GitHub Bot commented on ARTEMIS-2096:
-

Github user gaohoward commented on the issue:

https://github.com/apache/activemq-artemis/pull/2383
  
ok will do.


> AMQP: Refactoring AMQPMessage abstraction for better consistency and 
> performance
> 
>
> Key: ARTEMIS-2096
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2096
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.6.3
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 2.7.0
>
>
> The AMQPMessage abstraction used to wrap the AMQP message section has some 
> inconsistencies in how it manages the underlying data and the decoded AMQP 
> section obtained from the Proton-J codec, as well as issues with state being 
> maintained in the presence of changes made to the message through the 
> public-facing Message APIs.
> A refactoring of the AMQPMessage class to better utilize the Proton-J codec 
> to manage the message data, and how it is parsed and re-encoded on change, 
> is needed to ensure that no corrupt messages are sent and that we are not 
> decoding and encoding sections of the message we do not intend to read or 
> change on the server (we can currently decode message bodies or footers in a 
> few cases where we intend not to).





[jira] [Comment Edited] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Alan Protasio (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659200#comment-16659200
 ] 

Alan Protasio edited comment on AMQ-7080 at 10/23/18 12:41 AM:
---

Done [~gtully] :D Thanks
 And the tests were updated as well...

[Update] [~gtully] The change to free the allocatedList - 
freePages(allocateList) - broke lots of tests (e.g. 
org.apache.activemq.store.kahadb.disk.index.HashIndexTest), so I changed it 
back. I would say we need a new Jira to track this (it can have lots of 
collateral effects); I can open it and try to figure out a solution if you 
think it is worth it...

Anyway, I changed how the tests allocate and free pages in the tests where 
the full recovery is skipped (db.free is usable):

{code:java}
Page firstPage = pf.allocate(numberOfFreePages * 2);
for (int i = 0; i < numberOfFreePages; i++) {
    pf.freePage(firstPage.pageId + i);
}
pf.flush();
{code}

One of the tests does a simple clean shutdown (unload + load), so the free 
pages are recovered through db.free... In that case, if I allocated the free 
pages via tx.allocate, the number of free pages would be 0 both before and 
after the shutdown.


was (Author: alanprot):
Done [~gtully] :D Thanks
 And also the tests were updated...

[Update] [~gtully] This change to free the allocatedList - 
freePages(allocateList) - broke lots of tests... So changed it back - Probably 
we need a new Jira to track this I can open and try to see what I can do if 
you think work... (IE: 
org.apache.activemq.store.kahadb.disk.index.HashIndexTest)

Anyway... I change the tests to allocate and free pages without transactions... 
(so we dont add this behaviour in the test).

{code:java}
Page firstPage = pf.allocate(numberOfFreePages * 2);
for (int i = 0; i < numberOfFreePages; i++) {
    pf.freePage(firstPage.pageId + i);
}
pf.flush();
{code}

One of my tests is to test a normal clean shutdown 
(testFreePageRecoveryCleanShutdownAndRecoverFromDbFreeFile). In this test I 
only call unload and load... If I allocate the free pages via tx.allocate, the 
number of free pages would be 0...

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
> Attachments: AMQ-7080-freeList-update.diff
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about the 
> free pages in the index. To recover this information, ActiveMQ reads the 
> whole index during shutdown searching for free pages and then saves the 
> db.free file. This operation can take a long time, making failover slower 
> (during the shutdown, ActiveMQ will still hold the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. The 
> broker will stay in this state unless the index is deleted (this state means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS; so, if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> To prevent ActiveMQ from reading the whole index file to search for free 
> pages, we can keep track of them on every checkpoint. To do that we need to 
> be sure that db.data and db.free are in sync. To achieve that we can have an 
> attribute in the db.free page that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to have to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can possibly increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 got occupied.
> T3 -> Crash
> In the current scenario, after Pagefile#load P1 will be free and then 
> the replay will mark 

[jira] [Commented] (ARTEMIS-1884) ActiveMQ5-like message-level authorization

2018-10-22 Thread Shannon Gallagher (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659869#comment-16659869
 ] 

Shannon Gallagher commented on ARTEMIS-1884:


Hello,

Are there plans to start working on this feature?

I have encountered a need for message-level authorization, but am using Artemis 
(2.4.0). I have tried implementing a solution with remote interceptors, but 
am unable to extract the Java subject from within the interceptor.

At this point, I think I would need either an Artemis feature similar to 
MessageAuthorizationPolicy in ActiveMQ 5 or some way to access the Java 
subject from my interceptor.

PS: Might I be asking this in the wrong place? Should I file a bug or feature 
ticket for interceptors to have access to the Java subject? 

Thanks!

> ActiveMQ5-like message-level authorization
> --
>
> Key: ARTEMIS-1884
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1884
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Reporter: Johan Stenberg
>Priority: Major
>
> In ActiveMQ 5 we have 
> [MessageAuthorizationPolicy|https://activemq.apache.org/maven/apidocs/org/apache/activemq/security/MessageAuthorizationPolicy.html].
> We would need a similar feature in Artemis to be able to migrate to it.
> Artemis has the 
> [ActiveMQServerPlugin#beforeDeliver|https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/plugin/ActiveMQServerPlugin.java#L502]
> callback method, where we could probably prevent message delivery by 
> throwing an exception, but using exceptions for flow control feels like a 
> very wrong thing to do here.





[jira] [Comment Edited] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Alan Protasio (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16659200#comment-16659200
 ] 

Alan Protasio edited comment on AMQ-7080 at 10/23/18 12:05 AM:
---

Done [~gtully] :D Thanks
 And the tests were updated as well...

[Update] [~gtully] The change to free the allocatedList - 
freePages(allocateList) - broke lots of tests (e.g. 
org.apache.activemq.store.kahadb.disk.index.HashIndexTest), so I changed it 
back. We probably need a new Jira to track this; I can open it and see what I 
can do if you think it is worth it.

Anyway, I changed the tests to allocate and free pages without transactions 
(so we don't add this behaviour in the test):

{code:java}
Page firstPage = pf.allocate(numberOfFreePages * 2);
for (int i = 0; i < numberOfFreePages; i++) {
    pf.freePage(firstPage.pageId + i);
}
pf.flush();
{code}

One of my tests tests a normal clean shutdown 
(testFreePageRecoveryCleanShutdownAndRecoverFromDbFreeFile). In this test I 
only call unload and load... If I allocated the free pages via tx.allocate, 
the number of free pages would be 0...


was (Author: alanprot):
Done [~gtully] :D Thanks

And also the tests were updated...

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
> Attachments: AMQ-7080-freeList-update.diff
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about the 
> free pages in the index. To recover this information, ActiveMQ reads the 
> whole index during shutdown searching for free pages and then saves the 
> db.free file. This operation can take a long time, making failover slower 
> (during the shutdown, ActiveMQ will still hold the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. The 
> broker will stay in this state unless the index is deleted (this state means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS; so, if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> To prevent ActiveMQ from reading the whole index file to search for free 
> pages, we can keep track of them on every checkpoint. To do that we need to 
> be sure that db.data and db.free are in sync. To achieve that we can have an 
> attribute in the db.free page that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to have to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can possibly increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 got occupied.
> T3 -> Crash
> In the current scenario, after Pagefile#load P1 will be free and then the 
> replay will mark P1 as occupied or will occupy another page (now that the 
> recovery of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync, showing 
> the reality at T1 (checkpoint); if they are in sync we can trust db.free.
> This is a really rough draft of what I'm suggesting... If you guys agree, I 
> can create the proper patch after:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590
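The two checkpoint steps and the recovery check described above can be sketched with in-memory stand-ins for db.free and db.data; the freePageUniqueId name follows the description, everything else is illustrative:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;

public class FreeListCheckpointSketch {
    static final AtomicLong ids = new AtomicLong();

    // Stand-ins for the db.free file and the db.data metadata.
    static long dbFreeId = -1;
    static Set<Long> dbFreePages = new HashSet<>();
    static long dbDataFreePageUniqueId = -2;

    // Checkpoint: step 1 writes db.free with a fresh id, step 2 records the
    // same id in db.data. A crash between the steps leaves them out of sync.
    static void checkpoint(Set<Long> freePages, boolean crashBetweenSteps) {
        long freePageUniqueId = ids.incrementAndGet();
        dbFreePages = new HashSet<>(freePages);    // step 1: save db.free
        dbFreeId = freePageUniqueId;
        if (crashBetweenSteps) return;             // simulate the unlikely crash
        dbDataFreePageUniqueId = freePageUniqueId; // step 2: save metadata
    }

    // Recovery: trust db.free only when both ids match; otherwise fall back
    // to the slow full index scan.
    static boolean canTrustFreeList() {
        return dbFreeId == dbDataFreePageUniqueId;
    }

    public static void main(String[] args) {
        checkpoint(Set.of(1L, 2L, 3L), false);
        System.out.println(canTrustFreeList()); // true: ids match
        checkpoint(Set.of(1L), true);
        System.out.println(canTrustFreeList()); // false: full scan needed
    }
}
```

Note that the id comparison also naturally tolerates the stale-data case in the timeline above: a db.free from T1 is trusted, and the journal replay then corrects any pages that changed after the checkpoint.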





[jira] [Resolved] (ARTEMIS-2108) Potential StackOverflowError when load balancing disabled

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2108.
-
   Resolution: Fixed
Fix Version/s: 2.6.4
   2.7.0

> Potential StackOverflowError when load balancing disabled
> -
>
> Key: ARTEMIS-2108
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2108
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>
> It's possible that when a cluster has message load balancing disabled, a 
> message sent to a node that has only a corresponding remote queue binding 
> will trigger a stack overflow.





[jira] [Resolved] (ARTEMIS-2112) Remove JMX properties from start scripts

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2112.
-
Resolution: Fixed

> Remove JMX properties from start scripts
> 
>
> Key: ARTEMIS-2112
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2112
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>
> JMX configuration is now done via {{management.xml}}.  Configuring JMX via 
> the start scripts could result in unexpected behavior since the 
> {{authorisation}} configuration from {{management.xml}} would be ignored.





[jira] [Created] (AMQ-7083) Project website has some obsolete content

2018-10-22 Thread David Tonhofer (JIRA)
David Tonhofer created AMQ-7083:
---

 Summary: Project website has some obsolete content
 Key: AMQ-7083
 URL: https://issues.apache.org/jira/browse/AMQ-7083
 Project: ActiveMQ
  Issue Type: Bug
  Components: Documentation
Reporter: David Tonhofer


1) [http://activemq.apache.org/activemq-5156-release.html] still talks about 
MD5 sums, whereas the hashes have moved to SHA-512.

2) The link to the source is wrong on 
http://activemq.apache.org/activemq-5156-release.html

Given link:

[http://www.apache.org/dyn/closer.cgi?path=/activemq/5.15.5/activemq-parent-5.15.6-source-release.zip6]

Correct link:

[http://www.apache.org/dyn/closer.cgi?path=/activemq/5.15.6/activemq-parent-5.15.6-source-release.zip]

 





[jira] [Resolved] (ARTEMIS-1018) Duplicate error ids on different error messages

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-1018.
-
   Resolution: Fixed
 Assignee: Justin Bertram
Fix Version/s: 2.6.4
   2.7.0

> Duplicate error ids on different error messages
> ---
>
> Key: ARTEMIS-1018
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1018
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.0.0
>Reporter: Jiri Daněk
>Assignee: Justin Bertram
>Priority: Minor
> Fix For: 2.7.0, 2.6.4
>
>
> Some messages happen to have the same id, for example:
> {noformat}
> @Message(id = 119019, value = "Session is closed")
> ActiveMQObjectClosedException sessionClosed();
>
> @Message(id = 119019, value = "Queue already exists {0}", format = Message.Format.MESSAGE_FORMAT)
> ActiveMQQueueExistsException queueAlreadyExists(SimpleString queueName);
> {noformat}
> (I stumbled upon this when trying to send a message from 
> artemis-jms-client-1.5.4 to artemis-server-2.0.0; I searched for the message 
> code of the message I got and got two results instead of one.)
> It may not be a problem, since one is in ActiveMQClientMessageBundle and the 
> other is in ActiveMQMessageBundle, but still.
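A clash like the duplicate id 119019 can be caught mechanically. A small sketch of such a duplicate-id scan (the real bundles carry the ids in @Message annotations; here they are just plain ints):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class DuplicateIdSketch {
    // Return every id that appears more than once, in ascending order.
    static Set<Integer> duplicateIds(int... ids) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> duplicates = new TreeSet<>();
        for (int id : ids) {
            if (!seen.add(id)) { // add() is false when the id was seen before
                duplicates.add(id);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        // 119019 is used by both sessionClosed() and queueAlreadyExists().
        System.out.println(duplicateIds(119018, 119019, 119019, 119020)); // [119019]
    }
}
```

A check like this could run in a unit test over all declared message ids so collisions fail the build instead of surfacing as confusing error-code lookups.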





[jira] [Updated] (ARTEMIS-2136) Lack of synchronization on CoreMessage.copy may cause issues

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-2136:

Issue Type: Bug  (was: Task)

> Lack of synchronization on CoreMessage.copy may cause issues
> 
>
> Key: ARTEMIS-2136
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2136
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3
>Reporter: clebert suconic
>Assignee: clebert suconic
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>






[jira] [Resolved] (ARTEMIS-2136) Lack of synchronization on CoreMessage.copy may cause issues

2018-10-22 Thread Justin Bertram (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram resolved ARTEMIS-2136.
-
Resolution: Fixed







[jira] [Commented] (ARTEMIS-1996) MappedSequentialFileFactory may cause DirectByteBuffer memory leaks

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659502#comment-16659502
 ] 

ASF GitHub Bot commented on ARTEMIS-1996:
-

Github user franz1981 commented on the issue:

https://github.com/apache/activemq-artemis/pull/2199
  
@morefuntang Please check if master has fixed your issue with GC and close 
this one: it has been addressed on 
https://github.com/apache/activemq-artemis/pull/2250
With it there are no pooled direct buffers anymore; each compaction 
creates just one direct ByteBuffer, reusing it for the whole process and 
releasing it deterministically when finished :+1: 
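The reuse-and-release pattern described in that comment can be sketched roughly as follows (DirectBufferReleaser and the method names are illustrative, not the Artemis API):

```java
import java.nio.ByteBuffer;

// Sketch: allocate one direct buffer per compaction run, reuse it for every
// file, and release it deterministically when the run finishes.
public class CompactionBuffer {
    interface DirectBufferReleaser { void release(ByteBuffer b); }

    static int compact(int files, int bufSize, DirectBufferReleaser releaser) {
        ByteBuffer buf = ByteBuffer.allocateDirect(bufSize);
        int processed = 0;
        try {
            for (int i = 0; i < files; i++) {
                buf.clear();     // reuse the same buffer for each file
                buf.putInt(i);   // stand-in for the real read/write work
                processed++;
            }
        } finally {
            releaser.release(buf); // deterministic release, no reliance on GC
        }
        return processed;
    }

    public static void main(String[] args) {
        // A real releaser would invoke the buffer's cleaner; here it is a no-op.
        int done = compact(5, 4096, b -> { });
        System.out.println("files compacted: " + done); // prints 5
    }
}
```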


> MappedSequentialFileFactory may cause DirectByteBuffer memory leaks
> ---
>
> Key: ARTEMIS-1996
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1996
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Reporter: tang pu
>Priority: Minor
>
> Because of some customization requirements, the readJournalFile method of 
> JournalImpl needs to be called multiple times. 
> During a stress test, it was found that almost every 5 hours the Broker 
> experienced a Full GC.
> This is the information about the Full GC in the GC log:
> {noformat}
> 2018-07-25T12:14:07.323+0800: 10089.523: [Full GC (System.gc()) 6767M->253M(16G), 8.7138691 secs]
>  [Eden: 632.0M(712.0M)->0.0B(816.0M) Survivors: 104.0M->0.0B Heap: 6767.6M(16.0G)->253.9M(16.0G)], [Metaspace: 36323K->35961K(1083392K)]
>  [Times: user=2.56 sys=0.42, real=8.71 secs]
> {noformat}
> When the Full GC appears, the thread stack is as follows:
> {noformat}
> java.lang.System.gc(System.java:993)
> java.nio.Bits.reserveMemory(Bits.java:666)
> java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> org.apache.activemq.artemis.core.io.mapped.MappedSequentialFileFactory.newBuffer(MappedSequentialFileFactory.java:109)
> org.apache.activemq.artemis.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:463)
> {noformat}
> Analyzing the stack, it appears that direct memory cannot be reserved, 
> causing the JVM to call the System.gc() method.
> In the Broker, MappedSequentialFileFactory caches off-heap memory through a 
> ThreadLocal. Once the thread is evicted by the CompactExecutor (keepalive is 
> 60s) in the Journal, that off-heap memory is "leaked".
> NIOSequentialFileFactory also has the same problem.
>  
>  
>  
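The leak pattern described in the report can be illustrated with a minimal, self-contained sketch (a deliberate simplification, not Artemis code): a direct buffer cached in a ThreadLocal becomes orphaned native memory once its pooled thread dies, reclaimable only by a later GC.

```java
import java.nio.ByteBuffer;

// Sketch of the reported leak: each thread caches one direct buffer for reuse.
public class ThreadLocalDirectBufferLeak {
    static final ThreadLocal<ByteBuffer> CACHE =
        ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(1 << 20)); // 1 MiB

    static ByteBuffer buffer() {
        ByteBuffer b = CACHE.get();
        b.clear();
        return b;
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> buffer().putInt(42));
        worker.start();
        worker.join();
        // When the worker dies (e.g. evicted after the executor keepalive), its
        // cached direct buffer is freed only by a later GC of the ThreadLocal
        // map entry -- until then the native memory is effectively "leaked".
        System.out.println("worker finished; its native buffer is now orphaned");
    }
}
```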





[jira] [Commented] (ARTEMIS-1996) MappedSequentialFileFactory may cause DirectByteBuffer memory leaks

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659495#comment-16659495
 ] 

ASF GitHub Bot commented on ARTEMIS-1996:
-

Github user asfgit closed the pull request at:

https://github.com/apache/activemq-artemis/pull/2250







[jira] [Commented] (ARTEMIS-1996) MappedSequentialFileFactory may cause DirectByteBuffer memory leaks

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659492#comment-16659492
 ] 

ASF GitHub Bot commented on ARTEMIS-1996:
-

Github user clebertsuconic commented on a diff in the pull request:

https://github.com/apache/activemq-artemis/pull/2250#discussion_r227024184
  
--- Diff: 
tests/unit-tests/src/test/java/org/apache/activemq/artemis/tests/unit/core/journal/impl/fakes/FakeSequentialFileFactory.java
 ---
@@ -417,6 +418,18 @@ public synchronized void writeDirect(final ByteBuffer 
bytes, final boolean sync,
 
   }
 
+  @Override
+  public synchronized void blockingWriteDirect(ByteBuffer bytes,
--- End diff --

do you really need synchronized  here? this is synchronizing the factory.







[jira] [Commented] (ARTEMIS-2141) Display "Filter" column in Consumers View of AdminUI

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659491#comment-16659491
 ] 

ASF GitHub Bot commented on ARTEMIS-2141:
-

GitHub user sebthom opened a pull request:

https://github.com/apache/activemq-artemis/pull/2384

ARTEMIS-2141 Add Filter column to Consumers View of AdminUI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sebthom/activemq-artemis consumers-filter

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/2384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2384


commit ff1decdded7778316f8c6c9c1af74646e89c3022
Author: extern-thomschke-sebastian 
Date:   2018-10-22T15:24:04Z

ARTEMIS-2141 Add Filter column to Consumers View of AdminUI




> Display "Filter" column in Consumers View of AdminUI
> 
>
> Key: ARTEMIS-2141
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2141
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.6.3
>Reporter: Sebastian T
>Priority: Trivial
>
> Currently the message filter of consumers is not displayed in the Consumers 
> View; however, this is valuable information while troubleshooting.





[jira] [Commented] (ARTEMIS-2096) AMQP: Refactoring AMQPMessage abstraction for better consistency and performance

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659488#comment-16659488
 ] 

ASF GitHub Bot commented on ARTEMIS-2096:
-

Github user clebertsuconic commented on the issue:

https://github.com/apache/activemq-artemis/pull/2383
  
@gaohoward 

- I wouldn't mention external JIRAs here... 
- Please find a more inclusive message.. when I do git log, a message such 
as "Adding a Test" doesn't help much.




> AMQP: Refactoring AMQPMessage abstraction for better consistency and 
> performance
> 
>
> Key: ARTEMIS-2096
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2096
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.6.3
>Reporter: Timothy Bish
>Assignee: Timothy Bish
>Priority: Major
> Fix For: 2.7.0
>
>
> The AMQPMessage abstraction used to wrap the AMQP message sections has some 
> inconsistencies in how it manages the underlying data and the decoded AMQP 
> sections obtained from the Proton-J codec, as well as issues with state being 
> maintained in the presence of changes made through the public-facing Message 
> APIs.
> The AMQPMessage class needs to be refactored to better utilize the Proton-J 
> codec to manage the message data and how it is parsed and re-encoded on 
> change, ensuring no corrupt messages are sent and that we do not decode or 
> encode sections of the message we do not intend to read or change on the 
> server (we currently can decode message bodies or the footer in a few cases 
> where we intend not to).
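The refactoring goal can be illustrated with a generic sketch (deliberately not the real AMQPMessage/Proton-J API; a String stands in for a message section): keep the encoded bytes authoritative, decode a section only on access, and re-encode only after a mutation invalidates the cached bytes.

```java
// Sketch of a lazily decoded message section with encode-on-change semantics.
public class LazySection {
    private byte[] encoded;   // authoritative wire bytes
    private String decoded;   // lazily decoded view
    private boolean dirty;    // true once a mutation invalidates `encoded`

    LazySection(byte[] encoded) { this.encoded = encoded; }

    String get() {
        if (decoded == null) decoded = new String(encoded); // decode on demand
        return decoded;
    }

    void set(String value) { decoded = value; dirty = true; }

    byte[] encode() {
        if (dirty) { encoded = decoded.getBytes(); dirty = false; } // re-encode once
        return encoded;
    }

    public static void main(String[] args) {
        LazySection s = new LazySection("header".getBytes());
        System.out.println(s.get());                 // prints "header"
        s.set("header2");
        System.out.println(new String(s.encode())); // prints "header2"
    }
}
```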





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Alan Protasio (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659200#comment-16659200
 ] 

Alan Protasio commented on AMQ-7080:


Done [~gtully] :D Thanks

And also the tests were updated...

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
> Attachments: AMQ-7080-freeList-update.diff
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about the 
> free pages in the index. To recover this information, ActiveMQ reads 
> the whole index during shutdown searching for free pages and then saves the 
> db.free file. This operation can take a long time, making failover 
> slower (during the shutdown, ActiveMQ will still hold the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. The 
> broker will stay in this state unless the index is deleted (this state means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS, so if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> To prevent ActiveMQ from reading the whole index file to search for free 
> pages, we can keep track of them on every checkpoint. To do that we 
> need to be sure that db.data and db.free are in sync. To achieve that we can 
> have an attribute in the db.free page that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and assign a freePageUniqueId
> 2 - Save this freePageUniqueId in db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If it does, we can safely use the free page information 
> contained in db.free.
> Now, the only case requiring a read of the whole index file again is if the 
> crash happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 got occupied.
> T3 -> Crash
> In the current scenario, after PageFile#load P1 will be free, and then the 
> replay will mark P1 as occupied or will occupy another page (now that 
> the recovery of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync and reflect 
> reality at T1 (checkpoint). If they are in sync, we can trust db.free.
> This is a really fast draft of what I'm suggesting... If you guys agree, I 
> can create the proper patch after:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590
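The two-step checkpoint and the recovery check proposed above can be sketched as follows (class and field names are hypothetical; `stamp` stands in for the freePageUniqueId stored in each file):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the proposed protocol: db.free is stamped with a freePageUniqueId
// and db.data records the same id, so on recovery the free list is trusted
// only when the two ids match.
public class FreeListCheckpoint {
    static final AtomicLong SEQ = new AtomicLong();

    static class Store { long stamp = -1; } // stand-in for db.free / db.data

    static void checkpoint(Store dbFree, Store dbData) {
        long id = SEQ.incrementAndGet();
        dbFree.stamp = id;  // 1 - save db.free with a fresh freePageUniqueId
        dbData.stamp = id;  // 2 - record the same id in db.data metadata
    }

    // On restart: reuse db.free only if it is in sync with db.data.
    static boolean canTrustFreeList(Store dbFree, Store dbData) {
        return dbFree.stamp == dbData.stamp;
    }

    public static void main(String[] args) {
        Store dbFree = new Store(), dbData = new Store();
        checkpoint(dbFree, dbData);
        System.out.println("in sync: " + canTrustFreeList(dbFree, dbData)); // true
        dbData.stamp = SEQ.incrementAndGet(); // simulate a crash between steps 1 and 2
        System.out.println("in sync: " + canTrustFreeList(dbFree, dbData)); // false
    }
}
```

The only window that still forces a full index scan is a crash between the two stamps, which the sketch's second case simulates.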





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Gary Tully (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659158#comment-16659158
 ] 

Gary Tully commented on AMQ-7080:
-

 

[~alanprot] peek at [^AMQ-7080-freeList-update.diff] 

that will sort the update of the freelist with tx allocations that don't write 
any application data. 






[jira] [Updated] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Gary Tully (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-7080:

Attachment: AMQ-7080-freeList-update.diff






[jira] [Commented] (ARTEMIS-2023) Support 1x prefixes for JMS destinations instantiated using session

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659132#comment-16659132
 ] 

ASF subversion and git services commented on ARTEMIS-2023:
--

Commit 292566e3906c510cfa055e5adaafec922af54ef7 in activemq-artemis's branch 
refs/heads/2.6.x from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=292566e ]

ARTEMIS-2139 Reverting ARTEMIS-2023 for older clients prefixes.

Revert "ARTEMIS-2023 Avoiding boolean on every message for 1x and tests"

I'm reverting this into 2.6.x, however keeping it for master.

This reverts commit 685211434a84a7a6ca2a65b8d0c187740743cb0b. (However it's 
keeping the test changes)


> Support 1x prefixes for JMS destinations instantiated using session
> ---
>
> Key: ARTEMIS-2023
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2023
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
> Fix For: 2.7.0, 2.6.3
>
>








[jira] [Commented] (ARTEMIS-2139) Message sent to JMSReplyTo from old client does not find correct bindings

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659131#comment-16659131
 ] 

ASF subversion and git services commented on ARTEMIS-2139:
--

Commit 292566e3906c510cfa055e5adaafec922af54ef7 in activemq-artemis's branch 
refs/heads/2.6.x from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=292566e ]

ARTEMIS-2139 Reverting ARTEMIS-2023 for older clients prefixes.

Revert "ARTEMIS-2023 Avoiding boolean on every message for 1x and tests"

I'm reverting this into 2.6.x, however keeping it for master.

This reverts commit 685211434a84a7a6ca2a65b8d0c187740743cb0b. (However it's 
keeping the test changes)


> Message sent to JMSReplyTo from old client does not find correct bindings
> -
>
> Key: ARTEMIS-2139
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2139
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
>Priority: Major
>
> The JMSReplyTo destination set by an older client contains an incorrect 
> address, which causes the reply message to miss the correct binding, so the 
> message is lost.





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Christopher L. Shannon (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659112#comment-16659112
 ] 

Christopher L. Shannon commented on AMQ-7080:
-

[~alanprot] - Ok, sounds good, I think we need to get 5.15.7 done this week so 
this can go into 5.15.8 or 5.16.0, etc






[jira] [Commented] (ARTEMIS-1996) MappedSequentialFileFactory may cause DirectByteBuffer memory leaks

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659093#comment-16659093
 ] 

ASF subversion and git services commented on ARTEMIS-1996:
--

Commit 2967df6a998b93cf471c39e25faa6d3a4c821ae0 in activemq-artemis's branch 
refs/heads/master from [~nigro@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=2967df6 ]

ARTEMIS-1996 MappedSequentialFileFactory may cause DirectByteBuffer off-heap 
memory leaks

Compaction is now reusing direct ByteBuffers on both
reading and writing with explicit and deterministic
release to avoid high peak of native memory utilisation
after compaction.
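The reuse-with-deterministic-release pattern the commit message describes can be sketched as follows. This is an illustrative sketch, not the actual Artemis code; the class and method names are hypothetical. The idea is simply to keep one direct buffer and grow it only when a larger request arrives, instead of allocating a fresh direct buffer per read/write.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: reuse a single direct ByteBuffer across compaction
// reads/writes instead of allocating a new one per call, bounding the peak
// native-memory footprint to the largest request seen so far.
final class ReusableDirectBuffer {
    private ByteBuffer buffer;

    ByteBuffer acquire(int requiredCapacity) {
        if (buffer == null || buffer.capacity() < requiredCapacity) {
            // Only reallocate when the cached buffer is too small.
            buffer = ByteBuffer.allocateDirect(requiredCapacity);
        }
        buffer.clear();                  // reset position/limit for reuse
        buffer.limit(requiredCapacity);  // expose only the requested window
        return buffer;
    }
}
```

A caller would `acquire` per operation and drop the holder object at a known point (end of compaction), making the release of the native memory deterministic rather than dependent on GC timing.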


> MappedSequentialFileFactory may cause DirectByteBuffer memory leaks
> ---
>
> Key: ARTEMIS-1996
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1996
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Reporter: tang pu
>Priority: Minor
>
> Because of some customization requirements, the readJournalFile method of 
> JournalImpl needs to be called multiple times. 
> During the stress test, it was found that roughly every 5 hours the broker 
> performed a Full GC.
> This is the information about the Full GC in the GC log.
> {color:#FF}2018-07-25T12:14:07.323+0800: 10089.523: [Full GC 
> (System.gc()) 6767M->253M(16G), 8.7138691 secs]{color}
> {color:#FF} [Eden: 632.0M(712.0M)->0.0B(816.0M) Survivors: 104.0M->0.0B 
> Heap: 6767.6M(16.0G)->253.9M(16.0G)], [Metaspace: 
> 36323K->35961K(1083392K)]{color}
> {color:#FF} [Times: user=2.56 sys=0.42, real=8.71 secs] {color}
> When the Full GC appears, the thread stack is as follows:
> {color:#FF}java.lang.System.gc(System.java:993){color}
> {color:#FF}java.nio.Bits.reserveMemory(Bits.java:666){color}
> {color:#FF}java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123){color}
> {color:#FF}java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311){color}
> {color:#FF}org.apache.activemq.artemis.core.io.mapped.MappedSequentialFileFactory.newBuffer(MappedSequentialFileFactory.java:109){color}
> {color:#FF}org.apache.activemq.artemis.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:463){color}
> By analyzing the stack, it appears that direct (off-heap) memory could not 
> be allocated, causing the JVM to call the System.gc() method.
> In the Broker, MappedSequentialFileFactory caches off-heap memory through 
> ThreadLocal. Once the thread is evicted by the CompactExecutor (keepAlive is 
> 60s) in the Journal, the off-heap memory is "leaked".
> {color:#FF}NIOSequentialFileFactory{color} also has the same problem
>  
>  
>  
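The leak mechanism described above can be sketched as follows. This is a hedged illustration, not the ActiveMQ code; the class name and buffer size are hypothetical. A direct buffer cached in a ThreadLocal stays pinned to its owning thread's ThreadLocal map, so when an executor evicts that worker thread the native memory is only reclaimed whenever GC eventually collects the dead thread.

```java
import java.nio.ByteBuffer;

// Illustrative sketch of the leak: each worker thread lazily caches a large
// direct buffer. When the pool evicts the thread (e.g. keepAlive expiry),
// the buffer's native memory stays allocated until GC clears the dead
// thread's ThreadLocal map -- effectively a temporary off-heap "leak".
class ThreadLocalBufferCache {
    static final ThreadLocal<ByteBuffer> CACHE =
        ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(1024 * 1024));

    static ByteBuffer buffer() {
        return CACHE.get(); // same buffer for the same thread
    }
}
```

With many short-lived worker threads this accumulates until `Bits.reserveMemory` fails to reserve direct memory and triggers `System.gc()`, matching the stack trace above.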





[jira] [Commented] (ARTEMIS-2141) Display "Filter" column in Consumers View of AdminUI

2018-10-22 Thread Sebastian (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659094#comment-16659094
 ] 

Sebastian commented on ARTEMIS-2141:


I created a PR https://github.com/apache/activemq-artemis/pull/2384

> Display "Filter" column in Consumers View of AdminUI
> 
>
> Key: ARTEMIS-2141
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2141
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Web Console
>Affects Versions: 2.6.3
>Reporter: Sebastian
>Priority: Trivial
>
> Currently the message filter of consumers is not displayed in the Consumers 
> View; however, this is valuable information while troubleshooting.





[jira] [Created] (ARTEMIS-2141) Display "Filter" column in Consumers View of AdminUI

2018-10-22 Thread Sebastian (JIRA)
Sebastian created ARTEMIS-2141:
--

 Summary: Display "Filter" column in Consumers View of AdminUI
 Key: ARTEMIS-2141
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2141
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Web Console
Affects Versions: 2.6.3
Reporter: Sebastian


Currently the message filter of consumers is not displayed in the Consumers 
View; however, this is valuable information while troubleshooting.





[jira] [Commented] (AMQ-796) Client may shutdown when failover connection is reconnecting. We need to maintain at least 1 non-daemon thread alive.

2018-10-22 Thread Scarciello (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659079#comment-16659079
 ] 

Scarciello commented on AMQ-796:


Just a small reminder that this issue is still present in 5.12.1.

Adding these lines in FailoverTransport.class fixes the issue, as [~kimmking] 
said before:
{code:java}
reconnectTaskFactory = new TaskRunnerFactory();
reconnectTaskFactory.setDaemon(false); // to set daemon=false by kimmking
reconnectTaskFactory.init();
{code}
According to this JIRA the issue has been fixed since 5.6.0, but many of us 
here keep reporting that it is not, because we still have to edit 
FailoverTransport.class ourselves. So is this the only workaround for now, or 
is it resolved in the latest version?
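The underlying JVM rule behind this bug can be demonstrated in isolation. This is an illustrative sketch (not ActiveMQ code): the JVM exits once only daemon threads remain, so a single-threaded client whose reconnect task runs on a daemon thread can be shut down mid-reconnect. Keeping the task thread non-daemon, as in the `setDaemon(false)` workaround above, keeps the process alive.

```java
// Minimal demonstration of daemon vs non-daemon threads: only live
// non-daemon threads prevent the JVM from exiting when main() returns.
public class DaemonThreadDemo {
    // Start a worker; the daemon flag must be set before start().
    static Thread startWorker(boolean daemon) {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100); // stand-in for reconnect work
            } catch (InterruptedException ignored) {
            }
        });
        t.setDaemon(daemon);
        t.start();
        return t;
    }
}
```

If `startWorker(true)` were the only remaining thread when main() returns, the JVM would exit before the sleep finishes; with `startWorker(false)` the JVM waits for it, which is exactly what the failover reconnect thread needs.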

 

> Client may shutdown when failover connection is reconnecting.  We need to 
> maintain at least 1 non-daemon thread alive.
> --
>
> Key: AMQ-796
> URL: https://issues.apache.org/jira/browse/AMQ-796
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 4.0, 5.3.0
>Reporter: Hiram Chirino
>Assignee: Rob Davies
>Priority: Major
> Fix For: 5.6.0, 4.0.3
>
> Attachments: AMQ-796.cmd, Sender.java, jstack_amq_5.6.0, 
> jstack_v5.8.0, threadDump.txt
>
>
> Dejan Reported on the User lists:
> Hi,
> after some experiments I found that this problem only exists if there are no
> other threads in the application. It seems like connection thread dies
> before it manages to reconnect. By starting another thread in the
> application, it succeeds to recover from master failure and reconnect to the
> slave broker. So I have a workaround for now, but it would be nice to make
> this work even for simple (single-threaded) clients.
> Regards,
> Dejan





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Alan Protasio (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659074#comment-16659074
 ] 

Alan Protasio commented on AMQ-7080:


{quote}This looks like a bug. On tx.commit any newly allocated page (ie: one 
moves off of the free list) should be written to reflect the state change. On 
shutdown there will be a flush to ensure those writes hit disk, such that any 
re-read of the index to calculate the free list will find those allocated.
{quote}
This is the current behaviour... Look at the existing test 
"testFreePageRecoveryUncleanShutdown"...

In this test we call tx.commit, but the PageFile "pf" still has 0 free pages... 
Only "pf2" (recovered after the unclean shutdown) has 10 free pages...

[~cshannon]

Well... I think this is still a good improvement... It will avoid most of the 
recoveries altogether... When you have a shared file system you can easily 
exhaust all your throughput (network or disk), and that hits your 
performance... Recovering the index file will exhaust it for the duration of 
the recovery.

This is not a complicated change... But it's certainly not a blocker for 5.15.7.

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about 
> the free pages in the index. In order to recover this information, ActiveMQ 
> reads the whole index during shutdown, searching for free pages, and then 
> saves the db.free file. This operation can take a long time, making failover 
> slower (during the shutdown, ActiveMQ still holds the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. 
> The broker will stay in this state unless the index is deleted (this means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS; so, if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> In order to prevent ActiveMQ from reading the whole index file to search for 
> free pages, we can keep track of them on every checkpoint. To do that, we 
> need to be sure that db.data and db.free are in sync. To achieve that, we 
> can have an attribute in db.free that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in the db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to have to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 gets occupied.
> T3 -> Crash
> In the current scenario, after PageFile#load, P1 will be free, and then the 
> replay will mark P1 as occupied or will occupy another page (now that the 
> recovery of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync and reflect 
> reality at T1 (the checkpoint); if they are in sync, we can trust db.free.
> This is a quick draft of what I'm suggesting... If you agree, I can create 
> the proper patch afterwards:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590
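The freePageUniqueId handshake described in the issue can be sketched as follows. This is a hypothetical illustration of the protocol, not ActiveMQ's actual API; the `FreeList` and `Metadata` types and field names are made up for the sketch.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the checkpoint handshake: db.free and db.data each carry the
// same id; after a crash, matching ids mean db.free can be trusted.
class CheckpointSketch {
    static class FreeList {           // stands in for db.free
        long uniqueId;
        Set<Long> freePages = new HashSet<>();
    }

    static class Metadata {           // stands in for db.data metadata
        long freePageUniqueId;
    }

    // Steps 1 and 2 from the issue: persist the free list with a fresh id,
    // then record that id in the index metadata.
    static void checkpoint(FreeList dbFree, Metadata dbData, long newId) {
        dbFree.uniqueId = newId;            // 1 - save db.free with the id
        dbData.freePageUniqueId = newId;    // 2 - save the id into db.data
    }

    // Recovery check after a crash: only a crash between steps 1 and 2
    // leaves the ids mismatched, forcing a full index scan.
    static boolean canTrustFreeList(FreeList dbFree, Metadata dbData) {
        return dbFree.uniqueId == dbData.freePageUniqueId;
    }
}
```

The stale-but-safe property from the timeline (T0–T3) follows directly: a matching id only guarantees the state as of the last checkpoint, and the journal replay after load re-applies any allocations that happened after it.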





[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659049#comment-16659049
 ] 

ASF subversion and git services commented on AMQ-7082:
--

Commit a9b81ace623d62e92b75ddc31c424c0ed4cae9d1 in activemq's branch 
refs/heads/activemq-5.15.x from gtully
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=a9b81ac ]

AMQ-7082 - ensure flush sees free list, use lazySet to avoid store barrier, 
thanks cshannon
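The lazySet idiom the commit message refers to can be sketched as follows. This is a hedged illustration, not the actual patch; the field name mirrors the discussion but the class is hypothetical. `AtomicBoolean.lazySet` performs an ordered (release) store without the full StoreLoad barrier a volatile write would impose, which is cheaper when readers do not need to observe the new value immediately.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of publishing a completion flag with lazySet: the store is ordered
// with respect to prior writes but skips the full fence of a volatile set().
class LazySetFlag {
    final AtomicBoolean needsFreePageRecovery = new AtomicBoolean(true);

    void markRecoveryComplete() {
        // Readers will eventually see false; no immediate visibility needed.
        needsFreePageRecovery.lazySet(false);
    }
}
```

This fits the free-list recovery case: the flush path only needs to see the flag flip eventually (it re-checks at a safe point), so the hot path avoids paying for a full memory barrier.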


> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> time-consuming, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but it is still not ideal because it holds onto the KahaDB lock, 
> and it can also stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary. If the 
> performance hit is significant, this may need to be optional.
> There will still be the need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This can be done at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and shutdown; with a bit of luck the recovery will complete 
> before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.





[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-22 Thread Gary Tully (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659033#comment-16659033
 ] 

Gary Tully commented on AMQ-7082:
-

[~cshannon] yep, good catch, thanks. It is the lazySet semantic I was after, 
but I can't avoid calling it!

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> time-consuming, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but it is still not ideal because it holds onto the KahaDB lock, 
> and it can also stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary. If the 
> performance hit is significant, this may need to be optional.
> There will still be the need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This can be done at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and shutdown; with a bit of luck the recovery will complete 
> before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.





[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-22 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16659028#comment-16659028
 ] 

ASF subversion and git services commented on AMQ-7082:
--

Commit fb1e642c15fa842e5e25b80d89293d0bb03225c0 in activemq's branch 
refs/heads/master from gtully
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=fb1e642 ]

AMQ-7082 - ensure flush sees free list, use lazySet to avoid store barrier, 
thanks cshannon


> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> time-consuming, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but it is still not ideal because it holds onto the KahaDB lock, 
> and it can also stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary. If the 
> performance hit is significant, this may need to be optional.
> There will still be the need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This can be done at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and shutdown; with a bit of luck the recovery will complete 
> before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.





[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-22 Thread Christopher L. Shannon (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658959#comment-16658959
 ] 

Christopher L. Shannon commented on AMQ-7082:
-

[~gtully] - I took a look at the patch and it looks OK to me, although I think 
the variable {{needsFreePageRecovery}} needs to be marked as volatile, because 
it's updated asynchronously in the new free page recovery thread that kicks off.
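The visibility concern raised here can be sketched as follows. This is an illustrative sketch, not the actual patch: a flag written by a background recovery thread and read by other broker threads must be `volatile` (or an atomic) so readers are guaranteed to observe the update.

```java
// Sketch of the cross-thread flag: the background free-page recovery thread
// clears it, and other threads poll it. Without volatile, the update might
// never become visible to the readers.
class RecoveryFlag {
    private volatile boolean needsFreePageRecovery = true;

    void runRecovery() throws InterruptedException {
        Thread recovery = new Thread(() -> {
            // ... walk the index and rebuild the free list ...
            needsFreePageRecovery = false; // volatile write: safely published
        });
        recovery.start();
        recovery.join(); // in the real broker, other work proceeds meanwhile
    }

    boolean pending() {
        return needsFreePageRecovery; // volatile read: sees the latest write
    }
}
```

`join` in the sketch makes the example deterministic; the point is the volatile write/read pair, which establishes the happens-before edge between the recovery thread and any reader.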

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> time-consuming, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but it is still not ideal because it holds onto the KahaDB lock, 
> and it can also stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary. If the 
> performance hit is significant, this may need to be optional.
> There will still be the need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This can be done at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and shutdown; with a bit of luck the recovery will complete 
> before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Christopher L. Shannon (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658848#comment-16658848
 ] 

Christopher L. Shannon commented on AMQ-7080:
-

Is this issue still necessary now that we have the async free page recovery? 
Just wondering if I should go ahead and start the process for 5.15.7

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about 
> the free pages in the index. In order to recover this information, ActiveMQ 
> reads the whole index during shutdown, searching for free pages, and then 
> saves the db.free file. This operation can take a long time, making failover 
> slower (during the shutdown, ActiveMQ still holds the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. 
> The broker will stay in this state unless the index is deleted (this means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS; so, if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> In order to prevent ActiveMQ from reading the whole index file to search for 
> free pages, we can keep track of them on every checkpoint. To do that, we 
> need to be sure that db.data and db.free are in sync. To achieve that, we 
> can have an attribute in db.free that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in the db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to have to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 gets occupied.
> T3 -> Crash
> In the current scenario, after PageFile#load, P1 will be free, and then the 
> replay will mark P1 as occupied or will occupy another page (now that the 
> recovery of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync and reflect 
> reality at T1 (the checkpoint); if they are in sync, we can trust db.free.
> This is a quick draft of what I'm suggesting... If you agree, I can create 
> the proper patch afterwards:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-22 Thread Gary Tully (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658770#comment-16658770
 ] 

Gary Tully commented on AMQ-7080:
-

{quote}When we call tx.allocate the pages are not included in the 
PageFile.freeList, but they are still marked as free on disk (page.getType() == 
Page.PAGE_FREE_TYPE). So, on a clean shutdown db.free is saved, and after the 
restart those pages are not in the freeList. In an unclean shutdown, the whole 
index is read and those pages are added to the freeList.
{quote}
 

This looks like a bug. On tx.commit any newly allocated page (ie: one moves off 
of the free list) should be written to reflect the state change. On shutdown 
there will be a flush to ensure those writes hit disk, such that any re-read of 
the index to calculate the free list will find those allocated.

 

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Assignee: Jean-Baptiste Onofré
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> In the event of an unclean shutdown, ActiveMQ loses the information about 
> the free pages in the index. In order to recover this information, ActiveMQ 
> reads the whole index during shutdown, searching for free pages, and then 
> saves the db.free file. This operation can take a long time, making failover 
> slower (during the shutdown, ActiveMQ still holds the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. 
> The broker will stay in this state unless the index is deleted (this means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS; so, if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> In order to prevent ActiveMQ from reading the whole index file to search for 
> free pages, we can keep track of them on every checkpoint. To do that, we 
> need to be sure that db.data and db.free are in sync. To achieve that, we 
> can have an attribute in db.free that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in the db.data (metadata)
> After a crash, we can check whether db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to have to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 gets occupied.
> T3 -> Crash
> In the current scenario, after PageFile#load, P1 will be free, and then the 
> replay will mark P1 as occupied or will occupy another page (now that the 
> recovery of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync and reflect 
> reality at T1 (the checkpoint); if they are in sync, we can trust db.free.
> This is a quick draft of what I'm suggesting... If you agree, I can create 
> the proper patch afterwards:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590





[jira] [Commented] (ARTEMIS-856) Support advanced destination options

2018-10-22 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658709#comment-16658709
 ] 

ASF GitHub Bot commented on ARTEMIS-856:


Github user michaelandrepearce commented on the issue:

https://github.com/apache/activemq-artemis/pull/2375
  
    @clebertsuconic it's just something we spotted missing when we did the 
work for JIRA 856, which is still not released; as it's related and still 
unreleased, I have used that JIRA to keep it all associated.


> Support advanced destination options
> 
>
> Key: ARTEMIS-856
> URL: https://issues.apache.org/jira/browse/ARTEMIS-856
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Matt Pavlovich
>Assignee: Michael Andre Pearce
>Priority: Major
> Fix For: 2.7.0
>
>
> Add support for enhancing destination consumer features (ActiveMQ 5.x parity):
> consumersBeforeDispatchStarts
>  timeBeforeDispatchStarts
> [http://activemq.apache.org/per-destination-policies.html]




