[jira] [Commented] (AMQ-8971) ActiveMQ OSGI feature, activemq-client, using JMS 2.0 bundle, which fails resolution, from 5.16.3 on

2022-07-25 Thread Jeff Genender (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17570975#comment-17570975
 ] 

Jeff Genender commented on AMQ-8971:


This looks good to me [~artnaseef] 

> ActiveMQ OSGI feature, activemq-client, using JMS 2.0 bundle, which fails 
> resolution, from 5.16.3 on
> 
>
> Key: AMQ-8971
> URL: https://issues.apache.org/jira/browse/AMQ-8971
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Arthur Naseef
>Assignee: Jean-Baptiste Onofré
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Building an ActiveMQ client application in OSGi, using AMQ versions 5.16.3 
> through 5.17.1.
> After building the application and loading it together with the ActiveMQ 
> feature named {{{}activemq-client{}}}, a resolution error occurs with the 
> following details:
>  
> {code:java}
> osgi.wiring.package; 
> filter:="(&(osgi.wiring.package=javax.jms)(version>=1.1.0)(!(version>=2.0.0)))"
>  {code}
> Tracking this down, the {{activemq-client}} feature definition contains the 
> following:
>  
>  
> {code:java}
>  <bundle dependency="true">mvn:org.apache.geronimo.specs/geronimo-jms_2.0_spec/1.0-alpha-2</bundle>
>  {code}
> Running the Karaf console command {{package:exports | grep javax.jms}} after 
> loading the activemq-client feature shows that this bundle ONLY exports the 
> 2.0.0 version:
>  
>  
> {code:java}
> javax.jms                                              │ 2.0.0       │ 73 │ 
> org.apache.geronimo.specs.geronimo-jms_2.0_spec {code}
>  
> The same feature in 5.16.2 contains the following definition:
>  
> {code:java}
>  <bundle dependency="true">mvn:org.apache.geronimo.specs/geronimo-jms_1.1_spec/1.1.1</bundle>
>  {code}
>  
> All of the ActiveMQ modules, except for activemq-karaf, use the 
> following dependency:
>  
> {code:xml}
>     <dependency>
>       <groupId>org.apache.geronimo.specs</groupId>
>       <artifactId>geronimo-jms_1.1_spec</artifactId>
>     </dependency>{code}
> 
> In summary: when an ActiveMQ client application is compiled against 
> activemq-client from versions 5.16.3 through 5.17.1, the application fails 
> to resolve in Karaf after loading the activemq-client feature.
> *STEPS TO REPRODUCE*
>  * feature:repo-add 
> mvn:org.apache.activemq/activemq-karaf/5.17.1/xml/features-core
>  * feature:install activemq-client
>  * bundle:install ...application-bundle...
> *EXPECTED RESULTS*
>  * Successful load of the application bundle built against version 5.17.1 of 
> ActiveMQ artifacts after loading the activemq-client feature
> *ROOT CAUSE*
>  * Replacing the JMS 1.1 Geronimo specification bundle with the 2.0 one in 
> the activemq-client feature causes this problem.  It also seems very odd, 
> since no other internals of ActiveMQ use the JMS 2.0 spec at all.
>  
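For reference, the 5.16.2-era definition suggests the shape of a fix: restore the JMS 1.1 spec bundle in the activemq-client feature. The surrounding feature markup below is assumed for illustration, not copied from the actual features.xml:

```xml
<feature name="activemq-client" version="5.17.x">
  <!-- Restore the JMS 1.1 spec bundle, as shipped in 5.16.2 -->
  <bundle dependency="true">mvn:org.apache.geronimo.specs/geronimo-jms_1.1_spec/1.1.1</bundle>
</feature>
```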



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (AMQ-7166) Upgrade MQTT Client

2019-03-11 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7166:
---
Affects Version/s: (was: 5.15.0)
   5.15.8

> Upgrade MQTT Client
> ---
>
> Key: AMQ-7166
> URL: https://issues.apache.org/jira/browse/AMQ-7166
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.15.8
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
>Priority: Major
> Fix For: 5.16.0
>
>
> Upgrade MQTT client library to newly released 1.15 to introduce improvements 
> in codec parsing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMQ-7166) Upgrade MQTT Client

2019-03-11 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7166:
---
Fix Version/s: 5.15.9

> Upgrade MQTT Client
> ---
>
> Key: AMQ-7166
> URL: https://issues.apache.org/jira/browse/AMQ-7166
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.15.8
>Reporter: Dejan Bosanac
>Assignee: Dejan Bosanac
>Priority: Major
> Fix For: 5.16.0, 5.15.9
>
>
> Upgrade MQTT client library to newly released 1.15 to introduce improvements 
> in codec parsing.





[jira] [Commented] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-16 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16744106#comment-16744106
 ] 

Jeff Genender commented on AMQ-7132:


Ahh... makes sense, since I am using an SSD and I believe Jamie was too.

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: output.tgz
>
>
> Hi.
> We noticed that ActiveMQ reads lots of pages in the index file when it is 
> starting up, in order to recover the destination statistics:
> [https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/KahaDBStore.java#L819]
> Today, to do that, ActiveMQ traverses the 
> storedDestination.locationIndex to get the messageCount and totalMessageSize 
> of each destination. For destinations with lots of messages this traversal 
> can take a while, making startup slow.
> In the case of a master-slave broker, this prevents the broker from failing 
> over quickly and does not meet what is stated at 
> [http://activemq.apache.org/shared-file-system-master-slave.html]:
> {quote}If you have a SAN or shared file system it can be used to provide 
> _high availability_ such that if a broker is killed, another broker can take 
> over immediately.
> {quote}
> One solution is to keep a summary of the destination statistics in 
> the index file; that way, we don't need to read the whole locationIndex at 
> startup.
> The proposed code change is backward compatible but needs a bump of the 
> KahaDB version. If this information is not in the index, the broker falls 
> back to the current implementation, which means that the first time people 
> upgrade to the new version it will still have to read the locationIndex, but 
> subsequent restarts will be fast.
> This change should have a negligible performance impact during normal 
> ActiveMQ operation, as it introduces only a few more bytes of data in the 
> index, and this information is written at checkpoints. The new information 
> also stays synchronized with the locationIndex, as they are updated in the 
> same transaction.





[jira] [Commented] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-15 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743476#comment-16743476
 ] 

Jeff Genender commented on AMQ-7132:


Same here

{code}
---
 T E S T S
---
Running org.apache.activemq.broker.RecoveryStatsBrokerTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.862 sec - in org.apache.activemq.broker.RecoveryStatsBrokerTest

Results :

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
{code}

Maybe we need to purge the .m2 repo on Jenkins?

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: output.tgz
>
>





[jira] [Commented] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-15 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743470#comment-16743470
 ] 

Jeff Genender commented on AMQ-7132:


Hi [~cshannon], it ran for me on macOS 10.14.2 and passed.

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: output.tgz
>
>





[jira] [Comment Edited] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-15 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16743470#comment-16743470
 ] 

Jeff Genender edited comment on AMQ-7132 at 1/15/19 11:36 PM:
--

Hi [~cshannon], it ran for me on macOS 10.14.2 and passed.  JDK 1.8.


was (Author: jgenender):
Hi [~cshannon], it ran for me on macOS 10.14.2 and passed.

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: output.tgz
>
>





[jira] [Commented] (AMQ-7128) Broker does not react on IOException: Stale file handle and does not fully shutdown

2019-01-14 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16742264#comment-16742264
 ] 

Jeff Genender commented on AMQ-7128:


Unfortunately, Java is at the mercy of the OS reporting that the lock is free.  
My advice is to look at your OS and disk to see if they have settings for 
reporting a freed lock.  There really is not much we can do here.  If you are 
using NFS (and I hope NFS 4), there are a lot of tuning capabilities to make 
sure the lock is freed, but that is beyond the scope of what we can help with.

> Broker does not react on IOException: Stale file handle and does not fully 
> shutdown
> ---
>
> Key: AMQ-7128
> URL: https://issues.apache.org/jira/browse/AMQ-7128
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.4
>Reporter: Martin Lichtin
>Priority: Major
> Attachments: activemq-stale-file-handle-andshutdown-issue.txt
>
>
> Seeing a situation during which the broker can no longer write to the KahaDB 
> filesystem (IOException: Stale file handle). This "seems" to initiate a 
> shutdown (as indicated by subsequent 'Async Writer Thread Shutdown' 
> exceptions), but nothing happens.
> Only after around 3 hours does something happen that triggers the actual 
> stopping of the broker service.
> However, the shutdown then never completes, as the IOExceptionHandler that 
> initiates the broker.stop() is called again and then throws a 
> 'SuppressReplyException', which KahaDB (wrongly?) interprets as yet another 
> issue, and itself fails to stop the persistence adapter.
> Attaching a log where all this can be observed.
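For context, the IOExceptionHandler involved here is pluggable in the broker configuration. A sketch of wiring a handler into activemq.xml follows; the attribute values are illustrative and should be checked against the broker XSD for your version:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
  <ioExceptionHandler>
    <!-- Stop the broker on persistence I/O errors instead of ignoring them -->
    <defaultIOExceptionHandler ignoreAllErrors="false"
                               stopStartConnectors="false" />
  </ioExceptionHandler>
</broker>
```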





[jira] [Updated] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-11 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7132:
---
Fix Version/s: (was: 5.15.9)

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Test
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
>





[jira] [Commented] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-11 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740759#comment-16740759
 ] 

Jeff Genender commented on AMQ-7132:


[~cshannon] Sounds good... feel free to do the merge.  Thanks!

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Test
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.16.0
>
>





[jira] [Comment Edited] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-11 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740723#comment-16740723
 ] 

Jeff Genender edited comment on AMQ-7132 at 1/11/19 7:47 PM:
-

[~cshannon] this has the KahaDB version change, and has 5.15.9 listed.  Do we 
want this in 5.15.9, or is 5.16.0 alone the right spot for it?  It is 
"backward" compatible to a degree, but it's enough of a change that it may 
cause discomfort.  Thoughts?  I can go ahead and commit this if all are happy.


was (Author: jgenender):
[~cshannon] this has the KahaDB version change, and has 5.15.9 listed.  Do we 
want this in 5.15.9, or is 5.16.0 alone the right spot for it?  It is 
"backward" compatible to a degree, but it's enough of a change that it may 
cause discomfort.  Thoughts?

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Test
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Priority: Major
> Fix For: 5.16.0, 5.15.9
>
>





[jira] [Commented] (AMQ-7132) ActiveMQ reads lots of index pages upon startup (after a graceful or ungraceful shutdown)

2019-01-11 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740723#comment-16740723
 ] 

Jeff Genender commented on AMQ-7132:


[~cshannon] this has the KahaDB version change, and has 5.15.9 listed.  Do we 
want this in 5.15.9, or is 5.16.0 alone the right spot for it?  It is 
"backward" compatible to a degree, but it's enough of a change that it may 
cause discomfort.  Thoughts?

> ActiveMQ reads lots of index pages upon startup (after a graceful or 
> ungraceful shutdown)
> -
>
> Key: AMQ-7132
> URL: https://issues.apache.org/jira/browse/AMQ-7132
> Project: ActiveMQ
>  Issue Type: Test
>  Components: KahaDB
>Affects Versions: 5.15.8
>Reporter: Alan Protasio
>Priority: Major
> Fix For: 5.16.0, 5.15.9
>
>
> Hi.
> We noticed that ActiveMQ reads lots of pages in the index file when is 
> starting up to recover the destinations statistics:
> [https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/KahaDBStore.java#L819]
> Nowadays, in order to do that, activemq traverse the 
> storedDestination.locationIndex to get the messageCount and totalMessageSize 
> of each destination. For destinations with lots of messages this process can 
> take a while making the startup process take long time.
> In a case of a master-slave broker, this prevent the broker to fast failover 
> and does not meet what is stated on 
> [http://activemq.apache.org/shared-file-system-master-slave.html.]
> {quote}If you have a SAN or shared file system it can be used to provide 
> _high availability_ such that if a broker is killed, another broker can take 
> over immediately. 
> {quote}
> One solution for this is keep track of the destination statistics summary in 
> the index file and doing so, we dont need to read all the locationIndex on 
> the start up.
> The code change proposed is backward compatible but need a bump on the kahadb 
> version. If this information is not in the index, the broker will fall back 
> to the current implementation, which means that the first time people upgrade 
> to the new version, it will still have to read the locationIndex, but 
> subsequent restarts will be fast.
> This change should have a negligible performance impact during normal 
> activemq operation, as this change introduce a few more bytes of data to the 
> index and this information will be on checkpoints. Also, this new information 
> is synchronized with the locationIndex as they are update at the same 
> transaction.





[jira] [Commented] (AMQ-7128) Broker does not react on IOException: Stale file handle and does not fully shutdown

2019-01-11 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740444#comment-16740444
 ] 

Jeff Genender commented on AMQ-7128:


Hi Martin, good to see you!  Yes, the shutdown issue could happen on a stale 
file lock.  I would look at your OS and logs for the OS, as this will likely be 
the culprit.  The stale lock is going to be caused by the OS (NFS/SMB/etc) and 
that may end up requiring some tuning and investigation to see why there is an 
issue.  I do not think this has to do with ActiveMQ.

> Broker does not react on IOException: Stale file handle and does not fully 
> shutdown
> ---
>
> Key: AMQ-7128
> URL: https://issues.apache.org/jira/browse/AMQ-7128
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.4
>Reporter: Martin Lichtin
>Priority: Major
> Attachments: activemq-stale-file-handle-andshutdown-issue.txt
>
>





[jira] [Assigned] (AMQ-7128) Broker does not react on IOException: Stale file handle and does not fully shutdown

2019-01-11 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reassigned AMQ-7128:
--

Assignee: (was: Jeff Genender)

> Broker does not react on IOException: Stale file handle and does not fully 
> shutdown
> ---
>
> Key: AMQ-7128
> URL: https://issues.apache.org/jira/browse/AMQ-7128
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.4
>Reporter: Martin Lichtin
>Priority: Major
> Attachments: activemq-stale-file-handle-andshutdown-issue.txt
>
>





[jira] [Assigned] (AMQ-7128) Broker does not react on IOException: Stale file handle and does not fully shutdown

2019-01-11 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reassigned AMQ-7128:
--

Assignee: Jeff Genender

> Broker does not react on IOException: Stale file handle and does not fully 
> shutdown
> ---
>
> Key: AMQ-7128
> URL: https://issues.apache.org/jira/browse/AMQ-7128
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.4
>Reporter: Martin Lichtin
>Assignee: Jeff Genender
>Priority: Major
> Attachments: activemq-stale-file-handle-andshutdown-issue.txt
>
>
> Seeing a situation during which the broker can no longer write to the KahaDB 
> filesystem (IOException: Stale file handle). This "seems" to initiate a 
> shutdown (as indicated by subsequent 'Async Writer Thread Shutdown' 
> exceptions), but nothing happens.
> Only after around 3 hours, something does happen that triggers the actual 
> stopping of the broker service.
> However, the shutdown then never completes: the IOExceptionHandler that 
> initiates broker.stop() is called again and throws a 
> 'SuppressReplyException', which KahaDB (wrongly?) interprets as yet another 
> issue, and it then fails to stop the persistence adapter.
> Attaching a log where all this can be observed.



--


[jira] [Commented] (AMQCPP-619) Support for SSL wilcard certificate

2018-12-07 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQCPP-619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16712999#comment-16712999
 ] 

Jeff Genender commented on AMQCPP-619:
--

[~tabish121], according to 
[https://www.openssl.org/policies/releasestrat.html], 0.9.8 is no longer 
supported; that release was issued on 12/23/2014.  It appears that 
[~jgoodyear] followed the activemq-cpp discussion pointed out by [~rkieley] 
in 
[http://activemq.2283324.n4.nabble.com/Discuss-ActiveMQ-CPP-Client-td4745898.html].
 It looks like you were asked to chime in on this on 12/5 since this is your 
space.  It would also appear that Jamie is happy to back that out, but he 
wants to make sure it goes in the right place.  Could you please opine in 
that thread so that he has some direction to do this right? That will help 
him avoid creating and reverting patches reactively, and make this a more 
proactive process.  Your input there would be greatly appreciated.  It's 
great that you have some help with people looking at this code now, which I 
am sure you welcome.  Please help enable him to help you out.

> Support for SSL wilcard certificate
> ---
>
> Key: AMQCPP-619
> URL: https://issues.apache.org/jira/browse/AMQCPP-619
> Project: ActiveMQ C++ Client
>  Issue Type: New Feature
>Affects Versions: 3.9.4
>Reporter: Francois Godin
>Assignee: Jamie goodyear
>Priority: Major
> Attachments: amqcpp-619.patch, sslCertificateWildcard.patch
>
>
> An SSL certificate can contain a wildcard in the hostname. For example, a 
> certificate for "*.proxy.app.com" should match the following addresses:
> * 1.proxy.app.com
> * 2.proxy.app.com
> Sadly, ActiveMQ-CPP simply compares the two values and thus does not accept 
> such certificates.
> The OpenSSL page https://wiki.openssl.org/index.php/Hostname_validation 
> describes some possible implementations.



--


[jira] [Resolved] (AMQ-7091) O(n) Memory consumption when broker has inactive durable subscribes causing OOM

2018-11-12 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7091.

   Resolution: Fixed
Fix Version/s: 5.15.8
   5.16.0

> O(n) Memory consumption when broker has inactive durable subscribes causing 
> OOM
> ---
>
> Key: AMQ-7091
> URL: https://issues.apache.org/jira/browse/AMQ-7091
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Alan Protasio
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
> Attachments: After.png, Before.png, 
> InactiveDurableSubscriberTest.java, memoryAllocation.jpg
>
>
> Hi :D
> One of our brokers was bouncing indefinitely due to OOM even though the load 
> (TPS) was pretty low.
> Looking at the memory dump, I could see that almost 90% of the memory was 
> being used by the 
> [messageReferences|https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/MessageDatabase.java#L2368]
>  TreeMap, which tracks which messages have already been acked by all 
> subscribers so that they can be deleted.
> This is a problem: if the broker has an inactive durable subscriber, the 
> memory footprint grows proportionally (O(n)) with the number of messages 
> sent to the topic in question, causing the broker to die from OOM sooner or 
> later (and the high memory footprint persists even after a restart).
> Attached (memoryAllocation.jpg) is a screenshot showing my broker using 90% 
> of its memory to keep track of those messages, making it barely usable.
> Looking at the code, I changed messageReferences to use a BTreeIndex:
> - final TreeMap messageReferences = new TreeMap<>();
> + BTreeIndex messageReferences;
> With this change, the memory allocation of the broker stabilized and the 
> broker no longer ran OOM.
> Attached is the code I used to reproduce this scenario, along with the 
> memory utilization (heap and GC graphs) before and after the change.
> Before the change the broker died in 5 minutes and I could send 48. After 
> the change the broker was still pretty healthy after 5 minutes and I could 
> send 2265000 messages to the topic (almost 5x more, due to high GC pauses).
>  
> All tests are passing: mvn clean install -Dactivemq.tests=all
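The growth pattern described above can be sketched as follows. This is a hedged illustration of the data-structure behavior, not ActiveMQ's actual code; the class and method names are invented for the example:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeMap;

// Illustration of the reported growth pattern (not the actual MessageDatabase
// code): an in-heap TreeMap keyed by message sequence id gains one entry per
// stored message, and an entry is only removed once every durable subscriber
// has acked. An inactive subscriber therefore pins every message: O(n) heap.
public class AckTrackingSketch {
    // hypothetical stand-in for KahaDB's messageReferences map
    public static final TreeMap<Long, Set<String>> messageReferences = new TreeMap<>();

    public static void onMessageStored(long sequenceId, Set<String> pendingSubscribers) {
        // one heap entry per stored message until all subscribers ack
        messageReferences.put(sequenceId, new HashSet<>(pendingSubscribers));
    }

    public static void onAck(long sequenceId, String subscriber) {
        Set<String> pending = messageReferences.get(sequenceId);
        if (pending != null) {
            pending.remove(subscriber);
            if (pending.isEmpty()) {
                messageReferences.remove(sequenceId); // all acked: safe to drop
            }
        }
    }

    public static void main(String[] args) {
        Set<String> subs = Set.of("activeSub", "inactiveSub");
        for (long id = 0; id < 100_000; id++) {
            onMessageStored(id, subs);
            onAck(id, "activeSub"); // inactiveSub never acks
        }
        // every message is still referenced on the heap
        System.out.println(messageReferences.size()); // 100000
    }
}
```

Moving this map to a disk-backed BTreeIndex, as the reporter proposes, bounds the heap cost at the price of index I/O.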



--


[jira] [Commented] (AMQ-7091) O(n) Memory consumption when broker has inactive durable subscribes causing OOM

2018-11-12 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16683962#comment-16683962
 ] 

Jeff Genender commented on AMQ-7091:


[~gtully], what if the change were a configuration property? That is, you 
could keep the cache, or set the property to do what is being proposed here. 
That way, those strapped for memory could simply disable the cache.

> O(n) Memory consumption when broker has inactive durable subscribes causing 
> OOM
> ---
>
> Key: AMQ-7091
> URL: https://issues.apache.org/jira/browse/AMQ-7091
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Alan Protasio
>Priority: Major
> Attachments: After.png, Before.png, 
> InactiveDurableSubscriberTest.java, memoryAllocation.jpg
>
>
> Hi :D
> One of our brokers was bouncing indefinitely due to OOM even though the load 
> (TPS) was pretty low.
> Looking at the memory dump, I could see that almost 90% of the memory was 
> being used by the 
> [messageReferences|https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/MessageDatabase.java#L2368]
>  TreeMap, which tracks which messages have already been acked by all 
> subscribers so that they can be deleted.
> This is a problem: if the broker has an inactive durable subscriber, the 
> memory footprint grows proportionally (O(n)) with the number of messages 
> sent to the topic in question, causing the broker to die from OOM sooner or 
> later (and the high memory footprint persists even after a restart).
> Attached (memoryAllocation.jpg) is a screenshot showing my broker using 90% 
> of its memory to keep track of those messages, making it barely usable.
> Looking at the code, I changed messageReferences to use a BTreeIndex:
> - final TreeMap messageReferences = new TreeMap<>();
> + BTreeIndex messageReferences;
> With this change, the memory allocation of the broker stabilized and the 
> broker no longer ran OOM.
> Attached is the code I used to reproduce this scenario, along with the 
> memory utilization (heap and GC graphs) before and after the change.
> Before the change the broker died in 5 minutes and I could send 48. After 
> the change the broker was still pretty healthy after 5 minutes and I could 
> send 2265000 messages to the topic (almost 5x more, due to high GC pauses).
>  
> All tests are passing: mvn clean install -Dactivemq.tests=all



--


[jira] [Commented] (AMQ-7091) O(n) Memory consumption when broker has inactive durable subscribes causing OOM

2018-11-08 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16679987#comment-16679987
 ] 

Jeff Genender commented on AMQ-7091:


This looks good to me... the additional writes are gone.  [~gtully], since 
you are the resident expert in this area, please let us know what you think.

> O(n) Memory consumption when broker has inactive durable subscribes causing 
> OOM
> ---
>
> Key: AMQ-7091
> URL: https://issues.apache.org/jira/browse/AMQ-7091
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Alan Protasio
>Priority: Major
> Attachments: After.png, Before.png, 
> InactiveDurableSubscriberTest.java, memoryAllocation.jpg
>
>
> Hi :D
> One of our brokers was bouncing indefinitely due to OOM even though the load 
> (TPS) was pretty low.
> Looking at the memory dump, I could see that almost 90% of the memory was 
> being used by the 
> [messageReferences|https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/MessageDatabase.java#L2368]
>  TreeMap, which tracks which messages have already been acked by all 
> subscribers so that they can be deleted.
> This is a problem: if the broker has an inactive durable subscriber, the 
> memory footprint grows proportionally (O(n)) with the number of messages 
> sent to the topic in question, causing the broker to die from OOM sooner or 
> later (and the high memory footprint persists even after a restart).
> Attached (memoryAllocation.jpg) is a screenshot showing my broker using 90% 
> of its memory to keep track of those messages, making it barely usable.
> Looking at the code, I changed messageReferences to use a BTreeIndex:
> - final TreeMap messageReferences = new TreeMap<>();
> + BTreeIndex messageReferences;
> With this change, the memory allocation of the broker stabilized and the 
> broker no longer ran OOM.
> Attached is the code I used to reproduce this scenario, along with the 
> memory utilization (heap and GC graphs) before and after the change.
> Before the change the broker died in 5 minutes and I could send 48. After 
> the change the broker was still pretty healthy after 5 minutes and I could 
> send 2265000 messages to the topic (almost 5x more, due to high GC pauses).
>  
> All tests are passing: mvn clean install -Dactivemq.tests=all



--


[jira] [Assigned] (AMQ-7091) O(n) Memory consumption when broker has inactive durable subscribes causing OOM

2018-11-08 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reassigned AMQ-7091:
--

Assignee: Jeff Genender

> O(n) Memory consumption when broker has inactive durable subscribes causing 
> OOM
> ---
>
> Key: AMQ-7091
> URL: https://issues.apache.org/jira/browse/AMQ-7091
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Alan Protasio
>Assignee: Jeff Genender
>Priority: Major
> Attachments: After.png, Before.png, 
> InactiveDurableSubscriberTest.java, memoryAllocation.jpg
>
>
> Hi :D
> One of our brokers was bouncing indefinitely due to OOM even though the load 
> (TPS) was pretty low.
> Looking at the memory dump, I could see that almost 90% of the memory was 
> being used by the 
> [messageReferences|https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/MessageDatabase.java#L2368]
>  TreeMap, which tracks which messages have already been acked by all 
> subscribers so that they can be deleted.
> This is a problem: if the broker has an inactive durable subscriber, the 
> memory footprint grows proportionally (O(n)) with the number of messages 
> sent to the topic in question, causing the broker to die from OOM sooner or 
> later (and the high memory footprint persists even after a restart).
> Attached (memoryAllocation.jpg) is a screenshot showing my broker using 90% 
> of its memory to keep track of those messages, making it barely usable.
> Looking at the code, I changed messageReferences to use a BTreeIndex:
> - final TreeMap messageReferences = new TreeMap<>();
> + BTreeIndex messageReferences;
> With this change, the memory allocation of the broker stabilized and the 
> broker no longer ran OOM.
> Attached is the code I used to reproduce this scenario, along with the 
> memory utilization (heap and GC graphs) before and after the change.
> Before the change the broker died in 5 minutes and I could send 48. After 
> the change the broker was still pretty healthy after 5 minutes and I could 
> send 2265000 messages to the topic (almost 5x more, due to high GC pauses).
>  
> All tests are passing: mvn clean install -Dactivemq.tests=all



--


[jira] [Assigned] (AMQ-7091) O(n) Memory consumption when broker has inactive durable subscribes causing OOM

2018-11-08 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reassigned AMQ-7091:
--

Assignee: (was: Jeff Genender)

> O(n) Memory consumption when broker has inactive durable subscribes causing 
> OOM
> ---
>
> Key: AMQ-7091
> URL: https://issues.apache.org/jira/browse/AMQ-7091
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Alan Protasio
>Priority: Major
> Attachments: After.png, Before.png, 
> InactiveDurableSubscriberTest.java, memoryAllocation.jpg
>
>
> Hi :D
> One of our brokers was bouncing indefinitely due to OOM even though the load 
> (TPS) was pretty low.
> Looking at the memory dump, I could see that almost 90% of the memory was 
> being used by the 
> [messageReferences|https://github.com/apache/activemq/blob/master/activemq-kahadb-store/src/main/java/org/apache/activemq/store/kahadb/MessageDatabase.java#L2368]
>  TreeMap, which tracks which messages have already been acked by all 
> subscribers so that they can be deleted.
> This is a problem: if the broker has an inactive durable subscriber, the 
> memory footprint grows proportionally (O(n)) with the number of messages 
> sent to the topic in question, causing the broker to die from OOM sooner or 
> later (and the high memory footprint persists even after a restart).
> Attached (memoryAllocation.jpg) is a screenshot showing my broker using 90% 
> of its memory to keep track of those messages, making it barely usable.
> Looking at the code, I changed messageReferences to use a BTreeIndex:
> - final TreeMap messageReferences = new TreeMap<>();
> + BTreeIndex messageReferences;
> With this change, the memory allocation of the broker stabilized and the 
> broker no longer ran OOM.
> Attached is the code I used to reproduce this scenario, along with the 
> memory utilization (heap and GC graphs) before and after the change.
> Before the change the broker died in 5 minutes and I could send 48. After 
> the change the broker was still pretty healthy after 5 minutes and I could 
> send 2265000 messages to the topic (almost 5x more, due to high GC pauses).
>  
> All tests are passing: mvn clean install -Dactivemq.tests=all



--


[jira] [Commented] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678785#comment-16678785
 ] 

Jeff Genender commented on AMQ-7093:


[~cshannon] and [~gtully], feel free to resolve this if it encompasses all of 
AMQ-7082 for the fixes.

> KahaDB index, recover free pages in parallel with start (Continued)
> ---
>
> Key: AMQ-7093
> URL: https://issues.apache.org/jira/browse/AMQ-7093
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.7
>Reporter: Jeff Genender
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-7082 added a concurrent thread to handle free page recovery and was 
> included in 5.15.7. Some additional follow-on commits were not part of that 
> release and introduced some potential bugs. This issue was created to track 
> those additional commits.



--


[jira] [Commented] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678923#comment-16678923
 ] 

Jeff Genender commented on AMQ-7093:


[~cshannon] and [~gtully], based on the impact of 5.15.8 and potential for 
corruption, I think it may be a good idea to get 5.15.8 out fairly quickly with 
this in it.  Do you guys agree?

> KahaDB index, recover free pages in parallel with start (Continued)
> ---
>
> Key: AMQ-7093
> URL: https://issues.apache.org/jira/browse/AMQ-7093
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.7
>Reporter: Jeff Genender
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-7082 added a concurrent thread to handle free page recovery and was 
> included in 5.15.7. Some additional follow-on commits were not part of that 
> release and introduced some potential bugs. This issue was created to track 
> those additional commits.



--


[jira] [Updated] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7093:
---
Component/s: KahaDB

> KahaDB index, recover free pages in parallel with start (Continued)
> ---
>
> Key: AMQ-7093
> URL: https://issues.apache.org/jira/browse/AMQ-7093
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.7
>Reporter: Jeff Genender
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-7082 added a concurrent thread to handle free page recovery and was 
> included in 5.15.7. Some additional follow-on commits were not part of that 
> release and introduced some potential bugs. This issue was created to track 
> those additional commits.



--


[jira] [Commented] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678783#comment-16678783
 ] 

Jeff Genender commented on AMQ-7093:


This includes the following commits for 5.15.7:

ca6293b55 AMQ-7082 We should make sure that pages managed during recovery are 
not recovered in error; variation of patch from Alan Protasio; closes #317
505200b92 AMQ-7082 - fix compilation after merge
45d7676bd AMQ-7082 - Make sure that the recovery will only mark pages as free 
if they were created in a previous execution

For 5.16.0:

85859fd8d AMQ-7082 We should make sure that pages managed during recovery are 
not recovered in error; variation of patch from Alan Protasio; closes #317
81062fde8 Merge branch 'AMQ-7082'
0d3433891 AMQ-7082 - Make sure that the recovery will only mark pages as free 
if they were created in a previous execution

> KahaDB index, recover free pages in parallel with start (Continued)
> ---
>
> Key: AMQ-7093
> URL: https://issues.apache.org/jira/browse/AMQ-7093
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.7
>Reporter: Jeff Genender
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-7082 added a concurrent thread to handle free page recovery and was 
> included in 5.15.7. Some additional follow-on commits were not part of that 
> release and introduced some potential bugs. This issue was created to track 
> those additional commits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7093:
---
Fix Version/s: 5.16.0

> KahaDB index, recover free pages in parallel with start (Continued)
> ---
>
> Key: AMQ-7093
> URL: https://issues.apache.org/jira/browse/AMQ-7093
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.15.7
>Reporter: Jeff Genender
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-7082 added a concurrent thread to handle free page recovery and was 
> included in 5.15.7. Some additional follow-on commits were not part of that 
> release and introduced some potential bugs. This issue was created to track 
> those additional commits.



--


[jira] [Closed] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-11-07 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender closed AMQ-7082.
--
Resolution: Fixed

Reopened and closed - continued with AMQ-7093

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> lengthy, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but still not ideal because it holds onto the KahaDB lock, and 
> it can stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary; if the 
> perf hit is significant, this may need to be optional.
> There will still be a need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This we can do at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and it will help shutdown; with a bit of luck the recovery 
> will complete before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.
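The scheme in the last two paragraphs — run with an empty free list, recover in parallel, merge at a safe point — can be sketched roughly as below. This is an assumption-laden illustration, not the kahadb implementation; every name here is invented for the example:

```java
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: begin with an empty free list so the broker serves requests
// immediately, recover the real free list on a background thread, and
// merge the two at a safe point.
public class FreeListRecoverySketch {
    private final SortedSet<Long> freePages = new TreeSet<>();
    private long nextNewPage; // file grows here while recovery is running

    public synchronized long allocatePage() {
        if (!freePages.isEmpty()) {
            long page = freePages.first();
            freePages.remove(page);
            return page;
        }
        return nextNewPage++; // concede some growth instead of waiting
    }

    public synchronized void merge(SortedSet<Long> recovered) {
        // safe point: fold the recovered free list into the live one
        freePages.addAll(recovered);
    }

    // stand-in for the expensive full index walk
    public SortedSet<Long> recoverFreeListFromIndex() {
        SortedSet<Long> recovered = new TreeSet<>();
        for (long page = 10; page < 20; page++) {
            recovered.add(page);
        }
        return recovered;
    }

    // returns the page allocated while recovery was still in flight
    public long startupWithParallelRecovery() throws Exception {
        nextNewPage = 20; // the index file already spans pages 0..19
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<SortedSet<Long>> walk = this::recoverFreeListFromIndex;
        Future<SortedSet<Long>> recovery = pool.submit(walk);
        long grown = allocatePage(); // allocations proceed during recovery
        merge(recovery.get());       // later allocations reuse freed pages
        pool.shutdown();
        return grown;
    }
}
```

The trade-off matches the description: some file growth is conceded while the walk runs, in exchange for not blocking startup on the index traversal.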



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (AMQ-7093) KahaDB index, recover free pages in parallel with start (Continued)

2018-11-07 Thread Jeff Genender (JIRA)
Jeff Genender created AMQ-7093:
--

 Summary: KahaDB index, recover free pages in parallel with start 
(Continued)
 Key: AMQ-7093
 URL: https://issues.apache.org/jira/browse/AMQ-7093
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.15.7
Reporter: Jeff Genender
 Fix For: 5.15.8


AMQ-7082 added a concurrent thread to handle free page recovery and was 
included in 5.15.7. Some additional follow-on commits were not part of that 
release and introduced some potential bugs. This issue was created to track 
those additional commits.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-11-07 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678730#comment-16678730
 ] 

Jeff Genender commented on AMQ-7082:


[~jeromeinsf] - You have a valid point.  This Jira should have been left at 
5.15.7 since it was resolved there, and a new Jira should have been opened to 
track the additional changes submitted for 5.15.8. [~gtully], do you think we 
should set this back to the 5.15.7 fix and track the 5.15.8 changes in a new 
Jira, to make this clearer?

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.8
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> lengthy, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but still not ideal because it holds onto the KahaDB lock, and 
> it can stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary; if the 
> perf hit is significant, this may need to be optional.
> There will still be a need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This we can do at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and it will help shutdown; with a bit of luck the recovery 
> will complete before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-7086) KahaDB - don't perform expensive gc run on shutdown

2018-10-25 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16663888#comment-16663888
 ] 

Jeff Genender commented on AMQ-7086:


[~gtully], my $0.02 is that making it configurable is the way to go.

> KahaDB - don't perform expensive gc run on shutdown
> ---
>
> Key: AMQ-7086
> URL: https://issues.apache.org/jira/browse/AMQ-7086
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0
>
>
> When looking at the speed of broker.stop with KahaDB and the scheduler 
> store, there is a full gc run, which can be expensive as the whole index 
> needs to be traversed.
> Fast stop/restart is important for fast failover. Leaving gc for runtime, 
> where it affects latency in the normal way rather than availability, is 
> better.
>  
> I am wondering if there is a use case for gc only at shutdown when the 
> cleanupInterval <= 0, indicating that there was no gc at runtime. The 
> alternative is adding another boolean to the config, or adding that back in 
> if the need arises.
> I am leaning towards just removing the gc call during shutdown.
>  
> Note: matching the indexCacheSize to the index file size, trading off 
> memory, does help to speed up the index (read) traversal.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-21 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7082.

Resolution: Fixed

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> lengthy, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but still not ideal because it holds onto the KahaDB lock, and 
> it can stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary; if the 
> perf hit is significant, this may need to be optional.
> There will still be a need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while 
> we recover the free list in parallel, then merge the two at a safe point. 
> This we can do at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and it will help shutdown; with a bit of luck the recovery 
> will complete before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-7082) KahaDB index, recover free pages in parallel with start

2018-10-21 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16658306#comment-16658306
 ] 

Jeff Genender commented on AMQ-7082:


[~gtully], I love the merge... this makes sense and is a quick combination.  
Since the flush() happens in checkpointUpdate, that makes this thread safe.  
Nice thinking on the merge; that was a nice outside-the-box solution.

This definitely needs to go into 5.15.7.  I went ahead and did that, as it 
closes the loop on any delays from freeList scanning.

 

> KahaDB index, recover free pages in parallel with start
> ---
>
> Key: AMQ-7082
> URL: https://issues.apache.org/jira/browse/AMQ-7082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.15.0
>Reporter: Gary Tully
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.7
>
>
> AMQ-6590 fixes free page loss through recovery. The recovery process can be 
> lengthy, which prevents fast failover. Doing recovery on shutdown is 
> preferable, but it is still not ideal because it holds onto the KahaDB lock. 
> It also can stall shutdown unexpectedly.
> AMQ-7080 is going to tackle checkpointing the free list. This should help 
> avoid the need for recovery, but recovery may still be necessary. If the 
> performance hit is significant, this may need to be optional.
> There will still be the need to walk the index to find the free list.
> It is possible to run with no free list and grow, and we can do that while we 
> recover the free list in parallel, then merge the two at a safe point. This 
> we can do at startup.
> In cases where the disk is the bottleneck this won't help much, but it will 
> help failover and it will help shutdown; with a bit of luck the recovery will 
> complete before we stop.
>  
> Initially I thought this would be too complex, but if we concede some growth 
> while we recover, i.e. start with an empty free list, it should be 
> straightforward to merge with a recovered one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-19 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16656744#comment-16656744
 ] 

Jeff Genender commented on AMQ-7080:


[~gtully] Yes, it is fundamental, so that definitely makes sense.  Can you 
please give your thoughts on my comment above about ACTIVEMQ_KILL_MAXSECONDS?  
Thanks.

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Priority: Major
>
> In the event of an unclean shutdown, ActiveMQ loses the information about the 
> free pages in the index. In order to recover this information, ActiveMQ reads 
> the whole index during shutdown searching for free pages and then saves the 
> db.free file. This operation can take a long time, making failover slower 
> (during the shutdown, ActiveMQ still holds the lock).
> From http://activemq.apache.org/shared-file-system-master-slave.html
> {quote}"If you have a SAN or shared file system it can be used to provide 
> high availability such that if a broker is killed, another broker can take 
> over immediately."
> {quote}
> It is important to note that if the shutdown takes more than 
> ACTIVEMQ_KILL_MAXSECONDS seconds, any following shutdown will be unclean. The 
> broker will stay in this state unless the index is deleted (this state means 
> that every failover will take more than ACTIVEMQ_KILL_MAXSECONDS, so if you 
> increase this time to 5 minutes, your failover can take more than 5 minutes).
>  
> In order to prevent ActiveMQ from reading the whole index file to search for 
> free pages, we can keep track of them on every checkpoint. To do that we need 
> to be sure that db.data and db.free are in sync. To achieve that we can have 
> an attribute in the db.free page that is referenced by db.data.
> So during the checkpoint we have:
> 1 - Save db.free and give it a freePageUniqueId
> 2 - Save this freePageUniqueId in db.data (metadata)
> After a crash, we can see if db.data has the same freePageUniqueId as 
> db.free. If so, we can safely use the free page information contained in 
> db.free.
> Now, the only way to need to read the whole index file again is if the crash 
> happens between steps 1 and 2 (which is very unlikely).
> The drawback of this implementation is that we will have to save db.free 
> during the checkpoint, which can increase the checkpoint time.
> It is also important to note that we CAN (and should) have stale data in 
> db.free, as it references stale db.data:
> Imagine the timeline:
> T0 -> P1, P2 and P3 are free.
> T1 -> Checkpoint
> T2 -> P1 gets occupied.
> T3 -> Crash
> In this scenario, after PageFile#load, P1 will be free and then the replay 
> will mark P1 as occupied or will occupy another page (now that the recovery 
> of free pages is done on shutdown).
> This change only makes sure that db.data and db.free are in sync, reflecting 
> reality at T1 (checkpoint). If they are in sync, we can trust db.free.
> This is a really quick draft of what I'm suggesting... If you agree, I can 
> create a proper patch after:
> [https://github.com/alanprot/activemq/commit/18036ef7214ef0eaa25c8650f40644dd8b4632a5]
>  
> This is related to https://issues.apache.org/jira/browse/AMQ-6590
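The two-step checkpoint handshake described above could be sketched as follows. The map-backed "files" and field names are illustrative stand-ins, not KahaDB's actual on-disk format:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;

// Sketch of the checkpoint handshake: db.free is written first with a unique
// id, then the same id is recorded in db.data's metadata. On restart the ids
// must match before db.free can be trusted; otherwise a full index scan is
// still required.
public class FreeListCheckpoint {
    // Stand-ins for the two files.
    static Map<String, Object> dbFree = new HashMap<>();
    static Map<String, Object> dbData = new HashMap<>();

    static void checkpoint(Set<Integer> freePages) {
        long freePageUniqueId = System.currentTimeMillis();
        // Step 1: save db.free together with its id.
        dbFree.put("id", freePageUniqueId);
        dbFree.put("pages", new HashSet<>(freePages));
        // Step 2: record the same id in db.data (metadata).
        dbData.put("freePageUniqueId", freePageUniqueId);
    }

    @SuppressWarnings("unchecked")
    static Optional<Set<Integer>> recover() {
        // Trust db.free only if both ids match (no crash between steps 1 and 2).
        if (Objects.equals(dbFree.get("id"), dbData.get("freePageUniqueId"))) {
            return Optional.of((Set<Integer>) dbFree.get("pages"));
        }
        return Optional.empty();
    }
}
```

A crash between steps 1 and 2 leaves the two ids out of sync, so `recover()` returns empty and the broker falls back to the full index scan, which is exactly the fallback the description allows for.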





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-18 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655983#comment-16655983
 ] 

Jeff Genender commented on AMQ-7080:


[~gtully] I don't think there is a risk from the partial write, because the 
metadata is written after the free list in his patch.  So assuming a partial 
write occurred to the free list, the metadata write wouldn't hold the same 
fingerprint, and a full index read would be required anyway.  Correct?

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Priority: Major
>





[jira] [Commented] (AMQ-7080) Keep track of free pages - Update db.free file during checkpoints

2018-10-18 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655811#comment-16655811
 ] 

Jeff Genender commented on AMQ-7080:


This looks good to me. My one comment is that the fingerprint shouldn't be 
random, due to potential clashes.  I would use the time in millis or a 
time-based UUID; anything that has an excellent chance of being unique.  

[~cshannon], you made some good comments about ACTIVEMQ_KILL_MAXSECONDS, which 
can be an issue.  Luckily that setting is an easily changeable parameter.  
However, what are your and [~gtully]'s thoughts on removing that from 
'activemq stop', letting it stop normally, and perhaps creating a new invoker, 
call it 'activemq force-stop', which could use the ACTIVEMQ_KILL_MAXSECONDS 
parameter?  It seems to me that it's been a long while since I have actually 
seen ActiveMQ "hang" on its own; slow shutdowns have been the consequence of 
it doing its thing.  Any thoughts/opinions on this? I would be happy to do it 
if you find value in this.

> Keep track of free pages - Update db.free file during checkpoints
> -
>
> Key: AMQ-7080
> URL: https://issues.apache.org/jira/browse/AMQ-7080
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.15.6
>Reporter: Alan Protasio
>Priority: Major
>





[jira] [Commented] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-18 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655414#comment-16655414
 ] 

Jeff Genender commented on AMQ-6590:


Also, add a unit test.  The unit test can show a normal checkpoint, and you can 
also exercise the failure path by force-calling 
brokerService.getPersistenceAdapter().checkpoint() and then changing the 
db.data or db.free fingerprint with some file I/O using the File object, then 
restarting and testing that it did a full recovery.  Should be straightforward. 
Open a new Jira on this :)

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>
> I have discovered an issue with the KahaDB index recovery after an unclean 
> shutdown (OOM error, kill -9, etc.) that leads to excessive disk space usage. 
> Normally on clean shutdown the index stores the known set of free pages to 
> db.free and reads that in on startup to know which pages can be re-used.  On 
> an unclean shutdown this is not written to disk, so on startup the index is 
> supposed to scan the page file to figure out all of the free pages.
> Unfortunately it turns out that this scan of the page file is done before 
> the total page count value has been set, so when the iterator is created it 
> always thinks there are 0 pages to scan.
> The end result is that every time an unclean shutdown occurs, all known free 
> pages are lost and no longer tracked.  This of course means new free pages 
> have to be allocated and all of the existing space is now lost, which will 
> lead to excessive index file growth over time.
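A minimal illustration of the ordering bug described above, with illustrative names rather than the actual PageFile code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the bug: if the free-page scan runs before the total page count
// is known, the iterator sees zero pages and the free list stays empty.
public class FreePageScan {
    static List<Integer> scanFreePages(boolean[] pageUsed, int knownPageCount) {
        List<Integer> free = new ArrayList<>();
        // Iterates only up to knownPageCount: with 0, nothing is ever found.
        for (int page = 0; page < knownPageCount; page++) {
            if (!pageUsed[page]) {
                free.add(page);
            }
        }
        return free;
    }
}
```

Calling the scan before the count is set returns an empty list regardless of how many pages are actually free; setting the count first recovers them, which is the fix the issue describes.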





[jira] [Commented] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-18 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16655236#comment-16655236
 ] 

Jeff Genender commented on AMQ-6590:


One more small comment: I would change from using Random as the fingerprint and 
use a timestamp or something instead.  Random could potentially produce a 
collision, which would cause corruption.  Perhaps use a time function or a 
time-based UUID as your fingerprint.  Since this will not be doing 2 
checkpoints in under a millisecond, I think that would be the safest.
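The time-based fingerprint suggested here could look something like this sketch; it also stays unique even if two checkpoints somehow land in the same millisecond:

```java
// Sketch of a clock-derived checkpoint fingerprint: unlike Random, values
// are strictly increasing, so two checkpoints can never collide.
public class CheckpointFingerprint {
    private static long last = 0L;

    // Returns the current time in millis, bumped by one if the clock has not
    // advanced since the previous call.
    static synchronized long next() {
        long now = System.currentTimeMillis();
        last = Math.max(now, last + 1);
        return last;
    }
}
```

A time-based UUID (version 1) would serve the same purpose with a larger identifier; the millisecond counter above is just the smallest sketch of the idea.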

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Commented] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-17 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16654678#comment-16654678
 ] 

Jeff Genender commented on AMQ-6590:


Hi Alan... that's awesome!  That would totally work in cutting down the 
potential for recovery. May I suggest that you open a new Jira to track this, 
since this one is resolved.  Right now parts of this are in multiple releases, 
so its own Jira may help ensure we know what release it will be part of.  Your 
code looks pretty clear and I think it deserves its own JIRA.  Please 
reference this one in the new JIRA.

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Commented] (AMQ-7067) KahaDB Recovery can experience a dangling transaction when prepare and commit occur on different data files.

2018-10-17 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653927#comment-16653927
 ] 

Jeff Genender commented on AMQ-7067:


Done.  Thanks.

> KahaDB Recovery can experience a dangling transaction when prepare and commit 
> occur on different data files.
> 
>
> Key: AMQ-7067
> URL: https://issues.apache.org/jira/browse/AMQ-7067
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB, XA
>Affects Versions: 5.15.6
>Reporter: Jamie goodyear
>Assignee: Gary Tully
>Priority: Critical
> Fix For: 5.16.0, 5.15.7
>
> Attachments: amq7067test.patch
>
>
> KahaDB recovery can experience a dangling transaction when the prepare and 
> commit occur in different data files.
> Scenario:
> An XA transaction is started, and a message is prepared and sent into the 
> broker.
> We then send into the broker enough messages to fill the page file (100 
> messages with 512 * 1024 characters in the message payload). This forces a 
> new data file to be created.
> Commit the XA transaction. The commit will land on the new data file.
> Restart the broker.
> Upon restart a KahaDB recovery is executed.
> The prepare in data file 1 is not matched to the commit in data file 2, so it 
> will appear in the recovered message state.
> Looking deeper into this scenario, it appears that the commit message is 
> GC'd, hence the prepare & commit cannot be matched.
> The MessageDatabase only checks the following for GC:
> {code:java}
> // Don't GC files referenced by in-progress tx
> if (inProgressTxRange[0] != null) {
>     for (int pendingTx = inProgressTxRange[0].getDataFileId();
>          pendingTx <= inProgressTxRange[1].getDataFileId(); pendingTx++) {
>         gcCandidateSet.remove(pendingTx);
>     }
> }
> {code}
> We need to become aware of where the prepares & commits occur in data files 
> with respect to GCing files.





[jira] [Commented] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-17 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653831#comment-16653831
 ] 

Jeff Genender commented on AMQ-6590:


[~cshannon], I think the criticality of this is restart of a failed broker.  It 
could cause slowness in failover that delays startup of the broker.  In a clean 
shutdown, yes, the delay moved to the back, but it's a known delay when you are 
shutting it down in a clean fashion; thus you can plan for it.  This makes 
sense, as during a failover you likely need availability immediately, since 
that generally is not planned.

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Comment Edited] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-17 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653629#comment-16653629
 ] 

Jeff Genender edited comment on AMQ-6590 at 10/17/18 2:25 PM:
--

Nice [~gtully].  I pushed this to 5.15.x (5.15.7) as this was supposed to be in 
the 5.15 line as well.  Probably should have been a new improvement JIRA.  
Updated the versions above as well.  Thank you!


was (Author: jgenender):
Nice Gary Tully.  I pushed this to 5.15.x (5.15.7) as this was supposed to be 
in the 5.15 line as well.  Probably should have been a new improvement JIRA.  
Updated the versions above as well.  Thank you!

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Updated] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-17 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-6590:
---
Fix Version/s: 5.15.7
   5.16.0

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Commented] (AMQ-6590) KahaDB index loses track of free pages on unclean shutdown

2018-10-17 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16653629#comment-16653629
 ] 

Jeff Genender commented on AMQ-6590:


Nice Gary Tully.  I pushed this to 5.15.x (5.15.7) as this was supposed to be 
in the 5.15 line as well.  Probably should have been a new improvement JIRA.  
Updated the versions above as well.  Thank you!

> KahaDB index loses track of free pages on unclean shutdown
> --
>
> Key: AMQ-6590
> URL: https://issues.apache.org/jira/browse/AMQ-6590
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Major
> Fix For: 5.15.0, 5.14.4, 5.16.0, 5.15.7
>
>





[jira] [Resolved] (AMQ-7023) Add OWASP Dependency Check to build (all open source projects everywhere)

2018-07-28 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7023.

   Resolution: Fixed
Fix Version/s: 5.15.5
   5.16.0

> Add OWASP Dependency Check to build (all open source projects everywhere)
> -
>
> Key: AMQ-7023
> URL: https://issues.apache.org/jira/browse/AMQ-7023
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 5.15.3, 5.15.4
> Environment: All development, build, test, environments.
>Reporter: Albert Baker
>Priority: Major
>  Labels: build, easyfix, security
> Fix For: 5.16.0, 5.15.5
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Please add the OWASP Dependency Check to the ActiveMQ build (pom.xml).   OWASP 
> DC makes an outbound REST call to the MITRE Common Vulnerabilities & Exposures 
> (CVE) database to perform a lookup for each dependent .jar.
> OWASP Dependency Check: 
> [https://www.owasp.org/index.php/OWASP_Dependency_Check] has plug-ins for most 
> Java build/make types (ant, maven, gradle).   Add the appropriate command to 
> the nightly build to generate a report of all known vulnerabilities in 
> any/all third-party libraries/dependencies that get pulled in. 
> Generating this report nightly/weekly will help inform the project's 
> development team if any dependent libraries have a reported known 
> vulnerability.  Project teams that keep up with removing vulnerabilities on a 
> weekly basis will help protect businesses that rely on these open source 
> components.





[jira] [Commented] (AMQ-6988) ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has three high severity CVEs against it.Discovered by adding OWASP Dependency check into ActiveMQ pom.xml and run

2018-07-28 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560813#comment-16560813
 ] 

Jeff Genender commented on AMQ-6988:


Creating a Jira is a great start! :)

> ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has three high 
> severity CVEs against it.Discovered by adding OWASP Dependency check into 
> ActiveMQ pom.xml and running the OWASP report
> ---
>
> Key: AMQ-6988
> URL: https://issues.apache.org/jira/browse/AMQ-6988
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: webconsole
>Affects Versions: 5.15.4
> Environment: Environment: Customer environment is a mix of Linux and 
> Windows, Gig-LAN.  Will not accept the risk of having even one high severity 
> CVE in their environment.
>Reporter: Albert Baker
>Priority: Blocker
>
> ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has three high 
> severity CVEs against it.
> Discovered by adding OWASP Dependency check into ActiveMQ pom.xml and running 
> the OWASP report
> CVE-2015-5183 Severity:High  CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features The Hawtio console in A-MQ does not set 
> HTTPOnly or Secure attributes on cookies.
> CVE-2015-5184 Severity:High   CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features The Hawtio console in A-MQ allows remote 
> attackers to obtain sensitive information and perform other unspecified 
> impact.
> CVE-2016-3088 Severity:High   CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-20 Improper Input Validation
> The Fileserver web application in Apache ActiveMQ 5.x before 5.14.0 allows 
> remote attackers to upload and execute arbitrary files via an HTTP PUT 
> followed by an HTTP MOVE request.
> CONFIRM - 
> http://activemq.apache.org/security-advisories.data/CVE-2016-3088-announcement.txt
> EXPLOIT-DB - 42283
> MISC - http://www.zerodayinitiative.com/advisories/ZDI-16-356
> MISC - http://www.zerodayinitiative.com/advisories/ZDI-16-357
> REDHAT - RHSA-2016:2036





[jira] [Commented] (AMQ-6988) ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has three high severity CVEs against it.Discovered by adding OWASP Dependency check into ActiveMQ pom.xml and run

2018-07-28 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16560763#comment-16560763
 ] 

Jeff Genender commented on AMQ-6988:


Hi Albert,

May I suggest that you create a Jira for the OWASP plugin and provide a patch? 
 It seems that you have tried this, and that contribution would be great.  Based 
on CLS' response, I would think that we could get it into the project rather 
quickly.  In the time it took you to write up your response, it likely could 
have been included. :)
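For reference, wiring OWASP Dependency-Check into a Maven build is a small pom.xml addition along these lines; the version shown is illustrative, not a recommendation for a specific release:

```xml
<!-- Illustrative: runs an OWASP Dependency-Check scan during the build.
     Pin whatever plugin version is current; 3.3.0 is only an example. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>3.3.0</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn verify` produces the same kind of CVE report quoted in this issue.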

> ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has three high 
> severity CVEs against it.Discovered by adding OWASP Dependency check into 
> ActiveMQ pom.xml and running the OWASP report
> ---
>
> Key: AMQ-6988
> URL: https://issues.apache.org/jira/browse/AMQ-6988
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: webconsole
>Affects Versions: 5.15.4
> Environment: Environment: Customer environment is a mix of Linux and 
> Windows, Gig-LAN.  Will not accept the risk of having even one high severity 
> CVE in thier environment.
>Reporter: Albert Baker
>Priority: Blocker
>
> ActiveMQ 5.15.4 contains activemq-protobuf-1.1.jar which has two high 
> severity CVEs against it.
> Discovered by adding OWASP Dependency check into ActiveMQ pom.xml and running 
> the OWASP report
> CVE-2015-5183 Severity:High  CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features The Hawtio console in A-MQ does not set 
> HTTPOnly or Secure attributes on cookies.
> CVE-2015-5184 Severity:High   CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features The Hawtio console in A-MQ allows remote 
> attackers to obtain sensitive information and perform other unspecified 
> impact.
> CVE-2016-3088 Severity:High   CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-20 Improper Input Validation
> The Fileserver web application in Apache ActiveMQ 5.x before 5.14.0 allows 
> remote attackers to upload and execute arbitrary files via an HTTP PUT 
> followed by an HTTP MOVE request.
> CONFIRM - 
> http://activemq.apache.org/security-advisories.data/CVE-2016-3088-announcement.txt
> EXPLOIT-DB - 42283
> MISC - http://www.zerodayinitiative.com/advisories/ZDI-16-356
> MISC - http://www.zerodayinitiative.com/advisories/ZDI-16-357
> REDHAT - RHSA-2016:2036





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559864#comment-16559864
 ] 

Jeff Genender commented on AMQ-7015:


Thanks CSL.  I hope Gary can chime in above about the FILE parameter, which I 
think would really help appease his concerns, and this could be a cool 
enhancement for everyone.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559833#comment-16559833
 ] 

Jeff Genender commented on AMQ-7015:


My apologies CLS, I read what Gary wrote wrong. I have changed it.

Yes... that is community support.  They are users.  Call it what you want.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Comment Edited] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559808#comment-16559808
 ] 

Jeff Genender edited comment on AMQ-7015 at 7/27/18 2:23 PM:
-

I'm sorry Gary, that is not a technical reason, it's a procedural reason.  This 
does not affect the running broker, and there are dozens of settings in ActiveMQ 
that change the way the broker works.  This does not alter the handling of XA 
transactions unless it is set, and it alters the setting just like the mbean 
does.  So I disagree.

The sad part is you are not offering other possibilities to have all of this.  
You are digging in your heels.  There is now community support for this option:

[http://activemq.2283324.n4.nabble.com/Discuss-AMQ7015-tp4741551.html]

 


was (Author: jgenender):
I'm sorry Gary, that is not a technical reason, its a procedural reason.  This 
does not affect the running broker and there are dozens of settings in ActiveMQ 
that changes the way the broker works.  This does not alter the handling of XA 
transactions unless it is set, and it alters the setting just like the mbean 
does.  So I disagree.

The sad part is you are not offering other possibilities to have all of this.  
You are digging in your heals.  There is now community support for this option:

http://activemq.2283324.n4.nabble.com/Discuss-AMQ7015-tp4741551.html

Unfortunately, unless you have a technical reason, your -1 doesn't stick and it 
goes in.  If you want to bring this to the PMC, we can discuss this there.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559827#comment-16559827
 ] 

Jeff Genender commented on AMQ-7015:


Let's do this... how about we add a FILE option to the attribute that looks for 
a file that can contain "COMMIT" or "ROLLBACK" in it.  If the file exists, the 
broker does what it needs and then deletes the file, so it is one time.

Doing this allows you to have a permanent setting, a one-time setting, or the 
default of off.

Gary, will that work?
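As a rough sketch of the one-time semantics proposed here (names are hypothetical, not the actual patch): read a marker file containing COMMIT or ROLLBACK, return the outcome to apply to recovered prepared XA transactions, and delete the file so the action only fires on one restart.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration of the FILE option discussed above: a marker
// file holds "COMMIT" or "ROLLBACK"; it is consumed (deleted) after one read
// so the purge happens exactly once.  Not the actual AMQ-7015 patch.
public class XaRecoveryMarker {

    public enum Outcome { NONE, COMMIT, ROLLBACK }

    // Reads and consumes the marker file; returns NONE if absent or unrecognized.
    public static Outcome consume(Path marker) throws IOException {
        if (!Files.exists(marker)) {
            return Outcome.NONE;
        }
        String content = new String(Files.readAllBytes(marker)).trim();
        Files.delete(marker); // one-time: the next restart sees no marker
        if ("COMMIT".equalsIgnoreCase(content)) {
            return Outcome.COMMIT;
        }
        if ("ROLLBACK".equalsIgnoreCase(content)) {
            return Outcome.ROLLBACK;
        }
        return Outcome.NONE;
    }

    public static void main(String[] args) throws IOException {
        Path marker = Files.createTempFile("xa-recovery", ".marker");
        Files.write(marker, "ROLLBACK".getBytes());
        Outcome first = consume(marker);  // consumes and deletes the file
        Outcome second = consume(marker); // file is gone, so nothing happens
        System.out.println(first + " " + second);
    }
}
```

This gives the three behaviors described: permanent (re-create the file each time), one-time (drop the file once), or off (no file present).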

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559808#comment-16559808
 ] 

Jeff Genender commented on AMQ-7015:


I'm sorry Gary, that is not a technical reason, it's a procedural reason.  This 
does not affect the running broker, and there are dozens of settings in ActiveMQ 
that change the way the broker works.  This does not alter the handling of XA 
transactions unless it is set, and it alters the setting just like the mbean 
does.  So I disagree.

The sad part is you are not offering other possibilities to have all of this.  
You are digging in your heels.  There is now community support for this option:

http://activemq.2283324.n4.nabble.com/Discuss-AMQ7015-tp4741551.html

Unfortunately, unless you have a technical reason, your -1 doesn't stick and it 
goes in.  If you want to bring this to the PMC, we can discuss this there.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-27 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559734#comment-16559734
 ] 

Jeff Genender commented on AMQ-7015:


The mbean is fine for small numbers of records.  If you have a large XA batch, 
then the mbean can actually cause an OOM due to the volume.  So I disagree.  We 
need a way to configure this inside the broker as part of recovery.  I do not 
see this as any different from the 
ignoreMissingJournalfiles/checkForCorruptJournalFiles parameters.

Perhaps the user wants the setting permanent simply due to large batch sizes.  
This would certainly be an enhancement for that.  With the MBean *and* this 
parameter, you have the best of doing it all ways.

Your objection appears to be non-technical in nature.  Can you please provide a 
technical reason to not do this? Unless you have one, I do not see a reason to 
hold this back.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-26 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558411#comment-16558411
 ] 

Jeff Genender commented on AMQ-7015:


The new code at commit 28819aea4aa981d33c710d9d5e26f3cb6e03c1de calls commit 
and rollback with the store command.  Is that disabled?  From the code paths, it 
looks like it does a full commit or rollback and isn't disabled.  I think what I 
am failing to understand is how that does not act nearly identically to the 
MBean?

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Comment Edited] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-26 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558378#comment-16558378
 ] 

Jeff Genender edited comment on AMQ-7015 at 7/26/18 2:50 PM:
-

Hi Gary,

 

I would love to have a longer conversation so that we could discuss this more 
in-depth.  I am a bit confused.  The setting now acts nearly identically to the 
MBean, but works en masse.  It allows you to select commit/rollback/nothing 
(default).  The unit tests appear to confirm that it works, whereby the commit 
pushes the messages through and the rollback removes them.  It seems to do the 
same as the MBean.  Could you help explain why this new change does not work?

 

The new code does it all within KahaDB, along with processing the XA (or not).  
Thanks!


was (Author: jgenender):
Hi Gary,

 

I would love to have longer conversation to that we could discuss this more 
in-depth.  I am a bit confused.  The setting now acts nearly identical to the 
MBean, but works en-masse.  It allows you to select comit/rollback/ or nothing 
(default).  The unit tests appear to confirm that it works whereby the commit 
adds the messages through, the rollback removes them.  It seems to so the same 
as the MBean.  Could you help explain why this new change does not work?

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Comment Edited] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-26 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558378#comment-16558378
 ] 

Jeff Genender edited comment on AMQ-7015 at 7/26/18 2:46 PM:
-

Hi Gary,

 

I would love to have a longer conversation so that we could discuss this more 
in-depth.  I am a bit confused.  The setting now acts nearly identically to the 
MBean, but works en masse.  It allows you to select commit/rollback/nothing 
(default).  The unit tests appear to confirm that it works, whereby the commit 
pushes the messages through and the rollback removes them.  It seems to do the 
same as the MBean.  Could you help explain why this new change does not work?


was (Author: jgenender):
Hi Gary,

 

I would love to have longer conversation to that we could discuss this more 
in-depth.  I am a bit confused.  The setting now acts nearly identical to the 
MBean, but works en-masse.  It allows you to select comit/rollback/ nor nothing 
(default).  The unit tests appear to confirm that it works whereby the commit 
adds the messages through, the rollback removes them.  It seems to so the same 
as the MBean.  Could you help explain why this new change does not work?

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-26 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558378#comment-16558378
 ] 

Jeff Genender commented on AMQ-7015:


Hi Gary,

 

I would love to have a longer conversation so that we could discuss this more 
in-depth.  I am a bit confused.  The setting now acts nearly identically to the 
MBean, but works en masse.  It allows you to select commit/rollback/nothing 
(default).  The unit tests appear to confirm that it works, whereby the commit 
pushes the messages through and the rollback removes them.  It seems to do the 
same as the MBean.  Could you help explain why this new change does not work?

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-25 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556145#comment-16556145
 ] 

Jeff Genender commented on AMQ-7015:


Hi Gary,

Your idea makes a lot of sense.  I made the change to 
{{purgeRecoveredXATransactionStrategy}}, which may be never/commit/rollback.  
Thanks for your input and feedback!
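For illustration, the new attribute would presumably be set on the KahaDB persistence adapter in activemq.xml. The attribute name and its never/commit/rollback values come from this thread; its exact placement on the kahaDB element is an assumption:

```xml
<!-- Hypothetical placement: attribute name/values are from this thread,
     the position on the kahaDB element is an assumption. -->
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          purgeRecoveredXATransactionStrategy="rollback"/>
</persistenceAdapter>
```

With `never` (the default) recovery behaves as before; `commit` or `rollback` resolves all recovered prepared XA transactions en masse at startup.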

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-25 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16555882#comment-16555882
 ] 

Jeff Genender commented on AMQ-7015:


Hi Gary,

The patch is a stopgap way to purge all prepared transactions that are stuck in 
recovery.  There is an mbean that allows a full commit/rollback, which is 
awesome.  But for large numbers of XA transactions stuck in prepare, the mbean 
can be unwieldy, and users need a quick way to clean them out; otherwise a slow 
recovery is kicked off on every restart.  This is a switch that allows the XA 
messages to be purged from recovery in one shot.  It's a switch, and it's off by 
default.  It's a convenience setting.

Let us know your thoughts on that.

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Assigned] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-25 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reassigned AMQ-7015:
--

Assignee: Jeff Genender

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Assignee: Jeff Genender
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When i restart my broker, i don't have a recovery attempted again.
> Performing a remove operation for each message can be very time consuming, so 
> i'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forth coming patch with unit test.





[jira] [Commented] (AMQ-6995) ActiveMQ 5.15.4 activemq-ra-5.15.4.jar which has two high severity CVEs against it.

2018-07-23 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16553236#comment-16553236
 ] 

Jeff Genender commented on AMQ-6995:


Hawt.io is not in ActiveMQ 5.15.4.  What is the CVE for activemq-ra-5.15.4.jar?

> ActiveMQ 5.15.4 activemq-ra-5.15.4.jar which has two high severity CVEs 
> against it.
> ---
>
> Key: AMQ-6995
> URL: https://issues.apache.org/jira/browse/AMQ-6995
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: webconsole
>Affects Versions: 5.15.4
> Environment: Environment: Customer environment is a mix of Linux and 
> Windows, Gig-LAN (Medical & Finacial services).  Will not accept the risk of 
> having even one high severity CVE in thier environment. The cost of 
> (SOX/HIPPA) insurence is too high to allow even one CVE with newly deployed 
> systems.
>Reporter: Albert Baker
>Priority: Blocker
>
> CVE-2015-5183   Severity:High  CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features
> The Hawtio console in A-MQ does not set HTTPOnly or Secure attributes on 
> cookies.
> CONFIRM - https://bugzilla.redhat.com/show_bug.cgi?id=1249182
> Vulnerable Software & Versions:
> cpe:/a:apache:activemq:-
> CVE-2015-5184 Severity:High CVSS Score: 7.5 (AV:N/AC:L/Au:N/C:P/I:P/A:P)
> CWE: CWE-254 Security Features
> The Hawtio console in A-MQ allows remote attackers to obtain sensitive 
> information and perform other unspecified impact.
> CONFIRM - https://bugzilla.redhat.c





[jira] [Resolved] (AMQ-6996) ActiveMQ 5.15.4 xercesImpl-2.11.0.jar which has one high severity CVE against it.

2018-07-23 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-6996.

   Resolution: Fixed
Fix Version/s: 5.15.5
   5.16.0

Thanks for the patch Jamie!

> ActiveMQ 5.15.4 xercesImpl-2.11.0.jar which has one high severity CVE against 
> it.
> -
>
> Key: AMQ-6996
> URL: https://issues.apache.org/jira/browse/AMQ-6996
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, webconsole
>Affects Versions: 5.15.4
> Environment: Customer environment is a mix of Linux and 
> Windows, Gig-LAN (Medical & Financial services).  Will not accept the risk of 
> having even one high severity CVE in their environment. The cost of 
> (SOX/HIPAA) insurance is too high to allow even one CVE with newly deployed 
> systems.
>Reporter: Albert Baker
>Priority: Blocker
> Fix For: 5.16.0, 5.15.5
>
>
> ActiveMQ 5.15.4 xercesImpl-2.11.0.jar which has one high severity CVE against 
> it.
> Discovered by adding OWASP Dependency check into ActiveMQ pom.xml and running 
> the OWASP report.
> CVE-2012-0881 Severity:High  CVSS Score: 7.8 (AV:N/AC:L/Au:N/C:N/I:N/A:C)
> CWE: CWE-399 Resource Management Errors
> Apache Xerces2 Java allows remote attackers to cause a denial of service (CPU 
> consumption) via a crafted message to an XML service, which triggers hash 
> table collisions.
> CONFIRM - https://bugzilla.redhat.com/show_bug.cgi?id=787104
> MLIST - [oss-security] 20140708 Summer bug cleaning - some Hash DoS stuff
> Vulnerable Software & Versions:
> cpe:/a:apache:xerces2_java:2.11.0 and all previous versions





[jira] [Resolved] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-19 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7015.

   Resolution: Fixed
Fix Version/s: 5.15.5
   5.16.0

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Priority: Minor
> Fix For: 5.16.0, 5.15.5
>
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When I restart my broker, I don't have a recovery attempted again.
> Performing a remove operation for each message can be very time-consuming, so 
> I'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forthcoming patch with unit test.
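The proposed behavior can be sketched in a few lines. This is a toy model only, not the actual patch: the flag name `purgePreparedXa` and the journal-entry strings are illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RecoverySketch {
    // Toy model of the proposal: when the (hypothetical) purgePreparedXa flag
    // is set, prepared-XA entries found during journal replay are dropped in
    // one pass, so the broker does not re-discover and re-discard them on
    // every subsequent restart.
    static List<String> recover(List<String> journal, boolean purgePreparedXa) {
        List<String> replayed = new ArrayList<>();
        for (String entry : journal) {
            if (purgePreparedXa && entry.startsWith("XA_PREPARE")) {
                continue; // removed once, instead of per-message JMX clears
            }
            replayed.add(entry);
        }
        return replayed;
    }

    public static void main(String[] args) {
        List<String> journal = Arrays.asList("MSG a", "XA_PREPARE tx1", "MSG b");
        System.out.println(recover(journal, true)); // [MSG a, MSG b]
    }
}
```

Making the purge opt-in matches the discussion: default behavior stays untouched, and only operators who enable the flag skip the per-entry cleanup.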





[jira] [Commented] (AMQ-7015) Startup performance improvement when log contains prepared transactions.

2018-07-19 Thread Jeff Genender (JIRA)


[ 
https://issues.apache.org/jira/browse/AMQ-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16549639#comment-16549639
 ] 

Jeff Genender commented on AMQ-7015:


Great patch Heath.  Thanks for your contribution!

> Startup performance improvement when log contains prepared transactions.
> 
>
> Key: AMQ-7015
> URL: https://issues.apache.org/jira/browse/AMQ-7015
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.15.4
>Reporter: Heath Kesler
>Priority: Minor
>
> I have a KahaDB that's performing a recovery on each startup.
> Digging deeper into it I've found that the issue is that the db.log contains 
> prepared transactions.
> The MessageDatabase will discard those entries in memory, however it does not 
> remove transaction info from those messages (I believe that's by design). So 
> on each restart, the broker will find those entries and again discard them in 
> memory.
> If I access the broker via JMX, I can go to the prepared XAs and execute a 
> clear on them one by one.
> When I restart my broker, I don't have a recovery attempted again.
> Performing a remove operation for each message can be very time-consuming, so 
> I'd like to introduce an optional parameter to allow all prepared XAs to be 
> removed on recovery.
> Please see my forthcoming patch with unit test.





[jira] [Resolved] (AMQ-7013) XATransactionID hash function may generate duplicates.

2018-07-18 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7013.

   Resolution: Fixed
Fix Version/s: 5.15.5
   5.16.0

Great patch, thanks Jamie!

> XATransactionID hash function may generate duplicates.
> --
>
> Key: AMQ-7013
> URL: https://issues.apache.org/jira/browse/AMQ-7013
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Jamie goodyear
>Priority: Major
> Fix For: 5.16.0, 5.15.5
>
>
> XATransactionID hash function may generate duplicates.
> Scenario:
> XID formatId, GlobalTransaction, and BranchQualifier values are identical for 
> many entries. We need to use a better hash function to avoid populating a map 
> with many entries in the same bucket (results in bucket having O(n) 
> performance on recovery).
> Example using existing Hash Function:
> 2018-07-18 06:13:29,866 | INFO  | Recovering from the journal @1:28 | 
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2018-07-18 06:23:04,070 | INFO  | @2:484592, 10 entries recovered .. | 
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2018-07-18 06:23:04,099 | INFO  | Recovery replayed 100453 operations from 
> the journal in 574.233 seconds. | 
> org.apache.activemq.store.kahadb.MessageDatabase | main

> Using JenkinsHash:
> 2018-07-18 10:58:43,713 | INFO  | Recovering from the journal @1:28 | 
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2018-07-18 10:58:51,302 | INFO  | @2:484592, 10 entries recovered .. | 
> org.apache.activemq.store.kahadb.MessageDatabase | main
> 2018-07-18 10:58:51,329 | INFO  | Recovery replayed 100453 operations from 
> the journal in 7.618 seconds. | 
> org.apache.activemq.store.kahadb.MessageDatabase | main
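The bucket-collision problem described above comes from a hash that does not mix all three XID fields. A minimal self-contained illustration (the two hash functions here are simplified stand-ins, not the broker's actual code or the JenkinsHash adopted by the patch):

```java
import java.util.Arrays;

public class XidHashDemo {
    // A weak hash that ignores part of the XID: XIDs differing only in the
    // branch qualifier all land in one bucket, so map lookups on recovery
    // degrade to O(n) scans of that bucket.
    static int weakHash(int formatId, byte[] globalTxId, byte[] branch) {
        return formatId + Arrays.hashCode(globalTxId);
    }

    // Mixing every field (formatId, global transaction id, branch qualifier)
    // spreads otherwise-similar XIDs across buckets. This is the general idea
    // behind swapping in a stronger function; the patch's actual function
    // (a Jenkins-style hash) mixes bits more thoroughly than this sketch.
    static int mixedHash(int formatId, byte[] globalTxId, byte[] branch) {
        int h = formatId;
        h = 31 * h + Arrays.hashCode(globalTxId);
        h = 31 * h + Arrays.hashCode(branch);
        return h;
    }

    public static void main(String[] args) {
        byte[] g = {1, 2, 3};
        // Same formatId and global tx id, different branch qualifiers:
        System.out.println(weakHash(7, g, new byte[]{1}) == weakHash(7, g, new byte[]{2}));   // true: collision
        System.out.println(mixedHash(7, g, new byte[]{1}) == mixedHash(7, g, new byte[]{2})); // false
    }
}
```

The timings quoted in the issue (574 seconds down to 7.6 seconds for the same 100453 replayed operations) are consistent with eliminating exactly this kind of single-bucket degradation.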





[jira] [Updated] (AMQ-7011) Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync

2018-07-15 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7011:
---
Fix Version/s: 5.15.5

> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync
> --
>
> Key: AMQ-7011
> URL: https://issues.apache.org/jira/browse/AMQ-7011
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Jamie goodyear
>Priority: Major
> Fix For: 5.16.0, 5.15.5
>
>
> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync.
> Scenario:
> Stomp client setting the following:
> header.put("id", subId);
> header.put("activemq.dispatchAsync", "false");
> The setup of locks between TopicSubscription and MutexTransport while using 
> Stomp in sync mode can result in a deadlock, as shown below (Add and Destroy 
> call processing); each lock is identified by a + or * to show lock order in 
> each stack trace.
> 

> Found one Java-level deadlock:
> =============================
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
>   waiting to lock monitor 0x7f9c565d4d28 (object 0x0007acc44708, a 
> java.lang.Object),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613"
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>   waiting for ownable synchronizer 0x0007ac872730, (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
> ++  at 
> org.apache.activemq.broker.region.TopicSubscription.destroy(TopicSubscription.java:757)
>     - waiting to lock <0x0007acc44708> (a java.lang.Object)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.destroySubscription(AbstractRegion.java:488)
>     at 
> org.apache.activemq.broker.jmx.ManagedTopicRegion.destroySubscription(ManagedTopicRegion.java:52)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.removeConsumer(AbstractRegion.java:480)
>     at 
> org.apache.activemq.broker.region.TopicRegion.removeConsumer(TopicRegion.java:206)
>     at 
> org.apache.activemq.broker.region.RegionBroker.removeConsumer(RegionBroker.java:429)
>     at 
> org.apache.activemq.broker.jmx.ManagedRegionBroker.removeConsumer(ManagedRegionBroker.java:258)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.advisory.AdvisoryBroker.removeConsumer(AdvisoryBroker.java:352)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:729)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:768)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConnection(TransportConnection.java:879)
>     - locked <0x0007ac999f00> (a 
> org.apache.activemq.broker.jmx.ManagedTransportConnection)
>     at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:73)
>     at 
> org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:330)
>     at 
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:194)
> *   at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
>     at 
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:301)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.sendToActiveMQ(StompTransportFilter.java:97)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.sendToActiveMQ(ProtocolConverter.java:202)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompDisconnect(ProtocolConverter.java:838)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:267)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:85)
>     at 
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:233)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
>     at java.lang.Thread.run(Thread.java:748)
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0007ac872730> (a 
> 

[jira] [Updated] (AMQ-7011) Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync

2018-07-15 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7011:
---
Fix Version/s: 5.16.0

> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync
> --
>
> Key: AMQ-7011
> URL: https://issues.apache.org/jira/browse/AMQ-7011
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Jamie goodyear
>Priority: Major
> Fix For: 5.16.0, 5.15.5
>
>
> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync.
> Scenario:
> Stomp client setting the following:
> header.put("id", subId);
> header.put("activemq.dispatchAsync", "false");
> The setup of locks between TopicSubscription and MutexTransport while using 
> Stomp in sync mode can result in a deadlock, as shown below (Add and Destroy 
> call processing); each lock is identified by a + or * to show lock order in 
> each stack trace.
> 

> Found one Java-level deadlock:
> =============================
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
>   waiting to lock monitor 0x7f9c565d4d28 (object 0x0007acc44708, a 
> java.lang.Object),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613"
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>   waiting for ownable synchronizer 0x0007ac872730, (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
> ++  at 
> org.apache.activemq.broker.region.TopicSubscription.destroy(TopicSubscription.java:757)
>     - waiting to lock <0x0007acc44708> (a java.lang.Object)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.destroySubscription(AbstractRegion.java:488)
>     at 
> org.apache.activemq.broker.jmx.ManagedTopicRegion.destroySubscription(ManagedTopicRegion.java:52)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.removeConsumer(AbstractRegion.java:480)
>     at 
> org.apache.activemq.broker.region.TopicRegion.removeConsumer(TopicRegion.java:206)
>     at 
> org.apache.activemq.broker.region.RegionBroker.removeConsumer(RegionBroker.java:429)
>     at 
> org.apache.activemq.broker.jmx.ManagedRegionBroker.removeConsumer(ManagedRegionBroker.java:258)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.advisory.AdvisoryBroker.removeConsumer(AdvisoryBroker.java:352)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:729)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:768)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConnection(TransportConnection.java:879)
>     - locked <0x0007ac999f00> (a 
> org.apache.activemq.broker.jmx.ManagedTransportConnection)
>     at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:73)
>     at 
> org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:330)
>     at 
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:194)
> *   at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
>     at 
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:301)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.sendToActiveMQ(StompTransportFilter.java:97)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.sendToActiveMQ(ProtocolConverter.java:202)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompDisconnect(ProtocolConverter.java:838)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:267)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:85)
>     at 
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:233)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
>     at java.lang.Thread.run(Thread.java:748)
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>     at sun.misc.Unsafe.park(Native Method)
>     - parking to wait for  <0x0007ac872730> (a 
> 

[jira] [Resolved] (AMQ-7011) Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync

2018-07-15 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7011.

Resolution: Fixed

Fixed.  Thanks Jamie for the contribution.  This follows the MQTT and AMQP way 
of handling this issue.

> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync
> --
>
> Key: AMQ-7011
> URL: https://issues.apache.org/jira/browse/AMQ-7011
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Jamie goodyear
>Priority: Major
>
> Activemq 5.15.4 Stomp protocol allowed to enter deadlock via dispatch sync.
> Scenario:
> Stomp client setting the following:
> header.put("id", subId);
> header.put("activemq.dispatchAsync", "false");
> The setup of locks between TopicSubscription and MutexTransport while using 
> Stomp in sync mode can result in a deadlock, as shown below (Add and Destroy 
> call processing); each lock is identified by a + or * to show lock order in 
> each stack trace.
> 

> Found one Java-level deadlock:
> =============================
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
>   waiting to lock monitor 0x7f9c565d4d28 (object 0x0007acc44708, a 
> java.lang.Object),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613"
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>   waiting for ownable synchronizer 0x0007ac872730, (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync),
>   which is held by "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613"
> Java stack information for the threads listed above:
> ===
> "ActiveMQ Transport: tcp:///127.0.0.1:58303@61613":
> ++  at 
> org.apache.activemq.broker.region.TopicSubscription.destroy(TopicSubscription.java:757)
>     - waiting to lock <0x0007acc44708> (a java.lang.Object)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.destroySubscription(AbstractRegion.java:488)
>     at 
> org.apache.activemq.broker.jmx.ManagedTopicRegion.destroySubscription(ManagedTopicRegion.java:52)
>     at 
> org.apache.activemq.broker.region.AbstractRegion.removeConsumer(AbstractRegion.java:480)
>     at 
> org.apache.activemq.broker.region.TopicRegion.removeConsumer(TopicRegion.java:206)
>     at 
> org.apache.activemq.broker.region.RegionBroker.removeConsumer(RegionBroker.java:429)
>     at 
> org.apache.activemq.broker.jmx.ManagedRegionBroker.removeConsumer(ManagedRegionBroker.java:258)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.advisory.AdvisoryBroker.removeConsumer(AdvisoryBroker.java:352)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.BrokerFilter.removeConsumer(BrokerFilter.java:139)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:729)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:768)
>     at 
> org.apache.activemq.broker.TransportConnection.processRemoveConnection(TransportConnection.java:879)
>     - locked <0x0007ac999f00> (a 
> org.apache.activemq.broker.jmx.ManagedTransportConnection)
>     at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:73)
>     at 
> org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:330)
>     at 
> org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:194)
> *   at 
> org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
>     at 
> org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:301)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.sendToActiveMQ(StompTransportFilter.java:97)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.sendToActiveMQ(ProtocolConverter.java:202)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompDisconnect(ProtocolConverter.java:838)
>     at 
> org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:267)
>     at 
> org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:85)
>     at 
> org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:233)
>     at 
> org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
>     at java.lang.Thread.run(Thread.java:748)
> "ActiveMQ Transport: tcp:///127.0.0.1:58302@61613":
>     at sun.misc.Unsafe.park(Native Method)
>   

[jira] [Resolved] (AMQ-7002) Activemq SchedulerBroker doSchedule can schedule duplicate jobIds leading to runtime exception

2018-06-27 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender resolved AMQ-7002.

Resolution: Fixed

> Activemq SchedulerBroker doSchedule can schedule duplicate jobIds leading to 
> runtime exception 
> ---
>
> Key: AMQ-7002
> URL: https://issues.apache.org/jira/browse/AMQ-7002
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Job Scheduler
>Affects Versions: 5.15.4
> Environment: Java 8
> AMQ 5.15.4
>Reporter: Jamie goodyear
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.5
>
> Attachments: AMQ7002-messageId.patch, amq7002-master.patch
>
>
> Under load we've observed that SchedulerBroker will attempt to schedule jobs 
> using the same JobId.
> When JobScheduleView attempts to process these jobs we'll encounter an 
> exception during the below put call:
> @Override
> public TabularData getAllJobs() throws Exception {
>     OpenTypeFactory factory = OpenTypeSupport.getFactory(Job.class);
>     CompositeType ct = factory.getCompositeType();
>     TabularType tt = new TabularType("Scheduled Jobs", "Scheduled Jobs", ct,
>             new String[] { "jobId" });
>     TabularDataSupport rc = new TabularDataSupport(tt);
>     List<Job> jobs = this.jobScheduler.getAllJobs();
>     for (Job job : jobs) {
>         rc.put(new CompositeDataSupport(ct, factory.getFields(job)));
>     }
>     return rc;
> }
> This can be triggered by clicking on the Scheduled tab in the webconsole.
> The error only occurs due to duplicate JobIds.
> Debugging this error, we can see that two jobs with different payloads have 
> the same JobId - this should not be allowed to occur.
> We need to ensure that JobIds are unique.
> Note:
> In the test scenario, virtual topics are in use, with two consumers.
> Redelivery plugin is also in use on the Broker.
> 
>      sendToDlqIfMaxRetriesExceeded="false">
>     
>     
>     
>      initialRedeliveryDelay="6" maximumRedeliveries="20" 
> maximumRedeliveryDelay="30" useExponentialBackOff="true"/>
>     
>     
>     
>     
>     
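The runtime exception itself is standard JMX open-type behavior: `TabularDataSupport.put` rejects a row whose index values (here, the jobId) already exist. A small self-contained demonstration of why duplicate JobIds blow up the view:

```java
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.KeyAlreadyExistsException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;

public class DuplicateJobIdDemo {
    // Returns true if inserting a second row with the same jobId index
    // throws KeyAlreadyExistsException, as it does in JobScheduleView.
    public static boolean duplicatePutFails() throws Exception {
        String[] items = {"jobId", "payload"};
        CompositeType ct = new CompositeType("job", "job", items, items,
                new OpenType<?>[] {SimpleType.STRING, SimpleType.STRING});
        TabularType tt = new TabularType("Scheduled Jobs", "Scheduled Jobs",
                ct, new String[] {"jobId"}); // jobId is the table index
        TabularDataSupport rc = new TabularDataSupport(tt);
        rc.put(new CompositeDataSupport(ct, items, new Object[] {"id-1", "a"}));
        try {
            // Same jobId, different payload: the index key collides.
            rc.put(new CompositeDataSupport(ct, items, new Object[] {"id-1", "b"}));
            return false;
        } catch (KeyAlreadyExistsException expected) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(duplicatePutFails());
    }
}
```

The table's index names make the constraint explicit: any two scheduled jobs that share a jobId cannot coexist in the view, which is why the fix has to guarantee JobId uniqueness at scheduling time rather than in the console.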





[jira] [Updated] (AMQ-7002) Activemq SchedulerBroker doSchedule can schedule duplicate jobIds leading to runtime exception

2018-06-27 Thread Jeff Genender (JIRA)


 [ 
https://issues.apache.org/jira/browse/AMQ-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-7002:
---
Fix Version/s: 5.15.5

> Activemq SchedulerBroker doSchedule can schedule duplicate jobIds leading to 
> runtime exception 
> ---
>
> Key: AMQ-7002
> URL: https://issues.apache.org/jira/browse/AMQ-7002
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, Job Scheduler
>Affects Versions: 5.15.4
> Environment: Java 8
> AMQ 5.15.4
>Reporter: Jamie goodyear
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0, 5.15.5
>
> Attachments: AMQ7002-messageId.patch, amq7002-master.patch
>
>
> Under load we've observed that SchedulerBroker will attempt to schedule jobs 
> using the same JobId.
> When JobScheduleView attempts to process these jobs we'll encounter an 
> exception during the below put call:
> @Override
> public TabularData getAllJobs() throws Exception {
>     OpenTypeFactory factory = OpenTypeSupport.getFactory(Job.class);
>     CompositeType ct = factory.getCompositeType();
>     TabularType tt = new TabularType("Scheduled Jobs", "Scheduled Jobs", ct,
>             new String[] { "jobId" });
>     TabularDataSupport rc = new TabularDataSupport(tt);
>     List<Job> jobs = this.jobScheduler.getAllJobs();
>     for (Job job : jobs) {
>         rc.put(new CompositeDataSupport(ct, factory.getFields(job)));
>     }
>     return rc;
> }
> This can be triggered by clicking on the Scheduled tab in the webconsole.
> The error only occurs due to duplicate JobIds.
> Debugging this error, we can see that two jobs with different payloads have 
> the same JobId - this should not be allowed to occur.
> We need to ensure that JobIds are unique.
> Note:
> In the test scenario, virtual topics are in use, with two consumers.
> Redelivery plugin is also in use on the Broker.
> 
>      sendToDlqIfMaxRetriesExceeded="false">
>     
>     
>     
>      initialRedeliveryDelay="6" maximumRedeliveries="20" 
> maximumRedeliveryDelay="30" useExponentialBackOff="true"/>
>     
>     
>     
>     
>     





[jira] [Commented] (AMQ-6930) bin/activemq should allow stdout/stderr to some file instead of /dev/null for daemon mode

2018-04-10 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433142#comment-16433142
 ] 

Jeff Genender commented on AMQ-6930:


Thanks Alvin!  Great patch and the test case was much appreciated!

> bin/activemq should allow stdout/stderr to some file instead of /dev/null for 
> daemon mode
> -
>
> Key: AMQ-6930
> URL: https://issues.apache.org/jira/browse/AMQ-6930
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.15.0, 5.15.1, 5.15.2, 5.15.3
>Reporter: Alvin Lin
>Priority: Major
> Fix For: 5.16.0, 5.15.4
>
>
> If I do "bin/activemq start", the ActiveMQ process is started with 
> stdout/stderr redirected to /dev/null. 
> This makes it hard to debug issues like out-of-memory errors because we can't 
> see any logs, for example, when the JVM flag "ExitOnOutOfMemoryError" is 
> turned on. 
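The requested change concerns the bin/activemq shell script itself, but the idea is easy to demonstrate from Java: redirect a child process's stdout/stderr to a log file instead of discarding it, so crash-time output survives. A hedged sketch (the log-file name and the echoed command are illustrative, not what the script does):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class DaemonRedirectDemo {
    // Model of the requested behavior: instead of discarding a daemon's
    // stdout/stderr (the /dev/null default), send both streams to a log file
    // so messages such as ExitOnOutOfMemoryError output are not lost.
    public static File redirectedOutput(String command)
            throws IOException, InterruptedException {
        File log = File.createTempFile("activemq-daemon", ".log"); // illustrative target
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", command);
        pb.redirectErrorStream(true);                      // merge stderr into stdout
        pb.redirectOutput(ProcessBuilder.Redirect.to(log)); // file instead of /dev/null
        pb.start().waitFor();
        return log;
    }

    public static void main(String[] args) throws Exception {
        File log = redirectedOutput("echo broker-started");
        try (BufferedReader r = new BufferedReader(new FileReader(log))) {
            System.out.println(r.readLine()); // broker-started
        }
    }
}
```

In the shell script the equivalent change is replacing the `> /dev/null 2>&1` redirection with a configurable log-file target.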





[jira] [Updated] (AMQ-6930) bin/activemq should allow stdout/stderr to some file instead of /dev/null for daemon mode

2018-04-10 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-6930:
---
Fix Version/s: 5.15.4
   5.16.0

> bin/activemq should allow stdout/stderr to some file instead of /dev/null for 
> daemon mode
> -
>
> Key: AMQ-6930
> URL: https://issues.apache.org/jira/browse/AMQ-6930
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.15.0, 5.15.1, 5.15.2, 5.15.3
>Reporter: Alvin Lin
>Priority: Major
> Fix For: 5.16.0, 5.15.4
>
>
> If I do "bin/activemq start", the ActiveMQ process is started with 
> stdout/stderr redirected to /dev/null. 
> This makes it hard to debug issues like out-of-memory errors because we can't 
> see any logs, for example, when the JVM flag "ExitOnOutOfMemoryError" is 
> turned on. 





[jira] [Commented] (AMQ-6930) bin/activemq should allow stdout/stderr to some file instead of /dev/null for daemon mode

2018-04-10 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16433132#comment-16433132
 ] 

Jeff Genender commented on AMQ-6930:


Added fix versions

> bin/activemq should allow stdout/stderr to some file instead of /dev/null for 
> daemon mode
> -
>
> Key: AMQ-6930
> URL: https://issues.apache.org/jira/browse/AMQ-6930
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.15.0, 5.15.1, 5.15.2, 5.15.3
>Reporter: Alvin Lin
>Priority: Major
> Fix For: 5.16.0, 5.15.4
>
>
> If I do "bin/activemq start", the ActiveMQ process is started with 
> stdout/stderr redirected to /dev/null. 
> This makes it hard to debug issues like out-of-memory errors because we can't 
> see any logs, for example, when the JVM flag "ExitOnOutOfMemoryError" is 
> turned on. 





[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-04-12 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15237193#comment-15237193
 ] 

Jeff Genender commented on AMQ-6203:


Outstanding...thanks for getting this ;-)  You saved a lot of headaches ;-)

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  
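The ack-forwarding idea above can be modeled in a few lines. This is a conceptual sketch only; KahaDB's real journal format, reference tracking, and GC cycle are far more involved:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AckCompactionSketch {
    // Toy model of ack-log compaction: a journal log can only be GC'd when
    // nothing live references it. Rewriting the acks held in old logs into
    // one fresh "compacted" log removes the last references, so the old
    // chain becomes eligible for cleanup on the next GC cycle.
    public static List<String> compact(List<String> logs) {
        List<String> acks = new ArrayList<>();
        List<String> kept = new ArrayList<>();
        for (String log : logs) {
            if (log.startsWith("acks:")) {
                acks.add(log.substring(5)); // collect acks pinned in old logs
            } else {
                kept.add(log);
            }
        }
        if (!acks.isEmpty()) {
            kept.add("acks:" + String.join(",", acks)); // one new ack log
        }
        return kept;
    }

    public static void main(String[] args) {
        System.out.println(compact(Arrays.asList("acks:m1", "msgs", "acks:m2")));
        // The two old ack logs are gone; their acks live in a single new log.
    }
}
```

Because the compacted ack log is written between GC cycles without holding the index lock, the rewrite can run concurrently with normal broker operations, which is the key point of the improvement.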



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-04-08 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15232641#comment-15232641
 ] 

Jeff Genender commented on AMQ-6203:


Yep... if it can be off by default and allow someone to turn it on, I think 
that would totally be a good bridge to have in those point releases.  It allows 
an escape route in the event it doesn't work, and only impacts those who know 
enough to turn it on.  Thoughts?

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-04-08 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232409#comment-15232409
 ] 

Jeff Genender commented on AMQ-6203:


Hi Tim, thanks for the quick response.  May I ask how this is not applicable to 
a point release, given that it has the potential to fix blocker 
bugs that affect 3 major versions of ActiveMQ?  The code looks good and it 
seems that the behavior of this fix is transparent to the user.  It's a good fix 
with a great impact, and it really would be great to get it into those other 
point releases.  AMQ-5695 looks heavily connected, as we have determined from 
log files that the acks are what cause the log file leaks in 
those versions.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5695) KahaDB not cleaning up log files

2016-04-08 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232383#comment-15232383
 ] 

Jeff Genender commented on AMQ-5695:


5.13.1 still has it.

It would be nice to see if the AMQ-6203 fix will resolve this one.

> KahaDB not cleaning up log files
> 
>
> Key: AMQ-5695
> URL: https://issues.apache.org/jira/browse/AMQ-5695
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.1
>Reporter: Stefan Warten
>Priority: Blocker
>
> Since we upgraded ActiveMQ from 5.10.0 to 5.11.1, KahaDB is not cleaning 
> up log files properly. It seems to keep all of them. Restarting the service 
> helped once (minutes later ActiveMQ cleaned up 95% of the log files), but 
> mostly it is not cleaning up at all.
> When the partition was full, I stopped ActiveMQ, copied the KahaDB to another 
> host and started it with empty queues again. Then I forwarded all messages 
> from that other host back. Even when all messages were forwarded and all 
> queues were empty, the old KahaDB log files were not cleaned up.
> I stopped ActiveMQ and removed db.data and db.redo to rebuild the index, which 
> took around 3h (350GB of log files), but still the log files are not cleaned up.
> [...]
> 2015-03-30 18:21:55,532 | INFO  | @13786:158508, 32130 entries recovered 
> .. | org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
> 2015-03-30 18:22:02,090 | INFO  | Recovery replayed 321378917 operations from 
> the journal in 9226.159 seconds. | 
> org.apache.activemq.store.kahadb.MessageDatabase | WrapperSimpleAppMain
> 2015-03-30 18:22:02,402 | INFO  | installing runtimeConfiguration plugin | 
> org.apache.activemq.plugin.RuntimeConfigurationPlugin | WrapperSimpleAppMain
> 2015-03-30 18:22:04,576 | INFO  | Apache ActiveMQ 5.11.1 
> (prd-mig-02-sat.example.com, 
> ID:prd-mig-02-sat.example.com-26260-1427730492201-1:1) is starting | 
> org.apache.activemq.broker.BrokerService | WrapperSimpleAppMain
> 2015-03-30 18:22:04,946 | INFO  | pending local transactions: [] | 
> org.apache.activemq.store.kahadb.MultiKahaDBTransactionStore | 
> WrapperSimpleAppMain
> 2015-03-30 18:22:08,488 | INFO  | Configuration class path resource 
> [activemq.xml] | org.apache.activemq.plugin.RuntimeConfigurationBroker | 
> WrapperSimpleAppMain
> 2015-03-30 18:22:12,198 | INFO  | Listening for connections at: 
> nio://prd-mig-02-sat.example.com:61616?transport.reuseAddress=true | 
> org.apache.activemq.transport.TransportServerThreadSupport | 
> WrapperSimpleAppMain
> 2015-03-30 18:22:12,199 | INFO  | Connector openwire started | 
> org.apache.activemq.broker.TransportConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,232 | INFO  | Listening for connections at: 
> stomp+nio://prd-mig-02-sat.example.com:61613?transport.closeAsync=false=true
>  | org.apache.activemq.transport.TransportServerThreadSupport | 
> WrapperSimpleAppMain
> 2015-03-30 18:22:12,234 | INFO  | Connector stomp started | 
> org.apache.activemq.broker.TransportConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,237 | INFO  | Establishing network connection from 
> vm://prd-mig-02-sat.example.com?async=false=true to 
> tcp://172.42.15.40:61616 | 
> org.apache.activemq.network.DiscoveryNetworkConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,286 | INFO  | Connector vm://prd-mig-02-sat.example.com 
> started | org.apache.activemq.broker.TransportConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,318 | INFO  | Establishing network connection from 
> vm://prd-mig-02-sat.example.com?async=false=true to 
> tcp://172.42.15.39:61616 | 
> org.apache.activemq.network.DiscoveryNetworkConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,321 | INFO  | Network Connector 
> DiscoveryNetworkConnector:FORWARDER:BrokerService[prd-mig-02-sat.example.com] 
> started | org.apache.activemq.network.NetworkConnector | WrapperSimpleAppMain
> 2015-03-30 18:22:12,325 | INFO  | Apache ActiveMQ 5.11.1 
> (prd-mig-02-sat.example.com, 
> ID:prd-mig-02-sat.example.com-26260-1427730492201-1:1) started | 
> org.apache.activemq.broker.BrokerService | WrapperSimpleAppMain
> 2015-03-30 18:22:12,326 | INFO  | For help or more information please see: 
> http://activemq.apache.org | org.apache.activemq.broker.BrokerService | 
> WrapperSimpleAppMain
> 2015-03-30 18:22:12,381 | INFO  | Network connection between 
> vm://prd-mig-02-sat.example.com#0 and tcp:///172.42.15.40:61616@58567 
> (prdvip-amq-01-sat.example.com) has been established. | 
> org.apache.activemq.network.DemandForwardingBridgeSupport | 
> triggerStartAsyncNetworkBridgeCreation: 
> remoteBroker=tcp:///172.42.15.40:61616@58567, localBroker= 
> vm://prd-mig-02-sat.example.com#0
> 2015-03-30 18:22:12,381 | INFO  | Network connection between 
> vm://prd-mig-02-sat.example.com#2 and tcp:///172.42.15.39:61616@4523 
> 

[jira] [Commented] (AMQ-6203) KahaDB: Allow rewrite of message acks in older logs which prevent cleanup

2016-04-08 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15232348#comment-15232348
 ] 

Jeff Genender commented on AMQ-6203:


This actually looks like it's related to AMQ-5695. Any reason why this wasn't 
backported to the aforementioned version branch? If this truly does fix up 
those versions, it is a good candidate for 5.13.x.  I also believe this 
affects 5.11.x and 5.12.x as well.  See AMQ-5695 for details.

> KahaDB: Allow rewrite of message acks in older logs which prevent cleanup
> -
>
> Key: AMQ-6203
> URL: https://issues.apache.org/jira/browse/AMQ-6203
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: KahaDB
>Affects Versions: 5.13.0, 5.13.1, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0
>
>
> There are cases where a chain of journal logs can grow due to acks for 
> messages in older logs needing to be kept so that on recovery proper state 
> can be restored and older messages not be resurrected.  
> In many cases just moving the acks from one log forward to a new log can free 
> an entire chain during subsequent GC cycles.  The 'compacted' ack log can be 
> written during the time between GC cycles without the index lock being held 
> meaning normal broker operations can continue.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (AMQ-6207) KahaDB: corruption of the index possible on sudden stop of the broker

2016-04-08 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-6207:
---
Comment: was deleted

(was: This actually looks like it's related to AMQ-5695.  Any reason why this 
wasn't backported to the aforementioned version branches?  If this truly does 
fix up those versions, it is a good candidate to be in those other releases.)

> KahaDB: corruption of the index possible on sudden stop of the broker
> -
>
> Key: AMQ-6207
> URL: https://issues.apache.org/jira/browse/AMQ-6207
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: KahaDB
>Affects Versions: 5.11.4, 5.12.3, 5.13.2
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 5.14.0, 5.13.3
>
> Attachments: kahadb-corruption.tar.bz2
>
>
> On a sudden stop of the broker it's possible for the KahaDB index to become 
> corrupt, and the broker will then refuse to start.  The issue is in the PageFile 
> code, which mixes writes to both the recovery file and the index file.  The 
> writes need to happen in a deterministic order so that the recovery file 
> isn't missing data that might make it into the main index file.  
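
The ordering constraint described above can be sketched as follows. The file layout and class name here are hypothetical, not KahaDB's actual PageFile; the point is only the write-then-sync-then-write discipline:

```java
import java.io.*;

// Sketch of the ordering constraint: pages must be durably written to the
// recovery file *before* they are written to the main index file, so a crash
// mid-write can never leave the index holding data the recovery file lacks.
public class OrderedIndexWriter {

    public static void writePages(File recoveryFile, File indexFile, byte[] pages) {
        try {
            // 1) land the pages in the recovery file and force them to disk
            try (RandomAccessFile raf = new RandomAccessFile(recoveryFile, "rw")) {
                raf.write(pages);
                raf.getFD().sync();  // must hit disk before the index is touched
            }
            // 2) only then overwrite the main index file
            try (RandomAccessFile raf = new RandomAccessFile(indexFile, "rw")) {
                raf.write(pages);
                raf.getFD().sync();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // small helper so callers avoid checked exceptions when creating files
    public static File temp(String prefix) {
        try {
            File f = File.createTempFile(prefix, ".dat");
            f.deleteOnExit();
            return f;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

If the two writes interleave instead, a crash can leave the index ahead of the recovery file, which is exactly the corruption the issue describes.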



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (AMQ-6175) ActiveMQ webconsole breaks when supressMBean is used

2016-02-18 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender closed AMQ-6175.
--
Resolution: Fixed

Updated ManagementContext and ManagedRegionBroker to provide protected/public 
access that produces filtered lists/sets of the MBeans.  Changed 
BrokerView to use the filtered versions.

> ActiveMQ webconsole breaks when supressMBean is used
> 
>
> Key: AMQ-6175
> URL: https://issues.apache.org/jira/browse/AMQ-6175
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: webconsole
>Affects Versions: 5.12.1, 5.13.0, 5.13.1
>Reporter: Jeff Genender
>Priority: Blocker
> Fix For: 5.14.0, 5.13.2
>
>
> AMQ-5656, which introduced the suppressMBean function, broke the web console 
> that comes with ActiveMQ.  The proxied calls to ManagedRegionBroker obtain 
> objects that are not registered with the MBean server, so the web 
> console breaks with invalid JSP when executed, ultimately caused by 
> javax.management.InstanceNotFoundException.  The fix is to have the 
> web calls filter out any non-MBeans from the lists. 
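
The filtering fix described above can be sketched with the standard JMX API. This is a hypothetical helper, not the actual ManagedRegionBroker change; only `MBeanServer.isRegistered` is a real API call here:

```java
import java.lang.management.ManagementFactory;
import java.util.*;
import javax.management.*;

// Sketch: before handing ObjectNames to the web console, drop any that are
// not actually registered with the MBean server, so later lookups cannot
// throw InstanceNotFoundException.
public class MBeanListFilter {

    public static List<ObjectName> onlyRegistered(MBeanServer server,
                                                  Collection<ObjectName> candidates) {
        List<ObjectName> registered = new ArrayList<>();
        for (ObjectName name : candidates) {
            if (server.isRegistered(name)) {   // real javax.management API
                registered.add(name);
            }
        }
        return registered;
    }

    // helper that hides ObjectName's checked MalformedObjectNameException
    public static ObjectName name(String s) {
        try {
            return new ObjectName(s);
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```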



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (AMQ-5656) Support selective MBean creation

2016-02-18 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender closed AMQ-5656.
--
Resolution: Fixed

Will close this issue and track it in AMQ-6175

> Support selective MBean creation
> 
>
> Key: AMQ-5656
> URL: https://issues.apache.org/jira/browse/AMQ-5656
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, JMX
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>Priority: Blocker
>  Labels: jmx, scalability
> Fix For: 5.12.0
>
>
> A continuation of 
> http://activemq.2283324.n4.nabble.com/How-to-disable-MBeans-creation-tp4692863p4692904.html
>  where I asked about a feature to suppress MBean creation for certain 
> objects, such as sessions, producers, consumers.
> Quoting Gary:
> {quote}
> There is a single code entry point 
> ([ManagementContext.registerMBean|https://github.com/apache/activemq/blob/master/activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagementContext.java#L391])
>  for all MBean registration in the broker so gating that on a filter or 
> regexp match may be all that we need.
> {quote}
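
Gary's quoted idea of gating the single registration entry point on a regexp can be sketched like this. The class is hypothetical and the real ManagementContext API differs; it only illustrates one choke point deciding which ObjectNames are suppressed:

```java
import java.lang.management.ManagementFactory;
import java.util.regex.Pattern;
import javax.management.*;

// Sketch: one choke point for MBean registration, with a regex deciding
// which ObjectNames are suppressed (e.g. per-producer/consumer MBeans).
public class GatedRegistration {

    private final MBeanServer server;
    private final Pattern suppress;

    public GatedRegistration(MBeanServer server, String suppressRegex) {
        this.server = server;
        this.suppress = Pattern.compile(suppressRegex);
    }

    // Returns true only if the MBean was actually registered.
    public boolean registerMBean(Object mbean, ObjectName name) {
        if (suppress.matcher(name.getCanonicalName()).find()) {
            return false; // suppressed: never reaches the MBean server
        }
        try {
            server.registerMBean(mbean, name);
            return true;
        } catch (JMException e) {
            throw new IllegalStateException(e);
        }
    }

    // helper that hides ObjectName's checked MalformedObjectNameException
    public static ObjectName name(String s) {
        try {
            return new ObjectName(s);
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

Note that this is exactly the shape that later bit AMQ-6175: suppression must also be reflected wherever lists of MBean names are handed out.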



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (AMQ-6175) ActiveMQ webconsole breaks when supressMBean is used

2016-02-18 Thread Jeff Genender (JIRA)
Jeff Genender created AMQ-6175:
--

 Summary: ActiveMQ webconsole breaks when supressMBean is used
 Key: AMQ-6175
 URL: https://issues.apache.org/jira/browse/AMQ-6175
 Project: ActiveMQ
  Issue Type: Bug
  Components: webconsole
Affects Versions: 5.13.1, 5.13.0, 5.12.1
Reporter: Jeff Genender
Priority: Blocker
 Fix For: 5.14.0, 5.13.2


AMQ-5656, which introduced the suppressMBean function, broke the web console that 
comes with ActiveMQ.  The proxied calls to ManagedRegionBroker obtain 
objects that are not registered with the MBean server, so the web console 
breaks with invalid JSP when executed, ultimately caused by 
javax.management.InstanceNotFoundException.  The fix is to have the 
web calls filter out any non-MBeans from the lists. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-5656) Support selective MBean creation

2016-02-18 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender updated AMQ-5656:
---
  Priority: Blocker  (was: Major)
Issue Type: Bug  (was: Improvement)

> Support selective MBean creation
> 
>
> Key: AMQ-5656
> URL: https://issues.apache.org/jira/browse/AMQ-5656
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, JMX
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>Priority: Blocker
>  Labels: jmx, scalability
> Fix For: 5.12.0
>
>
> A continuation of 
> http://activemq.2283324.n4.nabble.com/How-to-disable-MBeans-creation-tp4692863p4692904.html
>  where I asked about a feature to suppress MBean creation for certain 
> objects, such as sessions, producers, consumers.
> Quoting Gary:
> {quote}
> There is a single code entry point 
> ([ManagementContext.registerMBean|https://github.com/apache/activemq/blob/master/activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagementContext.java#L391])
>  for all MBean registration in the broker so gating that on a filter or 
> regexp match may be all that we need.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (AMQ-5656) Support selective MBean creation

2016-02-18 Thread Jeff Genender (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Genender reopened AMQ-5656:


This change breaks the webconsole and is not complete.  When selectively 
allowing JMX entries, the ManagementContext still sends back a list of objects 
even though those MBeans are not allowed to register.  Hence the webconsole 
blows up with:

org.apache.jasper.JasperException: An exception occurred processing JSP page 
/topics.jsp at line 55


The MBean lists need to be filtered on request, through the ManagedRegionBroker, 
for whatever is being suppressed; otherwise the webconsole will need 
filtering code everywhere it makes MBean calls that can no longer succeed.

> Support selective MBean creation
> 
>
> Key: AMQ-5656
> URL: https://issues.apache.org/jira/browse/AMQ-5656
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: Broker, JMX
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>  Labels: jmx, scalability
> Fix For: 5.12.0
>
>
> A continuation of 
> http://activemq.2283324.n4.nabble.com/How-to-disable-MBeans-creation-tp4692863p4692904.html
>  where I asked about a feature to suppress MBean creation for certain 
> objects, such as sessions, producers, consumers.
> Quoting Gary:
> {quote}
> There is a single code entry point 
> ([ManagementContext.registerMBean|https://github.com/apache/activemq/blob/master/activemq-broker/src/main/java/org/apache/activemq/broker/jmx/ManagementContext.java#L391])
>  for all MBean registration in the broker so gating that on a filter or 
> regexp match may be all that we need.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5721) Update AMQ to commons-pool2

2015-08-24 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709612#comment-14709612
 ] 

Jeff Genender commented on AMQ-5721:


Yes... it has worked well, as have the OSGi unit tests.  Can you open a new JIRA 
to update it to 2.4.1, and we can get to it.

> Update AMQ to commons-pool2
> ---
>
> Key: AMQ-5721
> URL: https://issues.apache.org/jira/browse/AMQ-5721
> Project: ActiveMQ
>  Issue Type: Bug
>Reporter: Jeff Genender
>Assignee: Dejan Bosanac
> Fix For: 5.12.0
>
>
> Update ActiveMQ to use commons-pool2 instead of commons-pool. AMQ-5636 will 
> need it.  The JMS pool and other components should use it as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5903) Message headers are lost when using the Broker Component for Camel

2015-07-29 Thread Jeff Genender (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646982#comment-14646982
 ] 

Jeff Genender commented on AMQ-5903:


and Paul Gale ;-)

> Message headers are lost when using the Broker Component for Camel
> --
>
> Key: AMQ-5903
> URL: https://issues.apache.org/jira/browse/AMQ-5903
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-camel
>Affects Versions: 5.11.1
>Reporter: Heath Kesler
> Fix For: 5.12.0
>
> Attachments: amq-5903.patch
>
>
> When using the broker camel component as defined here: 
> http://activemq.apache.org/broker-camel-component.html
> there appears to be an undocumented limitation of the broker component's 
> current implementation. I need to know whether said limitation is by design 
> or an oversight. If it's an oversight then I can submit a patch for it.
> This example route does not work as expected: the JMSXGroupID header is lost 
> when received by the broker component.
> <route>
>   <from uri="broker:queue:test"/>
>   <setHeader headerName="JMSXGroupID">
>     <constant>123</constant>
>   </setHeader>
>   <to uri="broker:queue:test"/>
> </route>
> After single-stepping with a debugger, the component executes this code: 
> https://github.com/apache/activemq/blob/master/activemq-camel/src/main/java/org/apache/activemq/camel/component/broker/BrokerProducer.java#L102
> As you can see from the method's implementation, it only copies over a fixed 
> set of six well-known headers. All other headers on the inbound message are 
> discarded. Why not copy over every header? Consequently the JMSXGroupID 
> header is not copied, despite being present on the inbound message.
> This would appear to be a bug in my opinion, as I do not believe we should be 
> losing any headers on a message in this case.  
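
The suggested fix of copying every header rather than a fixed whitelist can be sketched like this. Plain maps stand in for JMS messages here to keep the sketch self-contained; the real patch would iterate the property names of the inbound javax.jms.Message instead:

```java
import java.util.*;

// Sketch: instead of copying a fixed set of six well-known headers, iterate
// every property on the inbound message and copy it to the outbound one, so
// nothing (e.g. JMSXGroupID) is silently dropped.
public class CopyAllHeaders {

    public static void copyProperties(Map<String, Object> inbound,
                                      Map<String, Object> outbound) {
        for (Map.Entry<String, Object> e : inbound.entrySet()) {
            outbound.put(e.getKey(), e.getValue()); // every header, not a whitelist
        }
    }
}
```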



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)