[jira] [Commented] (ARTEMIS-747) Multiple CDATA events during import fails

2016-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515203#comment-15515203
 ] 

ASF GitHub Bot commented on ARTEMIS-747:


Github user clebertsuconic commented on a diff in the pull request:

https://github.com/apache/activemq-artemis/pull/791#discussion_r80174580
  
--- Diff: 
artemis-cli/src/main/java/org/apache/activemq/artemis/cli/commands/tools/XmlDataImporter.java
 ---
@@ -444,33 +444,59 @@ private void processMessageBody(Message message) 
throws XMLStreamException, IOEx
  }
   }
   reader.next();
+  ActiveMQServerLogger.LOGGER.debug("XMLStreamReader impl: " + reader);
--- End diff --

use  if (log.isDebug()) log.debug(...)

And get a logger for this class.

This is a change I have made in all the loggers not long ago.
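
For reference, a minimal sketch of the suggested pattern, assuming the org.jboss.logging Logger used elsewhere in Artemis; the class and method shown are only illustrative context:

{code}
import org.jboss.logging.Logger;

public class XmlDataImporter {

   // a logger dedicated to this class, as requested in the review
   private static final Logger logger = Logger.getLogger(XmlDataImporter.class);

   private void logReaderImpl(Object reader) {
      // guard the call so the string concatenation only happens when debug is enabled
      if (logger.isDebugEnabled()) {
         logger.debug("XMLStreamReader impl: " + reader);
      }
   }
}
{code}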


> Multiple CDATA events during import fails
> -
>
> Key: ARTEMIS-747
> URL: https://issues.apache.org/jira/browse/ARTEMIS-747
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>
> Message bodies are written to XML as Base64 encoded CDATA elements. Some 
> parser implementations won't read the entire CDATA element at once (e.g. 
> Woodstox) so it's possible for multiple CDATA events to be combined into a 
> single Base64 encoded string.  You can't decode bits and pieces of each 
> CDATA.  Each CDATA has to be decoded in its entirety.  The current importer 
> doesn't deal with this properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-390) ReplicationAddMessage java.lang.IllegalStateException: Cannot find add info

2016-09-22 Thread Damien Hollis (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514811#comment-15514811
 ] 

Damien Hollis commented on ARTEMIS-390:
---

We have been having the same problem (using Artemis 1.4.0).  In our case it 
occurs when starting our tomcat server (artemis is embedded in a web 
application).  However, on restarting the server, the problem disappeared.

{noformat}
Caused by: java.lang.IllegalStateException: Cannot find add info 2148856738
at 
org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendDeleteRecord(JournalImpl.java:793)
at 
org.apache.activemq.artemis.core.journal.impl.JournalBase.appendDeleteRecord(JournalBase.java:206)
at 
org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendDeleteRecord(JournalImpl.java:79)
at 
org.apache.activemq.artemis.core.persistence.impl.journal.AbstractJournalStorageManager.deleteAddressSetting(AbstractJournalStorageManager.java:826)
at 
org.apache.activemq.artemis.core.management.impl.ActiveMQServerControlImpl.removeAddressSettings(ActiveMQServerControlImpl.java:1756)
{noformat}

We also had a similar problem when we upgraded from Artemis 1.3.0 to 1.4.0, but 
in that case our only option was to delete the data directories. Luckily we were 
just testing, so this was not a big issue for us, but it would be in production.

> ReplicationAddMessage java.lang.IllegalStateException: Cannot find add info
> ---
>
> Key: ARTEMIS-390
> URL: https://issues.apache.org/jira/browse/ARTEMIS-390
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.2.0
> Environment: production
>Reporter: Howard Nguyen
>Assignee: Justin Bertram
>
> {code}
> 00:05:12,746 WARN  [org.apache.activemq.artemis.core.server] AMQ222086: error 
> handling packet PACKET(ReplicationAddMessage)[type=91, channelID=2, 
> packetObject=ReplicationAddMessage] for replication: 
> java.lang.IllegalStateException: Cannot find add info 226853
> at 
> org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendUpdateRecord(JournalImpl.java:756)
>  [artemis-journal-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.journal.impl.JournalBase.appendUpdateRecord(JournalBase.java:183)
>  [artemis-journal-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendUpdateRecord(JournalImpl.java:78)
>  [artemis-journal-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.journal.impl.JournalBase.appendUpdateRecord(JournalBase.java:129)
>  [artemis-journal-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.journal.impl.JournalImpl.appendUpdateRecord(JournalImpl.java:78)
>  [artemis-journal-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.replication.ReplicationEndpoint.handleAppendAddRecord(ReplicationEndpoint.java:668)
>  [artemis-server-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.replication.ReplicationEndpoint.handlePacket(ReplicationEndpoint.java:167)
>  [artemis-server-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:594)
>  [artemis-core-client-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:368)
>  [artemis-core-client-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:350)
>  [artemis-core-client-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1140)
>  [artemis-core-client-1.2.0.jar:1.2.0]
> at 
> org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68)
>  [artemis-core-client-1.2.0.jar:1.2.0]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
>  [netty-all-4.0.32.Final.jar:4.0.32.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
>  [netty-all-4.0.32.Final.jar:4.0.32.Final]
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  [netty-all-4.0.32.Final.jar:4.0.32.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
>  [netty-all-4.0.32.Final.jar:4.0.32.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
>  [netty-all-4.0.32.Final.jar:4.0.32.Final]
>   

[jira] [Commented] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-22 Thread Mitchell Ackerman (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514647#comment-15514647
 ] 

Mitchell Ackerman commented on ARTEMIS-741:
---

BTW, this pcap is from a capture between our STOMP 1.1 client and the server 
running HornetQ.  We don't yet have our client talking to ArtemisMQ, but I've 
been using a test STOMP 1.0 client to test the ArtemisMQ server.  I can see if 
I can get a pcap for the 1.1 client if you like.

> memory leak when using STOMP protocol
> -
>
> Key: ARTEMIS-741
> URL: https://issues.apache.org/jira/browse/ARTEMIS-741
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, Stomp
>Affects Versions: 1.4.0
> Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
> Windows
>Reporter: Mitchell Ackerman
>Assignee: Justin Bertram
> Attachments: dump-10.89.1.31.pcap
>
>
> ArtemisMQ exhibits a memory leak when using the STOMP protocol.
> Steps to reproduce:
> 1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
> 2. Connect to the server using the STOMP protocol
> 3. Subscribe to the topic with a selector 
> 4. publish some messages to the topic that match the selector (this step may 
> not be necessary)
> 5. Unsubscribe from the topic
> 6. publish some messages to the topic that match the selector
> The messages published after the unsubscribe are retained in a QueueImpl 
> object, messageReferences queue and are never cleaned up unless the client 
> disconnects.  The QueueImpl object has 0 Consumers (ConsumerList size is 0), 
> and the QueueImpl object retains the filter from the subscription.
> See also 
> http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
>  
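
For anyone trying to reproduce this, here is a rough sketch of the STOMP frames matching the steps above, sent over a plain socket; the host, port, destination, selector and credentials are placeholders, and responses from the broker are not read, for brevity:

{code}
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StompLeakRepro {

   public static void main(String[] args) throws Exception {
      try (Socket socket = new Socket("localhost", 61613)) {
         OutputStream out = socket.getOutputStream();
         send(out, "CONNECT\nlogin:guest\npasscode:guest\n\n");
         // step 3: subscribe to the topic with a selector (and an explicit id)
         send(out, "SUBSCRIBE\ndestination:jms.topic.example\nid:sub-1\nselector:color = 'red'\n\n");
         // step 4: publish a matching message
         send(out, "SEND\ndestination:jms.topic.example\ncolor:red\n\nhello");
         // step 5: unsubscribe; messages sent afterwards should no longer accumulate
         send(out, "UNSUBSCRIBE\nid:sub-1\n\n");
         // step 6: publish again and watch the QueueImpl message references on the broker
         send(out, "SEND\ndestination:jms.topic.example\ncolor:red\n\nhello again");
         Thread.sleep(1000);
      }
   }

   // every STOMP frame is terminated by a NUL byte
   private static void send(OutputStream out, String frame) throws Exception {
      out.write((frame + "\u0000").getBytes(StandardCharsets.UTF_8));
      out.flush();
   }
}
{code}

The id header on SUBSCRIBE/UNSUBSCRIBE is the subscription ID asked about further down the thread.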



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-22 Thread Mitchell Ackerman (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell Ackerman updated ARTEMIS-741:
--
Attachment: dump-10.89.1.31.pcap

See packet 39 for an example of a subscription with an ID

> memory leak when using STOMP protocol
> -
>
> Key: ARTEMIS-741
> URL: https://issues.apache.org/jira/browse/ARTEMIS-741
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, Stomp
>Affects Versions: 1.4.0
> Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
> Windows
>Reporter: Mitchell Ackerman
>Assignee: Justin Bertram
> Attachments: dump-10.89.1.31.pcap
>
>
> ArtemisMQ exhibits a memory leak when using the STOMP protocol.
> Steps to reproduce:
> 1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
> 2. Connect to the server using the STOMP protocol
> 3. Subscribe to the topic with a selector 
> 4. publish some messages to the topic that match the selector (this step may 
> not be necessary)
> 5. Unsubscribe from the topic
> 6. publish some messages to the topic that match the selector
> The messages published after the unsubscribe are retained in a QueueImpl 
> object, messageReferences queue and are never cleaned up unless the client 
> disconnects.  The QueueImpl object has 0 Consumers (ConsumerList size is 0), 
> and the QueueImpl object retains the filter from the subscription.
> See also 
> http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-22 Thread Justin Bertram (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514588#comment-15514588
 ] 

Justin Bertram commented on ARTEMIS-741:


When you subscribe to the topic are you passing an ID?

> memory leak when using STOMP protocol
> -
>
> Key: ARTEMIS-741
> URL: https://issues.apache.org/jira/browse/ARTEMIS-741
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, Stomp
>Affects Versions: 1.4.0
> Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
> Windows
>Reporter: Mitchell Ackerman
>Assignee: Justin Bertram
>
> ArtemisMQ exhibits a memory leak when using the STOMP protocol.
> Steps to reproduce:
> 1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
> 2. Connect to the server using the STOMP protocol
> 3. Subscribe to the topic with a selector 
> 4. publish some messages to the topic that match the selector (this step may 
> not be necessary)
> 5. Unsubscribe from the topic
> 6. publish some messages to the topic that match the selector
> The messages published after the unsubscribe are retained in a QueueImpl 
> object, messageReferences queue and are never cleaned up unless the client 
> disconnects.  The QueueImpl object has 0 Consumers (ConsumerList size is 0), 
> and the QueueImpl object retains the filter from the subscription.
> See also 
> http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-22 Thread Mitchell Ackerman (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514573#comment-15514573
 ] 

Mitchell Ackerman commented on ARTEMIS-741:
---

1.1, but it occurs with 1.0 also

> memory leak when using STOMP protocol
> -
>
> Key: ARTEMIS-741
> URL: https://issues.apache.org/jira/browse/ARTEMIS-741
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, Stomp
>Affects Versions: 1.4.0
> Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
> Windows
>Reporter: Mitchell Ackerman
>Assignee: Justin Bertram
>
> ArtemisMQ exhibits a memory leak when using the STOMP protocol.
> Steps to reproduce:
> 1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
> 2. Connect to the server using the STOMP protocol
> 3. Subscribe to the topic with a selector 
> 4. publish some messages to the topic that match the selector (this step may 
> not be necessary)
> 5. Unsubscribe from the topic
> 6. publish some messages to the topic that match the selector
> The messages published after the unsubscribe are retained in a QueueImpl 
> object, messageReferences queue and are never cleaned up unless the client 
> disconnects.  The QueueImpl object has 0 Consumers (ConsumerList size is 0), 
> and the QueueImpl object retains the filter from the subscription.
> See also 
> http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-748) AddressSize show a negative number.

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fábio Gomes dos Santos updated ARTEMIS-748:
---
Affects Version/s: 1.3.0

> AddressSize show a negative number. 
> 
>
> Key: ARTEMIS-748
> URL: https://issues.apache.org/jira/browse/ARTEMIS-748
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Fábio Gomes dos Santos
>
> AddressSize shows a negative number on JMX.
> It is an MBean, and its path is:
> {code}
> org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
> {code}
> I don't know if this is really a bug or whether this number is expected.
> The queue has no messages, but it returns this:
> {code}
> Name         Value     Type   Display Name   Update Interval   Description
> AddressSize  -3636362  long   AddressSize    -1                N/A
> {code}
> I think this occurs after the queue starts paging, but I'm not sure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-748) AddressSize show a negative number.

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fábio Gomes dos Santos updated ARTEMIS-748:
---
Description: 
AddressSize shows a negative number on jmx
Is a Mbean, and his path is:

{code}
org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
 {code}

i don't know if this is really a bug or this number is expected.

The queue as no message, but return this:
{code}NameValue   TypeDisplay NameUpdate Interval Description
AddressSize -3636362longAddressSize -1  N/A
{code}

I think this occurs after the queue starts to paging. But not sure...

  was:
AddressSize shows a negative number on jmx
Is a Mbean, and his path is:

{code}
org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
 {code}

i don't know if this is really a bug or this number is expected.

The queue as no message, but return this:
{code}NameValue   TypeDisplay NameUpdate Interval Description
AddressSize -3636362longAddressSize -1  N/A
{code}

I think this occurs after the queue start to paging. But not sure...


> AddressSize show a negative number. 
> 
>
> Key: ARTEMIS-748
> URL: https://issues.apache.org/jira/browse/ARTEMIS-748
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Fábio Gomes dos Santos
>
> AddressSize shows a negative number on JMX.
> It is an MBean, and its path is:
> {code}
> org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
> {code}
> I don't know if this is really a bug or whether this number is expected.
> The queue has no messages, but it returns this:
> {code}
> Name         Value     Type   Display Name   Update Interval   Description
> AddressSize  -3636362  long   AddressSize    -1                N/A
> {code}
> I think this occurs after the queue starts paging, but I'm not sure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-748) AddressSize show a negative number.

2016-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fábio Gomes dos Santos updated ARTEMIS-748:
---
Description: 
AddressSize shows a negative number on jmx
Is a Mbean, and his path is:

{code}
org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
 {code}

i don't know if this is really a bug or this number is expected.

The queue as no message, but return this:
{code}NameValue   TypeDisplay NameUpdate Interval Description
AddressSize -3636362longAddressSize -1  N/A
{code}

I think this occurs after the queue start to paging. But not sure...

  was:
AddressSize shows a negative number on jmx
Is a Mbean, and his path is:

{code}
org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
 {code}

i don't know if this is really a bug or this number is expected.

The queue as no message, but return this:
{code}NameValue   TypeDisplay NameUpdate Interval Description
AddressSize -3636362longAddressSize -1  N/A
{code}

O think this occurs after the queue start to paging. But not sure...


> AddressSize show a negative number. 
> 
>
> Key: ARTEMIS-748
> URL: https://issues.apache.org/jira/browse/ARTEMIS-748
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Fábio Gomes dos Santos
>
> AddressSize shows a negative number on JMX.
> It is an MBean, and its path is:
> {code}
> org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
> {code}
> I don't know if this is really a bug or whether this number is expected.
> The queue has no messages, but it returns this:
> {code}
> Name         Value     Type   Display Name   Update Interval   Description
> AddressSize  -3636362  long   AddressSize    -1                N/A
> {code}
> I think this occurs after the queue starts paging, but I'm not sure.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-748) AddressSize show a negative number.

2016-09-22 Thread JIRA
Fábio Gomes dos Santos created ARTEMIS-748:
--

 Summary: AddressSize show a negative number. 
 Key: ARTEMIS-748
 URL: https://issues.apache.org/jira/browse/ARTEMIS-748
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Fábio Gomes dos Santos


AddressSize shows a negative number on JMX.
Basically this information is provided by the JMX interface.
It is an MBean, and its path is:

{code}
org.apache.activemq.artemis:type=Broker,brokerName=\"{#ARTEMIS_BROKER}\",module=Core,serviceType=Address,name=\"jms.queue.{#QUEUE_NAME}\"",AddressSize
{code}

I don't know if this is really a bug or whether this number is expected.

The queue has no messages, but it returns this:
{code}
Name         Value     Type   Display Name   Update Interval   Description
AddressSize  -3636362  long   AddressSize    -1                N/A
{code}

I think this occurs after the queue starts paging, but I'm not sure.
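
For reference, a minimal sketch of reading that attribute over JMX, assuming a default RMI connector; the service URL, broker name and queue name below are placeholders:

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AddressSizeCheck {

   public static void main(String[] args) throws Exception {
      JMXServiceURL url =
         new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
      JMXConnector connector = JMXConnectorFactory.connect(url);
      try {
         MBeanServerConnection connection = connector.getMBeanServerConnection();
         ObjectName addressObjectName = new ObjectName(
            "org.apache.activemq.artemis:type=Broker,brokerName=\"mybroker\","
               + "module=Core,serviceType=Address,name=\"jms.queue.exampleQueue\"");
         // an empty, non-paging address is expected to report a value near 0, never negative
         Long addressSize = (Long) connection.getAttribute(addressObjectName, "AddressSize");
         System.out.println("AddressSize = " + addressSize);
      } finally {
         connector.close();
      }
   }
}
{code}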



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5618) Infinite loop in log replay with Replicated LevelDB

2016-09-22 Thread Pablo Lozano (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514423#comment-15514423
 ] 

Pablo Lozano commented on AMQ-5618:
---

I have done some digging around, and it seems that the loop will stop sooner or 
later, eventually at least.
I don't have the Replicated LevelDB setup at hand right now, but from what I can 
see the broker has basically lost track of the last position in the LevelDB 
records.

From the logs, in my case it would try to iterate from 0 to the last position it 
knows exists, so it would iterate until 32610993430.
The fix from AMQ-5300 does not seem to be executed, because this replay of the 
LevelDB logs is triggered by a call which bypasses that fix. That fix is 
basically not to start replaying from 0 but from the last registered record.

{noformat}
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] LevelDBClient  - Could not load message 
seq: 162542 from DataLocator(11e952fc, 1754)
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] RecordLog  - No reader available for 
position: 11e999ce, log_infos: 
{32610993430=LogInfo(/m2/tomcat7.0/work/navigator-mail-mq/6009/work/repDB/000797c44516.log,32610993430,0)}
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] LevelDBClient  - Could not load message 
seq: 162552 from DataLocator(11e999ce, 1754)
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] RecordLog  - No reader available for 
position: 11e9a7f8, log_infos: 
{32610993430=LogInfo(/m2/tomcat7.0/work/navigator-mail-mq/6009/work/repDB/000797c44516.log,32610993430,0)}
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] LevelDBClient  - Could not load message 
seq: 162554 from DataLocator(11e9a7f8, 1754)
2015-01-29 19:42:55,740 -0600 WARN  56426 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] RecordLog  - No reader available for 
position: 11e9b622, log_infos: 
{32610993430=LogInfo(/m2/tomcat7.0/work/navigator-mail-mq/6009/work/repDB/000797c44516.log,32610993430,0)}
2015-01-29 19:42:55,741 -0600 WARN  56427 [ActiveMQ 
BrokerService[mailsystemBroker] Task-2] LevelDBClient  - Could not load message 
seq: 162556 from DataLocator(11e9b622, 1754)
{noformat}

I lost most of my log files from this issue, but if someone can attach theirs, 
ideally at TRACE level and if possible with the LevelDB store included, I may be 
able to take a deeper look. My Scala is not the best, but I think I can help 
track down the root of the issue.

> Infinite loop in log replay with Replicated LevelDB
> ---
>
> Key: AMQ-5618
> URL: https://issues.apache.org/jira/browse/AMQ-5618
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.0, 5.11.1
> Environment: Linux, Google Compute Engine
>Reporter: Artem Karpenko
>Priority: Critical
>
> This is very similar to AMQ-5300 except that I use replicatedLevelDB 
> persistence adapter and in order to reproduce I don't have to delete any 
> index files.
> Setup: 1 ZK instance, 3 AMQ nodes.
> One of the AMQ configs:
> {code}
> <replicatedLevelDB replicas="3"
> bind="tcp://0.0.0.0:61619"
> zkAddress="instance-6:2181"
> zkPath="/activemq/leveldb-stores"
> hostname="instance-7" />
> {code}
> Difference between nodes is only in hostname attribute.
> The way to reproduce is almost the same as in AMQ-5300: 
> # Produce lots of messages to generate several log files in leveldb data 
> directory.
> # Consume _some_ messages until you see "Deleting log" in activemq.log.
> # Restart master. Wait for system to rebalance itself. Everything's fine at 
> this point.
> # Restart the second master.
> # Observe the massive (infinite?) logging on slave and relatively calm but 
> still possibly infinite logging on master.
> This is what the first master logs after it's restarted:
> {code}
> 2015-02-25 21:37:08,338 | DEBUG | Download session connected... | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:08,582 | INFO  | Slave skipping download of: 
> log/190be289.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,099 | INFO  | Slave skipping download of: 
> log/0642f848.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,411 | INFO  | Slave skipping download of: 
> log/0c85f06d.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 

[jira] [Commented] (ARTEMIS-747) Multiple CDATA events during import fails

2016-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514367#comment-15514367
 ] 

ASF GitHub Bot commented on ARTEMIS-747:


GitHub user jbertram opened a pull request:

https://github.com/apache/activemq-artemis/pull/791

ARTEMIS-747 multiple CDATA events on import fails



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jbertram/activemq-artemis ARTEMIS-747

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/791.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #791


commit 5838b2c11c0776303f912e4738d6ac4448d22988
Author: jbertram 
Date:   2016-09-16T15:25:08Z

ARTEMIS-747 multiple CDATA events on import fails




> Multiple CDATA events during import fails
> -
>
> Key: ARTEMIS-747
> URL: https://issues.apache.org/jira/browse/ARTEMIS-747
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>
> Message bodies are written to XML as Base64 encoded CDATA elements. Some 
> parser implementations won't read the entire CDATA element at once (e.g. 
> Woodstox) so it's possible for multiple CDATA events to be combined into a 
> single Base64 encoded string.  You can't decode bits and pieces of each 
> CDATA.  Each CDATA has to be decoded in its entirety.  The current importer 
> doesn't deal with this properly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ARTEMIS-747) Multiple CDATA events during import fails

2016-09-22 Thread Justin Bertram (JIRA)
Justin Bertram created ARTEMIS-747:
--

 Summary: Multiple CDATA events during import fails
 Key: ARTEMIS-747
 URL: https://issues.apache.org/jira/browse/ARTEMIS-747
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 1.4.0
Reporter: Justin Bertram
Assignee: Justin Bertram


Message bodies are written to XML as Base64 encoded CDATA elements. Some parser 
implementations won't read the entire CDATA element at once (e.g. Woodstox) so 
it's possible for multiple CDATA events to be combined into a single Base64 
encoded string.  You can't decode bits and pieces of each CDATA.  Each CDATA 
has to be decoded in its entirety.  The current importer doesn't deal with this 
properly.
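
To make the failure mode concrete, here is a minimal sketch (not the patch in the pull request) of the kind of handling the importer needs: accumulate every CHARACTERS/CDATA event of the element and only Base64-decode the combined text once the element ends.

{code}
import java.util.Base64;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public final class CdataBodyReader {

   private CdataBodyReader() {
   }

   // Collect all character/CDATA events of the current element, then decode once.
   // Decoding each event separately fails because Base64 quanta can span event boundaries.
   static byte[] readBase64Body(XMLStreamReader reader) throws XMLStreamException {
      StringBuilder encoded = new StringBuilder();
      while (reader.hasNext()) {
         int event = reader.next();
         if (event == XMLStreamConstants.CDATA || event == XMLStreamConstants.CHARACTERS) {
            encoded.append(reader.getText());
         } else if (event == XMLStreamConstants.END_ELEMENT) {
            break;
         }
      }
      // the MIME decoder tolerates line breaks inside the encoded text
      return Base64.getMimeDecoder().decode(encoded.toString());
   }
}
{code}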



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-745) Messages sent to a jms topic address are not expiring in temporary queue created via core API

2016-09-22 Thread Ruben Cala (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruben Cala updated ARTEMIS-745:
---
Summary: Messages sent to a jms topic address are not expiring in temporary 
queue created via core API  (was: Messages sent to a jms topi address are not 
expiring in temporary queue created via core API)

> Messages sent to a jms topic address are not expiring in temporary queue 
> created via core API
> -
>
> Key: ARTEMIS-745
> URL: https://issues.apache.org/jira/browse/ARTEMIS-745
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.1.0
> Environment: Redhat Linux 6.2
>Reporter: Ruben Cala
>
> I am publishing messages to a topic address (jms.topic.).  I set the message 
> expiration to be 2 seconds (publishing via the core API).  I have two 
> consumers, one using the core API, one using the generic STOMP protocol.  The 
> core API consumer creates a temporary queue to receive the messages from the 
> address.  The STOMP consumer relies on the auto-generated temporary queue 
> mechanism provided by the broker.  To investigate slow consumer scenarios, 
> both consumers are not acknowledging the messages.
> In JConsole for the STOMP consumer, its queue's MessagesAcknowledged attribute 
> count rises with the MessagesAdded count, while the MessageCount stays 
> constant (a low number, usually around 5).
> For the core API consumer, however, its queue's MessagesAcknowledged attribute 
> count stays at 0, while its MessageCount attribute increases with the 
> MessagesAdded number.  This eventually causes the broker to run out of memory.
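
A minimal sketch of the core-API side of that setup; the address, queue name and payload are placeholders, and the connection settings assume a local broker:

{code}
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientConsumer;
import org.apache.activemq.artemis.api.core.client.ClientMessage;
import org.apache.activemq.artemis.api.core.client.ClientProducer;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;

public class ExpiringTopicExample {

   public static void main(String[] args) throws Exception {
      ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
         new TransportConfiguration(NettyConnectorFactory.class.getName()));
      ClientSessionFactory factory = locator.createSessionFactory();
      ClientSession session = factory.createSession();
      try {
         // the core-API consumer: a temporary queue bound to the topic address
         session.createTemporaryQueue("jms.topic.example", "temp-consumer-queue");
         ClientConsumer consumer = session.createConsumer("temp-consumer-queue");
         session.start(); // the consumer deliberately never acknowledges anything

         // the publisher: the message expires two seconds after being sent
         ClientProducer producer = session.createProducer("jms.topic.example");
         ClientMessage message = session.createMessage(true);
         message.setExpiration(System.currentTimeMillis() + 2000);
         message.getBodyBuffer().writeString("payload");
         producer.send(message);
      } finally {
         session.close();
         factory.close();
         locator.close();
      }
   }
}
{code}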



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (ARTEMIS-746) Messages sent to a jms topic address are not expiring and are remaining in the queue of core api consumer

2016-09-22 Thread Ruben Cala (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruben Cala closed ARTEMIS-746.
--
Resolution: Duplicate

Dup of ARTEMIS-745

> Messages sent to a jms topic address are not expiring and are remaining in 
> the queue of core api consumer
> -
>
> Key: ARTEMIS-746
> URL: https://issues.apache.org/jira/browse/ARTEMIS-746
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.1.0
> Environment: Redhat Linux 6.2
>Reporter: Ruben Cala
>
> I am pushing messages to a JMS topic address (jms.topic.) on a 
> standalone server. The publisher is using the core API. There are two 
> consumers, one using the core API, the other using the generic STOMP protocol. 
> The core API consumer creates a temporary queue to get the messages from the 
> address. The STOMP client relies on the temporary queue created for it by the 
> broker via the JMS topic functionality. To test slow client scenarios, both 
> consumers are not acknowledging the messages received.
> In JConsole for the queues, I see the MessageCount attribute for the STOMP 
> client stay at a low constant number (2), while the MessagesAcknowledged 
> number climbs with the MessagesAdded attribute, as expected. For the core API 
> consumer, however, the MessageCount matches the MessagesAdded attribute, 
> while the MessagesAcknowledged count remains at zero.
> I see a null pointer exception in artemis.log:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6439) Active MQ Failover Exception

2016-09-22 Thread akhil (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513953#comment-15513953
 ] 

akhil commented on AMQ-6439:


Hi Tim,

I agree with you, and I am making sure that it is reproducible; I will provide 
the steps as well. I am going to test with the latest version, update the issue, 
and remove the references to the external mailing list threads.

Thanks,
Akhil.

> Active MQ Failover Exception
> 
>
> Key: AMQ-6439
> URL: https://issues.apache.org/jira/browse/AMQ-6439
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, networkbridge, Transport
>Affects Versions: 5.11.4
> Environment: Mac OS , Active MQ 5.11 , Java 8 Producer , Java 8 
> Consumer
>Reporter: akhil
>
> I am using common file storage as a KahaDB mount for the two brokers, and 
> they are not networked together. I am running a local producer and consumer 
> against the two brokers using the failover string. The master/slave topology 
> works as expected with regard to the DB locks. The producer is able to switch 
> its connection from the active master to the slave (or slave to master) during 
> failover, but the consumer has an issue: it goes into an idle state after a 
> certain time and never reconnects to the new master broker. You can find more 
> details, the example and screenshots in the forum thread below. The consumer 
> channel is getting blocked, and you can find more info here:
> http://activemq.2283324.n4.nabble.com/JMS-exception-during-the-Failover-td4716047.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5618) Infinite loop in log replay with Replicated LevelDB

2016-09-22 Thread William Brendel (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513740#comment-15513740
 ] 

William Brendel commented on AMQ-5618:
--

Another data point: my team also encountered this issue recently, in June 2016, 
while evaluating replicated LevelDB storage for our application. We were using 
AMQ 5.13.3 and ZK 3.4.8 with 3 instances. Like Jonathan G's, our database seemed 
to be corrupted beyond repair, which caused us to eliminate replicated LevelDB 
as an option.

> Infinite loop in log replay with Replicated LevelDB
> ---
>
> Key: AMQ-5618
> URL: https://issues.apache.org/jira/browse/AMQ-5618
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.0, 5.11.1
> Environment: Linux, Google Compute Engine
>Reporter: Artem Karpenko
>Priority: Critical
>
> This is very similar to AMQ-5300 except that I use replicatedLevelDB 
> persistence adapter and in order to reproduce I don't have to delete any 
> index files.
> Setup: 1 ZK instance, 3 AMQ nodes.
> One of the AMQ configs:
> {code}
> <replicatedLevelDB replicas="3"
> bind="tcp://0.0.0.0:61619"
> zkAddress="instance-6:2181"
> zkPath="/activemq/leveldb-stores"
> hostname="instance-7" />
> {code}
> Difference between nodes is only in hostname attribute.
> The way to reproduce is almost the same as in AMQ-5300: 
> # Produce lots of messages to generate several log files in leveldb data 
> directory.
> # Consume _some_ messages until you see "Deleting log" in activemq.log.
> # Restart master. Wait for system to rebalance itself. Everything's fine at 
> this point.
> # Restart the second master.
> # Observe the massive (infinite?) logging on slave and relatively calm but 
> still possibly infinite logging on master.
> This is what the first master logs after it's restarted:
> {code}
> 2015-02-25 21:37:08,338 | DEBUG | Download session connected... | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:08,582 | INFO  | Slave skipping download of: 
> log/190be289.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,099 | INFO  | Slave skipping download of: 
> log/0642f848.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,411 | INFO  | Slave skipping download of: 
> log/0c85f06d.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,838 | INFO  | Slave skipping download of: 
> log/12c8e921.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,842 | INFO  | Slave requested: 
> 1c9373b4.index/CURRENT | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,846 | INFO  | Slave requested: 
> 1c9373b4.index/MANIFEST-02 | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,850 | INFO  | Slave requested: 
> 1c9373b4.index/03.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,857 | INFO  | Attaching... Downloaded 0.02/95.65 kb and 
> 1/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,859 | INFO  | Attaching... Downloaded 0.06/95.65 kb and 
> 2/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,861 | INFO  | Attaching... Downloaded 95.65/95.65 kb and 
> 3/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,862 | INFO  | Attached | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,878 | DEBUG | Taking a snapshot of the current index: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/1c9373b4.index
>  | org.apache.activemq.leveldb.LevelDBClient | Thread-2
> 2015-02-25 21:37:10,352 | DEBUG | Recovering from last index snapshot at: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/dirty.index | 
> org.apache.activemq.leveldb.LevelDBClient | Thread-2
> {code}
> Right after that everything seems fine. But as soon as I stop the new master, 
> another new master (that would be the third one) logs
> {code}
> 2015-02-25 21:38:43,876 | INFO  | Promoted to master | 
> org.apache.activemq.leveldb.replicated.MasterElector | main-EventThread
> 2015-02-25 21:38:43,894 | INFO  | Using the pure 

[jira] [Commented] (AMQ-5618) Infinite loop in log replay with Replicated LevelDB

2016-09-22 Thread Jonathan G (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513715#comment-15513715
 ] 

Jonathan G commented on AMQ-5618:
-

Hello Pablo,

Yes, we have AMQ 5.14 and ZooKeeper 3.4.8, with 3 instances of each in a 
master/slave setup.
We found the very same replicated LevelDB corruption that the ticket reporter 
had. Our DB was corrupted beyond repair.

> Infinite loop in log replay with Replicated LevelDB
> ---
>
> Key: AMQ-5618
> URL: https://issues.apache.org/jira/browse/AMQ-5618
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.0, 5.11.1
> Environment: Linux, Google Compute Engine
>Reporter: Artem Karpenko
>Priority: Critical
>
> This is very similar to AMQ-5300 except that I use replicatedLevelDB 
> persistence adapter and in order to reproduce I don't have to delete any 
> index files.
> Setup: 1 ZK instance, 3 AMQ nodes.
> One of the AMQ configs:
> {code}
> <replicatedLevelDB replicas="3"
> bind="tcp://0.0.0.0:61619"
> zkAddress="instance-6:2181"
> zkPath="/activemq/leveldb-stores"
> hostname="instance-7" />
> {code}
> Difference between nodes is only in hostname attribute.
> The way to reproduce is almost the same as in AMQ-5300: 
> # Produce lots of messages to generate several log files in leveldb data 
> directory.
> # Consume _some_ messages until you see "Deleting log" in activemq.log.
> # Restart master. Wait for system to rebalance itself. Everything's fine at 
> this point.
> # Restart the second master.
> # Observe the massive (infinite?) logging on slave and relatively calm but 
> still possibly infinite logging on master.
> This is what the first master logs after it's restarted:
> {code}
> 2015-02-25 21:37:08,338 | DEBUG | Download session connected... | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:08,582 | INFO  | Slave skipping download of: 
> log/190be289.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,099 | INFO  | Slave skipping download of: 
> log/0642f848.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,411 | INFO  | Slave skipping download of: 
> log/0c85f06d.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,838 | INFO  | Slave skipping download of: 
> log/12c8e921.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,842 | INFO  | Slave requested: 
> 1c9373b4.index/CURRENT | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,846 | INFO  | Slave requested: 
> 1c9373b4.index/MANIFEST-02 | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,850 | INFO  | Slave requested: 
> 1c9373b4.index/03.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,857 | INFO  | Attaching... Downloaded 0.02/95.65 kb and 
> 1/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,859 | INFO  | Attaching... Downloaded 0.06/95.65 kb and 
> 2/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,861 | INFO  | Attaching... Downloaded 95.65/95.65 kb and 
> 3/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,862 | INFO  | Attached | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,878 | DEBUG | Taking a snapshot of the current index: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/1c9373b4.index
>  | org.apache.activemq.leveldb.LevelDBClient | Thread-2
> 2015-02-25 21:37:10,352 | DEBUG | Recovering from last index snapshot at: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/dirty.index | 
> org.apache.activemq.leveldb.LevelDBClient | Thread-2
> {code}
> Right after that everything seems fine. But as soon as I stop the new master, 
> another new master (that would be the third one) logs
> {code}
> 2015-02-25 21:38:43,876 | INFO  | Promoted to master | 
> org.apache.activemq.leveldb.replicated.MasterElector | main-EventThread
> 2015-02-25 21:38:43,894 | INFO  | Using the pure java LevelDB implementation. 
> | org.apache.activemq.leveldb.LevelDBClient | ActiveMQ 
> BrokerService[localhost] Task-5
> 

[jira] [Commented] (AMQ-6439) Active MQ Failover Exception

2016-09-22 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513684#comment-15513684
 ] 

Timothy Bish commented on AMQ-6439:
---

First, it's best not to point to mailing list threads in JIRA issues; provide 
all the relevant information in the issue itself, as the list messages can go 
missing.

Second, it is usually a good idea to try the latest broker release before 
reporting an issue, as there are quite a few fixes in each release and you 
seem to be running a rather old version.

If you think there is an actual bug, we'd need a way to reproduce it; otherwise 
we will have to close the issue as incomplete, given that we can't just guess 
what might be wrong.  If you need support, the mailing lists are the place to 
ask questions; JIRA is for real bugs.

> Active MQ Failover Exception
> 
>
> Key: AMQ-6439
> URL: https://issues.apache.org/jira/browse/AMQ-6439
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker, networkbridge, Transport
>Affects Versions: 5.11.4
> Environment: Mac OS , Active MQ 5.11 , Java 8 Producer , Java 8 
> Consumer
>Reporter: akhil
>
> I am using common file storage as a KahaDB mount for the two brokers, and 
> they are not networked together. I am running a local producer and consumer 
> against the two brokers using the failover string. The master/slave topology 
> works as expected with regard to the DB locks. The producer is able to switch 
> its connection from the active master to the slave (or slave to master) during 
> failover, but the consumer has an issue: it goes into an idle state after a 
> certain time and never reconnects to the new master broker. You can find more 
> details, the example and screenshots in the forum thread below. The consumer 
> channel is getting blocked, and you can find more info here:
> http://activemq.2283324.n4.nabble.com/JMS-exception-during-the-Failover-td4716047.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-743) Default the queue address to the queue name

2016-09-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513485#comment-15513485
 ] 

ASF GitHub Bot commented on ARTEMIS-743:


Github user asfgit closed the pull request at:

https://github.com/apache/activemq-artemis/pull/787


> Default the queue address to the queue name
> ---
>
> Key: ARTEMIS-743
> URL: https://issues.apache.org/jira/browse/ARTEMIS-743
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Francesco Nigro
>  Labels: features
>
> In many instances, users will want the queue name and address to be the same. 
> The latter could default to the queue name, and then it would be safe to omit 
> the address in the queue config.
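
In other words, the requested defaulting rule is simply the following (a hypothetical sketch, not the actual Artemis code):

{code}
public final class QueueAddressDefaults {

   private QueueAddressDefaults() {
   }

   // if no address is configured for the queue, fall back to the queue name
   static String resolveAddress(String configuredAddress, String queueName) {
      if (configuredAddress == null || configuredAddress.trim().isEmpty()) {
         return queueName;
      }
      return configuredAddress;
   }
}
{code}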



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-743) Default the queue address to the queue name

2016-09-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513484#comment-15513484
 ] 

ASF subversion and git services commented on ARTEMIS-743:
-

Commit c002cf13b84308549db99c07918bf1075a5b75be in activemq-artemis's branch 
refs/heads/master from [~nigro@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=c002cf1 ]

ARTEMIS-743 Created QueueConfig that replace and enable additional behaviours 
on QueueFactory.
Added Filter predicate.


> Default the queue address to the queue name
> ---
>
> Key: ARTEMIS-743
> URL: https://issues.apache.org/jira/browse/ARTEMIS-743
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Francesco Nigro
>  Labels: features
>
> In many instances, users will want the queue name and address to be the same. 
> The latter could default to the queue name, and then it would be safe to omit 
> the address in the queue config.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-6437) Old KahaDB log files not removed

2016-09-22 Thread Christopher L. Shannon (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513043#comment-15513043
 ] 

Christopher L. Shannon commented on AMQ-6437:
-

You can view the archives and find your post on the mailing list along with the 
responses: http://mail-archives.apache.org/mod_mbox/activemq-users/

> Old KahaDB log files not removed
> 
>
> Key: AMQ-6437
> URL: https://issues.apache.org/jira/browse/AMQ-6437
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.13.0, 5.14.0
>Reporter: Steve Hill
>
> Old messages get stuck in KahaDB and never get cleaned up.  The oldest file 
> is from July 14, 2016.  Since then we have restarted ActiveMQ multiple times, 
> along with all servers in the environment. This is occurring in our test 
> environment as well as the production environment.  Currently the only 
> solution is to stop ActiveMQ and remove the KahaDB log directory.
> The following is the log of a checkpoint from today, 9/21/2016:
>  2016-09-21 13:27:34,313 [eckpoint Worker] DEBUG MessageDatabase  
>   - Checkpoint started.
>  2016-09-21 13:27:34,322 [eckpoint Worker] TRACE MessageDatabase  
>   - Last update: 1438:1243461, full gc candidates set: [674, 691, 699, 705, 
> 711, 790, 858, 865, 866, 877, 888, 899, 904, 909, 910, 918, 930, 939, 940, 
> 949, 960, 968, 975, 980, 981, 1051, 1066, 1067, 1074, 1075, 1082, 1083, 1093, 
> , 1126, 1137, 1138, 1155, 1167, 1277, 1279, 1280, 1281, 1282, 1283, 1284, 
> 1288, 1289, 1291, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1307, 1308, 
> 1309, 1310, 1311, 1312, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 1322, 
> 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1336, 1365, 1366, 1367, 1368, 
> 1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 1380, 1387, 
> 1388, 1389, 1390, 1391, 1392, 1393, 1394, 1395, 1396, 1397, 1398, 1399, 1400, 
> 1401, 1402, 1403, 1404, 1405, 1406, 1407, 1408, 1409, 1410, 1411, 1412, 1413, 
> 1414, 1415, 1416, 1417, 1418, 1419, 1420, 1421, 1423, 1424, 1425, 1426, 1427, 
> 1428, 1429, 1430, 1431, 1432, 1433, 1434, 1435, 1436, 1437, 1438, 1439]
>  2016-09-21 13:27:34,322 [eckpoint Worker] TRACE MessageDatabase  
>   - gc candidates after producerSequenceIdTrackerLocation:1438, [674, 691, 
> 699, 705, 711, 790, 858, 865, 866, 877, 888, 899, 904, 909, 910, 918, 930, 
> 939, 940, 949, 960, 968, 975, 980, 981, 1051, 1066, 1067, 1074, 1075, 1082, 
> 1083, 1093, , 1126, 1137, 1138, 1155, 1167, 1277, 1279, 1280, 1281, 1282, 
> 1283, 1284, 1288, 1289, 1291, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 
> 1307, 1308, 1309, 1310, 1311, 1312, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 
> 1321, 1322, 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1336, 1365, 1366, 
> 1367, 1368, 1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 
> 1380, 1387, 1388, 1389, 1390, 1391, 1392, 1393, 1394, 1395, 1396, 1397, 1398, 
> 1399, 1400, 1401, 1402, 1403, 1404, 1405, 1406, 1407, 1408, 1409, 1410, 1411, 
> 1412, 1413, 1414, 1415, 1416, 1417, 1418, 1419, 1420, 1421, 1423, 1424, 1425, 
> 1426, 1427, 1428, 1429, 1430, 1431, 1432, 1433, 1434, 1435, 1436, 1437, 1439]
>  2016-09-21 13:27:34,323 [eckpoint Worker] TRACE MessageDatabase  
>   - gc candidates after ackMessageFileMapLocation:1438, [674, 691, 699, 705, 
> 711, 790, 858, 865, 866, 877, 888, 899, 904, 909, 910, 918, 930, 939, 940, 
> 949, 960, 968, 975, 980, 981, 1051, 1066, 1067, 1074, 1075, 1082, 1083, 1093, 
> , 1126, 1137, 1138, 1155, 1167, 1277, 1279, 1280, 1281, 1282, 1283, 1284, 
> 1288, 1289, 1291, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1307, 1308, 
> 1309, 1310, 1311, 1312, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 1322, 
> 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1336, 1365, 1366, 1367, 1368, 
> 1369, 1370, 1371, 1372, 1373, 1374, 1375, 1376, 1377, 1378, 1379, 1380, 1387, 
> 1388, 1389, 1390, 1391, 1392, 1393, 1394, 1395, 1396, 1397, 1398, 1399, 1400, 
> 1401, 1402, 1403, 1404, 1405, 1406, 1407, 1408, 1409, 1410, 1411, 1412, 1413, 
> 1414, 1415, 1416, 1417, 1418, 1419, 1420, 1421, 1423, 1424, 1425, 1426, 1427, 
> 1428, 1429, 1430, 1431, 1432, 1433, 1434, 1435, 1436, 1437, 1439]
>  2016-09-21 13:27:34,324 [eckpoint Worker] TRACE MessageDatabase  
>   - gc candidates after tx range:[980:15980880, 980:15980880], [674, 691, 
> 699, 705, 711, 790, 858, 865, 866, 877, 888, 899, 904, 909, 910, 918, 930, 
> 939, 940, 949, 960, 968, 975, 981, 1051, 1066, 1067, 1074, 1075, 1082, 1083, 
> 1093, , 1126, 1137, 1138, 1155, 1167, 1277, 1279, 1280, 1281, 1282, 1283, 
> 1284, 1288, 1289, 1291, 1294, 1295, 1296, 1297, 1298, 1299, 1300, 1301, 1307, 
> 1308, 1309, 1310, 1311, 1312, 1314, 1315, 1316, 1317, 1318, 1319, 1320, 1321, 
> 1322, 1323, 1324, 1325, 1326, 1327, 1328, 1329, 1330, 1336, 1365, 1366, 1367, 
> 

[jira] [Resolved] (ARTEMIS-668) Artemis does not handle reject on AMQP with Tx and presettled messages as spec outlines

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor resolved ARTEMIS-668.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

> Artemis does not handle reject on AMQP with Tx and presettled messages as 
> spec outlines
> ---
>
> Key: ARTEMIS-668
> URL: https://issues.apache.org/jira/browse/ARTEMIS-668
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.4.0
>
>
> The spec states: 
> {quote}
> The delivered message will not be made available at the node until the 
> transaction has been
> successfully discharged. If the transaction is rolled back then the delivery 
> is not made available.
> Should the resource be unable to process the delivery it MUST NOT allow the 
> successful discharge of the associated transaction. This may be communicated by immediately 
> destroying the
> controlling link on which the transaction was declared, or by rejecting any 
> attempt to discharge
> the transaction where the fail flag is not set to true.
> {quote}
> We should add the appropriate behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ARTEMIS-668) Artemis does not handle reject on AMQP with Tx and presettled messages as spec outlines

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor reassigned ARTEMIS-668:
-

Assignee: Martyn Taylor

> Artemis does not handle reject on AMQP with Tx and presettled messages as 
> spec outlines
> ---
>
> Key: ARTEMIS-668
> URL: https://issues.apache.org/jira/browse/ARTEMIS-668
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.4.0
>
>
> The spec states: 
> {quote}
> The delivered message will not be made available at the node until the 
> transaction has been
> successfully discharged. If the transaction is rolled back then the delivery 
> is not made available.
> Should the resource be unable to process the delivery it MUST NOT allow the 
> successful discharge of the associated transaction. This may be communicated by immediately 
> destroying the
> controlling link on which the transaction was declared, or by rejecting any 
> attempt to discharge
> the transaction where the fail flag is not set to true.
> {quote}
> We should add the appropriate behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ARTEMIS-636) Implement address full BLOCK for AMQP protocol

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor reassigned ARTEMIS-636:
-

Assignee: Martyn Taylor

> Implement address full BLOCK for AMQP protocol
> --
>
> Key: ARTEMIS-636
> URL: https://issues.apache.org/jira/browse/ARTEMIS-636
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.4.0
>
>
> Artemis currently does not implement address full BLOCK semantics for the AMQP 
> protocol.  AMQP supports flow control and syntax for describing rejected 
> messages.  Artemis should make use of these features to implement an 
> appropriate BLOCK mechanism, protecting the broker from OOM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ARTEMIS-636) Implement address full BLOCK for AMQP protocol

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor resolved ARTEMIS-636.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

> Implement address full BLOCK for AMQP protocol
> --
>
> Key: ARTEMIS-636
> URL: https://issues.apache.org/jira/browse/ARTEMIS-636
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.4.0
>
>
> Artemis currently does not implement address full BLOCK semantics for the AMQP 
> protocol.  AMQP supports flow control and syntax for describing rejected 
> messages.  Artemis should make use of these features to implement an 
> appropriate BLOCK mechanism, protecting the broker from OOM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ARTEMIS-666) misleading link detach error when creating a link to an address which does not exist

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor resolved ARTEMIS-666.
---
   Resolution: Fixed
Fix Version/s: 1.5.0

> misleading link detach error when creating a link to an address which does 
> not exist
> 
>
> Key: ARTEMIS-666
> URL: https://issues.apache.org/jira/browse/ARTEMIS-666
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.5.0
>
>
> If you try to create a link (in this case I was creating a JMS 
> MessageProducer) to an AMQP address (e.g. 'queue') which does not have a 
> related node (e.g. a Queue), then the broker detaches the link with an error. 
> Fair enough.
> However, the actual link detach error returned has some issues:
> It uses an error-condition value of 'amqp:internal-error', whereas 
> 'amqp:not-found' is probably more appropriate and typically expected.
> The description says "AMQ219003: error finding temporary queue, AMQ219002: 
> target address does not exist". The link was not being created to a temporary 
> queue in either the JMS or AMQP ('dynamic address') senses, which makes the 
> description somewhat misleading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ARTEMIS-714) JDBC Store improvement

2016-09-22 Thread Martyn Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15512842#comment-15512842
 ] 

Martyn Taylor commented on ARTEMIS-714:
---

[~jmesnil] Is this now complete?

> JDBC Store improvement
> --
>
> Key: ARTEMIS-714
> URL: https://issues.apache.org/jira/browse/ARTEMIS-714
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.1.0
>Reporter: Jeff Mesnil
>
> We plan to integrate the Artemis JDBC store in our application server.
> After a code review, we saw two main improvements that would make the code 
> more flexible and easier to maintain.
> First, in our app server we have our own sophisticated way to configure access 
> to databases. We would like to be able to pass a DataSource instance to the 
> Artemis JDBC store instead of a (driver class name / URL) tuple. 
> If the DataSource object is set, we create a Connection from it; otherwise we 
> use the current code to create the connection from a class name + URL. This 
> will introduce no changes to the use of a standalone Artemis broker.
> The second improvement is to make the SQLProvider injectable instead of 
> relying on a hard-coded class provided by the Artemis jars.
> We would create an instance of the SQLProvider in our integration code and 
> pass it to the Artemis JDBC store. This will make it simpler to support new 
> types of databases (or to fix issues in the SQLProvider implementations) 
> without requiring a new release of Artemis.
> If the SQLProvider instance injected into the JDBC store is null, the current 
> code will be executed.
> Do these improvements sound correct?
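
A rough sketch of the first improvement, in plain JDBC terms and not the actual Artemis API: prefer an injected DataSource and fall back to the existing driver-class-name / URL path.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import javax.sql.DataSource;

public final class JdbcConnectionSupport {

   private JdbcConnectionSupport() {
   }

   static Connection openConnection(DataSource dataSource,
                                    String driverClassName,
                                    String jdbcUrl) throws SQLException {
      if (dataSource != null) {
         // app-server integration: the server configures (and pools) the DataSource
         return dataSource.getConnection();
      }
      try {
         // standalone broker: current behaviour, driver class name + JDBC URL
         Class.forName(driverClassName);
      } catch (ClassNotFoundException e) {
         throw new SQLException("JDBC driver not found: " + driverClassName, e);
      }
      return DriverManager.getConnection(jdbcUrl);
   }
}
{code}

The injectable SQLProvider would follow the same pattern: use the injected instance when present, otherwise fall back to the provider classes shipped in the Artemis jars.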



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ARTEMIS-723) AMQP subscriptions aren't deleted properly

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor resolved ARTEMIS-723.
---
Resolution: Fixed

> AMQP subscriptions aren't deleted properly
> --
>
> Key: ARTEMIS-723
> URL: https://issues.apache.org/jira/browse/ARTEMIS-723
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.3.0
>Reporter: Andy Taylor
>Assignee: Andy Taylor
> Fix For: 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (ARTEMIS-726) NPE in JDBCJournalStorageManager

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor resolved ARTEMIS-726.
---
   Resolution: Fixed
Fix Version/s: 1.5.0

> NPE in JDBCJournalStorageManager 
> -
>
> Key: ARTEMIS-726
> URL: https://issues.apache.org/jira/browse/ARTEMIS-726
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
> Fix For: 1.5.0
>
>
> 4:14:51,688 WARN  
> [org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl] 
> (ServerService Thread Pool -- 64) null: java.lang.NullPointerException
> at 
> org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.injectMonitor(JournalStorageManager.java:735)
> at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.injectMonitor(ActiveMQServerImpl.java:2069)
> at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.initialisePart2(ActiveMQServerImpl.java:2058)
> at 
> org.apache.activemq.artemis.core.server.impl.LiveOnlyActivation.run(LiveOnlyActivation.java:63)
> at 
> org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl.start(ActiveMQServerImpl.java:447)
> at 
> org.apache.activemq.artemis.jms.server.impl.JMSServerManagerImpl.start(JMSServerManagerImpl.java:412)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService.doStart(JMSService.java:199)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService.access$000(JMSService.java:63)
> at 
> org.wildfly.extension.messaging.activemq.jms.JMSService$1.run(JMSService.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (ARTEMIS-731) Durable subscription reconnect with different address, no-local and select does not recreate subscription queue

2016-09-22 Thread Martyn Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martyn Taylor reassigned ARTEMIS-731:
-

Assignee: Martyn Taylor

> Durable subscription reconnect with different address, no-local and select 
> does not recreate subscription queue
> ---
>
> Key: ARTEMIS-731
> URL: https://issues.apache.org/jira/browse/ARTEMIS-731
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Martyn Taylor
>Assignee: Martyn Taylor
>
> For JMS <-> AMQP mapping support, the AMQP protocol manager should detect JMS 
> capabilities and properly recreate any subscription queues on a durable 
> subscription.  As per the JMS spec, a durable subscription queue should be 
> recreated if the address, the no-local boolean, or the selector changes.
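
From the client's point of view, the behaviour under discussion is the standard JMS one, sketched below with placeholder names; the broker side must drop and recreate the backing subscription queue when the parameters change:

{code}
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

public class DurableResubscribe {

   static TopicSubscriber resubscribe(Connection connection, Topic topic) throws Exception {
      connection.setClientID("client-1");
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

      // original durable subscription: selector "color = 'red'", no-local = false
      TopicSubscriber first = session.createDurableSubscriber(topic, "sub-1", "color = 'red'", false);
      first.close();

      // same subscription name, different selector: per the JMS spec this is equivalent to
      // unsubscribing and creating a brand-new durable subscription (and queue)
      return session.createDurableSubscriber(topic, "sub-1", "color = 'blue'", false);
   }
}
{code}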



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)