[jira] [Commented] (ARTEMIS-739) Large messages failing with "(Too many open files)"

2016-09-20 Thread Damien Hollis (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508582#comment-15508582
 ] 

Damien Hollis commented on ARTEMIS-739:
---

In an attempt to track down the issue, I monitored files being opened and 
closed.  I noted that when large messages are moved to the DLQ, a copy is 
made (which involves opening the large message file), but I did not see the 
file associated with the copy being closed.

The stacktrace looks like this:
{noformat}
java.io.RandomAccessFile.open(String, int) RandomAccessFile.java
java.io.RandomAccessFile.<init>(File, String) RandomAccessFile.java:243
org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(int, boolean) 
NIOSequentialFile.java:91
org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open() 
NIOSequentialFile.java:85
org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.openFile()
 LargeServerMessageImpl.java:403
org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.validateFile()
 LargeServerMessageImpl.java:380
org.apache.activemq.artemis.core.persistence.impl.journal.JournalStorageManager.createLargeMessage(long,
 MessageInternal) JournalStorageManager.java:445
org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl.copy(long)
 LargeServerMessageImpl.java:278
org.apache.activemq.artemis.core.server.impl.ServerMessageImpl.makeCopyForExpiryOrDLA(long,
 MessageReference, boolean, boolean) ServerMessageImpl.java:214
org.apache.activemq.artemis.core.server.impl.QueueImpl.makeCopy(MessageReference,
 boolean, boolean) QueueImpl.java:2315
org.apache.activemq.artemis.core.server.impl.QueueImpl.makeCopy(MessageReference,
 boolean) QueueImpl.java:2297
org.apache.activemq.artemis.core.server.impl.QueueImpl.move(Transaction, 
SimpleString, MessageReference, boolean, boolean) QueueImpl.java:2381
org.apache.activemq.artemis.core.server.impl.QueueImpl.sendToDeadLetterAddress(Transaction,
 MessageReference, SimpleString) QueueImpl.java:2356
org.apache.activemq.artemis.core.server.impl.QueueImpl.checkRedelivery(MessageReference,
 long, boolean) QueueImpl.java:2124
org.apache.activemq.artemis.core.server.impl.RefsOperation.afterRollback(Transaction)
 RefsOperation.java:93
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl.afterRollback(List)
 TransactionImpl.java:559
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl.access$200(TransactionImpl,
 List) TransactionImpl.java:36
org.apache.activemq.artemis.core.transaction.impl.TransactionImpl$4.done() 
TransactionImpl.java:404
org.apache.activemq.artemis.core.persistence.impl.journal.OperationContextImpl$1.run()
 OperationContextImpl.java:242
org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run()
 OrderedExecutorFactory.java:103
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) 
ThreadPoolExecutor.java:1142
java.util.concurrent.ThreadPoolExecutor$Worker.run() ThreadPoolExecutor.java:617
java.lang.Thread.run() Thread.java:745
{noformat}

It would seem that somewhere around the move we should be closing or releasing 
the large message.
{noformat}
   private void move(final Transaction originalTX,
                     final SimpleString address,
                     final MessageReference ref,
                     final boolean rejectDuplicate,
                     final AckReason reason) throws Exception {
      Transaction tx;

      if (originalTX != null) {
         tx = originalTX;
      }
      else {
         // if no TX we create a new one to commit at the end
         tx = new TransactionImpl(storageManager);
      }

      ServerMessage copyMessage = makeCopy(ref, reason == AckReason.EXPIRED);

      copyMessage.setAddress(address);

      postOffice.route(copyMessage, null, tx, false, rejectDuplicate);

      acknowledge(tx, ref, reason);

      if (originalTX == null) {
         tx.commit();
      }
   }
{noformat}

I also noticed that LargeServerMessageImpl implements finalize() and releases 
its resources at that point, but if the message is still sitting in a queue, 
an active reference to the message remains and finalize() is never invoked.
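To illustrate the fix being suggested (explicit release instead of relying on finalize()), here is a minimal reference-counting sketch. The class and method names are hypothetical and are not Artemis's actual LargeServerMessageImpl API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only -- not the actual Artemis LargeServerMessageImpl.
// The idea: every copy/route acquires a reference, and the underlying
// large-message file is closed deterministically when the count reaches
// zero, instead of waiting for finalize() (which never runs while the
// queue still holds a reference to the message).
class RefCountedLargeMessage {
   private final AtomicInteger refs = new AtomicInteger(1);
   private boolean fileOpen = true; // stands in for the NIO file handle

   void acquire() {
      refs.incrementAndGet();
   }

   void release() {
      if (refs.decrementAndGet() == 0 && fileOpen) {
         fileOpen = false; // close the underlying large-message file here
      }
   }

   boolean isFileOpen() {
      return fileOpen;
   }
}
```

With a scheme like this, the copy opened in the stack trace above would be released once routing to the DLQ has completed, closing the file descriptor deterministically instead of leaking it until garbage collection.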

> Large messages failing with "(Too many open files)"
> ---
>
> Key: ARTEMIS-739
> URL: https://issues.apache.org/jira/browse/ARTEMIS-739
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Damien Hollis
>Priority: Critical
>
> We are processing a large number of large messages and many of them are being 
> put into the dead-letter queue as part of a transaction rollback.  I suspect 
> that during the rollback the large message is not being released properly and 
> as a result the error below eventually occurs.  This may be a more general 
> issue but so far we have only seen this issue when processing messages 
> successfully (although another person in the team mentioned that there seem 
> to be a lot of large messages hanging around).

[jira] [Commented] (ARTEMIS-739) Large messages failing with "(Too many open files)"

2016-09-20 Thread Damien Hollis (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508336#comment-15508336
 ] 

Damien Hollis commented on ARTEMIS-739:
---

Retested with 1.4.0 and the problem still exists.

> Large messages failing with "(Too many open files)"
> ---
>
> Key: ARTEMIS-739
> URL: https://issues.apache.org/jira/browse/ARTEMIS-739
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Damien Hollis
>Priority: Critical
>
> We are processing a large number of large messages and many of them are being 
> put into the dead-letter queue as part of a transaction rollback.  I suspect 
> that during the rollback the large message is not being released properly and 
> as a result the error below eventually occurs.  This may be a more general 
> issue but so far we have only seen this issue when processing messages 
> successfully (although another person in the team mentioned that there seem 
> to be a lot of large messages hanging around).
> I noted we are not using the latest version, so I'm in the process of 
> creating a new build and I will test with version 1.4 later today or tomorrow.
> {noformat}
> org.apache.activemq.artemis.core.server | AMQ222010: Critical IO Error, 
> shutting
> down the server. file=NIOSequentialFile 
> /var/data/artemis/large-messages/2147660860.msg, 
> message=/var/data/artemis/large-messages/2147660860.msg (Too many open files)
> org.apache.activemq.artemis.api.core.ActiveMQIOErrorException: 
> /var/data/artemis/large-messages/2147660860.msg (Too many open files)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:101)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:85)
> at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl$DecodingContext.open(LargeServerMessageImpl.java:426)
> at 
> org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl$LargeMessageDeliverer.deliver(ServerConsumerImpl.java:1131)
> at 
> org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.proceedDeliver(ServerConsumerImpl.java:414)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.proceedDeliver(QueueImpl.java:2464)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:1956)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.access$1500(QueueImpl.java:99)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:2695)
> at 
> org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:103)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/data/artemis/large-messages/2147660860.msg (Too many open files)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:91)
> ... 12 common frames omitted
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (ARTEMIS-739) Large messages failing with "(Too many open files)"

2016-09-20 Thread Damien Hollis (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Hollis updated ARTEMIS-739:
--
Affects Version/s: 1.4.0

> Large messages failing with "(Too many open files)"
> ---
>
> Key: ARTEMIS-739
> URL: https://issues.apache.org/jira/browse/ARTEMIS-739
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Damien Hollis
>Priority: Critical
>
> We are processing a large number of large messages and many of them are being 
> put into the dead-letter queue as part of a transaction rollback.  I suspect 
> that during the rollback the large message is not being released properly and 
> as a result the error below eventually occurs.  This may be a more general 
> issue but so far we have only seen this issue when processing messages 
> successfully (although another person in the team mentioned that there seem 
> to be a lot of large messages hanging around).
> I noted we are not using the latest version, so I'm in the process of 
> creating a new build and I will test with version 1.4 later today or tomorrow.
> {noformat}
> org.apache.activemq.artemis.core.server | AMQ222010: Critical IO Error, 
> shutting
> down the server. file=NIOSequentialFile 
> /var/data/artemis/large-messages/2147660860.msg, 
> message=/var/data/artemis/large-messages/2147660860.msg (Too many open files)
> org.apache.activemq.artemis.api.core.ActiveMQIOErrorException: 
> /var/data/artemis/large-messages/2147660860.msg (Too many open files)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:101)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:85)
> at 
> org.apache.activemq.artemis.core.persistence.impl.journal.LargeServerMessageImpl$DecodingContext.open(LargeServerMessageImpl.java:426)
> at 
> org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl$LargeMessageDeliverer.deliver(ServerConsumerImpl.java:1131)
> at 
> org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl.proceedDeliver(ServerConsumerImpl.java:414)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.proceedDeliver(QueueImpl.java:2464)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:1956)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl.access$1500(QueueImpl.java:99)
> at 
> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:2695)
> at 
> org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:103)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.FileNotFoundException: 
> /var/data/artemis/large-messages/2147660860.msg (Too many open files)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at 
> org.apache.activemq.artemis.core.io.nio.NIOSequentialFile.open(NIOSequentialFile.java:91)
> ... 12 common frames omitted
> {noformat}





[jira] [Commented] (AMQ-5618) Infinite loop in log replay with Replicated LevelDB

2016-09-20 Thread Pablo Lozano (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15508175#comment-15508175
 ] 

Pablo Lozano commented on AMQ-5618:
---

Actually no, and my team discarded the usage of Replicated LevelDB.
However, another team from my company started using Replicated LevelDB, and so 
far they haven't seen this issue occur.
They are running 5.13.3 and the Java version of LevelDB. I think they have 
stumbled on other issues related to ZooKeeper, but not this one.

Important question: is this issue still occurring for you?

Thanks,
Pablo 

> Infinite loop in log replay with Replicated LevelDB
> ---
>
> Key: AMQ-5618
> URL: https://issues.apache.org/jira/browse/AMQ-5618
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.11.0, 5.11.1
> Environment: Linux, Google Compute Engine
>Reporter: Artem Karpenko
>Priority: Critical
>
> This is very similar to AMQ-5300, except that I use the replicatedLevelDB 
> persistence adapter and in order to reproduce I don't have to delete any 
> index files.
> Setup: 1 ZK instance, 3 AMQ nodes.
> One of the AMQ configs:
> {code}
> <replicatedLevelDB replicas="3"
> bind="tcp://0.0.0.0:61619"
> zkAddress="instance-6:2181"
> zkPath="/activemq/leveldb-stores"
> hostname="instance-7" />
> {code}
> Difference between nodes is only in hostname attribute.
> The way to reproduce is almost the same as in AMQ-5300: 
> # Produce lots of messages to generate several log files in leveldb data 
> directory.
> # Consume _some_ messages until you see "Deleting log" in activemq.log.
> # Restart master. Wait for system to rebalance itself. Everything's fine at 
> this point.
> # Restart the second master.
> # Observe the massive (infinite?) logging on slave and relatively calm but 
> still possibly infinite logging on master.
> This is what the first master logs after it's restarted:
> {code}
> 2015-02-25 21:37:08,338 | DEBUG | Download session connected... | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:08,582 | INFO  | Slave skipping download of: 
> log/190be289.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,099 | INFO  | Slave skipping download of: 
> log/0642f848.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,411 | INFO  | Slave skipping download of: 
> log/0c85f06d.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,838 | INFO  | Slave skipping download of: 
> log/12c8e921.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,842 | INFO  | Slave requested: 
> 1c9373b4.index/CURRENT | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,846 | INFO  | Slave requested: 
> 1c9373b4.index/MANIFEST-02 | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,850 | INFO  | Slave requested: 
> 1c9373b4.index/03.log | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,857 | INFO  | Attaching... Downloaded 0.02/95.65 kb and 
> 1/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,859 | INFO  | Attaching... Downloaded 0.06/95.65 kb and 
> 2/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,861 | INFO  | Attaching... Downloaded 95.65/95.65 kb and 
> 3/3 files | org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,862 | INFO  | Attached | 
> org.apache.activemq.leveldb.replicated.SlaveLevelDBStore | 
> hawtdispatch-DEFAULT-1
> 2015-02-25 21:37:09,878 | DEBUG | Taking a snapshot of the current index: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/1c9373b4.index
>  | org.apache.activemq.leveldb.LevelDBClient | Thread-2
> 2015-02-25 21:37:10,352 | DEBUG | Recovering from last index snapshot at: 
> /usr/local/apache-activemq-5.11.1/data/replicatedLevelDB/dirty.index | 
> org.apache.activemq.leveldb.LevelDBClient | Thread-2
> {code}
> Right after that everything seems fine. But as soon as I stop the new master, 
> the other new master (that would be the third one) logs
> {code}
> 2015-02-25 21:38:43,876 | INFO  | Promoted to master | 
> 

[jira] [Updated] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-20 Thread Mitchell Ackerman (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mitchell Ackerman updated ARTEMIS-741:
--
Component/s: Broker

> memory leak when using STOMP protocol
> -
>
> Key: ARTEMIS-741
> URL: https://issues.apache.org/jira/browse/ARTEMIS-741
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker, Stomp
>Affects Versions: 1.4.0
> Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
> Windows
>Reporter: Mitchell Ackerman
>
> ArtemisMQ exhibits a memory leak when using the STOMP protocol.
> Steps to reproduce:
> 1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
> 2. Connect to the server using the STOMP protocol
> 3. Subscribe to the topic with a selector 
> 4. publish some messages to the topic that match the selector (this step may 
> not be necessary)
> 5. Unsubscribe from the topic
> 6. publish some messages to the topic that match the selector
> The messages published after the unsubscribe are retained in a QueueImpl 
> object's messageReferences queue and are never cleaned up unless the client 
> disconnects.  The QueueImpl object has 0 consumers (ConsumerList size is 0), 
> and the QueueImpl object retains the filter from the subscription.
> See also 
> http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
>  





[jira] [Created] (ARTEMIS-741) memory leak when using STOMP protocol

2016-09-20 Thread Mitchell Ackerman (JIRA)
Mitchell Ackerman created ARTEMIS-741:
-

 Summary: memory leak when using STOMP protocol
 Key: ARTEMIS-741
 URL: https://issues.apache.org/jira/browse/ARTEMIS-741
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Stomp
Affects Versions: 1.4.0
 Environment: JDK 8, Apache Tomcat server or standalone app, Linux or 
Windows
Reporter: Mitchell Ackerman


ArtemisMQ exhibits a memory leak when using the STOMP protocol.

Steps to reproduce:

1. Configure a server with a JMS topic, my example uses an EmbeddedJMS server
2. Connect to the server using the STOMP protocol
3. Subscribe to the topic with a selector 
4. publish some messages to the topic that match the selector (this step may 
not be necessary)
5. Unsubscribe from the topic
6. publish some messages to the topic that match the selector

The messages published after the unsubscribe are retained in a QueueImpl 
object's messageReferences queue and are never cleaned up unless the client 
disconnects.  The QueueImpl object has 0 consumers (ConsumerList size is 0), 
and the QueueImpl object retains the filter from the subscription.

See also 
http://activemq.2283324.n4.nabble.com/potential-memory-leak-when-using-STOMP-protocol-td4716643.html
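For reference, the subscribe/unsubscribe sequence in the steps above corresponds to STOMP frames roughly like the following (the destination, subscription id, and selector values are made up for illustration; ^@ marks the NUL frame terminator):

```
SUBSCRIBE
id:sub-0
destination:jms.topic.example
selector:color = 'red'
ack:auto

^@

UNSUBSCRIBE
id:sub-0

^@
```

Per the report, after the UNSUBSCRIBE frame the broker's QueueImpl for the selector subscription keeps accumulating matching messages until the client disconnects.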
 





[jira] [Commented] (AMQ-5129) Substitute TimeTask with ScheduledExecutorService in org.apache.activemq.thread.Scheduler

2016-09-20 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507196#comment-15507196
 ] 

clebert suconic commented on AMQ-5129:
--

1966?

If you change anything here, look at a simple class I made on Artemis: 

https://github.com/apache/activemq-artemis/blob/master/artemis-commons/src/main/java/org/apache/activemq/artemis/core/server/ActiveMQScheduledComponent.java


It's a common pattern that makes it easier to manage. Also, this is backed by an 
executor, so if a task ever hangs it wouldn't starve the Scheduler... But I 
also have the OrderedExecutor there, which makes it simpler.
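As a sketch of the pattern under discussion (a periodic task backed by an executor that survives failures), the following is illustrative only and is not the ActiveMQScheduledComponent code. The point is that the catch-all wrapper keeps one misbehaving run from cancelling all future runs, whereas an uncaught exception kills a bare java.util.Timer's single thread outright:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Defensive scheduling sketch: with scheduleAtFixedRate, a task that throws
// suppresses all of its subsequent executions, so the task body is wrapped
// in a catch-all that logs and carries on.
class SafeScheduler {
   private final ScheduledExecutorService executor =
      Executors.newSingleThreadScheduledExecutor();

   void schedulePeriodically(Runnable task, long periodMillis) {
      executor.scheduleAtFixedRate(() -> {
         try {
            task.run();
         } catch (Throwable t) {
            // never let the exception escape the scheduled runnable
            System.err.println("Scheduled task failed: " + t);
         }
      }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
   }

   void shutdown() {
      executor.shutdownNow();
   }
}
```

A purge task wrapped this way would keep firing even after a run throws, instead of silently stopping the way the Timer-based Scheduler does.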

> Substitute TimeTask with ScheduledExecutorService in 
> org.apache.activemq.thread.Scheduler
> -
>
> Key: AMQ-5129
> URL: https://issues.apache.org/jira/browse/AMQ-5129
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: proposed-patch.txt
>
>
> Since Timer has only one execution thread, long-running tasks can affect other 
> scheduled tasks. Besides, runtime exceptions thrown in TimerTasks kill the 
> only running thread, bringing down the entire Scheduler.
> I suspect that all this could relate to AMQ-3938: sometimes in very 
> busy environments I experience exactly the same problem: a slow leakage due 
> to temp queues that are not deleted. Since 
> org.apache.activemq.broker.region.RegionBroker uses a Scheduler to activate 
> purgeInactiveDestinations, a crashed timer could explain why 
> purgeInactiveDestinations stops working.
> I attached a tentative patch to migrate Timer to ScheduledExecutorService. 
> Hope this helps.





[jira] [Commented] (AMQ-5129) Substitute TimeTask with ScheduledExecutorService in org.apache.activemq.thread.Scheduler

2016-09-20 Thread Christopher L. Shannon (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507089#comment-15507089
 ] 

Christopher L. Shannon commented on AMQ-5129:
-

While I don't disagree that the scheduler could be looked at to modernize it a 
bit and maybe switch to using ScheduledExecutorService, I would recommend that 
you try a newer version of the broker as well to see if it helps your issue.  
If the timer tasks are suddenly not working then there's probably some other 
underlying issue going on that is the root cause, which might have already been 
fixed in a newer version.  The newest version is 5.14.0 and there have been 
1966 commits since 5.9.1, so quite a lot has changed since 5.9.1 was released.

> Substitute TimeTask with ScheduledExecutorService in 
> org.apache.activemq.thread.Scheduler
> -
>
> Key: AMQ-5129
> URL: https://issues.apache.org/jira/browse/AMQ-5129
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: proposed-patch.txt
>
>
> Since Timer has only one execution thread, long-running tasks can affect other 
> scheduled tasks. Besides, runtime exceptions thrown in TimerTasks kill the 
> only running thread, bringing down the entire Scheduler.
> I suspect that all this could relate to AMQ-3938: sometimes in very 
> busy environments I experience exactly the same problem: a slow leakage due 
> to temp queues that are not deleted. Since 
> org.apache.activemq.broker.region.RegionBroker uses a Scheduler to activate 
> purgeInactiveDestinations, a crashed timer could explain why 
> purgeInactiveDestinations stops working.
> I attached a tentative patch to migrate Timer to ScheduledExecutorService. 
> Hope this helps.





[jira] [Commented] (ARTEMIS-723) AMQP subscriptions aren't deleted properly

2016-09-20 Thread Justin Bertram (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15507035#comment-15507035
 ] 

Justin Bertram commented on ARTEMIS-723:


This looks done.  Can it be resolved?

> AMQP subscriptions aren't deleted properly
> --
>
> Key: ARTEMIS-723
> URL: https://issues.apache.org/jira/browse/ARTEMIS-723
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 1.3.0
>Reporter: Andy Taylor
>Assignee: Andy Taylor
> Fix For: 1.5.0
>
>






[jira] [Commented] (AMQ-5129) Substitute TimeTask with ScheduledExecutorService in org.apache.activemq.thread.Scheduler

2016-09-20 Thread jack patwork (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506965#comment-15506965
 ] 

jack patwork commented on AMQ-5129:
---

I've seen the same thing on 5.9.1. I'm running a 4-node cluster and 
intermittently, every few weeks, the scheduler stops working. I haven't yet 
been able to spot what triggers this. I'm using purgeInactiveDestinations 
on dynamic topics that are created and purged a few times a minute, so this 
is critical for keeping memory under control. When it stops, within a few 
hours the affected instance is out of memory and needs a restart.

I think the scheduler definitely needs to be more robust and handle failures 
without stopping. It's a concern that the scheduler can stop without any 
trace of failure in the logs, apart from a canary watch of the scheduler log 
lines.


> Substitute TimeTask with ScheduledExecutorService in 
> org.apache.activemq.thread.Scheduler
> -
>
> Key: AMQ-5129
> URL: https://issues.apache.org/jira/browse/AMQ-5129
> Project: ActiveMQ
>  Issue Type: Wish
>Affects Versions: 5.9.0
>Reporter: matteo rulli
> Attachments: proposed-patch.txt
>
>
> Since Timer has only one execution thread, long-running tasks can affect other 
> scheduled tasks. Besides, runtime exceptions thrown in TimerTasks kill the 
> only running thread, bringing down the entire Scheduler.
> I suspect that all this could relate to AMQ-3938: sometimes in very 
> busy environments I experience exactly the same problem: a slow leakage due 
> to temp queues that are not deleted. Since 
> org.apache.activemq.broker.region.RegionBroker uses a Scheduler to activate 
> purgeInactiveDestinations, a crashed timer could explain why 
> purgeInactiveDestinations stops working.
> I attached a tentative patch to migrate Timer to ScheduledExecutorService. 
> Hope this helps.





[jira] [Commented] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2016-09-20 Thread Christopher L. Shannon (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506902#comment-15506902
 ] 

Christopher L. Shannon commented on AMQ-6432:
-

Time shouldn't have anything to do with it because a Location in the journal is 
essentially just two pieces of information, the file and the offset, and time 
isn't involved.  Since this issue has happened a few more times, can you enable 
debug logging for the MessageDatabase class to try to capture the stack trace?
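For anyone following along, enabling that debug logging in the broker's log4j.properties would look roughly like the line below; the logger name is inferred from the log line quoted in the issue, so adjust it to your setup:

```
log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=DEBUG
```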

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | Journal Checkpoint Worker | MessageDatabase  
> | 
> org.apache.activemq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
> Failed to load next journal location: null
> {noformat}
> it'd be great to improve the output in such a case (Journal Checkpoint 
> Worker).
> Why not show the exception stack? It seems weird to only show the stack when 
> debug level is enabled.





[jira] [Closed] (ARTEMIS-734) Message expired while being moved on the cluster bridge does not follow the address setting configuration

2016-09-20 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-734.
---
Resolution: Fixed

> Message expired while being moved on the cluster bridge does not follow the 
> address setting configuration
> -
>
> Key: ARTEMIS-734
> URL: https://issues.apache.org/jira/browse/ARTEMIS-734
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: clebert suconic
> Fix For: 1.5.0
>
>






[jira] [Commented] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506796#comment-15506796
 ] 

clebert suconic commented on ARTEMIS-740:
-

The implementation should be simple.


On ActiveMQServerImpl's ConfigurationFileReloader, in the reload() method, 
someone should:

```
for (DivertConfiguration config : configuration.getDivertConfigurations()) {
   deployDivert(config);
}
```


Although there is an extra step to be done: right now the code would duplicate 
diverts, so we would need to verify whether the divert already exists before 
deploying it again.

I don't expect to delete diverts, the same way this is not removing queues.
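The existence check described here could be sketched as follows; the names (DivertReloader, deployIfAbsent, deployed) are hypothetical and not the actual ActiveMQServerImpl API, but they show the guard against re-deploying a divert on every reload:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the duplicate check: track which diverts have
// already been deployed so a configuration reload skips them instead of
// deploying a second copy.
class DivertReloader {
   private final Set<String> deployed = new HashSet<>();

   // returns true if the divert was newly deployed,
   // false if it already existed and was skipped
   boolean deployIfAbsent(String divertName) {
      if (!deployed.add(divertName)) {
         return false; // already present; avoid duplicating the divert
      }
      // deployDivert(config) would happen here
      return true;
   }
}
```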

> Auto-reload diverts from broker.xml
> ---
>
> Key: ARTEMIS-740
> URL: https://issues.apache.org/jira/browse/ARTEMIS-740
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Ville Skyttä
> Fix For: 1.5.0
>
>
> Automatic reloading of diverts from broker.xml would be equally useful as for 
> addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Comment Edited] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506796#comment-15506796
 ] 

clebert suconic edited comment on ARTEMIS-740 at 9/20/16 2:52 PM:
--

The implementation should be simple.


On ActiveMQServerImpl's ConfigurationFileReloader, in the reload() method, 
someone should:


   for (DivertConfiguration config : configuration.getDivertConfigurations()) {
      deployDivert(config);
   }



Although there is an extra step to be done: right now the code would duplicate 
diverts, so we would need to verify whether the divert already exists before 
deploying it again.

I don't expect to delete diverts, the same way this is not removing queues.


was (Author: clebertsuconic):
The implementation should be simple.


On ActiveMQServerImpl's ConfigurationFileReloader, in the reload() method, 
someone should:

```
for (DivertConfiguration config : configuration.getDivertConfigurations()) {
   deployDivert(config);
}
```


Although there is an extra step to be done: right now the code would duplicate 
diverts, so we would need to verify whether the divert already exists before 
deploying it again.

I don't expect to delete diverts, the same way this is not removing queues.

> Auto-reload diverts from broker.xml
> ---
>
> Key: ARTEMIS-740
> URL: https://issues.apache.org/jira/browse/ARTEMIS-740
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Ville Skyttä
> Fix For: 1.5.0
>
>
> Automatic reloading of diverts from broker.xml would be equally useful as for 
> addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Commented] (ARTEMIS-738) Improving Transaction support on AMQP

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15506664#comment-15506664
 ] 

ASF GitHub Bot commented on ARTEMIS-738:


Github user mtaylor commented on a diff in the pull request:

https://github.com/apache/activemq-artemis/pull/782#discussion_r79620023
  
--- Diff: 
artemis-protocols/artemis-proton-plug/src/main/java/org/proton/plug/context/server/ProtonServerSessionContext.java
 ---
@@ -60,6 +61,9 @@ public int serverDelivery(Object message, Object 
consumer, int deliveryCount) th
 
public void addTransactionHandler(Coordinator coordinator, Receiver 
receiver) {
   ProtonTransactionHandler transactionHandler = new 
ProtonTransactionHandler(sessionSPI);
+
+  
coordinator.setCapabilities(Symbol.getSymbol("[\"amqp:local-transactions\",\"amqp:multi-txns-per-ssn\",\"amqp:multi-ssns-per-txn\"]"));
--- End diff --

Like this:
```java
coordinator.setCapabilities(Symbol.getSymbol("amqp:local-transactions"),
                            Symbol.getSymbol("amqp:multi-txns-per-ssn"),
                            Symbol.getSymbol("amqp:multi-ssns-per-txn"));
```
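A tiny self-contained illustration of the difference (here `capabilities` is a plain-Java stand-in for the varargs `setCapabilities(Symbol...)`, an assumption for demonstration only): the bracketed single-string form registers one bogus capability, while the varargs form registers three.

```java
// Stand-in for a varargs capabilities setter: each argument is meant to be
// one capability, the way setCapabilities(Symbol...) treats each Symbol.
public class CapabilitySketch {
   static String[] capabilities(String... symbols) {
      return symbols;
   }
}
```

Passing the whole bracketed list as one string produces a single-element array, i.e. one capability whose name is the entire literal, which is why the review comment suggests one `Symbol.getSymbol(...)` call per capability.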


> Improving Transaction support on AMQP
> -
>
> Key: ARTEMIS-738
> URL: https://issues.apache.org/jira/browse/ARTEMIS-738
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: clebert suconic
> Fix For: 1.5.0
>
>






[jira] [Commented] (ARTEMIS-738) Improving Transaction support on AMQP

2016-09-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506715#comment-15506715
 ] 

ASF GitHub Bot commented on ARTEMIS-738:


Github user gemmellr commented on a diff in the pull request:

https://github.com/apache/activemq-artemis/pull/782#discussion_r79623050
  
--- Diff: 
artemis-protocols/artemis-proton-plug/src/main/java/org/proton/plug/context/server/ProtonServerSessionContext.java
 ---
@@ -60,6 +61,9 @@ public int serverDelivery(Object message, Object 
consumer, int deliveryCount) th
 
public void addTransactionHandler(Coordinator coordinator, Receiver 
receiver) {
   ProtonTransactionHandler transactionHandler = new 
ProtonTransactionHandler(sessionSPI);
+
+  
coordinator.setCapabilities(Symbol.getSymbol("[\"amqp:local-transactions\",\"amqp:multi-txns-per-ssn\",\"amqp:multi-ssns-per-txn\"]"));
--- End diff --

Yep.


> Improving Transaction support on AMQP
> -
>
> Key: ARTEMIS-738
> URL: https://issues.apache.org/jira/browse/ARTEMIS-738
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: clebert suconic
> Fix For: 1.5.0
>
>






[jira] [Commented] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/ARTEMIS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506712#comment-15506712
 ] 

Ville Skyttä commented on ARTEMIS-740:
--

I'm not working on it at the moment, but if you can give me some general 
directions on where and how to implement it, I can take a look sometime, 
unless someone beats me to it.

> Auto-reload diverts from broker.xml
> ---
>
> Key: ARTEMIS-740
> URL: https://issues.apache.org/jira/browse/ARTEMIS-740
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Ville Skyttä
> Fix For: 1.5.0
>
>
> Automatic reloading of diverts from broker.xml would be equally useful as for 
> addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Closed] (ARTEMIS-580) Add setting to control global memory usage

2016-09-20 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic closed ARTEMIS-580.
---
Resolution: Fixed

> Add setting to control global memory usage
> --
>
> Key: ARTEMIS-580
> URL: https://issues.apache.org/jira/browse/ARTEMIS-580
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: Lionel Cons
>Assignee: clebert suconic
>Priority: Blocker
> Fix For: 1.4.0
>
>
> AFAIK, the only way to prevent Artemis from consuming all the heap and dying 
> with OOM errors is to set {{max-size-bytes}}.
> This per-address setting is not suitable for brokers with many addresses that 
> have different usage patterns. For instance, on a broker used for testing, 
> Artemis complained that:
> {code}
> 2016-06-20 13:20:03,107 [org.apache.activemq.artemis.core.server] WARN 
> AMQ05: OutOfMemoryError possible! There are currently 400 addresses with 
> a total max-size-bytes of 4,194,304,000 bytes, but the maximum memory 
> available is 764,411,904 bytes.
> {code}
> These 400 addresses are not used anymore and will eventually be removed.
> In contrast, ActiveMQ 5.x has a much more useful global setting to control 
> how much memory (in total) the broker will use. See {{memoryUsage}} in 
> http://activemq.apache.org/producer-flow-control.html.
> Could Artemis also use a global memory setting to limit its memory usage?
> What to do when hitting this limit (DROP, BLOCK, PAGE...) could stay 
> per-address.
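As a rough sketch of the requested behavior (plain Java, not the Artemis implementation; all class and method names here are made up for illustration): a global byte limit sits on top of the per-address limits, and whichever limit trips first, the address's own policy (DROP, BLOCK, PAGE) still decides the outcome.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch (not the Artemis implementation): a global memory limit on top of
// per-address limits. When either the address limit or the global limit is
// hit, the address's own policy (DROP, BLOCK, PAGE) decides what happens.
public class GlobalMemorySketch {
   enum Policy { DROP, BLOCK, PAGE }

   static class Address {
      long usedBytes;
      final long maxSizeBytes;
      final Policy policy;
      Address(long maxSizeBytes, Policy policy) {
         this.maxSizeBytes = maxSizeBytes;
         this.policy = policy;
      }
   }

   private final long globalMaxBytes;
   private final Map<String, Address> addresses = new LinkedHashMap<>();

   GlobalMemorySketch(long globalMaxBytes) { this.globalMaxBytes = globalMaxBytes; }

   void addAddress(String name, long maxSizeBytes, Policy policy) {
      addresses.put(name, new Address(maxSizeBytes, policy));
   }

   private long globalUsed() {
      return addresses.values().stream().mapToLong(a -> a.usedBytes).sum();
   }

   // Returns the policy applied for this message, or null if accepted normally.
   Policy offer(String address, long messageBytes) {
      Address a = addresses.get(address);
      boolean overAddress = a.usedBytes + messageBytes > a.maxSizeBytes;
      boolean overGlobal = globalUsed() + messageBytes > globalMaxBytes;
      if (overAddress || overGlobal) {
         return a.policy;          // per-address policy decides DROP/BLOCK/PAGE
      }
      a.usedBytes += messageBytes; // accept the message into memory
      return null;
   }
}
```

With such a global cap, 400 mostly idle addresses could each keep a generous per-address limit without their limits summing past the heap, which is the scenario in the warning above.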





[jira] [Updated] (ARTEMIS-580) Add setting to control global memory usage

2016-09-20 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-580:

Fix Version/s: (was: 1.5.0)
   1.4.0

> Add setting to control global memory usage
> --
>
> Key: ARTEMIS-580
> URL: https://issues.apache.org/jira/browse/ARTEMIS-580
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: Lionel Cons
>Assignee: clebert suconic
>Priority: Blocker
> Fix For: 1.4.0
>
>
> AFAIK, the only way to prevent Artemis from consuming all the heap and dying 
> with OOM errors is to set {{max-size-bytes}}.
> This per-address setting is not suitable for brokers with many addresses that 
> have different usage patterns. For instance, on a broker used for testing, 
> Artemis complained that:
> {code}
> 2016-06-20 13:20:03,107 [org.apache.activemq.artemis.core.server] WARN 
> AMQ05: OutOfMemoryError possible! There are currently 400 addresses with 
> a total max-size-bytes of 4,194,304,000 bytes, but the maximum memory 
> available is 764,411,904 bytes.
> {code}
> These 400 addresses are not used anymore and will eventually be removed.
> In contrast, ActiveMQ 5.x has a much more useful global setting to control 
> how much memory (in total) the broker will use. See {{memoryUsage}} in 
> http://activemq.apache.org/producer-flow-control.html.
> Could Artemis also use a global memory setting to limit its memory usage?
> What to do when hitting this limit (DROP, BLOCK, PAGE...) could stay 
> per-address.





[jira] [Commented] (ARTEMIS-580) Add setting to control global memory usage

2016-09-20 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506687#comment-15506687
 ] 

clebert suconic commented on ARTEMIS-580:
-




This was done in the context of 
https://issues.apache.org/jira/browse/ARTEMIS-581

> Add setting to control global memory usage
> --
>
> Key: ARTEMIS-580
> URL: https://issues.apache.org/jira/browse/ARTEMIS-580
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 1.3.0
>Reporter: Lionel Cons
>Assignee: clebert suconic
>Priority: Blocker
> Fix For: 1.5.0
>
>
> AFAIK, the only way to prevent Artemis from consuming all the heap and dying 
> with OOM errors is to set {{max-size-bytes}}.
> This per-address setting is not suitable for brokers with many addresses that 
> have different usage patterns. For instance, on a broker used for testing, 
> Artemis complained that:
> {code}
> 2016-06-20 13:20:03,107 [org.apache.activemq.artemis.core.server] WARN 
> AMQ05: OutOfMemoryError possible! There are currently 400 addresses with 
> a total max-size-bytes of 4,194,304,000 bytes, but the maximum memory 
> available is 764,411,904 bytes.
> {code}
> These 400 addresses are not used anymore and will eventually be removed.
> In contrast, ActiveMQ 5.x has a much more useful global setting to control 
> how much memory (in total) the broker will use. See {{memoryUsage}} in 
> http://activemq.apache.org/producer-flow-control.html.
> Could Artemis also use a global memory setting to limit its memory usage?
> What to do when hitting this limit (DROP, BLOCK, PAGE...) could stay 
> per-address.





[jira] [Commented] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread clebert suconic (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15506680#comment-15506680
 ] 

clebert suconic commented on ARTEMIS-740:
-

[~scop] are you working on this? Just asking to make sure I don't duplicate 
efforts.

> Auto-reload diverts from broker.xml
> ---
>
> Key: ARTEMIS-740
> URL: https://issues.apache.org/jira/browse/ARTEMIS-740
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Ville Skyttä
> Fix For: 1.5.0
>
>
> Automatic reloading of diverts from broker.xml would be equally useful as for 
> addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Updated] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread clebert suconic (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

clebert suconic updated ARTEMIS-740:

Fix Version/s: 1.5.0

> Auto-reload diverts from broker.xml
> ---
>
> Key: ARTEMIS-740
> URL: https://issues.apache.org/jira/browse/ARTEMIS-740
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.4.0
>Reporter: Ville Skyttä
> Fix For: 1.5.0
>
>
> Automatic reloading of diverts from broker.xml would be equally useful as for 
> addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Updated] (AMQCPP-605) ActiveMQ CPP clients don't support IPv6

2016-09-20 Thread MARIUS KAPEA (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MARIUS KAPEA updated AMQCPP-605:

Summary: ActiveMQ CPP clients don't support IPv6  (was: CLONE - CPP clients 
support IPv6 and ssl transportation through http/socks(4or5) proxy)

> ActiveMQ CPP clients don't support IPv6
> ---
>
> Key: AMQCPP-605
> URL: https://issues.apache.org/jira/browse/AMQCPP-605
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf, Transports
>Affects Versions: 3.9.3
>Reporter: MARIUS KAPEA
>Assignee: Timothy Bish
>Priority: Blocker
>
> ActiveMQ-CPP 3.9.3
> We want to use the ActiveMQ-CPP client with IPv6, but we are facing an 
> issue: there is no syntax error, but the client (producer) is not sending 
> packets.
> The ActiveMQ broker supports the IPv6 protocol, but the current ActiveMQ-CPP 
> clients don't.
> Is there a known issue related to IPv6 in the ActiveMQ-CPP client?
> Could you please help us?
> Kind regards,
> Marius





[jira] [Updated] (AMQCPP-605) CLONE - CPP clients support IPv6 and ssl transportation through http/socks(4or5) proxy

2016-09-20 Thread MARIUS KAPEA (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MARIUS KAPEA updated AMQCPP-605:

Issue Type: Bug  (was: New Feature)

> CLONE - CPP clients support IPv6 and ssl transportation through 
> http/socks(4or5) proxy
> --
>
> Key: AMQCPP-605
> URL: https://issues.apache.org/jira/browse/AMQCPP-605
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf, Transports
>Affects Versions: 3.9.3
>Reporter: MARIUS KAPEA
>Assignee: Timothy Bish
>Priority: Blocker
>
> ActiveMQ-CPP 3.9.3
> We want to use the ActiveMQ-CPP client with IPv6, but we are facing an 
> issue: there is no syntax error, but the client (producer) is not sending 
> packets.
> The ActiveMQ broker supports the IPv6 protocol, but the current ActiveMQ-CPP 
> clients don't.
> Is there a known issue related to IPv6 in the ActiveMQ-CPP client?
> Could you please help us?
> Kind regards,
> Marius





[jira] [Updated] (AMQCPP-605) CLONE - CPP clients support IPv6 and ssl transportation through http/socks(4or5) proxy

2016-09-20 Thread MARIUS KAPEA (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MARIUS KAPEA updated AMQCPP-605:

Patch Info:   (was: Patch Available)

> CLONE - CPP clients support IPv6 and ssl transportation through 
> http/socks(4or5) proxy
> --
>
> Key: AMQCPP-605
> URL: https://issues.apache.org/jira/browse/AMQCPP-605
> Project: ActiveMQ C++ Client
>  Issue Type: New Feature
>  Components: Decaf, Transports
>Affects Versions: 3.9.3
>Reporter: MARIUS KAPEA
>Assignee: Timothy Bish
>Priority: Blocker
>
> ActiveMQ-CPP 3.9.3
> We want to use the ActiveMQ-CPP client with IPv6, but we are facing an 
> issue: there is no syntax error, but the client (producer) is not sending 
> packets.
> The ActiveMQ broker supports the IPv6 protocol, but the current ActiveMQ-CPP 
> clients don't.
> Is there a known issue related to IPv6 in the ActiveMQ-CPP client?
> Could you please help us?
> Kind regards,
> Marius





[jira] [Updated] (AMQCPP-605) CLONE - CPP clients support IPv6 and ssl transportation through http/socks(4or5) proxy

2016-09-20 Thread MARIUS KAPEA (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MARIUS KAPEA updated AMQCPP-605:

Priority: Blocker  (was: Minor)

> CLONE - CPP clients support IPv6 and ssl transportation through 
> http/socks(4or5) proxy
> --
>
> Key: AMQCPP-605
> URL: https://issues.apache.org/jira/browse/AMQCPP-605
> Project: ActiveMQ C++ Client
>  Issue Type: New Feature
>  Components: Decaf, Transports
>Affects Versions: 3.9.3
>Reporter: MARIUS KAPEA
>Assignee: Timothy Bish
>Priority: Blocker
>
> ActiveMQ-CPP 3.9.3
> We want to use the ActiveMQ-CPP client with IPv6, but we are facing an 
> issue: there is no syntax error, but the client (producer) is not sending 
> packets.
> The ActiveMQ broker supports the IPv6 protocol, but the current ActiveMQ-CPP 
> clients don't.
> Is there a known issue related to IPv6 in the ActiveMQ-CPP client?
> Could you please help us?
> Kind regards,
> Marius





[jira] [Updated] (AMQCPP-605) CLONE - CPP clients support IPv6 and ssl transportation through http/socks(4or5) proxy

2016-09-20 Thread MARIUS KAPEA (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MARIUS KAPEA updated AMQCPP-605:

Affects Version/s: (was: 3.4.5)
   3.9.3

> CLONE - CPP clients support IPv6 and ssl transportation through 
> http/socks(4or5) proxy
> --
>
> Key: AMQCPP-605
> URL: https://issues.apache.org/jira/browse/AMQCPP-605
> Project: ActiveMQ C++ Client
>  Issue Type: New Feature
>  Components: Decaf, Transports
>Affects Versions: 3.9.3
>Reporter: MARIUS KAPEA
>Assignee: Timothy Bish
>Priority: Minor
>
> ActiveMQ-CPP 3.9.3
> We want to use the ActiveMQ-CPP client with IPv6, but we are facing an 
> issue: there is no syntax error, but the client (producer) is not sending 
> packets.
> The ActiveMQ broker supports the IPv6 protocol, but the current ActiveMQ-CPP 
> clients don't.
> Is there a known issue related to IPv6 in the ActiveMQ-CPP client?
> Could you please help us?
> Kind regards,
> Marius





[jira] [Created] (AMQCPP-605) CLONE - CPP clients support IPv6 and ssl transportation through http/socks(4or5) proxy

2016-09-20 Thread MARIUS KAPEA (JIRA)
MARIUS KAPEA created AMQCPP-605:
---

 Summary: CLONE - CPP clients support IPv6 and ssl transportation 
through http/socks(4or5) proxy
 Key: AMQCPP-605
 URL: https://issues.apache.org/jira/browse/AMQCPP-605
 Project: ActiveMQ C++ Client
  Issue Type: New Feature
  Components: Decaf, Transports
Affects Versions: 3.4.5
Reporter: MARIUS KAPEA
Assignee: Timothy Bish
Priority: Minor


1. The ActiveMQ broker supports the IPv6 protocol, but current ActiveMQ-CPP 
clients don't. My partner and I modified ActiveMQ-CPP 3.4.5 to support IPv6.

2. In some cases ActiveMQ-CPP clients have to connect to ActiveMQ through a 
proxy, but the current TCP/SSL transport does not provide this functionality. 
My partner and I modified ActiveMQ-CPP 3.4.5 so that the SSL transport 
supports HTTP/SOCKS4/SOCKS5 proxies.

3. In some cases the certificate of the ActiveMQ broker can't be regenerated, 
yet ActiveMQ-CPP clients need the "commonName" field of the certificate to 
match the server host name. My partner and I modified ActiveMQ-CPP 3.4.5 to 
allow overriding the server host name.

***
What is changed:
A locally patched ActiveMQ-CPP, based on 3.4.5, which supports IPv6 and 
HTTP/SOCKS4/SOCKS5 proxies for the SSL transport.

Proxy usage:
To enable a proxy for the SSL transport, use these system properties:

property                       comments

decaf.net.ssl.proxyType        one of: http, socks4, socks5
decaf.net.ssl.proxyHost        proxy address
decaf.net.ssl.proxyPort        proxy port
decaf.net.ssl.proxyUser        proxy user name
decaf.net.ssl.proxyPassword    proxy password


Overriding the server host name:
To override the name matched against the certificate, use this system property:

property                       comments

decaf.net.ssl.serverName       the name matched against the "commonName" field

source: ipv6_proxy.patch

***





[jira] [Closed] (AMQCPP-604) Exception in decaf::lang::Thread::runCallback

2016-09-20 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCPP-604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQCPP-604.
---
Resolution: Invalid

This is a very old version and is not supported. If you want to check that a 
bug is fixed in a later release, the best way to do that is to test it 
yourself. If you have questions, the users mailing list is the place to ask them.

> Exception in decaf::lang::Thread::runCallback
> -
>
> Key: AMQCPP-604
> URL: https://issues.apache.org/jira/browse/AMQCPP-604
> Project: ActiveMQ C++ Client
>  Issue Type: Bug
>  Components: Decaf
>Affects Versions: 2.2.3
> Environment: activemq cpp 2.2.3
>Reporter: ramesh
>Assignee: Timothy Bish
>
> We have upgraded ActiveMQ-CPP from 2.1.3 to 2.2.3 due to the known issue 
> below in 2.1.3.
> #6  0x7f030582ca5f in 
> activemq::exceptions::ActiveMQException::buildMessage (this=<optimized out>, 
> format=0x7f030595d618 "No valid response received for command: %s, check 
> broker.", vargs=<optimized out>) at 
> activemq/exceptions/ActiveMQException.cpp:76
> buffer = 0x7f02d0009910 "No valid response received for command: "
> written = <optimized out>
> size = 256
> #7  0x7f030583f518 in 
> activemq::transport::CommandIOException::CommandIOException 
> (this=0x7f02d000f020, 
> file=0x7f030595d5e0 "activemq/transport/filters/ResponseCorrelator.cpp", 
> lineNumber=146, 
> msg=0x7f030595d618 "No valid response received for command: %s, check 
> broker.") at ./activemq/transport/CommandIOException.h:67
> vargs = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = 
> 0x7f0290f5a750, reg_save_area = 0x7f0290f5a670}}
> After upgrading to 2.2.3 that issue is resolved, but we are now getting a 
> Decaf exception that results in a crash of our application:
> Thread 1:
> #4  0x0062a168 in std::string::_Rep::_S_empty_rep_storage ()
> #5  0x7f291ea4d6bd in decaf::lang::Exception::printStackTrace 
> (this=0x7f28fa13fc08, stream=...) at decaf/lang/Exception.cpp:137
> #6  0x7f292444a3fd in MyExceptionListener::onException (this=0x25886c0, 
> ex=...) at MyExceptionListener.cpp:37
> #7  0x7f291e9e170f in fire (this=0x7f28bc018590, source=<optimized out>, 
> ex=<optimized out>) at 
> ./activemq/connector/openwire/OpenWireConnector.h:262
> #8  activemq::connector::openwire::OpenWireConnector::onTransportException 
> (this=0x7f28bc018590, source=<optimized out>, ex=<optimized out>)
> at activemq/connector/openwire/OpenWireConnector.cpp:1484
> #9  0x7f291e9df045 in fire (this=0x7f28bc02e5e0, source=<optimized out>, 
> ex=...) at ./activemq/transport/TransportFilter.h:75
> #10 
> activemq::connector::openwire::OpenWireFormatNegotiator::onTransportException 
> (this=0x7f28bc02e5e0, source=<optimized out>, ex=...)
> at activemq/connector/openwire/OpenWireFormatNegotiator.cpp:184
> #11 0x7f291ea0bb50 in fire (this=0x7f28bc007510, source=<optimized out>, 
> ex=...) at ./activemq/transport/TransportFilter.h:75
> #12 activemq::transport::filters::ResponseCorrelator::onTransportException 
> (this=0x7f28bc007510, source=<optimized out>, ex=...) at 
> activemq/transport/filters/ResponseCorrelator.cpp:296
> #13 0x7f291ea05609 in fire (this=0x0, source=<optimized out>, 
> ex=...) at activemq/transport/TransportFilter.h:75
> #14 activemq::transport::TransportFilter::onTransportException (this=0x0, 
> source=<optimized out>, ex=...) at 
> activemq/transport/TransportFilter.cpp:55
> #15 0x7f291ea06cf2 in activemq::transport::IOTransport::run 
> (this=0x7f28bc0076f0) at activemq/transport/IOTransport.h:105
> #16 0x7f291ea50190 in decaf::lang::Thread::runCallback (self=0x225d7b0, 
> param=0x7f28fa13fc08) at decaf/lang/Thread.cpp:125
> Thread 2:
> #0  0x7f291e39b68c in pthread_cond_wait@@GLIBC_2.3.2 () from 
> /lib64/libpthread.so.0
> #1  0x7f291ea62ac3 in decaf::util::concurrent::Mutex::wait 
> (this=0x7f28bc03f250, millisecs=4294967295) at 
> decaf/util/concurrent/Mutex.cpp:116
> #2  0x7f291ea6299f in decaf::util::concurrent::Mutex::wait 
> (this=0x7f28bc03f6c4) at decaf/util/concurrent/Mutex.cpp:82
> #3  0x7f291e99678f in activemq::core::ActiveMQSessionExecutor::run 
> (this=0x7f28bc03f220) at activemq/core/ActiveMQSessionExecutor.cpp:222
> #4  0x7f291ea50190 in decaf::lang::Thread::runCallback (self=0x225d7e0, 
> param=0x80) at decaf/lang/Thread.cpp:125
> #5  0x7f291e397aa1 in start_thread () from /lib64/libpthread.so.0
> #6  0x7f291d74093d in clone () from /lib64/libc.so.6
> It seems a synchronization issue between thread 1 and thread 2 causes the 
> exception in Decaf.
> Can you please let me know whether this issue is fixed in the latest 
> versions, so that we can verify it?





[jira] [Created] (ARTEMIS-740) Auto-reload diverts from broker.xml

2016-09-20 Thread JIRA
Ville Skyttä created ARTEMIS-740:


 Summary: Auto-reload diverts from broker.xml
 Key: ARTEMIS-740
 URL: https://issues.apache.org/jira/browse/ARTEMIS-740
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 1.4.0
Reporter: Ville Skyttä


Automatic reloading of diverts from broker.xml would be equally useful as for 
addresses, security, and jms destinations, see ARTEMIS-601





[jira] [Created] (AMQCPP-604) Exception in decaf::lang::Thread::runCallback

2016-09-20 Thread ramesh (JIRA)
ramesh created AMQCPP-604:
-

 Summary: Exception in decaf::lang::Thread::runCallback
 Key: AMQCPP-604
 URL: https://issues.apache.org/jira/browse/AMQCPP-604
 Project: ActiveMQ C++ Client
  Issue Type: Bug
  Components: Decaf
Affects Versions: 2.2.3
 Environment: activemq cpp 2.2.3
Reporter: ramesh
Assignee: Timothy Bish



We have upgraded ActiveMQ-CPP from 2.1.3 to 2.2.3 due to the known issue below 
in 2.1.3.

#6  0x7f030582ca5f in activemq::exceptions::ActiveMQException::buildMessage 
(this=<optimized out>, 
format=0x7f030595d618 "No valid response received for command: %s, check 
broker.", vargs=<optimized out>) at 
activemq/exceptions/ActiveMQException.cpp:76
buffer = 0x7f02d0009910 "No valid response received for command: "
written = <optimized out>
size = 256
#7  0x7f030583f518 in 
activemq::transport::CommandIOException::CommandIOException 
(this=0x7f02d000f020, 
file=0x7f030595d5e0 "activemq/transport/filters/ResponseCorrelator.cpp", 
lineNumber=146, 
msg=0x7f030595d618 "No valid response received for command: %s, check 
broker.") at ./activemq/transport/CommandIOException.h:67
vargs = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = 
0x7f0290f5a750, reg_save_area = 0x7f0290f5a670}}


After upgrading to 2.2.3 that issue is resolved, but we are now getting a 
Decaf exception that results in a crash of our application:

Thread 1:

#4  0x0062a168 in std::string::_Rep::_S_empty_rep_storage ()

#5  0x7f291ea4d6bd in decaf::lang::Exception::printStackTrace 
(this=0x7f28fa13fc08, stream=...) at decaf/lang/Exception.cpp:137
#6  0x7f292444a3fd in MyExceptionListener::onException (this=0x25886c0, 
ex=...) at MyExceptionListener.cpp:37
#7  0x7f291e9e170f in fire (this=0x7f28bc018590, source=<optimized out>, 
ex=<optimized out>) at 
./activemq/connector/openwire/OpenWireConnector.h:262
#8  activemq::connector::openwire::OpenWireConnector::onTransportException 
(this=0x7f28bc018590, source=<optimized out>, ex=<optimized out>)
at activemq/connector/openwire/OpenWireConnector.cpp:1484
#9  0x7f291e9df045 in fire (this=0x7f28bc02e5e0, source=<optimized out>, 
ex=...) at ./activemq/transport/TransportFilter.h:75
#10 
activemq::connector::openwire::OpenWireFormatNegotiator::onTransportException 
(this=0x7f28bc02e5e0, source=<optimized out>, ex=...)
at activemq/connector/openwire/OpenWireFormatNegotiator.cpp:184

#11 0x7f291ea0bb50 in fire (this=0x7f28bc007510, source=<optimized out>, 
ex=...) at ./activemq/transport/TransportFilter.h:75
#12 activemq::transport::filters::ResponseCorrelator::onTransportException 
(this=0x7f28bc007510, source=<optimized out>, ex=...) at 
activemq/transport/filters/ResponseCorrelator.cpp:296
#13 0x7f291ea05609 in fire (this=0x0, source=<optimized out>, ex=...) 
at activemq/transport/TransportFilter.h:75
#14 activemq::transport::TransportFilter::onTransportException (this=0x0, 
source=<optimized out>, ex=...) at 
activemq/transport/TransportFilter.cpp:55
#15 0x7f291ea06cf2 in activemq::transport::IOTransport::run 
(this=0x7f28bc0076f0) at activemq/transport/IOTransport.h:105
#16 0x7f291ea50190 in decaf::lang::Thread::runCallback (self=0x225d7b0, 
param=0x7f28fa13fc08) at decaf/lang/Thread.cpp:125


Thread 2:

#0  0x7f291e39b68c in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0

#1  0x7f291ea62ac3 in decaf::util::concurrent::Mutex::wait 
(this=0x7f28bc03f250, millisecs=4294967295) at 
decaf/util/concurrent/Mutex.cpp:116
#2  0x7f291ea6299f in decaf::util::concurrent::Mutex::wait 
(this=0x7f28bc03f6c4) at decaf/util/concurrent/Mutex.cpp:82
#3  0x7f291e99678f in activemq::core::ActiveMQSessionExecutor::run 
(this=0x7f28bc03f220) at activemq/core/ActiveMQSessionExecutor.cpp:222
#4  0x7f291ea50190 in decaf::lang::Thread::runCallback (self=0x225d7e0, 
param=0x80) at decaf/lang/Thread.cpp:125
#5  0x7f291e397aa1 in start_thread () from /lib64/libpthread.so.0
#6  0x7f291d74093d in clone () from /lib64/libc.so.6

It seems a synchronization issue between thread 1 and thread 2 causes the 
exception in Decaf.

Can you please let me know whether this issue is fixed in the latest versions, 
so that we can verify it?


