[jira] [Created] (ARTEMIS-1645) Diverted messages cannot be retried from DLQ

2018-01-30 Thread Niels Lippke (JIRA)
Niels Lippke created ARTEMIS-1645:
-

 Summary: Diverted messages cannot be retried from DLQ
 Key: ARTEMIS-1645
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1645
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Niels Lippke
 Attachments: QueueControlTest.java

Given a topic _SOURCE_ and a divert which forwards a message _M_ to a queue 
 _TARGET_. The consumer fails to process _M_ and _M_ is sent to the DLQ. 
   
 If you now retry _M_ from the DLQ it is not sent to _TARGET_; instead you get 
 {{AMQ222196: Could not find binding ...}}
 and, even worse, the message is lost afterwards (removed from the DLQ)! 
   
 My suspicion is that the message properties are not correct regarding 
 {{_AMQ_ORIG_ADDRESS}} and {{_AMQ_ORIG_QUEUE}}. 
 Is: {{_AMQ_ORIG_ADDRESS=, _AMQ_ORIG_QUEUE=TARGET}} 
 Should be: {{_AMQ_ORIG_ADDRESS=, _AMQ_ORIG_QUEUE=TARGET}} 
   
 Attached is a test case "testRetryDivertedMessage" which 
 demonstrates the problem.
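
For context, a minimal sketch of the kind of divert configuration involved (the address and queue names are taken from the report; the divert name and the {{exclusive}} flag are illustrative only):

{code:xml}
<!-- broker.xml (sketch): divert forwarding messages from SOURCE to TARGET -->
<diverts>
   <divert name="source-to-target">
      <address>SOURCE</address>
      <forwarding-address>TARGET</forwarding-address>
      <exclusive>true</exclusive>
   </divert>
</diverts>
{code}

The retry itself is driven through the management API ({{QueueControl}}), as exercised by the attached QueueControlTest.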



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1642) Add log info to FileStoreMonitor

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346198#comment-16346198
 ] 

ASF GitHub Bot commented on ARTEMIS-1642:
-

Github user gaohoward commented on the issue:

https://github.com/apache/activemq-artemis/pull/1823
  
The test failure has nothing to do with the changes, and it passes in my 
local env. I'll kick off a new Jenkins run.


> Add log info to FileStoreMonitor
> 
>
> Key: ARTEMIS-1642
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1642
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.0
>
>
> Adding log info in case that an IOException is thrown from the underlying
> file system to provide information for debugging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1030) Support equivalent ActiveMQ 5.x Virtual Topic Naming Abilities

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346197#comment-16346197
 ] 

ASF GitHub Bot commented on ARTEMIS-1030:
-

Github user gaohoward commented on a diff in the pull request:

https://github.com/apache/activemq-artemis/pull/1815#discussion_r164944682
  
--- Diff: docs/user-manual/en/protocols-interoperability.md ---
@@ -215,9 +215,11 @@ The first is the 5.x style destination filter that 
identifies the destination as
 The second identifies the number of ```paths``` that identify the consumer 
queue such that it can be parsed from the
 destination.
 For example, the default 5.x virtual topic with consumer prefix of 
```Consumer.*.```, would require a
-```virtualTopicConsumerWildcards``` filter of:
+```virtualTopicConsumerWildcards``` filter of ```Consumer.*.>;2```. As url 
parameter this transforms to ```Consumer.*.%3E%3B2``` when
+the url significant characters ```;,``` are escaped with their hex code 
points. 
--- End diff --

should there be ```>;``` instead of ```;,```?
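
For reference, a hedged sketch of how the escaped parameter from the diff above might appear on an OpenWire acceptor (the acceptor name and port are illustrative):

{code:xml}
<!-- broker.xml (sketch): ">" and ";" escaped as %3E and %3B in the acceptor URL -->
<acceptor name="openwire">tcp://0.0.0.0:61616?protocols=OPENWIRE;virtualTopicConsumerWildcards=Consumer.*.%3E%3B2</acceptor>
{code}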


> Support equivalent ActiveMQ 5.x Virtual Topic Naming Abilities
> --
>
> Key: ARTEMIS-1030
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1030
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>Reporter: Martyn Taylor
>Assignee: Gary Tully
>Priority: Major
> Fix For: 2.5.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1632) Upgrade JBoss logging to 3.3.1.Final

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346191#comment-16346191
 ] 

ASF GitHub Bot commented on ARTEMIS-1632:
-

Github user asfgit closed the pull request at:

https://github.com/apache/activemq-artemis/pull/1814


> Upgrade JBoss logging to 3.3.1.Final
> 
>
> Key: ARTEMIS-1632
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1632
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Dejan Bosanac
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1632) Upgrade JBoss logging to 3.3.1.Final

2018-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346190#comment-16346190
 ] 

ASF subversion and git services commented on ARTEMIS-1632:
--

Commit 23fa91cd0c913c0e25b43eb28098cd6c7b9a6085 in activemq-artemis's branch 
refs/heads/master from [~dejanb]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=23fa91c ]

ARTEMIS-1632 Upgrade JBoss logging to 3.3.1.Final
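
For reference, the upgraded dependency corresponds to these Maven coordinates (a sketch for consumers of the library; the actual pom changes live in the commit above):

{code:xml}
<dependency>
   <groupId>org.jboss.logging</groupId>
   <artifactId>jboss-logging</artifactId>
   <version>3.3.1.Final</version>
</dependency>
{code}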


> Upgrade JBoss logging to 3.3.1.Final
> 
>
> Key: ARTEMIS-1632
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1632
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Affects Versions: 2.4.0
>Reporter: Dejan Bosanac
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1643) Compaction must check against NULL records while replaying

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346168#comment-16346168
 ] 

ASF GitHub Bot commented on ARTEMIS-1643:
-

Github user asfgit closed the pull request at:

https://github.com/apache/activemq-artemis/pull/1825


> Compaction must check against NULL records while replaying
> --
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalCompactor could throw AMQ142028: Error replaying pending commands 
> after compacting: java.lang.NullPointerException under huge load while 
> replaying because UpdateCompactCommand isn't checking against null 
> JournalRecords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1643) Compaction must check against NULL records while replaying

2018-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346166#comment-16346166
 ] 

ASF subversion and git services commented on ARTEMIS-1643:
--

Commit 78a2e3a8f06135ab090f149703e4f2d8e85ee556 in activemq-artemis's branch 
refs/heads/master from [~nigro@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=78a2e3a ]

ARTEMIS-1643 Compaction must check against NULL records while replaying

JournalCompactor.UpdateCompactCommand::execute now checks whether updateRecord is 
null, to avoid AMQ142028 being thrown on replay under huge load.
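
A minimal sketch of the kind of null guard described (illustrative only, not the actual Artemis compactor code):

{code:java}
// Sketch: skip replaying an update whose journal record is no longer present,
// instead of dereferencing it and triggering AMQ142028 (NullPointerException).
// "Object" here stands in for the compactor's per-id journal record type.
void replayUpdate(long id, java.util.Map<Long, Object> newRecords) {
   Object updateRecord = newRecords.get(id);
   if (updateRecord == null) {
      // the record was removed by compaction; there is nothing to update
      return;
   }
   // ... apply the pending update to updateRecord ...
}
{code}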


> Compaction must check against NULL records while replaying
> --
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalCompactor could throw AMQ142028: Error replaying pending commands 
> after compacting: java.lang.NullPointerException under huge load while 
> replaying because UpdateCompactCommand isn't checking against null 
> JournalRecords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1639) HornetQClientProtocolManager sending unsupported packet

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346129#comment-16346129
 ] 

ASF GitHub Bot commented on ARTEMIS-1639:
-

Github user gaohoward commented on the issue:

https://github.com/apache/activemq-artemis/pull/1819
  
@clebertsuconic Forgot to mention that I fixed an issue where the connector 
configuration passed from the HornetQ server contains HornetQ's Netty factory class 
name, which is not available in Artemis; I added a 'translation' to convert it 
to the Artemis Netty factory class.
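
A hedged sketch of the kind of class-name 'translation' described (illustrative only; the two class names are the well-known HornetQ and Artemis Netty connector factories, while the helper class itself is hypothetical):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Sketch: map the HornetQ Netty connector factory class name, which does not
// exist in Artemis, onto the equivalent Artemis class name.
public class ConnectorFactoryTranslation {
   private static final Map<String, String> TRANSLATIONS = new HashMap<>();
   static {
      TRANSLATIONS.put(
         "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory",
         "org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory");
   }

   public static String translate(String factoryClassName) {
      return TRANSLATIONS.getOrDefault(factoryClassName, factoryClassName);
   }
}
{code}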


> HornetQClientProtocolManager sending unsupported packet
> ---
>
> Key: ARTEMIS-1639
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1639
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.0
>
>
> HornetQClientProtocolManager is used to connect to HornetQ servers. During 
> reconnect, it sends a CheckFailoverMessage packet to the server as part of 
> reconnection. This packet is not supported by HornetQ servers (existing 
> releases), so it breaks backward compatibility.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1639) HornetQClientProtocolManager sending unsupported packet

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346122#comment-16346122
 ] 

ASF GitHub Bot commented on ARTEMIS-1639:
-

Github user gaohoward commented on the issue:

https://github.com/apache/activemq-artemis/pull/1819
  
@clebertsuconic Hi Clebert, I added a new test to test the Artemis client 
failover against a HornetQ server.
The setup is a live and a backup HornetQ server; stop the live and the 
client fails over. 
Can you take a look?
Thanks



> HornetQClientProtocolManager sending unsupported packet
> ---
>
> Key: ARTEMIS-1639
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1639
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.0
>
>
> HornetQClientProtocolManager is used to connect to HornetQ servers. During 
> reconnect, it sends a CheckFailoverMessage packet to the server as part of 
> reconnection. This packet is not supported by HornetQ servers (existing 
> releases), so it breaks backward compatibility.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1644) Legacy clients can't access addresses/queues explicitly configured with "jms.queue." and "jms.topic." prefixes

2018-01-30 Thread Justin Bertram (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-1644:

Description: 
There is logic in the broker to detect legacy clients (i.e. from Artemis 1.5.x 
and HornetQ) which will:
 * automatically set anycastPrefix and multicastPrefix to "jms.queue." and 
"jms.topic." respectively
 * automatically convert queue/address names in network packets

In general this works perfectly for legacy clients.  However, if there are 
addresses or queues on the broker explicitly configured with either 
"jms.queue." or "jms.topic." then these legacy clients will not be able to 
access them.
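
For illustration, a hedged sketch of the prefixes involved, configured explicitly on an acceptor (these are the same values the broker applies automatically when it detects a legacy client; the acceptor name and port are illustrative):

{code:xml}
<acceptor name="legacy">tcp://0.0.0.0:61616?protocols=CORE;anycastPrefix=jms.queue.;multicastPrefix=jms.topic.</acceptor>
{code}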

> Legacy clients can't access addresses/queues explicitly configured with 
> "jms.queue." and "jms.topic." prefixes
> --
>
> Key: ARTEMIS-1644
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1644
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Justin Bertram
>Assignee: Justin Bertram
>Priority: Major
>
> There is logic in the broker to detect legacy clients (i.e. from Artemis 
> 1.5.x and HornetQ) which will:
>  * automatically set anycastPrefix and multicastPrefix to "jms.queue." and 
> "jms.topic." respectively
>  * automatically convert queue/address names in network packets
> In general this works perfectly for legacy clients.  However, if there are 
> addresses or queues on the broker explicitly configured with either 
> "jms.queue." or "jms.topic." then these legacy clients will not be able to 
> access them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1644) Legacy clients can't access addresses/queues explicitly configured with "jms.queue." and "jms.topic." prefixes

2018-01-30 Thread Justin Bertram (JIRA)
Justin Bertram created ARTEMIS-1644:
---

 Summary: Legacy clients can't access addresses/queues explicitly 
configured with "jms.queue." and "jms.topic." prefixes
 Key: ARTEMIS-1644
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1644
 Project: ActiveMQ Artemis
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Justin Bertram
Assignee: Justin Bertram






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-550) Add support for virtual topic consumers

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345473#comment-16345473
 ] 

ASF GitHub Bot commented on ARTEMIS-550:


Github user mattrpav commented on the issue:

https://github.com/apache/activemq-artemis/pull/1820
  
Will do


> Add support for virtual topic consumers
> ---
>
> Key: ARTEMIS-550
> URL: https://issues.apache.org/jira/browse/ARTEMIS-550
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 1.3.0
>Reporter: Benjamin Graf
>Assignee: Martyn Taylor
>Priority: Major
> Attachments: image-2018-01-26-09-02-08-192.png
>
>
> Artemis should support virtual topic consumers as alternative to topic 
> subscriptions as ActiveMQ itself does.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1643) Compaction must check against NULL records while replaying

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345472#comment-16345472
 ] 

ASF GitHub Bot commented on ARTEMIS-1643:
-

GitHub user franz1981 opened a pull request:

https://github.com/apache/activemq-artemis/pull/1825

ARTEMIS-1643 Compaction must check against NULL records while replaying

JournalCompactor.UpdateCompactCommand::execute now checks whether updateRecord 
is null, to avoid AMQ142028 being thrown on replay under huge load.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/franz1981/activemq-artemis npe_compact

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/1825.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1825


commit 5cbab9b01ae598403ac0294d90d718de0db1d114
Author: Francesco Nigro 
Date:   2018-01-30T17:18:07Z

ARTEMIS-1643 Compaction must check against NULL records while replaying

JournalCompactor.UpdateCompactCommand::execute now checks whether updateRecord 
is null, to avoid AMQ142028 being thrown on replay under huge load.




> Compaction must check against NULL records while replaying
> --
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalCompactor could throw AMQ142028: Error replaying pending commands 
> after compacting: java.lang.NullPointerException under huge load while 
> replaying because UpdateCompactCommand isn't checking against null 
> JournalRecords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1643) Compaction must check against NULL records while replaying

2018-01-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1643:
-
Summary: Compaction must check against NULL records while replaying  (was: 
AMQ142028: Error replaying pending commands after compacting: 
java.lang.NullPointerException)

> Compaction must check against NULL records while replaying
> --
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalCompactor could throw AMQ142028: Error replaying pending commands 
> after compacting: java.lang.NullPointerException under huge load while 
> replaying because UpdateCompactCommand isn't checking against null 
> JournalRecords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1643) AMQ142028: Error replaying pending commands after compacting: java.lang.NullPointerException

2018-01-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1643:
-
Description: JournalCompactor could throw AMQ142028: Error replaying 
pending commands after compacting: java.lang.NullPointerException under huge 
load while replaying because UpdateCompactCommand isn't checking against null 
JournalRecords.  (was: Under huge load the compactor could throw )

> AMQ142028: Error replaying pending commands after compacting: 
> java.lang.NullPointerException
> 
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> JournalCompactor could throw AMQ142028: Error replaying pending commands 
> after compacting: java.lang.NullPointerException under huge load while 
> replaying because UpdateCompactCommand isn't checking against null 
> JournalRecords.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1643) AMQ142028: Error replaying pending commands after compacting: java.lang.NullPointerException

2018-01-30 Thread Francesco Nigro (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated ARTEMIS-1643:
-
Description: Under huge load the compactor could throw 

> AMQ142028: Error replaying pending commands after compacting: 
> java.lang.NullPointerException
> 
>
> Key: ARTEMIS-1643
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> Under huge load the compactor could throw 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1643) AMQ142028: Error replaying pending commands after compacting: java.lang.NullPointerException

2018-01-30 Thread Francesco Nigro (JIRA)
Francesco Nigro created ARTEMIS-1643:


 Summary: AMQ142028: Error replaying pending commands after 
compacting: java.lang.NullPointerException
 Key: ARTEMIS-1643
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1643
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Francesco Nigro
Assignee: Francesco Nigro






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1587) Add setting to control the queue durable property for auto-created addresses

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345372#comment-16345372
 ] 

ASF GitHub Bot commented on ARTEMIS-1587:
-

Github user stanlyDoge commented on the issue:

https://github.com/apache/activemq-artemis/pull/1775
  
Can I close this PR?


> Add setting to control the queue durable property for auto-created addresses
> 
>
> Key: ARTEMIS-1587
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1587
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Johan Stenberg
>Assignee: Martyn Taylor
>Priority: Major
>
> When pre-defining queues in the broker.xml the durable property can be 
> specified. Auto-created queues are currently always durable. It would be 
> useful to extend the AddressSettings so that default queue durability for 
> auto-created queues can be specified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMQ-6894) Excessive number of connections by failover transport with priorityBackup

2018-01-30 Thread Andrei Shakirin (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Shakirin updated AMQ-6894:
-
Description: 
My clients connect to AMQ with this connection string:

(tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true

 It works - for some time. But sooner or later my AMQ server becomes 
unresponsive because the host it runs on runs out of resources (threads).

Suddenly the AMQ server log explodes with messages like:

{code}
2018-01-26 09:26:16,909 | WARN  | Failed to register MBean 
org.apache.activemq:type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connectorName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-37087-1516883370639-0_22 
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ 
Transport: tcp:///172.10.7.56:55548@61616

2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; result 
of failover with an outstanding ack. Acked messages will be replayed if present 
on this broker. Ignored ack: MessageAck \{commandId = 157, responseRequired = 
false, ackType = 2, consumerId = ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, 
firstMessageId = ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId 
= ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
queue://MY_QUEUE_OUT, transactionId = null, messageCount = 1, poisonCause = 
null} | org.apache.activemq.broker.region.PrefetchSubscription | ActiveMQ 
Transport: tcp:///172.16.6.56:55464@61616

2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
tcp://172.10.6.56:55860 failed: java.net.SocketException: Connection reset | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
tcp://172.10.6.56:57012 failed: java.net.SocketException: Broken pipe (Write 
failed) | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker
{code}

After a short period of time the AMQ server runs out of resources with a 
"java.lang.OutOfMemoryError: unable to create new native thread" error. The AMQ 
service process in this case has a huge number of threads (several thousand).

 

The client-side log contains a lot of reconnection-attempt messages like:

{code}
2018-01-26 00:10:31,387 WARN    
[\{{bundle.name,org.apache.activemq.activemq-osgi}{bundle.version,5.14.1}\{bundle.id,181}}]
 [null]  org.apache.activemq.transport.failover.FailoverTransport  
Failed to connect to [tcp://activemq-vm-primary:61616, 
tcp://activemq-vm-secondary:61616] after: 810 attempt(s) continuing to retry.
{code}

It seems that the client creates a huge number of connections through failover 
retries and after some time kills the server.

The issue looks very similar to the one described in 
https://issues.apache.org/jira/browse/AMQ-6603, however the server isn't configured 
with access control settings.

I found a description of a similar problem at 
[http://activemq.2283324.n4.nabble.com/ActiveMQ-5-2-OutOfMemoryError-unable-to-create-new-native-thread-td2366585.html],
 but without a concrete suggestion.

 

Part of the server log is attached.

  was:
My clients connect to AMQ with this connection string:

(tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true

 It works - for some time. But sooner or later my AMQ server becomes 
unresponsive because the host it runs on runs out of resources (threads).

Suddenly AMQ Server log explodes with the messages like:

{code}
2018-01-26 09:26:16,909 | WARN  | Failed to register MBean org.apache.activemq 
:type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connect

orName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-3708

7-1516883370639-0_22 | org.apache.activemq.broker.jmx.ManagedTransportConnecti

on | ActiveMQ Transport: tcp:///172.16.6.56:55548@61616

2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; result 
of failover with an outstanding ack. Acked messages will be replayed if present 
on this broker. Ignored ack: MessageAck \{commandId = 157, responseRequired = 
false, ackType = 2, consumerId = ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, 
firstMessageId = ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId 
= ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
queue://Q.CHECKOUT.AUFTRAG_OUT, transactionId = null, messageCount = 1, 
poisonCause = null} | org.apache.activemq.broker.region.PrefetchSubscription | 
ActiveMQ Transport: tcp:///172.16.6.56:55464@61616

2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
tcp://172.16.6.56:55860 failed: java.net.SocketException: Connection reset | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
tcp://172.16.6.56:57012 failed: java.net.SocketException: Broken pipe 

[jira] [Updated] (AMQ-6894) Excessive number of connections by failover transport with priorityBackup

2018-01-30 Thread Andrei Shakirin (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Shakirin updated AMQ-6894:
-
Attachment: activemq-part.zip

> Excessive number of connections by failover transport with priorityBackup
> -
>
> Key: AMQ-6894
> URL: https://issues.apache.org/jira/browse/AMQ-6894
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.14.5
>Reporter: Andrei Shakirin
>Priority: Major
> Attachments: activemq-part.zip
>
>
> My clients connect to AMQ with this connection string:
> (tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true
>  It works - for some time. But sooner or later my AMQ server becomes 
> unresponsive because the host it runs on runs out of resources (threads).
> Suddenly the AMQ server log explodes with messages like:
> {code}
> 2018-01-26 09:26:16,909 | WARN  | Failed to register MBean 
> org.apache.activemq 
> :type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connect
> orName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-3708
> 7-1516883370639-0_22 | org.apache.activemq.broker.jmx.ManagedTransportConnecti
> on | ActiveMQ Transport: tcp:///172.16.6.56:55548@61616
> 2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; 
> result of failover with an outstanding ack. Acked messages will be replayed 
> if present on this broker. Ignored ack: MessageAck \{commandId = 157, 
> responseRequired = false, ackType = 2, consumerId = 
> ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, firstMessageId = 
> ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId = 
> ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
> queue://Q.CHECKOUT.AUFTRAG_OUT, transactionId = null, messageCount = 1, 
> poisonCause = null} | org.apache.activemq.broker.region.PrefetchSubscription 
> | ActiveMQ Transport: tcp:///172.16.6.56:55464@61616
> 2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
> tcp://172.16.6.56:55860 failed: java.net.SocketException: Connection reset | 
> org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
> InactivityMonitor Worker
> 2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
> tcp://172.16.6.56:57012 failed: java.net.SocketException: Broken pipe (Write 
> failed) | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
> InactivityMonitor Worker
> {code}
> After a short period of time the AMQ server runs out of resources with a 
> "java.lang.OutOfMemoryError: unable to create new native thread" error. The 
> AMQ service process in this case has a huge number of threads (several thousand).
>  
> The client-side log contains a lot of reconnection-attempt messages like:
> {code}
> 2018-01-26 00:10:31,387 WARN    
> [\{{bundle.name,org.apache.activemq.activemq-osgi}{bundle.version,5.14.1}\{bundle.id,181}}]
>  [null]  org.apache.activemq.transport.failover.FailoverTransport  
> Failed to connect to [tcp://activemq-vm-primary:61616, 
> tcp://activemq-vm-secondary:61616] after: 810 attempt(s) continuing to retry.
> {code}
> It seems that the client creates a huge number of connections through failover 
> retries and after some time kills the server.
> The issue looks very similar to the one described in 
> https://issues.apache.org/jira/browse/AMQ-6603, however the server isn't 
> configured with access control settings.
> I found a description of a similar problem at 
> [http://activemq.2283324.n4.nabble.com/ActiveMQ-5-2-OutOfMemoryError-unable-to-create-new-native-thread-td2366585.html],
>  but without a concrete suggestion.
>  
> Part of the server log is attached.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (AMQ-6894) Excessive number of connections by failover transport with priorityBackup

2018-01-30 Thread Andrei Shakirin (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Shakirin updated AMQ-6894:
-
Description: 
My clients connect to AMQ with this connection string:

(tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true

 It works - for some time. But sooner or later my AMQ server becomes 
unresponsive because the host it runs on runs out of resources (threads).

Suddenly the AMQ server log explodes with messages like:

{code}
2018-01-26 09:26:16,909 | WARN  | Failed to register MBean 
org.apache.activemq:type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connectorName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-37087-1516883370639-0_22 
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ 
Transport: tcp:///172.16.6.56:55548@61616

2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; result 
of failover with an outstanding ack. Acked messages will be replayed if present 
on this broker. Ignored ack: MessageAck \{commandId = 157, responseRequired = 
false, ackType = 2, consumerId = ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, 
firstMessageId = ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId 
= ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
queue://Q.CHECKOUT.AUFTRAG_OUT, transactionId = null, messageCount = 1, 
poisonCause = null} | org.apache.activemq.broker.region.PrefetchSubscription | 
ActiveMQ Transport: tcp:///172.16.6.56:55464@61616

2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
tcp://172.16.6.56:55860 failed: java.net.SocketException: Connection reset | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
tcp://172.16.6.56:57012 failed: java.net.SocketException: Broken pipe (Write 
failed) | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker
{code}

After a short period of time the AMQ server runs out of resources with a 
"java.lang.OutOfMemoryError: unable to create new native thread" error. The AMQ 
service process in this case has a huge number of threads (several thousand).

 

The client-side log contains a lot of reconnection-attempt messages like:

{code}
2018-01-26 00:10:31,387 WARN    
[\{{bundle.name,org.apache.activemq.activemq-osgi}{bundle.version,5.14.1}\{bundle.id,181}}]
 [null]  org.apache.activemq.transport.failover.FailoverTransport  
Failed to connect to [tcp://activemq-vm-primary:61616, 
tcp://activemq-vm-secondary:61616] after: 810 attempt(s) continuing to retry.
{code}

It seems that the client creates a huge number of connections through failover 
retries and after some time kills the server.

The issue looks very similar to the one described in 
https://issues.apache.org/jira/browse/AMQ-6603, however the server isn't configured 
with access control settings.

I found a description of a similar problem at 
[http://activemq.2283324.n4.nabble.com/ActiveMQ-5-2-OutOfMemoryError-unable-to-create-new-native-thread-td2366585.html],
 but without a concrete suggestion.

 

Part of the server log is attached.

  was:
My clients connect to AMQ with this connection string:

(tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true

 It works - for some time. But sooner or later my AMQ server becomes 
unresponsive because the host it runs on runs out of resources (threads).

Suddenly AMQ Server log explodes with the messages like:

 \{code}

2018-01-26 09:26:16,909 | WARN  | Failed to register MBean org.apache.activemq 
:type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connect

orName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-3708

7-1516883370639-0_22 | org.apache.activemq.broker.jmx.ManagedTransportConnecti

on | ActiveMQ Transport: tcp:///172.16.6.56:55548@61616

 

2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; result 
of failover with an outstanding ack. Acked messages will be replayed if present 
on this broker. Ignored ack: MessageAck \{commandId = 157, responseRequired = 
false, ackType = 2, consumerId = ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, 
firstMessageId = ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId 
= ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
queue://Q.CHECKOUT.AUFTRAG_OUT, transactionId = null, messageCount = 1, 
poisonCause = null} | org.apache.activemq.broker.region.PrefetchSubscription | 
ActiveMQ Transport: tcp:///172.16.6.56:55464@61616

 

2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
tcp://172.16.6.56:55860 failed: java.net.SocketException: Connection reset | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

 

2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
tcp://172.16.6.56:57012 failed: java.net.SocketE

[jira] [Created] (AMQ-6894) Excessive number of connections by failover transport with priorityBackup

2018-01-30 Thread Andrei Shakirin (JIRA)
Andrei Shakirin created AMQ-6894:


 Summary: Excessive number of connections by failover transport 
with priorityBackup
 Key: AMQ-6894
 URL: https://issues.apache.org/jira/browse/AMQ-6894
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.14.5
Reporter: Andrei Shakirin


My clients connect to AMQ with this connection string:

(tcp://amq1:61616,tcp://amq2:61616)?randomize=false&priorityBackup=true

 It works - for some time. But sooner or later my AMQ server becomes 
unresponsive because the host it runs on runs out of resources (threads).

Suddenly the AMQ server log explodes with messages like:

{code}

2018-01-26 09:26:16,909 | WARN  | Failed to register MBean 
org.apache.activemq:type=Broker,brokerName=activemq-vm-primary,connector=clientConnectors,connectorName=default,connectionViewType=clientId,connectionName=ID_ca8f70e115d0-37087-1516883370639-0_22 
| org.apache.activemq.broker.jmx.ManagedTransportConnection | ActiveMQ 
Transport: tcp:///172.16.6.56:55548@61616

 

2018-01-26 09:26:21,375 | WARN  | Ignoring ack received before dispatch; result 
of failover with an outstanding ack. Acked messages will be replayed if present 
on this broker. Ignored ack: MessageAck \{commandId = 157, responseRequired = 
false, ackType = 2, consumerId = ID:ca8f70e115d0-37087-1516883370639-1:22:10:1, 
firstMessageId = ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, lastMessageId 
= ID:a95345a9c0df-33771-1516883685728-1:17:5:1:23, destination = 
queue://Q.CHECKOUT.AUFTRAG_OUT, transactionId = null, messageCount = 1, 
poisonCause = null} | org.apache.activemq.broker.region.PrefetchSubscription | 
ActiveMQ Transport: tcp:///172.16.6.56:55464@61616

 

2018-01-26 09:26:39,211 | WARN  | Transport Connection to: 
tcp://172.16.6.56:55860 failed: java.net.SocketException: Connection reset | 
org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

 

2018-01-26 09:26:47,175 | WARN  | Transport Connection to: 
tcp://172.16.6.56:57012 failed: java.net.SocketException: Broken pipe (Write 
failed) | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ 
InactivityMonitor Worker

{code}

After a short period of time the AMQ server runs out of resources with a 
"java.lang.OutOfMemoryError: unable to create new native thread" error. The AMQ 
service process in this case has a huge number of threads (several thousand).

 

The client-side log contains a lot of reconnection-attempt messages like:

{code}

2018-01-26 00:10:31,387 WARN    
[\{{bundle.name,org.apache.activemq.activemq-osgi}{bundle.version,5.14.1}\{bundle.id,181}}]
 [null]  org.apache.activemq.transport.failover.FailoverTransport  
Failed to connect to [tcp://activemq-vm-primary:61616, 
tcp://activemq-vm-secondary:61616] after: 810 attempt(s) continuing to retry.

{code}

It seems that the client creates a huge number of connections through failover 
retries and after some time kills the server.

The issue looks very similar to the one described in 
https://issues.apache.org/jira/browse/AMQ-6603, however the server isn't configured 
with access control settings.

I found a description of a similar problem at 
[http://activemq.2283324.n4.nabble.com/ActiveMQ-5-2-OutOfMemoryError-unable-to-create-new-native-thread-td2366585.html],
 but without a concrete suggestion.

 

Part of the server log is attached.
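
For illustration, a minimal client sketch using the reported connection string with the failover transport (host names as in the report; {{maxReconnectAttempts}} is added here only to show how reconnect attempts can be bounded):

{code:java}
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
   public static void main(String[] args) throws Exception {
      // failover transport with a fixed broker order and priority backup
      ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
         "failover:(tcp://amq1:61616,tcp://amq2:61616)"
            + "?randomize=false&priorityBackup=true&maxReconnectAttempts=10");
      Connection connection = factory.createConnection();
      connection.start();
      // ... create sessions, producers and consumers here ...
      connection.close();
   }
}
{code}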



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1642) Add log info to FileStoreMonitor

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345035#comment-16345035
 ] 

ASF GitHub Bot commented on ARTEMIS-1642:
-

GitHub user gaohoward opened a pull request:

https://github.com/apache/activemq-artemis/pull/1823

ARTEMIS-1642 Add log info to FileStoreMonitor

Adding log info in case that an IOException is thrown from
the underlying file system to provide information for debugging.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gaohoward/activemq-artemis gent1015

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq-artemis/pull/1823.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1823


commit 66c33f0625e0b664f5ce331e513871cfaa520121
Author: Howard Gao 
Date:   2018-01-30T13:13:50Z

ARTEMIS-1642 Add log info to FileStoreMonitor

Adding log info in case that an IOException is thrown from
the underlying file system to provide information for debugging.




> Add log info to FileStoreMonitor
> 
>
> Key: ARTEMIS-1642
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1642
> Project: ActiveMQ Artemis
>  Issue Type: Task
>  Components: Broker
>Affects Versions: 2.4.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.0
>
>
> Adding log info in case that an IOException is thrown from the underlying
> file system to provide information for debugging.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1642) Add log info to FileStoreMonitor

2018-01-30 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1642:
---

 Summary: Add log info to FileStoreMonitor
 Key: ARTEMIS-1642
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1642
 Project: ActiveMQ Artemis
  Issue Type: Task
  Components: Broker
Affects Versions: 2.4.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.0


Adding log info in case an IOException is thrown from the underlying 
file system, to provide information for debugging.
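
A minimal sketch of the logging pattern described (illustrative only, not the actual FileStoreMonitor code):

{code:java}
import java.io.IOException;
import java.nio.file.FileStore;
import org.jboss.logging.Logger;

public class DiskUsageCheck {
   private static final Logger logger = Logger.getLogger(DiskUsageCheck.class);

   // Returns the used fraction of the file store, logging any IOException from
   // the underlying file system so the failure can be diagnosed.
   static double usedFraction(FileStore store) {
      try {
         long total = store.getTotalSpace();
         long usable = store.getUsableSpace();
         return total == 0 ? 0d : 1d - ((double) usable / (double) total);
      } catch (IOException e) {
         logger.warn("Error checking file store " + store.name() + ": " + e.getMessage(), e);
         return 0d;
      }
   }
}
{code}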



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1640) JDBC NodeManager tests have to be customizable to run on different DBMS

2018-01-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345012#comment-16345012
 ] 

ASF GitHub Bot commented on ARTEMIS-1640:
-

Github user asfgit closed the pull request at:

https://github.com/apache/activemq-artemis/pull/1821


> JDBC NodeManager tests have to be customizable to run on different DBMS
> ---
>
> Key: ARTEMIS-1640
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1640
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> NettyFailoverTest and JdbcLeaseLockTest can be made configurable in order to 
> run on different DBMS, like other ActiveMQTestBase tests already do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1640) JDBC NodeManager tests have to be customizable to run on different DBMS

2018-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345011#comment-16345011
 ] 

ASF subversion and git services commented on ARTEMIS-1640:
--

Commit 52e594d21890824d526d57e8b2e5bbbd1aeb7162 in activemq-artemis's branch 
refs/heads/master from [~nigro@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=52e594d ]

ARTEMIS-1640 JDBC NodeManager tests have to be customizable to run on different 
DBMS

ActiveMQTestBase has been enhanced to expose the database storage configuration 
and to add specific JDBC HA configuration properties.
JdbcLeaseLockTest and NettyFailoverTests have been changed in order to make use 
of the JDBC configuration provided by ActiveMQTestBase.
JdbcNodeManager has been made restartable to allow failover tests to reuse it 
after a failover.
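
A hedged illustration of the kind of configurability described: resolving JDBC settings from system properties so the same tests can run against a different DBMS (the property names and defaults here are hypothetical, not necessarily the ones used by ActiveMQTestBase):

{code:java}
// Hypothetical property names, shown only to illustrate pointing the tests at
// another DBMS without code changes; an in-memory Derby URL is a common test default.
public class JdbcTestConfig {
   public static String jdbcUrl() {
      return System.getProperty("jdbc.connection.url", "jdbc:derby:memory:test;create=true");
   }

   public static String jdbcDriverClass() {
      return System.getProperty("jdbc.driver.class", "org.apache.derby.jdbc.EmbeddedDriver");
   }
}
{code}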


> JDBC NodeManager tests have to be customizable to run on different DBMS
> ---
>
> Key: ARTEMIS-1640
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1640
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Assignee: Francesco Nigro
>Priority: Major
>
> NettyFailoverTest and JdbcLeaseLockTest can be made configurable in order to 
> run on different DBMS, like other ActiveMQTestBase tests already do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (AMQ-6891) Duplicated message in JMS transaction, when jdbc persistence fails (Memory leak on Queue)

2018-01-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344868#comment-16344868
 ] 

ASF subversion and git services commented on AMQ-6891:
--

Commit dd2572bcb1c3793a8a2fa19cc4fc88cc8481f96e in activemq's branch 
refs/heads/master from [~gtully]
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=dd2572b ]

[AMQ-6891] test and fix non tx variant of this leak


> Duplicated message in JMS transaction, when jdbc persistence fails (Memory 
> leak on Queue)
> -
>
> Key: AMQ-6891
> URL: https://issues.apache.org/jira/browse/AMQ-6891
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.2
>Reporter: Radek Kraus
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: JmsTransactionCommitFailureTest.java
>
>
> I have the following scenario (see attached test case):
>  # Send 1 message in a JMS transaction
>  # Enable database problem simulation (throw {{SQLException}} in the 
> {{TransactionContext.executeBatch()}} method - a similar situation should 
> happen when a commit fails)
>  # Attempt to send 2 messages in one JMS transaction; the send operation fails 
> as expected (only 1 message is in the database, from the first send operation)
>  # Disable database problem simulation ({{SQLException}} is not thrown from 
> now on)
>  # Repeat the attempt to send "the same" 2 messages in one JMS transaction; 
> the send operation is now successful, as expected (3 messages are in the database)
>  # Attempt to receive the 3 messages 1, 2, 3, but 5 messages are received: 1, 2, 
> 3, 2, 3.
> I suspect that the problem is in 
> {{org.apache.activemq.broker.region.Queue}}. It seems that the reason is the 
> {{indexOrderedCursorUpdates}} list. The {{Queue.onAdd(MessageContext)}} 
> method is invoked for each message by the 
> {{JDBCMessageStore.addMessage(ConnectionContext, Message)}} method, which 
> adds the {{MessageContext}} into this list. The added {{MessageContext}} is 
> processed (and removed) in the {{Queue.doPendingCursorAdditions()}} method, which 
> is invoked only from the "afterCommit synchronization" 
> ({{Queue.CursorAddSync.afterCommit()}} method). But when the commit operation 
> fails, the "afterCommit" method is not invoked (the {{afterRollback}} method 
> is invoked instead) and the {{MessageContext}} entries stay in the 
> {{indexOrderedCursorUpdates}} list.
> Personally I would expect that some "remove" operation should be done in the 
> {{Queue.CursorAddSync.afterRollback()}} method. Probably a similar operation 
> should be done in the {{Queue.doMessageSend()}} method at the place where the 
> {{Exception}} from "addMessage" is handled when no JMS transaction is 
> used. Or some different "completion" operation should be introduced, 
> because the {{MessageContext}} is only added into the list but not removed in 
> case of failure.
> When I tried to register (and use) the {{LeaseLockerIOExceptionHandler}} 
> IOExceptionHandler, the transports were successfully restarted, but my 
> "client" code was blocked in the {{ActiveMQSession.commit()}} method. Is that 
> expected behavior?
> When I tried to add the following code into 
> {{Queue.CursorAddSync.afterRollback()}}, I received only the 3 expected messages 
> (when a JMS transaction is used), but it was only a blind shot, sorry, because I 
> don't understand the whole logic here.
> {code:java}
> @Override
> public void afterRollback() throws Exception {
>   synchronized (indexOrderedCursorUpdates) {
>     for (int i = indexOrderedCursorUpdates.size() - 1; i >= 0; i--) {
>       MessageContext mc = indexOrderedCursorUpdates.get(i);
>       if (mc.message.getMessageId().equals(messageContext.message.getMessageId())) {
>         indexOrderedCursorUpdates.remove(mc);
>         if (mc.onCompletion != null) {
>           mc.onCompletion.run();
>         }
>         break;
>       }
>     }
>   }
>   messageContext.message.decrementReferenceCount();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (AMQ-6891) Duplicated message in JMS transaction, when jdbc persistence fails (Memory leak on Queue)

2018-01-30 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-6891.
-
Resolution: Fixed

Fixed up the non-transacted path and reused the leak test, thanks.

> Duplicated message in JMS transaction, when jdbc persistence fails (Memory 
> leak on Queue)
> -
>
> Key: AMQ-6891
> URL: https://issues.apache.org/jira/browse/AMQ-6891
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.15.2
>Reporter: Radek Kraus
>Assignee: Gary Tully
>Priority: Major
> Fix For: 5.16.0
>
> Attachments: JmsTransactionCommitFailureTest.java
>
>
> I have the following scenario (see attached test case):
>  # Send 1 message in a JMS transaction
>  # Enable database problem simulation (throw {{SQLException}} in the 
> {{TransactionContext.executeBatch()}} method - a similar situation should 
> happen when a commit fails)
>  # Attempt to send 2 messages in one JMS transaction; the send operation fails 
> as expected (only 1 message is in the database, from the first send operation)
>  # Disable database problem simulation ({{SQLException}} is not thrown from 
> now on)
>  # Repeat the attempt to send "the same" 2 messages in one JMS transaction; 
> the send operation is now successful, as expected (3 messages are in the database)
>  # Attempt to receive the 3 messages 1, 2, 3, but 5 messages are received: 1, 2, 
> 3, 2, 3.
> I suspect that the problem is in 
> {{org.apache.activemq.broker.region.Queue}}. It seems that the reason is the 
> {{indexOrderedCursorUpdates}} list. The {{Queue.onAdd(MessageContext)}} 
> method is invoked for each message by the 
> {{JDBCMessageStore.addMessage(ConnectionContext, Message)}} method, which 
> adds the {{MessageContext}} into this list. The added {{MessageContext}} is 
> processed (and removed) in the {{Queue.doPendingCursorAdditions()}} method, which 
> is invoked only from the "afterCommit synchronization" 
> ({{Queue.CursorAddSync.afterCommit()}} method). But when the commit operation 
> fails, the "afterCommit" method is not invoked (the {{afterRollback}} method 
> is invoked instead) and the {{MessageContext}} entries stay in the 
> {{indexOrderedCursorUpdates}} list.
> Personally I would expect that some "remove" operation should be done in the 
> {{Queue.CursorAddSync.afterRollback()}} method. Probably a similar operation 
> should be done in the {{Queue.doMessageSend()}} method at the place where the 
> {{Exception}} from "addMessage" is handled when no JMS transaction is 
> used. Or some different "completion" operation should be introduced, 
> because the {{MessageContext}} is only added into the list but not removed in 
> case of failure.
> When I tried to register (and use) the {{LeaseLockerIOExceptionHandler}} 
> IOExceptionHandler, the transports were successfully restarted, but my 
> "client" code was blocked in the {{ActiveMQSession.commit()}} method. Is that 
> expected behavior?
> When I tried to add the following code into 
> {{Queue.CursorAddSync.afterRollback()}}, I received only the 3 expected messages 
> (when a JMS transaction is used), but it was only a blind shot, sorry, because I 
> don't understand the whole logic here.
> {code:java}
> @Override
> public void afterRollback() throws Exception {
>   synchronized (indexOrderedCursorUpdates) {
>     for (int i = indexOrderedCursorUpdates.size() - 1; i >= 0; i--) {
>       MessageContext mc = indexOrderedCursorUpdates.get(i);
>       if (mc.message.getMessageId().equals(messageContext.message.getMessageId())) {
>         indexOrderedCursorUpdates.remove(mc);
>         if (mc.onCompletion != null) {
>           mc.onCompletion.run();
>         }
>         break;
>       }
>     }
>   }
>   messageContext.message.decrementReferenceCount();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)