[jira] [Updated] (AMQ-5692) Inactivity monitor does not time out on stuck socket writes

2015-04-07 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5692:

Attachment: AMQ-5692.pl

  Inactivity monitor does not time out on stuck socket writes
 

 Key: AMQ-5692
 URL: https://issues.apache.org/jira/browse/AMQ-5692
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.11.1
Reporter: Torsten Mielke
  Labels: broker, inactivity
 Attachments: AMQ-5692.pl


 It is possible that a socket write is stuck but the inactivity monitor 
 currently does not time out on a socketWrite. 
 {code:title=AbstractInactivityMonitor.java}
 final void writeCheck() {
 if (inSend.get()) {
 LOG.trace("Send in progress. Skipping write check.");
 return;
 }
 {code}
 As a result, a connection that is stuck in a TCP write will never be taken 
 down due to inactivity. If a client misbehaves, the broker will not be able to 
 clear that connection as part of the inactivity monitoring.
 AMQ-2511 introduced a counter on readCheck() to detect whether an in-progress 
 socket read really retrieves data or is stuck. 
 I propose applying a similar mechanism to the writeCheck() operation, so that 
 a stuck socket write can be detected and the connection closed.





[jira] [Commented] (AMQ-5692) Inactivity monitor does not time out on stuck socket writes

2015-04-07 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14483067#comment-14483067
 ] 

Torsten Mielke commented on AMQ-5692:
-

Attached reproducer in AMQ-5692.pl. 
To reproduce the issue, follow these steps:

- Run a broker with a STOMP connector on port 61613 and this connector configuration:
{code:xml}
<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?transport.defaultHeartBeat=1,0&amp;transport.useKeepAlive=true&amp;trace=true"/>
{code}
- Run consumer 
{code}
perl ./AMQ-5692.pl consumer
{code}

- Run producer 
{code}
perl ./AMQ-5692.pl producer
{code}

After consuming around 8000 messages, the consumer hangs and won't report any 
more messages consumed. 
Take a broker thread dump and observe two stuck threads as in (1) below.
Despite the configured STOMP heart-beat (defaultHeartBeat=1,0), the connection 
does not time out after 10 seconds and no keep-alive is sent either. 

Have not managed to reproduce the issue with the ActiveMQ stomp client library. 

(1)
{code}
ActiveMQ BrokerService[localhost] Task-4 daemon prio=5 tid=0x7fe9ca087800 
nid=0x650b runnable [0x0001192d9000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.activemq.transport.tcp.TcpBufferedOutputStream.flush(TcpBufferedOutputStream.java:115)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at 
org.apache.activemq.transport.tcp.TcpTransport.oneway(TcpTransport.java:176)
at 
org.apache.activemq.transport.stomp.StompTransportFilter.sendToStomp(StompTransportFilter.java:98)
at 
org.apache.activemq.transport.stomp.StompSubscription.onMessageDispatch(StompSubscription.java:103)
at 
org.apache.activemq.transport.stomp.ProtocolConverter.onActiveMQCommand(ProtocolConverter.java:870)
at 
org.apache.activemq.transport.stomp.StompTransportFilter.oneway(StompTransportFilter.java:62)
at 
org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:304)
at 
org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:286)
at 
org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
at 
org.apache.activemq.broker.TransportConnection.dispatch(TransportConnection.java:1419)
at 
org.apache.activemq.broker.TransportConnection.processDispatch(TransportConnection.java:938)
at 
org.apache.activemq.broker.TransportConnection.iterate(TransportConnection.java:984)
at 
org.apache.activemq.thread.PooledTaskRunner.runTask(PooledTaskRunner.java:133)
at 
org.apache.activemq.thread.PooledTaskRunner$1.run(PooledTaskRunner.java:48)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

ActiveMQ Transport: tcp:///192.168.178.28:60902@61613 daemon prio=5 
tid=0x7fe9ca086800 nid=0x3b0f waiting on condition [0x000118f2d000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  0x0007c0ecc2f0 (a 
java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at 
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:43)
at 
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
at 
org.apache.activemq.transport.stomp.StompTransportFilter.sendToActiveMQ(StompTransportFilter.java:87)
at 
org.apache.activemq.transport.stomp.ProtocolConverter.sendToActiveMQ(ProtocolConverter.java:199)
at 
org.apache.activemq.transport.stomp.ProtocolConverter.onStompAck(ProtocolConverter.java:433)
at 
org.apache.activemq.transport.stomp.ProtocolConverter.onStompCommand(ProtocolConverter.java:247)
at 
org.apache.activemq.transport.stomp.StompTransportFilter.onCommand(StompTransportFilter.java:75)
at 

[jira] [Created] (AMQ-5692) Inactivity monitor does not time out on stuck socket writes

2015-03-27 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5692:
---

 Summary:  Inactivity monitor does not time out on stuck socket 
writes
 Key: AMQ-5692
 URL: https://issues.apache.org/jira/browse/AMQ-5692
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.11.1
Reporter: Torsten Mielke


It is possible that a socket write is stuck but the inactivity monitor 
currently does not time out on a socketWrite. 

{code:title=AbstractInactivityMonitor.java}
final void writeCheck() {
if (inSend.get()) {
LOG.trace("Send in progress. Skipping write check.");
return;
}
{code}

As a result, a connection that is stuck in a TCP write will never be taken down 
due to inactivity. If a client misbehaves, the broker will not be able to clear 
that connection as part of the inactivity monitoring.

AMQ-2511 introduced a counter on readCheck() to detect whether an in-progress 
socket read really retrieves data or is stuck. 
I propose applying a similar mechanism to the writeCheck() operation, so that a 
stuck socket write can be detected and the connection closed.
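
A minimal sketch of the kind of counter-based write check this proposes, modeled 
on the readCheck() counter from AMQ-2511. The class, field and threshold names 
below are illustrative assumptions, not the actual AbstractInactivityMonitor code:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch only -- not the real AbstractInactivityMonitor.
abstract class WriteCheckSketch {

    // illustrative threshold: how many consecutive checks may find a send still in progress
    private static final int MAX_STUCK_WRITE_CHECKS = 3;

    // in the real monitor this flag is toggled around the actual socket write
    protected final AtomicBoolean inSend = new AtomicBoolean(false);

    // counts consecutive write checks that found a send still in progress
    private final AtomicInteger stuckWriteChecks = new AtomicInteger(0);

    final void writeCheck() {
        if (inSend.get()) {
            // The send has not completed since the previous check(s); after a few
            // strikes, treat the socket write as stuck instead of silently skipping.
            if (stuckWriteChecks.incrementAndGet() >= MAX_STUCK_WRITE_CHECKS) {
                handleStuckWrite();
            }
            return;
        }
        stuckWriteChecks.set(0);
        // ... existing keep-alive / write-check logic would follow here ...
    }

    // e.g. could signal an InactivityIOException so the transport gets torn down,
    // mirroring what the read-check path already does on a stuck read
    abstract void handleStuckWrite();
}
{code}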









[jira] [Commented] (AMQ-5692) Inactivity monitor does not time out on stuck socket writes

2015-03-27 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383802#comment-14383802
 ] 

Torsten Mielke commented on AMQ-5692:
-

A current workaround is to configure transport.soWriteTimeout on the broker 
transport connector URL. It makes TCP writes time out, but this option is 
really independent of the inactivity monitor configuration and should not be 
required.
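
For reference, a hedged example of that workaround on a STOMP connector; the 
15000 ms value is purely illustrative:

{code:xml}
<!-- transport.soWriteTimeout (ms) makes a blocked socket write fail instead of hanging forever -->
<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?transport.soWriteTimeout=15000"/>
{code}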

  Inactivity monitor does not time out on stuck socket writes
 

 Key: AMQ-5692
 URL: https://issues.apache.org/jira/browse/AMQ-5692
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.11.1
Reporter: Torsten Mielke
  Labels: broker, inactivity

 It is possible that a socket write is stuck but the inactivity monitor 
 currently does not time out on a socketWrite. 
 {code:title=AbstractInactivityMonitor.java}
 final void writeCheck() {
 if (inSend.get()) {
 LOG.trace("Send in progress. Skipping write check.");
 return;
 }
 {code}
 As a result, a connection that is stuck in a TCP write will never be taken 
 down due to inactivity. If a client misbehaves, the broker will not be able to 
 clear that connection as part of the inactivity monitoring.
 AMQ-2511 introduced a counter on readCheck() to detect whether an in-progress 
 socket read really retrieves data or is stuck. 
 I propose applying a similar mechanism to the writeCheck() operation, so that 
 a stuck socket write can be detected and the connection closed.





[jira] [Commented] (AMQ-5683) [doc] No documentation of transport.soWriteTimeout

2015-03-23 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14376132#comment-14376132
 ] 

Torsten Mielke commented on AMQ-5683:
-

Resolved in [version 
34|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=55151565&navigatingVersions=true]
 of the TCP Transport Reference page 
http://activemq.apache.org/tcp-transport-reference.html.



 [doc] No documentation of transport.soWriteTimeout
 --

 Key: AMQ-5683
 URL: https://issues.apache.org/jira/browse/AMQ-5683
 Project: ActiveMQ
  Issue Type: Bug
  Components: Documentation
Affects Versions: 5.11.1
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: documentation, transport

 There is an undocumented transport.soWriteTimeout option. 
 It needs to be documented.





[jira] [Created] (AMQ-5683) [doc] No documentation of transport.soWriteTimeout

2015-03-23 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5683:
---

 Summary: [doc] No documentation of transport.soWriteTimeout
 Key: AMQ-5683
 URL: https://issues.apache.org/jira/browse/AMQ-5683
 Project: ActiveMQ
  Issue Type: Bug
  Components: Documentation
Affects Versions: 5.11.1
Reporter: Torsten Mielke
Assignee: Torsten Mielke


There is an undocumented transport.soWriteTimeout option. 
It needs to be documented.






[jira] [Resolved] (AMQ-5683) [doc] No documentation of transport.soWriteTimeout

2015-03-23 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-5683.
-
   Resolution: Fixed
Fix Version/s: 5.11.1

 [doc] No documentation of transport.soWriteTimeout
 --

 Key: AMQ-5683
 URL: https://issues.apache.org/jira/browse/AMQ-5683
 Project: ActiveMQ
  Issue Type: Bug
  Components: Documentation
Affects Versions: 5.11.1
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: documentation, transport
 Fix For: 5.11.1


 There is an undocumented transport.soWriteTimeout option. 
 It needs to be documented.





[jira] [Created] (AMQ-5668) NPE in kahadb with concurrentStoreAndDispatchTopics when sending MQTT msgs with different QoS

2015-03-17 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5668:
---

 Summary: NPE in kahadb with concurrentStoreAndDispatchTopics when 
sending MQTT msgs with different QoS
 Key: AMQ-5668
 URL: https://issues.apache.org/jira/browse/AMQ-5668
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, KahaDB, MQTT
Affects Versions: 5.11.1
 Environment: MQTT, KahaDB
Reporter: Torsten Mielke


Running KahaDB with concurrentStoreAndDispatchTopics=true and sending 3 MQTT 
messages using different QoS values raises 

{code}
2015-03-17 13:27:48,866 WARN ActiveMQ NIO Worker 2 - Failed to send MQTT 
Publish:
java.lang.NullPointerException
at 
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.setLastCachedId(AbstractStoreCursor.java:319)
at 
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.trackLastCached(AbstractStoreCursor.java:280)
at 
org.apache.activemq.broker.region.cursors.AbstractStoreCursor.addMessageLast(AbstractStoreCursor.java:213)
at 
org.apache.activemq.broker.region.cursors.TopicStorePrefetch.addMessageLast(TopicStorePrefetch.java:74)
at 
org.apache.activemq.broker.region.cursors.StoreDurableSubscriberCursor.addMessageLast(StoreDurableSubscriberCursor.java:198)
at 
org.apache.activemq.broker.region.PrefetchSubscription.add(PrefetchSubscription.java:159)
at 
org.apache.activemq.broker.region.DurableTopicSubscription.add(DurableTopicSubscription.java:274)
at 
org.apache.activemq.broker.region.policy.SimpleDispatchPolicy.dispatch(SimpleDispatchPolicy.java:48)
at org.apache.activemq.broker.region.Topic.dispatch(Topic.java:717)
at org.apache.activemq.broker.region.Topic.doMessageSend(Topic.java:510)
at org.apache.activemq.broker.region.Topic.send(Topic.java:441)
at 
org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:419)
at 
org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:468)
at 
org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297)
at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:152)
at 
org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
at 
org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307)
at 
org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:157)
at 
org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:541)
at 
org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:768)
at 
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:334)
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:188)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
at 
org.apache.activemq.transport.mqtt.MQTTInactivityMonitor.onCommand(MQTTInactivityMonitor.java:147)
at 
org.apache.activemq.transport.mqtt.MQTTTransportFilter.sendToActiveMQ(MQTTTransportFilter.java:106)
at 
org.apache.activemq.transport.mqtt.MQTTProtocolConverter.sendToActiveMQ(MQTTProtocolConverter.java:173)
at 
org.apache.activemq.transport.mqtt.MQTTProtocolConverter.onMQTTPublish(MQTTProtocolConverter.java:445)
at 
org.apache.activemq.transport.mqtt.MQTTProtocolConverter.onMQTTCommand(MQTTProtocolConverter.java:210)
at 
org.apache.activemq.transport.mqtt.MQTTTransportFilter.onCommand(MQTTTransportFilter.java:94)
at 
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
at 
org.apache.activemq.transport.mqtt.MQTTCodec$1.onFrame(MQTTCodec.java:54)
at 
org.apache.activemq.transport.mqtt.MQTTCodec.processCommand(MQTTCodec.java:79)
at 
org.apache.activemq.transport.mqtt.MQTTCodec.access$400(MQTTCodec.java:26)
at 
org.apache.activemq.transport.mqtt.MQTTCodec$4.parse(MQTTCodec.java:194)
at 
org.apache.activemq.transport.mqtt.MQTTCodec$3.parse(MQTTCodec.java:160)
at 
org.apache.activemq.transport.mqtt.MQTTCodec$2.parse(MQTTCodec.java:123)
at org.apache.activemq.transport.mqtt.MQTTCodec.parse(MQTTCodec.java:65)
at 
org.apache.activemq.transport.mqtt.MQTTNIOTransport.serviceRead(MQTTNIOTransport.java:105)
at 
org.apache.activemq.transport.mqtt.MQTTNIOTransport.access$000(MQTTNIOTransport.java:43)
at 
org.apache.activemq.transport.mqtt.MQTTNIOTransport$1.onSelect(MQTTNIOTransport.java:66)
at 
org.apache.activemq.transport.nio.SelectorSelection.onSelect(SelectorSelection.java:97)
at 
org.apache.activemq.transport.nio.SelectorWorker$1.run(SelectorWorker.java:119)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 

[jira] [Commented] (AMQ-5668) NPE in kahadb with concurrentStoreAndDispatchTopics when sending MQTT msgs with different QoS

2015-03-17 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14365057#comment-14365057
 ] 

Torsten Mielke commented on AMQ-5668:
-

Possible workarounds:
- set concurrentStoreAndDispatchTopics=false, as recommended in AMQ-2864 (see the configuration sketch below)
- use a different persistence store, e.g. LevelDB
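
A configuration sketch for the first workaround (the directory value is illustrative):

{code:xml}
<persistenceAdapter>
  <!-- disable concurrent store and dispatch for topics to avoid the NPE -->
  <kahaDB directory="${activemq.data}/kahadb" concurrentStoreAndDispatchTopics="false"/>
</persistenceAdapter>
{code}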


 NPE in kahadb with concurrentStoreAndDispatchTopics when sending MQTT msgs 
 with different QoS
 -

 Key: AMQ-5668
 URL: https://issues.apache.org/jira/browse/AMQ-5668
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, KahaDB, MQTT
Affects Versions: 5.11.1
 Environment: MQTT, KahaDB
Reporter: Torsten Mielke
  Labels: broker, kahadb, mqtt

 Running KahaDB with concurrentStoreAndDispatchTopics=true and sending 3 
 MQTT messages using different QoS values raises 
 {code}
 2015-03-17 13:27:48,866 WARN ActiveMQ NIO Worker 2 - Failed to send MQTT 
 Publish:
 java.lang.NullPointerException
   at 
 org.apache.activemq.broker.region.cursors.AbstractStoreCursor.setLastCachedId(AbstractStoreCursor.java:319)
   at 
 org.apache.activemq.broker.region.cursors.AbstractStoreCursor.trackLastCached(AbstractStoreCursor.java:280)
   at 
 org.apache.activemq.broker.region.cursors.AbstractStoreCursor.addMessageLast(AbstractStoreCursor.java:213)
   at 
 org.apache.activemq.broker.region.cursors.TopicStorePrefetch.addMessageLast(TopicStorePrefetch.java:74)
   at 
 org.apache.activemq.broker.region.cursors.StoreDurableSubscriberCursor.addMessageLast(StoreDurableSubscriberCursor.java:198)
   at 
 org.apache.activemq.broker.region.PrefetchSubscription.add(PrefetchSubscription.java:159)
   at 
 org.apache.activemq.broker.region.DurableTopicSubscription.add(DurableTopicSubscription.java:274)
   at 
 org.apache.activemq.broker.region.policy.SimpleDispatchPolicy.dispatch(SimpleDispatchPolicy.java:48)
   at org.apache.activemq.broker.region.Topic.dispatch(Topic.java:717)
   at org.apache.activemq.broker.region.Topic.doMessageSend(Topic.java:510)
   at org.apache.activemq.broker.region.Topic.send(Topic.java:441)
   at 
 org.apache.activemq.broker.region.AbstractRegion.send(AbstractRegion.java:419)
   at 
 org.apache.activemq.broker.region.RegionBroker.send(RegionBroker.java:468)
   at 
 org.apache.activemq.broker.jmx.ManagedRegionBroker.send(ManagedRegionBroker.java:297)
   at org.apache.activemq.broker.BrokerFilter.send(BrokerFilter.java:152)
   at 
 org.apache.activemq.broker.CompositeDestinationBroker.send(CompositeDestinationBroker.java:96)
   at 
 org.apache.activemq.broker.TransactionBroker.send(TransactionBroker.java:307)
   at 
 org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:157)
   at 
 org.apache.activemq.broker.TransportConnection.processMessage(TransportConnection.java:541)
   at 
 org.apache.activemq.command.ActiveMQMessage.visit(ActiveMQMessage.java:768)
   at 
 org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:334)
   at 
 org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:188)
   at 
 org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:45)
   at 
 org.apache.activemq.transport.mqtt.MQTTInactivityMonitor.onCommand(MQTTInactivityMonitor.java:147)
   at 
 org.apache.activemq.transport.mqtt.MQTTTransportFilter.sendToActiveMQ(MQTTTransportFilter.java:106)
   at 
 org.apache.activemq.transport.mqtt.MQTTProtocolConverter.sendToActiveMQ(MQTTProtocolConverter.java:173)
   at 
 org.apache.activemq.transport.mqtt.MQTTProtocolConverter.onMQTTPublish(MQTTProtocolConverter.java:445)
   at 
 org.apache.activemq.transport.mqtt.MQTTProtocolConverter.onMQTTCommand(MQTTProtocolConverter.java:210)
   at 
 org.apache.activemq.transport.mqtt.MQTTTransportFilter.onCommand(MQTTTransportFilter.java:94)
   at 
 org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec$1.onFrame(MQTTCodec.java:54)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec.processCommand(MQTTCodec.java:79)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec.access$400(MQTTCodec.java:26)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec$4.parse(MQTTCodec.java:194)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec$3.parse(MQTTCodec.java:160)
   at 
 org.apache.activemq.transport.mqtt.MQTTCodec$2.parse(MQTTCodec.java:123)
   at org.apache.activemq.transport.mqtt.MQTTCodec.parse(MQTTCodec.java:65)
   at 
 org.apache.activemq.transport.mqtt.MQTTNIOTransport.serviceRead(MQTTNIOTransport.java:105)
   at 
 

[jira] [Updated] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5640:

Attachment: TotalMessageCountTest.java

 negative TotalMessageCount in JMX Broker MBean
 --

 Key: AMQ-5640
 URL: https://issues.apache.org/jira/browse/AMQ-5640
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker, JMX
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: broker, jmx
 Attachments: TotalMessageCountTest.java


 Starting a broker with a few messages on a queue and consuming these messages 
 will cause the TotalMessageCount property on the Broker MBean to go to a 
 negative value. 
 That value should never go negative.





[jira] [Comment Edited] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14348715#comment-14348715
 ] 

Torsten Mielke edited comment on AMQ-5640 at 3/5/15 2:21 PM:
-

Unit test attached in TotalMessageCountTest.java, to be placed into 
activemq-unit-tests/src/test/java/org/apache/activemq/jmx/TotalMessageCountTest.java



was (Author: tmielke):
Unit test to follow.


 negative TotalMessageCount in JMX Broker MBean
 --

 Key: AMQ-5640
 URL: https://issues.apache.org/jira/browse/AMQ-5640
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker, JMX
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: broker, jmx
 Attachments: TotalMessageCountTest.java


 Starting a broker with a few messages on a queue and consuming these messages 
 will cause the TotalMessageCount property on the Broker MBean to go to a 
 negative value. 
 That value should never go negative.





[jira] [Created] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5640:
---

 Summary: negative TotalMessageCount in JMX Broker MBean
 Key: AMQ-5640
 URL: https://issues.apache.org/jira/browse/AMQ-5640
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker, JMX
Affects Versions: 5.11.0
Reporter: Torsten Mielke


Starting a broker with a few messages on a queue and consuming these messages 
will cause the TotalMessageCount property on the Broker MBean to go to a negative 
value. 

That value should never go negative.
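
A minimal sketch of how the value can be watched from outside the broker (this is 
not the attached unit test); it assumes JMX is reachable via a connector on 
localhost:1099 and that the broker name is localhost:

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TotalMessageCountProbe {
    public static void main(String[] args) throws Exception {
        // Adjust host, port and brokerName for your installation.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName broker = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost");
            // TotalMessageCount is the attribute this issue is about; it should never be negative.
            Long total = (Long) conn.getAttribute(broker, "TotalMessageCount");
            System.out.println("TotalMessageCount = " + total);
        } finally {
            jmxc.close();
        }
    }
}
{code}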






[jira] [Commented] (AMQ-5640) negative TotalMessageCount in JMX Broker MBean

2015-03-05 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14348715#comment-14348715
 ] 

Torsten Mielke commented on AMQ-5640:
-

Unit test to follow.


 negative TotalMessageCount in JMX Broker MBean
 --

 Key: AMQ-5640
 URL: https://issues.apache.org/jira/browse/AMQ-5640
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker, JMX
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: broker, jmx

 Starting a broker with a few messages on a queue and consuming these messages 
 will cause the TotalMessageCount property on the Broker MBean to go to a 
 negative value. 
 That value should never go negative.





[jira] [Commented] (AMQ-4366) PooledConnectionFactory closes connections that are in use

2015-03-02 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14343384#comment-14343384
 ] 

Torsten Mielke commented on AMQ-4366:
-

Yes, 5.10 will include the fix for this bug. 

 PooledConnectionFactory closes connections that are in use
 --

 Key: AMQ-4366
 URL: https://issues.apache.org/jira/browse/AMQ-4366
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-pool
Affects Versions: 5.7.0, 5.8.0
Reporter: Petr Janata
Assignee: Timothy Bish
 Fix For: 5.9.0

 Attachments: poolConClose.diff


 {{PooledConnectionFactory}} closes connections that are still referenced and 
 should not be closed. This happens only when the connection idle or expiry time 
 passes. Calling {{createConnection}} after that time will invalidate the 
 connection, and all previously obtained {{Sessions}} will behave as closed.
 Due to the default 30 second idle timeout, it is unlikely to cause problems 
 when:
 * the connection is continually in use
 * all {{PooledConnection}}s are borrowed at startup
 A client with a session whose connection was prematurely closed will see a 
 stack trace similar to:
 {noformat}
 javax.jms.IllegalStateException: The Session is closed
 at 
 org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:731)
 at 
 org.apache.activemq.ActiveMQSession.configureMessage(ActiveMQSession.java:719)
 at 
 org.apache.activemq.ActiveMQSession.createBytesMessage(ActiveMQSession.java:316)
 at 
 org.apache.activemq.pool.PooledSession.createBytesMessage(PooledSession.java:168)
 {noformat}
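
For versions that do not yet contain the fix, a mitigation sketch along the lines 
described above: widen the idle window so referenced connections are far less likely 
to be evicted. The setter names follow the activemq-pool {{PooledConnectionFactory}} 
API, but verify them against your version; the broker URL and values are illustrative:

{code:java}
import org.apache.activemq.pool.PooledConnectionFactory;

public class PoolConfigSketch {
    public static PooledConnectionFactory createPool() {
        // Pool against a local broker; the URL is illustrative.
        PooledConnectionFactory pool =
                new PooledConnectionFactory("tcp://localhost:61616");
        pool.setMaxConnections(8);
        // Raise the idle timeout well above the 30 s default (value in ms).
        pool.setIdleTimeout(30 * 60 * 1000);
        return pool;
    }
}
{code}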





[jira] [Commented] (AMQ-4705) Add keep alive support to shared file locker

2015-02-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14309094#comment-14309094
 ] 

Torsten Mielke commented on AMQ-4705:
-

Kind of follow up problem: AMQ-5568

 Add keep alive support to shared file locker
 

 Key: AMQ-4705
 URL: https://issues.apache.org/jira/browse/AMQ-4705
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.8.0
Reporter: Gary Tully
Assignee: Gary Tully
  Labels: kahadb, netapp, nfsv4, shared-file-lock
 Fix For: 5.9.0


 Issue on NFSv4 with a master/slave configuration, where both the slave and 
 the master could obtain a lock.
 The following events occurred:
 * master locks the file - does no more i/o to it – it's passive wrt the lock
 * slave asks every 10 seconds if it can get the lock; NFS comes back and says 
 no, someone has it
 * NFS dies, not nicely
  ** NFSv4 is stateful - no callback for locks. 
  ** It has a grace period of 30 seconds to let all clients that had locks 
 reclaim them as locked
 * master does not realize it needs to reclaim the lock and continues under 
 the assumption it has the lock.
 * After the 30 sec grace period, the slave comes in, asks for the lock and 
 receives it.





[jira] [Created] (AMQ-5568) deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5568:
---

 Summary: deleting lock file on broker shut down can take a master 
broker down
 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke


This problem may only occur on a shared file system master/slave setup. 
I can reproduce reliably on a NFSv4 mount using a persistence adapter 
configuration like 

{code}
<levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

However the problem is also reproducible using kahaDB.
Two broker instances compete for the lock on the shared storage (e.g. LevelDB 
or KahaDB). Let's say broker A becomes master and broker B slave.

If broker A loses access to the NFS share, it will shut down. As part of 
shutting down, it tries to delete the persistence adapter's lock file. Since 
the NFS share is gone, all file I/O calls hang for a good while before 
returning errors. 

In the meantime the slave broker B (not affected by the NFS problem) grabs the 
lock and becomes master.

If the NFS mount is restored while broker A (the previous master) still hangs 
on the file i/o operations (as part of its shutdown routine), the attempt to 
delete the lock file will finally succeed and broker A shuts down. 

Deleting the lock file, however, also affects the new master broker B, which 
periodically runs a keepAlive() check on the lock. That check verifies the file 
still exists and the FileLock is still valid. As the lock file got deleted, 
keepAlive() fails on broker B and that broker shuts down as well. 
The overall result is that both broker instances have shut down.

Using restartAllowed=true is not an option either as this can cause other 
problems in an NFS based master/slave setup.






[jira] [Commented] (AMQ-5568) Deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14309106#comment-14309106
 ] 

Torsten Mielke commented on AMQ-5568:
-

It seems the broker simply deletes the lock file on the persistence adapter 
without any further checks. 
Perhaps a fix is to delete the lock file only if the broker still holds the 
lock and otherwise just shut down without deleting the file. 
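
A rough sketch of the kind of guard suggested above (illustrative only, not the 
actual locker code): release and delete the lock file only while this process 
still holds a valid lock, otherwise leave the file alone on shutdown.

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileLock;

// Hypothetical sketch -- not the real shared-file-locker implementation.
class LockFileGuardSketch {

    private final File lockFile;  // the persistence adapter's lock file
    private final FileLock lock;  // lock obtained at startup; may have been lost in the meantime

    LockFileGuardSketch(File lockFile, FileLock lock) {
        this.lockFile = lockFile;
        this.lock = lock;
    }

    void releaseOnShutdown() {
        // Only touch the file if we still own a valid lock on it.
        if (lock != null && lock.isValid()) {
            try {
                lock.release();
            } catch (IOException ignored) {
                // best effort during shutdown
            }
            lockFile.delete();
        }
        // Otherwise another broker may already hold the lock file: leave it alone.
    }
}
{code}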

 Deleting lock file on broker shut down can take a master broker down
 

 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: persistence

 This problem may only occur on a shared file system master/slave setup. 
 I can reproduce reliably on a NFSv4 mount using a persistence adapter 
 configuration like 
 {code}
 <levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
   <locker>
     <shared-file-locker lockAcquireSleepInterval="1"/>
   </locker>
 </levelDB>
 {code}
 However the problem is also reproducible using kahaDB.
 Two broker instances compete for the lock on the shared storage (e.g. 
 LevelDB or KahaDB). Let's say broker A becomes master and broker B slave.
 If broker A loses access to the NFS share, it will shut down. As part of 
 shutting down, it tries to delete the persistence adapter's lock file. Since 
 the NFS share is gone, all file I/O calls hang for a good while before 
 returning errors. As such, the broker shutdown gets delayed.
 In the meantime the slave broker B (not affected by the NFS problem) grabs 
 the lock and becomes master.
 If the NFS mount is restored while broker A (the previous master) still hangs 
 on the file i/o operations (as part of its shutdown routine), the attempt to 
 delete the persistence adapter lock file will finally succeed and broker A 
 shuts down. 
 Deleting the lock file however also affects the new master broker B who 
 periodically runs a keepAlive() check on the lock. That check verifies the 
 file still exists and the FileLock is still valid. As the lock file got 
 deleted, keepAlive() fails on broker B and that broker shuts down as well. 
 The overall result is that both broker instances have shut down despite an 
 initially successful failover.
 Using restartAllowed=true is not an option either as this can cause other 
 problems in an NFS based master/slave setup.





[jira] [Updated] (AMQ-5568) deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5568:

Description: 
This problem may only occur on a shared file system master/slave setup. 
I can reproduce reliably on a NFSv4 mount using a persistence adapter 
configuration like 

{code}
<levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

However the problem is also reproducible using kahaDB.
Two broker instances compete for the lock on the shared storage (e.g. LevelDB 
or KahaDB). Let's say broker A becomes master and broker B slave.

If broker A loses access to the NFS share, it will shut down. As part of 
shutting down, it tries to delete the persistence adapter's lock file. Since 
the NFS share is gone, all file I/O calls hang for a good while before 
returning errors. As such, the broker shutdown gets delayed.

In the meantime the slave broker B (not affected by the NFS problem) grabs the 
lock and becomes master.

If the NFS mount is restored while broker A (the previous master) still hangs 
on the file i/o operations (as part of its shutdown routine), the attempt to 
delete the persistence adapter lock file will finally succeed and broker A 
shuts down. 

Deleting the lock file, however, also affects the new master broker B, which 
periodically runs a keepAlive() check on the lock. That check verifies the file 
still exists and the FileLock is still valid. As the lock file got deleted, 
keepAlive() fails on broker B and that broker shuts down as well. 
The overall result is that both broker instances have shut down despite an 
initially successful failover.

Using restartAllowed=true is not an option either as this can cause other 
problems in an NFS based master/slave setup.


  was:
This problem may only occur on a shared file system master/slave setup. 
I can reproduce reliably on a NFSv4 mount using a persistence adapter 
configuration like 

{code}
<levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

However the problem is also reproducible using kahaDB.
Two broker instances competing for the lock on the shared storage (e.g. leveldb 
or kahadb). Lets say brokerA becomes master, broker B slave.

If brokerA looses access to the NFS share, it will shut down. As part of 
shutting down, it tries delete the lock file of the persistence adapter. Now 
since the NFS share is gone, all file i/o calls hang for a good while before 
returning errors. 

In the meantime the slave broker B (not affected by the NFS problem) grabs the 
lock and becomes master.

If the NFS mount is restored while broker A (the previous master) still hangs 
on the file i/o operations (as part of its shutdown routine), the attempt to 
delete the lock file will finally succeed and broker A shuts down. 

Deleting the lock file however also affects the new master broker B who 
periodically runs a keepAlive() check on the lock. That check verifies the file 
still exists and the FileLock is still valid. As the lock got deleted keepAlive 
fails on broker B and that broker shuts down as well. 
The overall result is that both broker instances have shut down.

Using restartAllowed=true is not an option either as this can cause other 
problems in an NFS based master/slave setup.



 deleting lock file on broker shut down can take a master broker down
 

 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: persistence

 This problem may only occur on a shared file system master/slave setup. 
 I can reproduce reliably on a NFSv4 mount using a persistence adapter 
 configuration like 
 {code}
 <levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
   <locker>
     <shared-file-locker lockAcquireSleepInterval="1"/>
   </locker>
 </levelDB>
 {code}
 However the problem is also reproducible using kahaDB.
 Two broker instances compete for the lock on the shared storage (e.g. 
 LevelDB or KahaDB). Let's say broker A becomes master and broker B slave.
 If broker A loses access to the NFS share, it will shut down. As part of 
 shutting down, it tries to delete the persistence adapter's lock file. Since 
 the NFS share is gone, all file I/O calls hang for a good while before 
 returning errors. As such, the broker shutdown gets delayed.
 In the meantime the slave broker B (not affected by the NFS problem) grabs 
 the lock and becomes master.
 If the NFS mount is restored while broker A (the previous master) still hangs 
 on the file i/o 

[jira] [Commented] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-02-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14309119#comment-14309119
 ] 

Torsten Mielke commented on AMQ-5549:
-

Also see AMQ-5568.

 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started; the master broker acquires a file lock on the lock 
 file and the slave broker sits in a loop and waits for the lock, as expected. 
 Switching brokers also works as expected.
 Once the network connection of the NFS server is disconnected, both master and 
 slave NFS mounts block and the slave broker stops logging file lock retries. 
 Shortly after the network connection is brought back, the mounts recover and 
 the slave broker is able to acquire the lock as well, so both brokers hold it 
 simultaneously. Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}





[jira] [Comment Edited] (AMQ-5568) deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14309092#comment-14309092
 ] 

Torsten Mielke edited comment on AMQ-5568 at 2/6/15 1:05 PM:
-

The keepAlive() check is needed due to 
[AMQ-4705|https://issues.apache.org/jira/browse/AMQ-4705], otherwise you may 
get two master broker instances.


was (Author: tmielke):
The keepAlive ping is needed due to 
[AMQ-4705|https://issues.apache.org/jira/browse/AMQ-4705], otherwise you may 
get two master broker instances.

 deleting lock file on broker shut down can take a master broker down
 

 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: persistence

 This problem may only occur on a shared file system master/slave setup. 
 I can reproduce reliably on a NFSv4 mount using a persistence adapter 
 configuration like 
 {code}
 <levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
   <locker>
     <shared-file-locker lockAcquireSleepInterval="1"/>
   </locker>
 </levelDB>
 {code}
 However the problem is also reproducible using kahaDB.
 Two broker instances compete for the lock on the shared storage (e.g. 
 LevelDB or KahaDB). Let's say broker A becomes master and broker B slave.
 If broker A loses access to the NFS share, it will shut down. As part of 
 shutting down, it tries to delete the persistence adapter's lock file. Since 
 the NFS share is gone, all file I/O calls hang for a good while before 
 returning errors. As such, the broker shutdown gets delayed.
 In the meantime the slave broker B (not affected by the NFS problem) grabs 
 the lock and becomes master.
 If the NFS mount is restored while broker A (the previous master) still hangs 
 on the file i/o operations (as part of its shutdown routine), the attempt to 
 delete the persistence adapter lock file will finally succeed and broker A 
 shuts down. 
 Deleting the lock file however also affects the new master broker B who 
 periodically runs a keepAlive() check on the lock. That check verifies the 
 file still exists and the FileLock is still valid. As the lock file got 
 deleted, keepAlive() fails on broker B and that broker shuts down as well. 
 The overall result is that both broker instances have shut down despite an 
 initially successful failover.
 Using restartAllowed=true is not an option either as this can cause other 
 problems in an NFS based master/slave setup.





[jira] [Commented] (AMQ-5568) deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14309092#comment-14309092
 ] 

Torsten Mielke commented on AMQ-5568:
-

The keepAlive ping is needed due to 
[AMQ-4705|https://issues.apache.org/jira/browse/AMQ-4705], otherwise you may 
get two master broker instances.

 deleting lock file on broker shut down can take a master broker down
 

 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: persistence

 This problem may only occur on a shared file system master/slave setup. 
 I can reproduce reliably on a NFSv4 mount using a persistence adapter 
 configuration like 
 {code}
 <levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
   <locker>
     <shared-file-locker lockAcquireSleepInterval="1"/>
   </locker>
 </levelDB>
 {code}
 However the problem is also reproducible using kahaDB.
 Two broker instances compete for the lock on the shared storage (e.g. 
 LevelDB or KahaDB). Let's say broker A becomes master and broker B slave.
 If broker A loses access to the NFS share, it will shut down. As part of 
 shutting down, it tries to delete the persistence adapter's lock file. Since 
 the NFS share is gone, all file I/O calls hang for a good while before 
 returning errors. As such, the broker shutdown gets delayed.
 In the meantime the slave broker B (not affected by the NFS problem) grabs 
 the lock and becomes master.
 If the NFS mount is restored while broker A (the previous master) still hangs 
 on the file i/o operations (as part of its shutdown routine), the attempt to 
 delete the persistence adapter lock file will finally succeed and broker A 
 shuts down. 
 Deleting the lock file however also affects the new master broker B who 
 periodically runs a keepAlive() check on the lock. That check verifies the 
 file still exists and the FileLock is still valid. As the lock file got 
 deleted, keepAlive() fails on broker B and that broker shuts down as well. 
 The overall result is that both broker instances have shut down despite an 
 initially successful failover.
 Using restartAllowed=true is not an option either as this can cause other 
 problems in an NFS based master/slave setup.





[jira] [Updated] (AMQ-5568) Deleting lock file on broker shut down can take a master broker down

2015-02-06 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5568:

Summary: Deleting lock file on broker shut down can take a master broker 
down  (was: deleting lock file on broker shut down can take a master broker 
down)

 Deleting lock file on broker shut down can take a master broker down
 

 Key: AMQ-5568
 URL: https://issues.apache.org/jira/browse/AMQ-5568
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.11.0
Reporter: Torsten Mielke
  Labels: persistence

 This problem may only occur on a shared file system master/slave setup. 
 I can reproduce reliably on a NFSv4 mount using a persistence adapter 
 configuration like 
 {code}
 <levelDB directory="/nfs/activemq/data/leveldb" lockKeepAlivePeriod="5000">
   <locker>
     <shared-file-locker lockAcquireSleepInterval="1"/>
   </locker>
 </levelDB>
 {code}
 However the problem is also reproducible using kahaDB.
 Two broker instances compete for the lock on the shared storage (e.g. 
 LevelDB or KahaDB). Let's say broker A becomes master and broker B slave.
 If broker A loses access to the NFS share, it will shut down. As part of 
 shutting down, it tries to delete the persistence adapter's lock file. Since 
 the NFS share is gone, all file I/O calls hang for a good while before 
 returning errors. As such, the broker shutdown gets delayed.
 In the meantime the slave broker B (not affected by the NFS problem) grabs 
 the lock and becomes master.
 If the NFS mount is restored while broker A (the previous master) still hangs 
 on the file i/o operations (as part of its shutdown routine), the attempt to 
 delete the persistence adapter lock file will finally succeed and broker A 
 shuts down. 
 Deleting the lock file however also affects the new master broker B who 
 periodically runs a keepAlive() check on the lock. That check verifies the 
 file still exists and the FileLock is still valid. As the lock file got 
 deleted, keepAlive() fails on broker B and that broker shuts down as well. 
 The overall result is that both broker instances have shut down despite an 
 initially successful failover.
 Using restartAllowed=true is not an option either as this can cause other 
 problems in an NFS based master/slave setup.





[jira] [Commented] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298738#comment-14298738
 ] 

Torsten Mielke commented on AMQ-5549:
-

{quote}
Maybe I wasn't clear enough in the original issue description but this is about 
NFS server failures (crash, network outage, ungraceful shutdown, ...) affecting 
all the NFS clients.
{quote}
Got you. Have not tested that. With restartAllowed=false the master should shut 
down at the least and stop its transport connectors.

I also did some more reading on the sync mount option. It is discouraged in some 
posts, but I would hope that by also using the noac or sync mount option it should 
be okay, as writes are synchronous then, so data should not get corrupted. This 
however comes at the expense of some performance degradation.




 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started; the master broker acquires a file lock on the lock 
 file and the slave broker sits in a loop and waits for the lock, as expected. 
 Switching brokers also works as expected.
 Once the network connection of the NFS server is disconnected, both master and 
 slave NFS mounts block and the slave broker stops logging file lock retries. 
 Shortly after the network connection is brought back, the mounts recover and 
 the slave broker is able to acquire the lock as well, so both brokers hold it 
 simultaneously. Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}





[jira] [Comment Edited] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298361#comment-14298361
 ] 

Torsten Mielke edited comment on AMQ-5549 at 1/30/15 8:29 AM:
--

Some of the NFS mount options may not support a quick broker failover from 
master to slave. 
The options we finally got best results with were

{code}
timeo=100,retrans=1,soft,noac
{code}

We reduced the timeout to 10 seconds and also reduced the retries to just 1. 
In addition, a hard mount seems to retry NFS operations forever (according to 
the man page), whereas with a soft mount operations fail after retrans transmission 
attempts, which is most likely what you want to ensure a quick failover.
Finally, the noac option seemed to have a big effect as well on the speed at 
which the master broker detects the NFS failure, as it also caused a sync write 
to NFS, which seems to propagate exceptions more quickly. It most likely has a 
negative impact on performance though.

I can't provide scientific support for these arguments beyond the above, but 
with these settings the master broker would shut down much more quickly upon an 
NFS failure. 



was (Author: tmielke):
Some of the NFS mount options may not support a quick broker failover from 
master to slave. 
The options we finally got best results with where

{code}
timeo=100,retrans=1,soft,noac
{code}

We reduced the timeout to 10 seconds and also reduced the retry to just 1. 
In addition a hard mount seems to retry NFS operations forever (according to 
man page) and using soft operations will fail after retrans transmission 
attempts. Most likely what you want to ensure a quick failover.
And finally the noac option seemed to had a big effect as well on speed at 
which the master broker detects the NFS failure as it also caused a sync write 
to NFS, which seems to propagate exceptions more quickly. It most likely has a 
negative impact on performance though.

I can't provide no scientific support for these arguments other than above but 
with these settings the master broker would should down much quicker upon an 
NFS failure. 


 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started; the master broker acquires a file lock on the lock 
 file and the slave broker sits in a loop and waits for the lock, as expected. 
 Switching brokers also works as expected.
 Once the network connection of the NFS server is disconnected, both master and 
 slave NFS mounts block and the slave broker stops logging file lock retries. 
 Shortly after the network connection is brought back, the mounts recover and 
 the slave broker is able to acquire the lock as well, so both brokers hold it 
 simultaneously. Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}





[jira] [Commented] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298361#comment-14298361
 ] 

Torsten Mielke commented on AMQ-5549:
-

Some of the NFS mount options may not support a quick broker failover from 
master to slave. 
The options we finally got best results with were

{code}
timeo=100,retrans=1,soft,noac
{code}

We reduced the timeout to 10 seconds and also reduced the retries to just 1. 
In addition, a hard mount seems to retry NFS operations forever (according to 
the man page), whereas with a soft mount operations fail after retrans transmission 
attempts.
Finally, the noac option seemed to have a big effect as well on the speed at 
which the master broker detects the NFS failure, as it also caused a sync write 
to NFS, which seems to propagate exceptions more quickly. It most likely has a 
negative impact on performance though.

I can't provide scientific support for these arguments beyond the above, but 
with these settings the master broker would shut down much more quickly upon an 
NFS failure. 


 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started; the master broker acquires a file lock on the lock 
 file and the slave broker sits in a loop and waits for the lock, as expected. 
 Switching brokers also works as expected.
 Once the network connection of the NFS server is disconnected, both master and 
 slave NFS mounts block and the slave broker stops logging file lock retries. 
 Shortly after the network connection is brought back, the mounts recover and 
 the slave broker is able to acquire the lock as well, so both brokers hold it 
 simultaneously. Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298361#comment-14298361
 ] 

Torsten Mielke edited comment on AMQ-5549 at 1/30/15 8:28 AM:
--

Some of the NFS mount options may not support a quick broker failover from 
master to slave. 
The options we finally got the best results with were

{code}
timeo=100,retrans=1,soft,noac
{code}

We reduced the timeout to 10 seconds and also reduced the retry count to just 1. 
In addition, a hard mount seems to retry NFS operations forever (according to 
the man page), whereas with a soft mount operations fail after retrans 
transmission attempts. This is most likely what you want to ensure a quick failover.
Finally, the noac option also seemed to have a big effect on the speed at which 
the master broker detects the NFS failure, as it also causes synchronous writes 
to NFS, which seem to propagate exceptions more quickly. It most likely has a 
negative impact on performance though.

I can't provide scientific support for these arguments beyond the above, but 
with these settings the master broker would shut down much quicker upon an 
NFS failure. 



was (Author: tmielke):
Some of the NFS mount options may not support a quick broker failover from 
master to slave. 
The options we finally got the best results with were

{code}
timeo=100,retrans=1,soft,noac
{code}

We reduced the timeout to 10 seconds and also reduced the retry count to just 1. 
In addition, a hard mount seems to retry NFS operations forever (according to 
the man page), whereas with a soft mount operations fail after retrans 
transmission attempts.
Finally, the noac option also seemed to have a big effect on the speed at which 
the master broker detects the NFS failure, as it also causes synchronous writes 
to NFS, which seem to propagate exceptions more quickly. It most likely has a 
negative impact on performance though.

I can't provide scientific support for these arguments beyond the above, but 
with these settings the master broker would shut down much quicker upon an 
NFS failure. 


 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started and the master broker acquires file lock on the lock 
 file and the slave broker sits in a loop and waits for a lock as expected. 
 Also changing brokers work as expected.
 Once the network connection of the NFS server is disconnected both master and 
 slave NFS mounts block and slave broker stops logging file lock re-tries. 
 After a short while after bringing the network connection back the mounts 
 come back and the slave broker is able to acquire the lock simultaneously. 
 Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14298603#comment-14298603
 ] 

Torsten Mielke commented on AMQ-5549:
-

I am not suggesting that ActiveMQ requires a soft NFS mount; I just noticed that NFS 
errors propagated much more quickly using soft mounts. 

Yes, the fix for ENTMQ-391 will be needed; it's contained in 5.9.0. See 
AMQ-4705.

In my tests I shut down the NIC of the NFS client machine that runs the broker 
and measured how quickly this resulted in an error on the master broker and how 
quickly a slave broker running on a different machine took over. 

With the NFS options 

{code}
timeo=50,retrans=1,soft,noac
{code}

and the previously suggested broker configuration, the master broker would raise 
an exception within 15 seconds of losing access to the NFS share and would 
shut down within another 1-2 minutes. During the shutdown the broker tries to 
close all files pointing to the persistence store, and those close() calls hang 
too and need to time out as well.
It took about 60-80 seconds for the slave broker to take over. 

When previously testing with the default NFS mount options, the master broker 
would sometimes not shut down within 10+ minutes. 

I took various thread dumps along the way and the broker was always hung in a 
Java I/O operation that took a long time to finally raise an exception. 
I was able to reproduce the same behavior using a very simple Java application 
that only performs the same Java I/O. So IMHO the entire issue is really down to 
configuring NFS in a way that quickly raises errors to the application stack.
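To illustrate, a hypothetical sketch of such a standalone probe (not the actual test program; the target path is a placeholder on the NFS mount):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Appends to a file on the NFS mount and reports how long each write/flush takes,
// so you can watch how quickly an IOException surfaces after the mount is lost.
public class NfsWriteProbe {
    public static void main(String[] args) throws Exception {
        File target = new File(args.length > 0 ? args[0] : "/mnt/activemq-kahadb/nfs-probe.dat");
        byte[] payload = new byte[4096];
        try (FileOutputStream out = new FileOutputStream(target, true)) {
            while (true) {
                long start = System.currentTimeMillis();
                try {
                    out.write(payload);
                    out.flush();
                    System.out.println("write ok, took " + (System.currentTimeMillis() - start) + " ms");
                } catch (IOException e) {
                    System.out.println("write failed after " + (System.currentTimeMillis() - start) + " ms: " + e);
                    throw e;
                }
                Thread.sleep(1000);
            }
        }
    }
}
{code}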
 


 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started and the master broker acquires file lock on the lock 
 file and the slave broker sits in a loop and waits for a lock as expected. 
 Also changing brokers work as expected.
 Once the network connection of the NFS server is disconnected both master and 
 slave NFS mounts block and slave broker stops logging file lock re-tries. 
 After a short while after bringing the network connection back the mounts 
 come back and the slave broker is able to acquire the lock simultaneously. 
 Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (AMQ-4495) Improve cursor memory management

2015-01-29 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-4495:

Summary: Improve cursor memory management  (was: Imporve cursor memory 
management)

 Improve cursor memory management
 

 Key: AMQ-4495
 URL: https://issues.apache.org/jira/browse/AMQ-4495
 Project: ActiveMQ
  Issue Type: Bug
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.9.0

 Attachments: FasterDispatchTest.java


 As it currently stands, the store queue cursor will cache producer messages 
 until it gets to 70% (the high watermark) of its usage. After that, caching 
 stops and messages go only to the store. When a consumer comes, messages get 
 dispatched to it, but memory isn't released until they are acked. The problem 
 is with the use case where producer flow control is off and we have a 
 prefetch large enough to get all our messages from the cache. Then the cursor 
 basically becomes empty and, as message acks release memory one by one, we go 
 to the store and try to batch one message at a time. You can guess that things 
 start to be really slow at that point. 
 The solution for this scenario is to wait with batching until we have more 
 space so that store access is optimized. We can do this by adding a new limit 
 (smaller than the high watermark) which will be used as the limit after which 
 we start filling the cursor from the store again.
 All this led us to the following questions:
 1. Why do we use 70% as the limit (instead of 100%) when we stop caching 
 producer messages?
 2. Would a solution that stops caching producer messages at 100% of usage and 
 then starts batching messages from the store when usage drops below the high 
 watermark value be enough? Of course, the high watermark would be configurable, 
 but 100% by default so we don't alter any behavior for regular use cases.
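 For context, the 70% figure corresponds to the cursor memory high watermark, which is tunable per destination; a minimal sketch of such a policy entry (the destination pattern and value are illustrative, not a recommendation from this issue):
 {code:xml}
<policyEntry queue=">" cursorMemoryHighWaterMark="70">
  <pendingQueuePolicy>
    <storeCursor/>
  </pendingQueuePolicy>
</policyEntry>
 {code}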



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-29 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296863#comment-14296863
 ] 

Torsten Mielke commented on AMQ-5549:
-

Hi, the config

{code:xml}
<kahaDB directory="${activemq.data}/kahadb" lockKeepAlivePeriod="15000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="5000"/>
  </locker>
</kahaDB>
{code}
has an error: lockKeepAlivePeriod should be much lower than 
lockAcquireSleepInterval (at most half of lockAcquireSleepInterval, perhaps even 
less than that).

{code:xml}
<levelDB directory="/nfs-import/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

You also want to explicitly configure restartAllowed="false" in the broker 
configuration.

Finally, what are your mount options? We have learned that the mount options used 
play a significant role.
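For illustration, a corrected KahaDB variant of the configuration above that respects this rule, with lockKeepAlivePeriod at half of lockAcquireSleepInterval (values are examples only):

{code:xml}
<kahaDB directory="${activemq.data}/kahadb" lockKeepAlivePeriod="2500">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="5000"/>
  </locker>
</kahaDB>
{code}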





 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started and the master broker acquires file lock on the lock 
 file and the slave broker sits in a loop and waits for a lock as expected. 
 Also changing brokers work as expected.
 Once the network connection of the NFS server is disconnected both master and 
 slave NFS mounts block and slave broker stops logging file lock re-tries. 
 After a short while after bringing the network connection back the mounts 
 come back and the slave broker is able to acquire the lock simultaneously. 
 Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5549) Shared Filesystem Master/Slave using NFSv4 allows both brokers become active at the same time

2015-01-29 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14296863#comment-14296863
 ] 

Torsten Mielke edited comment on AMQ-5549 at 1/29/15 2:42 PM:
--

Hi, the config

{code:xml}
<kahaDB directory="${activemq.data}/kahadb" lockKeepAlivePeriod="15000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="5000"/>
  </locker>
</kahaDB>
{code}
has an error: lockKeepAlivePeriod should be much lower than 
lockAcquireSleepInterval (at most half of lockAcquireSleepInterval, perhaps even 
less than that).

{code:xml}
<levelDB directory="/nfs-import/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

You also want to explicitly configure restartAllowed="false" in the broker 
configuration.

Finally, what are your mount options? We have learned that the mount options used 
play a significant role.






was (Author: tmielke):
Hi, the config

{code:xml}
<kahaDB directory="${activemq.data}/kahadb" lockKeepAlivePeriod="15000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="5000"/>
  </locker>
</kahaDB>
{code}
has an error: lockKeepAlivePeriod should be much lower than 
lockAcquireSleepInterval (at most half of lockAcquireSleepInterval, perhaps even 
less than that).

{code:xml}
<levelDB directory="/nfs-import/leveldb" lockKeepAlivePeriod="5000">
  <locker>
    <shared-file-locker lockAcquireSleepInterval="1"/>
  </locker>
</levelDB>
{code}

You also want to explicitly configure restartAllowed="false" in the broker 
configuration.

Finally, what are your mount options? We have learned that the mount options used 
play a significant role.





 Shared Filesystem Master/Slave using NFSv4 allows both brokers become active 
 at the same time
 -

 Key: AMQ-5549
 URL: https://issues.apache.org/jira/browse/AMQ-5549
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker, Message Store
Affects Versions: 5.10.1
 Environment: - CentOS Linux 6
 - OpenJDK 1.7
 - ActiveMQ 5.10.1
Reporter: Heikki Manninen
Priority: Critical

 Identical ActiveMQ master and slave brokers are installed on CentOS Linux 6 
 virtual machines. There is a third virtual machine (also CentOS 6) providing 
 an NFSv4 share for the brokers KahaDB.
 Both brokers are started and the master broker acquires file lock on the lock 
 file and the slave broker sits in a loop and waits for a lock as expected. 
 Also changing brokers work as expected.
 Once the network connection of the NFS server is disconnected both master and 
 slave NFS mounts block and slave broker stops logging file lock re-tries. 
 After a short while after bringing the network connection back the mounts 
 come back and the slave broker is able to acquire the lock simultaneously. 
 Both brokers accept client connections.
 In this situation it is also possible to stop and start both individual 
 brokers many times and they are always able to acquire the lock even if the 
 other one is already running. Only after stopping both brokers and starting 
 them again is the situation back to normal.
 * NFS server:
 ** CentOS Linux 6
 ** NFS v4 export options: rw,sync
 ** NFS v4 grace time 45 seconds
 ** NFS v4 lease time 10 seconds
 * NFS client:
 ** CentOS Linux 6
 ** NFS mount options: nfsvers=4,proto=tcp,hard,wsize=65536,rsize=65536
 * ActiveMQ configuration (otherwise default):
 {code:xml}
 <persistenceAdapter>
   <kahaDB directory="${activemq.data}/kahadb">
     <locker>
       <shared-file-locker lockAcquireSleepInterval="1000"/>
     </locker>
   </kahaDB>
 </persistenceAdapter>
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ACTIVEMQ6-36) Disallow use of SSLv3 to protect against POODLE

2014-12-04 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/ACTIVEMQ6-36?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14234311#comment-14234311
 ] 

Torsten Mielke commented on ACTIVEMQ6-36:
-

Using this configuration should disable SSLv3 on the broker's transport connector:

{code}
<transportConnector name="ssl" 
    uri="ssl://localhost:61617?transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
{code}

Additional configuration is needed for the web console. 

 Disallow use of SSLv3 to protect against POODLE
 ---

 Key: ACTIVEMQ6-36
 URL: https://issues.apache.org/jira/browse/ACTIVEMQ6-36
 Project: Apache ActiveMQ 6
  Issue Type: Bug
Reporter: Justin Bertram
Assignee: Justin Bertram
 Fix For: 6.0.0






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (AMQ-5399) MQTT - out of order acks

2014-10-20 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176941#comment-14176941
 ] 

Torsten Mielke commented on AMQ-5399:
-

I can confirm the bug is fixed on latest 5.11-SNAPSHOT.
Thx Dejan!

 MQTT - out of order acks
 

 Key: AMQ-5399
 URL: https://issues.apache.org/jira/browse/AMQ-5399
 Project: ActiveMQ
  Issue Type: Bug
  Components: MQTT
Affects Versions: 5.10.0
Reporter: Dejan Bosanac
Assignee: Dejan Bosanac
 Fix For: 5.11.0


 As different QoS messages are acked at different points, we can get into a 
 situation where the broker receives message acks out of order, leading to 
 exceptions like
 {code}javax.jms.JMSException: Unmatched acknowledge: MessageAck {commandId = 
 0, responseRequired = false, ackType = 2, consumerId = 
 ID:mac.fritz.box-62188-1412945008667-1:3:-1:1, firstMessageId = null, 
 lastMessageId = ID:mac.fritz.box-62188-1412945008667-1:2:-1:1:2, destination 
 = topic://xxx, transactionId = null, messageCount = 1, poisonCause = null}; 
 Expected message count (1) differs from count in dispatched-list (2){code}
 The same situation can occur in heavy-load environments. The root of the 
 problem is that we send back standard acks, which are expected to arrive in 
 order. As we really ack message by message, we should be using individual acks 
 in the MQTT filter.
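 For reference, individual acknowledgement is exposed on the ActiveMQ JMS client as a session mode; a minimal sketch of a consumer acking message by message (broker URL and queue name are placeholders):
 {code}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // INDIVIDUAL_ACKNOWLEDGE acks exactly the message it is called on,
        // instead of acking everything delivered up to that point.
        Session session = connection.createSession(false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
        Message message = consumer.receive(5000);
        if (message != null) {
            message.acknowledge(); // acknowledges only this message
        }
        connection.close();
    }
}
 {code}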



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (AMQ-5274) Stuck messages and CPU churn when aborted transacted message expires

2014-08-27 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14112193#comment-14112193
 ] 

Torsten Mielke edited comment on AMQ-5274 at 8/27/14 12:23 PM:
---

Uploading a modified version, AMQ-5274v2.zip, that runs a JUnit test. Simply run 
mvn test to reproduce the issue.


was (Author: tmielke):
Uploading a modified version of AMQ-5274.zip that runs a JUnit test. Simply run 
mvn test to reproduce the issue.

 Stuck messages and CPU churn when aborted transacted message expires
 

 Key: AMQ-5274
 URL: https://issues.apache.org/jira/browse/AMQ-5274
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0, 5.9.0, 5.9.1, 5.10.0
 Environment: win64, RHEL11
Reporter: Yannick Malins
Priority: Critical
 Attachments: AMQ-5274.zip, AMQ-5274v2.zip, AMQ5274Test.java, 
 logs_extract.txt


 The test case is a simple producer/consumer:
 Producer: 20 messages are injected, with a timeout of 10s.
 Consumer: The redelivery policy is set to 0 retries (the issue exists with 
 other values). The consumer uses transactions and throws a runtime exception 
 on each message received.
 Queue stats show 20 enqueued, 19 dequeued, 1 pending.
 DLQ stats show 20 enqueued: all 20 messages go to the DLQ, IDs ending in 1-10 for 
 failing, 11-20 for expiry (approx).
 The pending item (ID ending in 10) is a ghost message, and remains stuck 
 indefinitely in queue.test.
 if you browse, the message is not shown
 A) if you restart the broker, after a short while the message is cleaned:
 jvm 1|  WARN | Duplicate message add attempt rejected. Destination: 
 QUEUE://ActiveMQ.DLQ, Message id: ID:REDACTED-52872-1405079629779-1:1:1:1:10
 jvm 1|  WARN | 
 org.apache.activemq.broker.region.cursors.QueueStorePrefetch@5b427f3c:ActiveMQ.DLQ,batchResetNeeded=false,storeHasMessages=true,size=0,cacheEnabled=true,maxBatchSize:20,hasSpace:tru
 e - cursor got duplicate: ID:REDACTED--52872-1405079629779-1:1:1:1:10, 4
 jvm 1|  WARN | duplicate message from store 
 ID:REDACTED--52872-1405079629779-1:1:1:1:10, redirecting for dlq processing
 B) if you purge, ActiveMQ logs a warning: WARN | queue://queue.test after 
 purge complete, message count stats report: 1
   the queue is marked as being empty.
   however if you restart the broker, the message re-appears shortly, 
 before being cleaned as above
   
   
 SUPPLEMENTARY: with ActiveMQ 5.9.0 and above, if you run the injection 
 several times, the CPU usage of ActiveMQ climbs drastically until the queue 
 is purged.
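 To make the scenario concrete, a minimal sketch of the described setup using the ActiveMQ JMS client (broker URL and queue name are placeholders; this is not the attached test case, and the explicit rollback stands in for the listener's runtime exception):
 {code}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StuckExpiryRepro {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getRedeliveryPolicy().setMaximumRedeliveries(0); // 0 retries, as in the report

        Connection connection = factory.createConnection();
        connection.start();

        // Producer: 20 messages with a 10 second time-to-live.
        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("queue.test");
        MessageProducer producer = producerSession.createProducer(queue);
        producer.setTimeToLive(10_000);
        for (int i = 0; i < 20; i++) {
            producer.send(producerSession.createTextMessage("msg-" + i));
        }

        // Consumer: transacted session; every received message is rolled back,
        // which mimics a listener that throws a runtime exception per message.
        Session consumerSession = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = consumerSession.createConsumer(consumerSession.createQueue("queue.test"));
        Message message;
        while ((message = consumer.receive(2000)) != null) {
            consumerSession.rollback();
        }

        Thread.sleep(15_000); // give the remaining messages time to expire, then check queue stats
        connection.close();
    }
}
 {code}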



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5274) Stuck messages and CPU churn when aborted transacted message expires

2014-08-27 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5274:


Attachment: AMQ-5274v2.zip

Uploading a modified version of AMQ-5274.zip that runs a JUnit test. Simply run 
mvn test to reproduce the issue.

 Stuck messages and CPU churn when aborted transacted message expires
 

 Key: AMQ-5274
 URL: https://issues.apache.org/jira/browse/AMQ-5274
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0, 5.9.0, 5.9.1, 5.10.0
 Environment: win64, RHEL11
Reporter: Yannick Malins
Priority: Critical
 Attachments: AMQ-5274.zip, AMQ-5274v2.zip, AMQ5274Test.java, 
 logs_extract.txt


 The test case is a simple producer/consumer:
 Producer: 20 messages are injected, with a timeout of 10s.
 Consumer: The redelivery policy is set to 0 retries (the issue exists with 
 other values). The consumer uses transactions and throws a runtime exception 
 on each message received.
 queue stats show 20 enqueue, 19 dequeue, 1 pending
 DLQ stat show 20 enqueue: all 20 messages go to DLQ, IDs ending in 1-10 for 
 failing, 11-20 for expiry (approx)
 the pending item (ID ending in 10) is a ghost message , and remains stuck 
 indefinitely in queue.test
 if you browse, the message is not shown
 A) if you restart the broker, after a short while the message is cleaned:
 jvm 1|  WARN | Duplicate message add attempt rejected. Destination: 
 QUEUE://ActiveMQ.DLQ, Message id: ID:REDACTED-52872-1405079629779-1:1:1:1:10
 jvm 1|  WARN | 
 org.apache.activemq.broker.region.cursors.QueueStorePrefetch@5b427f3c:ActiveMQ.DLQ,batchResetNeeded=false,storeHasMessages=true,size=0,cacheEnabled=true,maxBatchSize:20,hasSpace:tru
 e - cursor got duplicate: ID:REDACTED--52872-1405079629779-1:1:1:1:10, 4
 jvm 1|  WARN | duplicate message from store 
 ID:REDACTED--52872-1405079629779-1:1:1:1:10, redirecting for dlq processing
 B) if you purge, ActiveMQ logs a warning: WARN | queue://queue.test after 
 purge complete, message count stats report: 1
   the queue is marked as being empty.
   however if you restart the broker, the message re-appears shorty, 
 before being cleaned as above
   
   
 SUPPLEMENTARY: with activeMQ 5.9.0 and above , if you run the injection 
 several times, the CPU usage of ActiveMQ climbs drastically until the queue 
 is purged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5274) Stuck messages and CPU churn when aborted transacted message expires

2014-08-27 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5274:


Attachment: AMQ-5274v2.zip

 Stuck messages and CPU churn when aborted transacted message expires
 

 Key: AMQ-5274
 URL: https://issues.apache.org/jira/browse/AMQ-5274
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.8.0, 5.9.0, 5.9.1, 5.10.0
 Environment: win64, RHEL11
Reporter: Yannick Malins
Priority: Critical
 Attachments: AMQ-5274.zip, AMQ-5274v2.zip, AMQ5274Test.java, 
 logs_extract.txt


 The test case is a simple producer/consumer:
 Producer: 20 messages are injected, with a timeout of 10s.
 Consumer: The redelivery policy is set to 0 retries (the issue exists with 
 other values). The consumer uses transactions and throws a runtime exception 
 on each message received.
 queue stats show 20 enqueue, 19 dequeue, 1 pending
 DLQ stat show 20 enqueue: all 20 messages go to DLQ, IDs ending in 1-10 for 
 failing, 11-20 for expiry (approx)
 the pending item (ID ending in 10) is a ghost message , and remains stuck 
 indefinitely in queue.test
 if you browse, the message is not shown
 A) if you restart the broker, after a short while the message is cleaned:
 jvm 1|  WARN | Duplicate message add attempt rejected. Destination: 
 QUEUE://ActiveMQ.DLQ, Message id: ID:REDACTED-52872-1405079629779-1:1:1:1:10
 jvm 1|  WARN | 
 org.apache.activemq.broker.region.cursors.QueueStorePrefetch@5b427f3c:ActiveMQ.DLQ,batchResetNeeded=false,storeHasMessages=true,size=0,cacheEnabled=true,maxBatchSize:20,hasSpace:tru
 e - cursor got duplicate: ID:REDACTED--52872-1405079629779-1:1:1:1:10, 4
 jvm 1|  WARN | duplicate message from store 
 ID:REDACTED--52872-1405079629779-1:1:1:1:10, redirecting for dlq processing
 B) if you purge, ActiveMQ logs a warning: WARN | queue://queue.test after 
 purge complete, message count stats report: 1
   the queue is marked as being empty.
   however if you restart the broker, the message re-appears shorty, 
 before being cleaned as above
   
   
 SUPPLEMENTARY: with activeMQ 5.9.0 and above , if you run the injection 
 several times, the CPU usage of ActiveMQ climbs drastically until the queue 
 is purged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-12 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093858#comment-14093858
 ] 

Torsten Mielke commented on AMQ-5304:
-

Fixed in commit 
[ec2a3c750bbfb33763ac56b8b0a660bdf8542145|https://fisheye6.atlassian.com/changelog/activemq-git?cs=ec2a3c750bbfb33763ac56b8b0a660bdf8542145].

 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security
 Attachments: AMQ-5304.patch


 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also sets a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config:
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance, but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet().
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and having 
 Karaf do the authentication (such as in JBoss A-MQ). 
 The groupClass is properly set on the authorizationEntries within the 
 authorizationEntries list and only fails to be applied properly on the 
 tempDestinationAuthorizationEntry. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-12 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-5304.
-

   Resolution: Fixed
Fix Version/s: 5.11.0

 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security
 Fix For: 5.11.0

 Attachments: AMQ-5304.patch


 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also set a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet();
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and have 
 Karaf do the authentication (such as in JBoss A-MQ). 
 The groupClass is properly set on the authorizationEntries within the 
 authorizationEntries list and only fails to be applied properly on the 
 tempDestinationAuthorizationEntry. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5307) MQTT Transport codec does not properly deal with partial read of frame header

2014-08-05 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14085941#comment-14085941
 ] 

Torsten Mielke commented on AMQ-5307:
-

I can confirm the problem is resolved using the unit test attached to 
[ENTMQ-751|https://issues.jboss.org/browse/ENTMQ-751].

 MQTT Transport codec does not properly deal with partial read of frame header
 -

 Key: AMQ-5307
 URL: https://issues.apache.org/jira/browse/AMQ-5307
 Project: ActiveMQ
  Issue Type: Bug
  Components: MQTT, Transport
Affects Versions: 5.10.0
Reporter: Timothy Bish
Assignee: Timothy Bish
Priority: Critical
 Fix For: 5.11.0


 The Codec used to parse MQTT Frames does not properly deal with the case 
 where only part of the initial frame header arrives.  This can happen in 
 NIO+SSL etc where the incoming packet has only the first byte or so of the 
 frame which causes the process header method get stuck in an endless loop.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5307) MQTT Transport codec does not properly deal with partial read of frame header

2014-08-04 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14084765#comment-14084765
 ] 

Torsten Mielke commented on AMQ-5307:
-

Seems to be related to [ENTMQ-751|https://issues.jboss.org/browse/ENTMQ-751].

 MQTT Transport codec does not properly deal with partial read of frame header
 -

 Key: AMQ-5307
 URL: https://issues.apache.org/jira/browse/AMQ-5307
 Project: ActiveMQ
  Issue Type: Bug
  Components: MQTT, Transport
Affects Versions: 5.10.0
Reporter: Timothy Bish
Assignee: Timothy Bish
Priority: Critical
 Fix For: 5.11.0


 The Codec used to parse MQTT Frames does not properly deal with the case 
 where only part of the initial frame header arrives.  This can happen in 
 NIO+SSL etc where the incoming packet has only the first byte or so of the 
 frame which causes the process header method get stuck in an endless loop.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-01 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5304:
---

 Summary: groupClass not applied to 
TempDestinationAuthorizationEntry
 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke


When configuring the authorization plugin with a 
tempDestinationAuthorizationEntry that also sets a groupClass, this groupClass 
is not properly applied to the TempDestinationAuthorizationEntry instance. 

E.g. consider this example config:
{code:xml}
<authorizationPlugin>
  <map>
    <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
      <authorizationEntries>
        <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
        <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
      </authorizationEntries>
      <tempDestinationAuthorizationEntry>
        <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
      </tempDestinationAuthorizationEntry>
    </authorizationMap>
  </map>
</authorizationPlugin>
{code}


Its groupClass property setter is called with the class specified in Spring, but 
we don't apply the groupClass to the AuthorizationEntry. 





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-01 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082191#comment-14082191
 ] 

Torsten Mielke commented on AMQ-5304:
-

I have a unit test, but it's based on Pax Exam and loads JBoss A-MQ 6.0 directly. 
Will try to find out if I can write a more generic unit test.


 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security

 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also set a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet();
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and have 
 Karaf do the authentication (such as in JBoss A-MQ). 
 The groupClass is properly set on the authorizationEntries within the 
 authorizationEntries list and only fails to be applied properly on the 
 tempDestinationAuthorizationEntry. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-01 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5304:


Description: 
When configuring the authorization plugin with a 
tempDestinationAuthorizationEntry that also set a groupClass, this groupClass 
is not properly applied to the TempDestinationAuthorizationEntry instance. 

E.g. consider this example config
{code:xml}
<authorizationPlugin>
  <map>
    <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
      <authorizationEntries>
        <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
        <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
      </authorizationEntries>
      <tempDestinationAuthorizationEntry>
        <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
      </tempDestinationAuthorizationEntry>
    </authorizationMap>
  </map>
</authorizationPlugin>
{code}


The groupClass attribute is set on the TempDestinationAuthorizationEntry 
instance but we don't apply the groupClass to the AuthorizationEntry by calling 
afterPropertiesSet();

As a result, authorization fails when trying to create a temp destination. 
This can happen when deploying the broker inside a Karaf container and have 
Karaf do the authentication (such as in JBoss A-MQ). 
The groupClass is properly set on the authorizationEntries within the 
authorizationEntries list and only fails to be applied properly on the 
tempDestinationAuthorizationEntry. 





  was:
When configuring the authorization plugin with a 
tempDestinationAuthorizationEntry that also set a groupClass, this groupClass 
is not properly applied to the TempDestinationAuthorizationEntry instance. 

E.g. consider this example config
{code:xml}
<authorizationPlugin>
  <map>
    <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
      <authorizationEntries>
        <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
        <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
      </authorizationEntries>
      <tempDestinationAuthorizationEntry>
        <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
      </tempDestinationAuthorizationEntry>
    </authorizationMap>
  </map>
</authorizationPlugin>
{code}


Its groupClass property is called and set to the class specified in Spring but 
we don't apply the groupClass to the AuthorizationEntry. 




 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security

 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also set a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet();
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and have 
 Karaf do the authentication (such as in JBoss 

[jira] [Updated] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-01 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5304:


Attachment: AMQ-5304.patch

Attaching a possible fix for this bug in AMQ-5304.patch.
Will still try to also come up with a unit test that can be incorporated into 
the source tree.


 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security
 Attachments: AMQ-5304.patch


 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also set a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet();
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and have 
 Karaf do the authentication (such as in JBoss A-MQ). 
 The groupClass is properly set on the authorizationEntries within the 
 authorizationEntries list and only fails to be applied properly on the 
 tempDestinationAuthorizationEntry. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Issue Comment Deleted] (AMQ-5304) groupClass not applied to TempDestinationAuthorizationEntry

2014-08-01 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-5304:


Comment: was deleted

(was: A comment with security level 'activemq-developers' was removed.)

 groupClass not applied to TempDestinationAuthorizationEntry
 ---

 Key: AMQ-5304
 URL: https://issues.apache.org/jira/browse/AMQ-5304
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.10.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: authorization, security
 Attachments: AMQ-5304.patch


 When configuring the authorization plugin with a 
 tempDestinationAuthorizationEntry that also set a groupClass, this 
 groupClass is not properly applied to the TempDestinationAuthorizationEntry 
 instance. 
 E.g. consider this example config
 {code:xml}
 <authorizationPlugin>
   <map>
     <authorizationMap groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal">
       <authorizationEntries>
         <authorizationEntry queue=">" read="admin" write="client,admin" admin="client,admin"/>
         <authorizationEntry topic=">" read="client,admin" write="admin" admin="admin"/>
         <authorizationEntry topic="ActiveMQ.Advisory.>" read="admin,client" write="admin,client" admin="admin"/>
       </authorizationEntries>
       <tempDestinationAuthorizationEntry>
         <tempDestinationAuthorizationEntry read="client,admin" write="client,admin" admin="client,admin" groupClass="org.apache.karaf.jaas.boot.principal.RolePrincipal"/>
       </tempDestinationAuthorizationEntry>
     </authorizationMap>
   </map>
 </authorizationPlugin>
 {code}
 The groupClass attribute is set on the TempDestinationAuthorizationEntry 
 instance but we don't apply the groupClass to the AuthorizationEntry by 
 calling afterPropertiesSet();
 As a result, authorization fails when trying to create a temp destination. 
 This can happen when deploying the broker inside a Karaf container and have 
 Karaf do the authentication (such as in JBoss A-MQ). 
 The groupClass is properly set on the authorizationEntries within the 
 authorizationEntries list and only fails to be applied properly on the 
 tempDestinationAuthorizationEntry. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (AMQ-5141) Message expiry that is done as part of a removeSubscription command should not use the clients credentials.

2014-04-11 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-5141:
---

 Summary: Message expiry that is done as part of a 
removeSubscription command should not use the clients credentials.
 Key: AMQ-5141
 URL: https://issues.apache.org/jira/browse/AMQ-5141
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke


If the broker handles a RemoveInfo command it may also kick off a message 
expiry check for (I presume) any prefetched messages. If messages are to be 
expired they get sent to ActiveMQ.DLQ by default. See stack trace in next 
comment.

If the broker has security enabled with authorization turned on, and messages get 
sent to the DLQ as a result of the expiry check, then the broker uses the client's 
security context when sending the messages to the DLQ. 
This implies the client user needs to have write access to ActiveMQ.DLQ. 

As this may happen with any other client, all client users will require write 
access to ActiveMQ.DLQ, which may not be appropriate from a security point of 
view. 

The broker regularly runs an expiry check and uses a broker internal security 
context for this task. In my opinion this same broker internal security context 
should be used when expiring messages as part of the RemoveInfo command. The 
broker should not use the client's security context. 

[1]
The current behavior can raise the following SecurityException if the client 
user does not have write access to ActiveMQ.DLQ

{code}
2014-04-11 08:11:22,229 | WARN  | 2.38:61201@61616 | RegionBroker | 
ivemq.broker.region.RegionBroker  703 | 
105 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Caught an 
exception sending to DLQ: Message 
ID:S930A3085-50865-635327964441522304-1:1:363:2:1 dropped=false acked=false 
locked=true
java.lang.SecurityException: User Test is not authorized to write to: 
queue://ActiveMQ.DLQ
at 
org.apache.activemq.security.AuthorizationBroker.send(AuthorizationBroker.java:197)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.util.BrokerSupport.doResend(BrokerSupport.java:68)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.util.BrokerSupport.resendNoCopy(BrokerSupport.java:38)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.region.RegionBroker.sendToDeadLetterQueue(RegionBroker.java:691)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.advisory.AdvisoryBroker.sendToDeadLetterQueue(AdvisoryBroker.java:413)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.util.RedeliveryPlugin.sendToDeadLetterQueue(RedeliveryPlugin.java:132)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.region.RegionBroker.messageExpired(RegionBroker.java:659)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.messageExpired(BrokerFilter.java:257)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.broker.BrokerFilter.messageExpired(BrokerFilter.java:257)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 
org.apache.activemq.advisory.AdvisoryBroker.messageExpired(AdvisoryBroker.java:283)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
at 

[jira] [Commented] (AMQ-5141) Message expiry that is done as part of a removeSubscription command should not use the clients credentials.

2014-04-11 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966427#comment-13966427
 ] 

Torsten Mielke commented on AMQ-5141:
-

Don't have a Unit test at hand but could build one if time permits.

 Message expiry that is done as part of a removeSubscription command should 
 not use the clients credentials.
 ---

 Key: AMQ-5141
 URL: https://issues.apache.org/jira/browse/AMQ-5141
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
  Labels: DLQ, authorization, expiry, security

 If the broker handles a RemoveInfo command it may also kick off a message 
 expiry check for (I presume) any prefetched messages. If messages are to be 
 expired they get sent to ActiveMQ.DLQ by default. See stack trace in next 
 comment.
 If the broker is security enabled with authorization turned on and messages 
 get sent to DLQ as a result of the expiry check then the broker uses the 
 client's security context when sending the messages to DLQ. 
 This implies the client user needs to have write access to ActiveMQ.DLQ. 
 As this may happen with any other client, all client users will require write 
 access to ActiveMQ.DLQ, which may not be appropriate from a security point of 
 view. 
 The broker regularly runs an expiry check and uses a broker internal security 
 context for this task. In my opinion this same broker internal security 
 context should be used when expiring messages as part of the RemoveInfo 
 command. The broker should not use the client's security context. 
 [1]
 The current behavior can raise the following SecurityException if the client 
 user does not have write access to ActiveMQ.DLQ
 {code}
 2014-04-11 08:11:22,229 | WARN  | 2.38:61201@61616 | RegionBroker | 
 ivemq.broker.region.RegionBroker  703 | 
 105 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Caught an 
 exception sending to DLQ: Message 
 ID:S930A3085-50865-635327964441522304-1:1:363:2:1 dropped=false acked=false 
 locked=true
 java.lang.SecurityException: User Test is not authorized to write to: 
 queue://ActiveMQ.DLQ
   at 
 org.apache.activemq.security.AuthorizationBroker.send(AuthorizationBroker.java:197)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.doResend(BrokerSupport.java:68)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.resendNoCopy(BrokerSupport.java:38)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.region.RegionBroker.sendToDeadLetterQueue(RegionBroker.java:691)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.advisory.AdvisoryBroker.sendToDeadLetterQueue(AdvisoryBroker.java:413)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.util.RedeliveryPlugin.sendToDeadLetterQueue(RedeliveryPlugin.java:132)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.region.RegionBroker.messageExpired(RegionBroker.java:659)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 

[jira] [Commented] (AMQ-5141) Message expiry that is done as part of a removeSubscription command should not use the clients credentials.

2014-04-11 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966450#comment-13966450
 ] 

Torsten Mielke commented on AMQ-5141:
-

Here are two possible workarounds:
1)
The quick solution is to grant write permissions to user 'Test' but potentially 
other users could be affected as well and may also need write permissions for 
ActiveMQ.DLQ. In the worst case all users may require write permissions to 
ActiveMQ.DLQ.
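
For illustration, a minimal sketch of what workaround 1 could look like in the broker's 
authorization plugin configuration. The group names below ("users", "admins") are 
placeholders, not taken from this issue:

{code:xml}
<plugins>
  <authorizationPlugin>
    <map>
      <authorizationMap>
        <authorizationEntries>
          <!-- Placeholder groups: every group whose members may trigger
               expiry-to-DLQ sends needs write access to ActiveMQ.DLQ -->
          <authorizationEntry queue="ActiveMQ.DLQ"
                              read="admins" write="users,admins" admin="admins"/>
        </authorizationEntries>
      </authorizationMap>
    </map>
  </authorizationPlugin>
</plugins>
{code}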

2)
In case you don't care about these expired messages you can also configure the 
broker to simply discard expired messages using this configuration within the 
policyEntry config

<deadLetterStrategy>
  <sharedDeadLetterStrategy processExpired="false" />
</deadLetterStrategy>

as per http://activemq.apache.org/message-redelivery-and-dlq-handling.html. 
Then the permission problem won't arise.


 Message expiry that is done as part of a removeSubscription command should 
 not use the clients credentials.
 ---

 Key: AMQ-5141
 URL: https://issues.apache.org/jira/browse/AMQ-5141
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
  Labels: DLQ, authorization, expiry, security

 If the broker handles a RemoveInfo command it may also kick off a message 
 expiry check for (I presume) any prefetched messages. If messages are to be 
 expired they get sent to ActiveMQ.DLQ by default. See stack trace in next 
 comment.
 If the broker is security enabled with authorization turned on and messages 
 get sent to DLQ as a result of the expiry check then the broker uses the 
 client's security context when sending the messages to DLQ. 
 This implies the client user needs to have write access to ActiveMQ.DLQ. 
 As this may happen with any other client, all client users will require write 
 access to ActiveMQ.DLQ, which may not be appropriate from a security point of 
 view. 
 The broker regularly runs an expiry check and uses a broker internal security 
 context for this task. In my opinion this same broker internal security 
 context should be used when expiring messages as part of the RemoveInfo 
 command. The broker should not use the client's security context. 
 [1]
 The current behavior can raise the following SecurityException if the client 
 user does not have write access to ActiveMQ.DLQ
 {code}
 2014-04-11 08:11:22,229 | WARN  | 2.38:61201@61616 | RegionBroker | 
 ivemq.broker.region.RegionBroker  703 | 
 105 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Caught an 
 exception sending to DLQ: Message 
 ID:S930A3085-50865-635327964441522304-1:1:363:2:1 dropped=false acked=false 
 locked=true
 java.lang.SecurityException: User Test is not authorized to write to: 
 queue://ActiveMQ.DLQ
   at 
 org.apache.activemq.security.AuthorizationBroker.send(AuthorizationBroker.java:197)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.doResend(BrokerSupport.java:68)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.resendNoCopy(BrokerSupport.java:38)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.region.RegionBroker.sendToDeadLetterQueue(RegionBroker.java:691)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.advisory.AdvisoryBroker.sendToDeadLetterQueue(AdvisoryBroker.java:413)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.sendToDeadLetterQueue(MutableBrokerFilter.java:274)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.util.RedeliveryPlugin.sendToDeadLetterQueue(RedeliveryPlugin.java:132)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 

[jira] [Comment Edited] (AMQ-5141) Message expiry that is done as part of a removeSubscription command should not use the clients credentials.

2014-04-11 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13966450#comment-13966450
 ] 

Torsten Mielke edited comment on AMQ-5141 at 4/11/14 12:13 PM:
---

Here are two possible workarounds:
1)
The quick solution is to grant write permissions to user 'Test' but potentially 
other users could be affected as well and may also need write permissions for 
ActiveMQ.DLQ. In the worst case all users may require write permissions to 
ActiveMQ.DLQ.

2)
In case you don't care about these expired messages you can also configure the 
broker to simply discard expired messages using this configuration within the 
policyEntry config

{code:xml}
<deadLetterStrategy>
  <sharedDeadLetterStrategy processExpired="false" />
</deadLetterStrategy>
{code}

as per http://activemq.apache.org/message-redelivery-and-dlq-handling.html. 
Then the permission problem won't arise.



was (Author: tmielke):
Here are two possible workarounds:
1)
The quick solution is to grant write permissions to user 'Test' but potentially 
other users could be affected as well and may also need write permissions for 
ActiveMQ.DLQ. In the worst case all users may require write permissions to 
ActiveMQ.DLQ.

2)
In case you don't care about these expired messages you can also configure the 
broker to simply discard expired messages using this configuration within the 
policyEntry config

<deadLetterStrategy>
  <sharedDeadLetterStrategy processExpired="false" />
</deadLetterStrategy>

as per http://activemq.apache.org/message-redelivery-and-dlq-handling.html. 
Then the permission problem won't arise.


 Message expiry that is done as part of a removeSubscription command should 
 not use the clients credentials.
 ---

 Key: AMQ-5141
 URL: https://issues.apache.org/jira/browse/AMQ-5141
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
  Labels: DLQ, authorization, expiry, security

 If the broker handles a RemoveInfo command it may also kick off a message 
 expiry check for (I presume) any prefetched messages. If messages are to be 
 expired they get sent to ActiveMQ.DLQ by default. See stack trace in next 
 comment.
 If the broker is security enabled with authorization turned on and messages 
 get sent to DLQ as a result of the expiry check then the broker uses the 
 client's security context when sending the messages to DLQ. 
 This implies the client user needs to have write access to ActiveMQ.DLQ. 
 As this may happen with any other client, all client users will require write 
 access to ActiveMQ.DLQ, which may not be appropriate from a security point of 
 view. 
 The broker regularly runs an expiry check and uses a broker internal security 
 context for this task. In my opinion this same broker internal security 
 context should be used when expiring messages as part of the RemoveInfo 
 command. The broker should not use the client's security context. 
 [1]
 The current behavior can raise the following SecurityException if the client 
 user does not have write access to ActiveMQ.DLQ
 {code}
 2014-04-11 08:11:22,229 | WARN  | 2.38:61201@61616 | RegionBroker | 
 ivemq.broker.region.RegionBroker  703 | 
 105 - org.apache.activemq.activemq-osgi - 5.8.0.redhat-60024 | Caught an 
 exception sending to DLQ: Message 
 ID:S930A3085-50865-635327964441522304-1:1:363:2:1 dropped=false acked=false 
 locked=true
 java.lang.SecurityException: User Test is not authorized to write to: 
 queue://ActiveMQ.DLQ
   at 
 org.apache.activemq.security.AuthorizationBroker.send(AuthorizationBroker.java:197)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.MutableBrokerFilter.send(MutableBrokerFilter.java:135)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.doResend(BrokerSupport.java:68)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.util.BrokerSupport.resendNoCopy(BrokerSupport.java:38)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.region.RegionBroker.sendToDeadLetterQueue(RegionBroker.java:691)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 org.apache.activemq.broker.BrokerFilter.sendToDeadLetterQueue(BrokerFilter.java:262)[105:org.apache.activemq.activemq-osgi:5.8.0.redhat-60024]
   at 
 

[jira] [Resolved] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to automatically re

2014-01-06 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-4950.
-

   Resolution: Fixed
Fix Version/s: 5.10.0

Fixed in commit f69cbd8ec6ec7cfa78a83892d32c7bb3bbd3a7d1.

  java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse 
 cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to 
 automatically reconnect
 ---

 Key: AMQ-4950
 URL: https://issues.apache.org/jira/browse/AMQ-4950
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: XA, prepare, transaction
 Fix For: 5.10.0


 If an XA prepare() raises an exception back to the client it results in the 
 warning 
 {noformat}
 WARN  FailoverTransport - Transport (tcp://127.0.0.1:61249) failed, reason:  
 java.io.IOException: 
 Unexpected error occured: java.lang.ClassCastException: 
 org.apache.activemq.command.ExceptionResponse cannot be cast to 
 org.apache.activemq.command.IntegerResponse, attempting to automatically 
 reconnect
 {noformat}
 which triggers a failover reconnect and a replay of the transaction which 
 then causes
 {noformat}
 2013-12-20 13:38:12,581 [main] - WARN  TransactionContext - prepare of: 
 XID:[86,globalId=0001,branchId=0001] failed with: 
 javax.jms.JMSException: Cannot call prepare now.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to automatica

2014-01-06 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862984#comment-13862984
 ] 

Torsten Mielke edited comment on AMQ-4950 at 1/6/14 1:27 PM:
-

Fixed in commit 
[f69cbd8ec6ec7cfa78a83892d32c7bb3bbd3a7d1|https://git-wip-us.apache.org/repos/asf?p=activemq.git;a=commit;h=f69cbd8ec6ec7cfa78a83892d32c7bb3bbd3a7d1].


was (Author: tmielke):
Fixed in commit f69cbd8ec6ec7cfa78a83892d32c7bb3bbd3a7d1.

  java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse 
 cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to 
 automatically reconnect
 ---

 Key: AMQ-4950
 URL: https://issues.apache.org/jira/browse/AMQ-4950
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: XA, prepare, transaction
 Fix For: 5.10.0


 If an XA prepare() raises an exception back to the client it results in the 
 warning 
 {noformat}
 WARN  FailoverTransport - Transport (tcp://127.0.0.1:61249) failed, reason:  
 java.io.IOException: 
 Unexpected error occured: java.lang.ClassCastException: 
 org.apache.activemq.command.ExceptionResponse cannot be cast to 
 org.apache.activemq.command.IntegerResponse, attempting to automatically 
 reconnect
 {noformat}
 which triggers a failover reconnect and a replay of the transaction which 
 then causes
 {noformat}
 2013-12-20 13:38:12,581 [main] - WARN  TransactionContext - prepare of: 
 XID:[86,globalId=0001,branchId=0001] failed with: 
 javax.jms.JMSException: Cannot call prepare now.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Issue Comment Deleted] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to aut

2014-01-06 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-4950:


Comment: was deleted

(was: I have a unit test and possible patch already but want to test it 
thoroughly first. Will follow up in the new Year.)

  java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse 
 cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to 
 automatically reconnect
 ---

 Key: AMQ-4950
 URL: https://issues.apache.org/jira/browse/AMQ-4950
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: XA, prepare, transaction
 Fix For: 5.10.0


 If an XA prepare() raises an exception back to the client it results in the 
 warning 
 {noformat}
 WARN  FailoverTransport - Transport (tcp://127.0.0.1:61249) failed, reason:  
 java.io.IOException: 
 Unexpected error occured: java.lang.ClassCastException: 
 org.apache.activemq.command.ExceptionResponse cannot be cast to 
 org.apache.activemq.command.IntegerResponse, attempting to automatically 
 reconnect
 {noformat}
 which triggers a failover reconnect and a replay of the transaction which 
 then causes
 {noformat}
 2013-12-20 13:38:12,581 [main] - WARN  TransactionContext - prepare of: 
 XID:[86,globalId=0001,branchId=0001] failed with: 
 javax.jms.JMSException: Cannot call prepare now.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to automatically r

2013-12-22 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13855282#comment-13855282
 ] 

Torsten Mielke commented on AMQ-4950:
-

I have a unit test and possible patch already but want to test it thoroughly 
first. Will follow up in the new Year.

  java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse 
 cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to 
 automatically reconnect
 ---

 Key: AMQ-4950
 URL: https://issues.apache.org/jira/browse/AMQ-4950
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: XA, prepare, transaction

 If an XA prepare() raises an exception back to the client it results in the 
 warning 
 {noformat}
 WARN  FailoverTransport - Transport (tcp://127.0.0.1:61249) failed, reason:  
 java.io.IOException: 
 Unexpected error occured: java.lang.ClassCastException: 
 org.apache.activemq.command.ExceptionResponse cannot be cast to 
 org.apache.activemq.command.IntegerResponse, attempting to automatically 
 reconnect
 {noformat}
 which triggers a failover reconnect and a replay of the transaction which 
 then causes
 {noformat}
 2013-12-20 13:38:12,581 [main] - WARN  TransactionContext - prepare of: 
 XID:[86,globalId=0001,branchId=0001] failed with: 
 javax.jms.JMSException: Cannot call prepare now.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to automatically rec

2013-12-20 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4950:
---

 Summary:  java.lang.ClassCastException: 
org.apache.activemq.command.ExceptionResponse cannot be cast to 
org.apache.activemq.command.IntegerResponse, attempting to automatically 
reconnect
 Key: AMQ-4950
 URL: https://issues.apache.org/jira/browse/AMQ-4950
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.9.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke


If an XA prepare() raises an exception back to the client it results in the 
warning 

{noformat}
WARN  FailoverTransport - Transport (tcp://127.0.0.1:61249) failed, reason:  
java.io.IOException: 
Unexpected error occured: java.lang.ClassCastException: 
org.apache.activemq.command.ExceptionResponse cannot be cast to 
org.apache.activemq.command.IntegerResponse, attempting to automatically 
reconnect
{noformat}

which triggers a failover reconnect and a replay of the transaction which then 
causes

{noformat}
2013-12-20 13:38:12,581 [main] - WARN  TransactionContext - prepare of: 
XID:[86,globalId=0001,branchId=0001] failed with: 
javax.jms.JMSException: Cannot call prepare now.
{noformat}
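
To make the expected client-side behaviour concrete, here is a rough sketch (not the actual 
unit test from this issue; the broker URL, queue name and class name are assumptions) of an 
XA prepare call that, with the fix, should fail with an XAException on the calling thread 
instead of the transport-level ClassCastException shown above:

{code:title=XAPrepareFailureSketch.java}
import javax.jms.XAConnection;
import javax.jms.XASession;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import org.apache.activemq.ActiveMQXAConnectionFactory;

public class XAPrepareFailureSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a broker listening on tcp://localhost:61616
        ActiveMQXAConnectionFactory factory =
            new ActiveMQXAConnectionFactory("failover:(tcp://localhost:61616)");
        XAConnection connection = factory.createXAConnection();
        connection.start();
        XASession session = connection.createXASession();
        XAResource resource = session.getXAResource();

        // Minimal Xid, just for the sketch
        Xid xid = new Xid() {
            public int getFormatId() { return 86; }
            public byte[] getGlobalTransactionId() { return new byte[] { 1 }; }
            public byte[] getBranchQualifier() { return new byte[] { 1 }; }
        };

        resource.start(xid, XAResource.TMNOFLAGS);
        session.createProducer(session.createQueue("TEST.QUEUE"))
               .send(session.createTextMessage("payload"));
        resource.end(xid, XAResource.TMSUCCESS);
        try {
            resource.prepare(xid);
        } catch (XAException expected) {
            // With the fix, a broker-side prepare failure surfaces here
            // instead of tearing down the transport and replaying the TX.
        }
        connection.close();
    }
}
{code}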




--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (AMQ-4950) java.lang.ClassCastException: org.apache.activemq.command.ExceptionResponse cannot be cast to org.apache.activemq.command.IntegerResponse, attempting to automatically r

2013-12-20 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13853933#comment-13853933
 ] 

Torsten Mielke commented on AMQ-4950:
-

Full error message:
{noformat}
2013-12-20 13:38:12,558 [0.1:61249@61251] - WARN  FailoverTransport 
 - Transport (tcp://127.0.0.1:61249) failed, reason:  java.io.IOException: 
Unexpected error occured: java.lang.ClassCastException: 
org.apache.activemq.command.ExceptionResponse cannot be cast to 
org.apache.activemq.command.IntegerResponse, attempting to automatically 
reconnect
...
2013-12-20 13:38:12,581 [main   ] - WARN  TransactionContext
 - prepare of: XID:[86,globalId=0001,branchId=0001] failed with: 
javax.jms.JMSException: Cannot call prepare now.
javax.jms.JMSException: Cannot call prepare now.
at 
org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:54)
at 
org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1405)
at 
org.apache.activemq.TransactionContext.syncSendPacketWithInterruptionHandling(TransactionContext.java:757)
at 
org.apache.activemq.TransactionContext.prepare(TransactionContext.java:453)
at 
org.apache.activemq.bugs.AMQXXXTest.testXAPrepareFailure(AMQXXXTest.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at 
org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:107)
at 
org.apache.activemq.CombinationTestSupport.runBare(CombinationTestSupport.java:113)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
at 
org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
at 
org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcessWhenForked(SurefireStarter.java:81)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:68)
Caused by: javax.transaction.xa.XAException: Cannot call prepare now.
at 
org.apache.activemq.transaction.XATransaction.illegalStateTransition(XATransaction.java:100)
at 
org.apache.activemq.transaction.XATransaction.prepare(XATransaction.java:195)
at 
org.apache.activemq.broker.TransactionBroker.prepareTransaction(TransactionBroker.java:248)
at 
org.apache.activemq.bugs.AMQXXXTest$1.prepareTransaction(AMQXXXTest.java:81)
at 
org.apache.activemq.broker.MutableBrokerFilter.prepareTransaction(MutableBrokerFilter.java:127)
at 
org.apache.activemq.broker.TransportConnection.processPrepareTransaction(TransportConnection.java:408)
at 
org.apache.activemq.command.TransactionInfo.visit(TransactionInfo.java:98)
at 
org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:295)
at 
org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:152)
at 
org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
at 
org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
at 
org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:270)
at 
org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)

[jira] [Commented] (AMQ-4465) Rethink replayWhenNoConsumers solution

2013-12-18 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13851921#comment-13851921
 ] 

Torsten Mielke commented on AMQ-4465:
-

Agreed Gary. I will mark this bug as resolved.

 Rethink replayWhenNoConsumers solution
 --

 Key: AMQ-4465
 URL: https://issues.apache.org/jira/browse/AMQ-4465
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Torsten Mielke

 I would like to start a discussion about the way we allow messages to be 
 replayed back to the original broker in a broker network, i.e. setting 
 replayWhenNoConsumers=true.
 This discussion is based on the blog post 
 http://tmielke.blogspot.de/2012/03/i-have-messages-on-queue-but-they-dont.html
 but I will outline the full story here again. 
 Consider a network of two brokers A and B. 
 Broker A has a producer that sends one msg to queue Test.in. Broker B has a 
 consumer connected so the msg is transferred to broker B. Lets assume the 
 consumer disconnects from B *before* it consumes the msg and reconnects to 
 broker A. If broker B has replayWhenNoConsumers=true, the message will be 
 replayed back to broker A. 
 If that replay happens in a short time frame, the cursor will mark the 
 replayed msgs as a duplicate and won't dispatch it. To overcome this, one 
 needs to set enableAudit=false on the policyEntry for the destination. 
 This has a consequence as it disables duplicate detection in the cursor. 
 External JMS producers will still be blocked from sending duplicates thanks 
 to the duplicate detection built into the persistence adapter. 
 However you can still get duplicate messages over the network bridge now. 
 With enableAudit=false these duplicates will be happily added to the cursor 
 now. If the same consumer receives the duplicate message, it will likely 
 detect the duplicate. However if the duplicate message is dispatched to a 
 different consumer, it won't be detected but will be processed by the 
 application.
 For many use cases its important not to receive duplicate messages so the 
 above setup replayWhenNoConsumers=true and enableAudit=false becomes a 
 problem.
 There is the additional option of specifying auditNetworkProducers=true on 
 the transport connector but that's very likely going to have consequences as 
 well. With auditNetworkProducers=true we will now detect duplicates over 
 the network bridge, so if there is a network glitch while the message is 
 replayed back on the bridge to broker A and broker B tries to resend the 
 message again, it will be detected as a duplicate on broker A. This is good.
 However lets assume the consumer now disconnects from broker A *after* the 
 message was replayed back from broker B to broker A but *before* the consumer 
 actually received the message. The consumer then reconnects to broker B 
 again. 
 The replayed message is on broker A now. Broker B registers a new demand for 
 this message (due to the consumer reconnecting) and broker A will pass on the 
 message to broker B again. However due to auditNetworkProducers=true broker 
 B will treat the resent message as a duplicate and very likely not accept it 
 (or even worse simply drop the message - not sure how exactly it will 
 behave). 
 So the message is stuck again and won't be dispatched to the consumer on 
 broker B. 
 The networkTTL setting will further have an effect on this scenario and so 
 will have other broker topologies like a full mesh.
 It seems to me that 
 - When allowing replayWhenNoConsumers=true you may receive duplicate messages 
 unless you also set auditNetworkProducers=true which has consequences as 
 well.
 - If consumers reconnect to a different broker each time, you may end up with 
 msgs stuck on a broker from which they won't get dispatched. 
 - Ideally you want sticky consumers, i.e. they reconnect to the same broker 
 if possible in order to avoid replaying back messages. This implies that you 
 don't want to use randomize=true on failover urls. I don't think we recommend 
 this in any docs.
 - The network ttl will potentially never be high enough and the message may 
 be stuck on a particular broker as the consumer may have reconnected to 
 another broker in the network.
 I am sure there are more sides to this discussion. I just wanted to capture 
 what gtully and I found when discussing this problem. 
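
For readers following the discussion, a rough sketch of the configuration knobs mentioned 
above (attribute placement as in 5.8-era broker XML; the values are illustrative only, not a 
recommendation from this issue):

{code:xml}
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerB">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- enableAudit="false" lets replayed messages past the cursor,
             at the cost of cursor-level duplicate detection -->
        <policyEntry queue=">" enableAudit="false">
          <networkBridgeFilterFactory>
            <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
          </networkBridgeFilterFactory>
        </policyEntry>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
  <transportConnectors>
    <!-- auditNetworkProducers="true" enables duplicate detection for
         messages arriving over the network bridge -->
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"
                        auditNetworkProducers="true"/>
  </transportConnectors>
</broker>
{code}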



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Resolved] (AMQ-4465) Rethink replayWhenNoConsumers solution

2013-12-18 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-4465.
-

   Resolution: Fixed
Fix Version/s: 5.9.0

This should be fixed by the changes in AMQ-4607. 

 Rethink replayWhenNoConsumers solution
 --

 Key: AMQ-4465
 URL: https://issues.apache.org/jira/browse/AMQ-4465
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Torsten Mielke
 Fix For: 5.9.0


 I would like to start a discussion about the way we allow messages to be 
 replayed back to the original broker in a broker network, i.e. setting 
 replayWhenNoConsumers=true.
 This discussion is based on the blog post 
 http://tmielke.blogspot.de/2012/03/i-have-messages-on-queue-but-they-dont.html
 but I will outline the full story here again. 
 Consider a network of two brokers A and B. 
 Broker A has a producer that sends one msg to queue Test.in. Broker B has a 
 consumer connected so the msg is transferred to broker B. Lets assume the 
 consumer disconnects from B *before* it consumes the msg and reconnects to 
 broker A. If broker B has replayWhenNoConsumers=true, the message will be 
 replayed back to broker A. 
 If that replay happens in a short time frame, the cursor will mark the 
 replayed msgs as a duplicate and won't dispatch it. To overcome this, one 
 needs to set enableAudit=false on the policyEntry for the destination. 
 This has a consequence as it disables duplicate detection in the cursor. 
 External JMS producers will still be blocked from sending duplicates thanks 
 to the duplicate detection built into the persistence adapter. 
 However you can still get duplicate messages over the network bridge now. 
 With enableAudit=false these duplicates will be happily added to the cursor 
 now. If the same consumer receives the duplicate message, it will likely 
 detect the duplicate. However if the duplicate message is dispatched to a 
 different consumer, it won't be detected but will be processed by the 
 application.
 For many use cases its important not to receive duplicate messages so the 
 above setup replayWhenNoConsumers=true and enableAudit=false becomes a 
 problem.
 There is the additional option of specifying auditNetworkProducers=true on 
 the transport connector but that's very likely going to have consequences as 
 well. With auditNetworkProducers=true we will now detect duplicates over 
 the network bridge, so if there is a network glitch while the message is 
 replayed back on the bridge to broker A and broker B tries to resend the 
 message again, it will be detected as a duplicate on broker A. This is good.
 However lets assume the consumer now disconnects from broker A *after* the 
 message was replayed back from broker B to broker A but *before* the consumer 
 actually received the message. The consumer then reconnects to broker B 
 again. 
 The replayed message is on broker A now. Broker B registers a new demand for 
 this message (due to the consumer reconnecting) and broker A will pass on the 
 message to broker B again. However due to auditNetworkProducers=true broker 
 B will treat the resent message as a duplicate and very likely not accept it 
 (or even worse simply drop the message - not sure how exactly it will 
 behave). 
 So the message is stuck again and won't be dispatched to the consumer on 
 broker B. 
 The networkTTL setting will further have an effect on this scenario and so 
 will have other broker topologies like a full mesh.
 It seems to me that 
 - When allowing replayWhenNoConsumers=true you may receive duplicate messages 
 unless you also set auditNetworkProducers=true which has consequences as 
 well.
 - If consumers reconnect to a different broker each time, you may end up with 
 msgs stuck on a broker from which they won't get dispatched. 
 - Ideally you want sticky consumers, i.e. they reconnect to the same broker 
 if possible in order to avoid replaying back messages. This implies that you 
 don't want to use randomize=true on failover urls. I don't think we recommend 
 this in any docs.
 - The network ttl will potentially never be high enough and the message may 
 be stuck on a particular broker as the consumer may have reconnected to 
 another broker in the network.
 I am sure there are more sides to this discussion. I just wanted to capture 
 what gtully and I found when discussing this problem. 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (AMQ-4595) QueueBrowser hangs when browsing large queues

2013-08-07 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13731719#comment-13731719
 ] 

Torsten Mielke commented on AMQ-4595:
-

@Nicholas - Thx for the update. Will do.

 QueueBrowser hangs when browsing large queues
 -

 Key: AMQ-4595
 URL: https://issues.apache.org/jira/browse/AMQ-4595
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.8.0
 Environment: All
Reporter: Nicholas Rahn
Assignee: Timothy Bish
Priority: Critical
  Labels: QueueBrowser
 Fix For: 5.9.0

 Attachments: AMQ4595Test.java, AMQ580BrowsingBug.java, 
 amq-test-20130621T155120.log


 When trying to browse a queue with a QueueBrowser, the browsing will hang and 
 never complete. This appears to happen only with a lot of messages in the 
 queue. 1000 messages work correctly, but 10,000 hangs.
 I have attached a unit test that exhibits the problem. Change the 
 messageToSend variable in the test method to see the difference between 
 small queue size and large queue size. 
 I've attached the unit test code as well as the output from one of the runs 
 with 10,000 messages. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4595) QueueBrowser hangs when browsing large queues

2013-07-12 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13706779#comment-13706779
 ] 

Torsten Mielke commented on AMQ-4595:
-

Hi Tim,

Do you mind explaining a bit more why maxAuditDepth needs to be raised in order 
to browse a higher number of messages? 
Also, I barely found any documentation on this property. Can we get it added to 
http://activemq.apache.org/per-destination-policies.html? 

Would the current behavior (when not setting maxAuditDepth) not rather be a 
bug? From a client's point of view should a queue browser not be used like an 
ordinary consumer and not require any broker side config changes? 
I am concerned that no ActiveMQ user will think of raising maxAuditDepth in 
order to browse more than the default 400 messages. This does not seem 
intuitive.

And finally I want to add that you also need to use version 5.9-SNAPSHOT as 
even the updated test case does not work with 5.8.0. I presume due to the fix 
that went in for AMQ-4487?
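
For reference, setting that property would look roughly like the following per-destination 
policy (the value is illustrative only, not taken from this issue):

{code:xml}
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Raise the audit window so browsing is not cut off at the default depth -->
      <policyEntry queue=">" maxAuditDepth="10000"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
{code}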


 QueueBrowser hangs when browsing large queues
 -

 Key: AMQ-4595
 URL: https://issues.apache.org/jira/browse/AMQ-4595
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.8.0
 Environment: All
Reporter: Nicholas Rahn
Priority: Critical
  Labels: QueueBrowser
 Attachments: AMQ4595Test.java, AMQ580BrowsingBug.java, 
 amq-test-20130621T155120.log


 When trying to browse a queue with a QueueBrowser, the browsing will hang and 
 never complete. This appears to happen only with a lot of messages in the 
 queue. 1000 messages work correctly, but 10,000 hangs.
 I have attached a unit test that exhibits the problem. Change the 
 messageToSend variable in the test method to see the difference between 
 small queue size and large queue size. 
 I've attached the unit test code as well as the output from one of the runs 
 with 10,000 messages. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (AMQ-4571) Improve DestinationFilter to allow any filter to unsubscribe its wrapped destination from a durable subscruption

2013-06-05 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-4571.
-

Resolution: Fixed

 Improve DestinationFilter to allow any filter to unsubscribe its wrapped 
 destination from a durable subscruption
 

 Key: AMQ-4571
 URL: https://issues.apache.org/jira/browse/AMQ-4571
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Timothy Bish
Assignee: Timothy Bish
Priority: Minor
 Fix For: 5.9.0


 Related to AMQ-4356 lets make the durable unsbscribe possible from any 
 DestinationFilter so that custom filters can clean up properly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4571) Improve DestinationFilter to allow any filter to unsubscribe its wrapped destination from a durable subscruption

2013-06-05 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13675835#comment-13675835
 ] 

Torsten Mielke commented on AMQ-4571:
-

Unit test for this fix added in 
activemq-unit-tests/src/test/java/org/apache/activemq/broker/virtual/DestinationInterceptorDurableSubTest.java

 Improve DestinationFilter to allow any filter to unsubscribe its wrapped 
 destination from a durable subscruption
 

 Key: AMQ-4571
 URL: https://issues.apache.org/jira/browse/AMQ-4571
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Timothy Bish
Assignee: Timothy Bish
Priority: Minor
 Fix For: 5.9.0


 Related to AMQ-4356 lets make the durable unsbscribe possible from any 
 DestinationFilter so that custom filters can clean up properly. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4356) unsubcribes DurableSuscriber does not work well with Virtual Topics

2013-06-05 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13675843#comment-13675843
 ] 

Torsten Mielke commented on AMQ-4356:
-

This fix is superseded by AMQ-4571.

 unsubcribes DurableSuscriber does not work  well with Virtual Topics 
 -

 Key: AMQ-4356
 URL: https://issues.apache.org/jira/browse/AMQ-4356
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.8.0
 Environment: With kahaDb store
Reporter: Federico Weisse
Assignee: Timothy Bish
 Fix For: 5.9.0


 We have a Virtual Topic with 2 consumers,
 then we use a DurableSubscriber on the topic (to use it as a normal topic).
 When we call session.unsubscribe, the method returns OK and the 
 DurableSubscriber disappears (from the web console), but the storePercentUsage 
 doesn't decrease, and when we restart the broker the durable subscribers are 
 there again. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4567) JMX operations on broker bypass authorization plugin

2013-06-03 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673192#comment-13673192
 ] 

Torsten Mielke commented on AMQ-4567:
-

Hi Christian,

Yes, I think we should enhance it. 
Using the authorization plugin we can fine tune what operations a user is 
allowed to invoke. There are admin rights to be given to users for 
creating/destroying destinations.

If JMX access to the broker were only done by JMX tools like jconsole, this bug 
would be less relevant. But the AMQ web console uses JMX for creating/deleting 
destinations and IIRC subscriptions as well. Right now it's impossible to secure 
the web console in a way that certain users cannot invoke these administrative 
functions but still have read access to the console in general.



  JMX operations on broker bypass authorization plugin
 -

 Key: AMQ-4567
 URL: https://issues.apache.org/jira/browse/AMQ-4567
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.8.0
Reporter: Torsten Mielke
  Labels: authorization

 When securing the broker using authentication and authorization, any JMX 
 operations on the broker completely bypass the authorization plugin.
 So anyone can modify the broker, bypassing the security checks. Also, because 
 of this it's not possible to define a read-only user for the web console.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4565) Cannot unsubscribe from virtual topics

2013-06-03 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13673204#comment-13673204
 ] 

Torsten Mielke commented on AMQ-4565:
-

Is this problem related to or even the same as AMQ-4356?

 Cannot unsubscribe from virtual topics
 --

 Key: AMQ-4565
 URL: https://issues.apache.org/jira/browse/AMQ-4565
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Christian Posta

 Virtual Topics allow us to do things with pub/sub that we cannot otherwise do 
 with the JMS 1.1 spec. However, with durable subs we can unsubscribe a consumer 
 telling the broker we are no longer interested in messages. If we just stop 
 consuming, the queues can fill up.
 With the queue-based impl of VT, we have to wait for the queue to be empty to 
 delete it... to do that reliably, the producer would have to stop producing? 
 And/or we have to use some TTL for the messages? Not sure... 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4562) SimplePriorityMessageDispatchChannel.clear() needs to reset size attribute

2013-05-30 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4562:
---

 Summary: SimplePriorityMessageDispatchChannel.clear() needs to 
reset size attribute
 Key: AMQ-4562
 URL: https://issues.apache.org/jira/browse/AMQ-4562
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.8.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke


SimplePriorityMessageDispatchChannel.clear() deletes all prefetched messages 
but does not reset the size counter. The other method removeAll() does it 
correctly.

Propose to fix this as follows:

{code:title=SimplePriorityMessageDispatchChannel.java}
public void clear() {
    synchronized (mutex) {
        for (int i = 0; i < MAX_PRIORITY; i++) {
            lists[i].clear();
        }
        size = 0;
    }
}
{code}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4562) SimplePriorityMessageDispatchChannel.clear() needs to reset size attribute

2013-05-30 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13670159#comment-13670159
 ] 

Torsten Mielke commented on AMQ-4562:
-

I hope this fix is obvious as I am not sure yet how to test this in a unit 
test. 
If I get the ok, I will push the fix without a test.
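
In case it helps, a rough sketch of what such a test could look like, assuming 
SimplePriorityMessageDispatchChannel exposes the usual MessageDispatchChannel 
enqueue/clear/isEmpty methods; this is not a tested patch:

{code:title=SimplePriorityMessageDispatchChannelClearSketch.java}
import org.apache.activemq.SimplePriorityMessageDispatchChannel;
import org.apache.activemq.command.ActiveMQMessage;
import org.apache.activemq.command.MessageDispatch;

public class SimplePriorityMessageDispatchChannelClearSketch {
    public static void main(String[] args) {
        SimplePriorityMessageDispatchChannel channel =
            new SimplePriorityMessageDispatchChannel();

        // Enqueue one prefetched message
        ActiveMQMessage message = new ActiveMQMessage();
        message.setPriority((byte) 4);
        MessageDispatch dispatch = new MessageDispatch();
        dispatch.setMessage(message);
        channel.enqueue(dispatch);

        // Before the fix, clear() empties the lists but leaves size > 0,
        // so isEmpty() keeps returning false.
        channel.clear();
        if (!channel.isEmpty()) {
            throw new AssertionError("size was not reset by clear()");
        }
    }
}
{code}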

 SimplePriorityMessageDispatchChannel.clear() needs to reset size attribute
 --

 Key: AMQ-4562
 URL: https://issues.apache.org/jira/browse/AMQ-4562
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.8.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke

 SimplePriorityMessageDispatchChannel.clear() deletes all prefetched messages 
 but does not reset the size counter. The other method removeAll() does it 
 correctly.
 Propose to fix this as follows:
 {code:title=SimplePriorityMessageDispatchChannel.java}
 public void clear() {
     synchronized (mutex) {
         for (int i = 0; i < MAX_PRIORITY; i++) {
             lists[i].clear();
         }
         size = 0;
     }
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4480) mkahadb with perDestination=true lazily loads kahadb journal files after startup

2013-04-22 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13637941#comment-13637941
 ] 

Torsten Mielke commented on AMQ-4480:
-

Will try to work out a JUnit test later.

 mkahadb with perDestination=true lazily loads kahadb journal files after 
 startup
 --

 Key: AMQ-4480
 URL: https://issues.apache.org/jira/browse/AMQ-4480
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0
Reporter: Torsten Mielke

 Using the following mKahaDB config:
 {code:xml}
 <persistenceAdapter>
   <mKahaDB directory="${activemq.data}/kahadb">
     <filteredPersistenceAdapters>
       <filteredKahaDB perDestination="true">
         <persistenceAdapter>
           <kahaDB journalMaxFileLength="32mb" />
         </persistenceAdapter>
       </filteredKahaDB>
     </filteredPersistenceAdapters>
   </mKahaDB>
 </persistenceAdapter>
 {code}
 Note perDestination=true. 
 Using that configuration and sending a message to a JMS queue whose name is 
 longer than 50 characters, this destination's messages won't be loaded 
 eagerly upon a restart of the broker. As a result that destination does not 
 show up in JMX. 
 Only when a producer or consumer connects to this destination, this 
 destination gets loaded from kahadb as this broker log output confirms
 {noformat}
 INFO | KahaDB is version 4
 INFO | Recovering from the journal ...
 INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
 {noformat}
 This log output is written after the broker had completely started up. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4480) mkahadb with perDestination=true lazily loads kahadb journal files after startup

2013-04-22 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4480:
---

 Summary: mkahadb with perDestination=true lazily loads kahadb 
journal files after startup
 Key: AMQ-4480
 URL: https://issues.apache.org/jira/browse/AMQ-4480
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0
Reporter: Torsten Mielke


Using the following mKahaDB config:
{code:xml}
<persistenceAdapter>
  <mKahaDB directory="${activemq.data}/kahadb">
    <filteredPersistenceAdapters>
      <filteredKahaDB perDestination="true">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb" />
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
{code}

Note perDestination=true. 
Using that configuration and sending a message to a JMS queue whose name is 
longer than 50 characters, this destination's messages won't be loaded eagerly 
upon a restart of the broker. As a result that destination does not show up in 
JMX. 

Only when a producer or consumer connects to this destination, this destination 
gets loaded from kahadb as this broker log output confirms

{noformat}
INFO | KahaDB is version 4
INFO | Recovering from the journal ...
INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
{noformat}

This log output is written after the broker had completely started up. 


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4480) mkahadb with perDestination=true lazily loads kahadb journal files after startup

2013-04-22 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-4480:


Affects Version/s: 5.8.0

 mkahadb with perDestination=true lazily loads kahadb journal files after 
 startup
 --

 Key: AMQ-4480
 URL: https://issues.apache.org/jira/browse/AMQ-4480
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0, 5.8.0
Reporter: Torsten Mielke

 Using the following mKahaDB config:
 {code:xml}
 <persistenceAdapter>
   <mKahaDB directory="${activemq.data}/kahadb">
     <filteredPersistenceAdapters>
       <filteredKahaDB perDestination="true">
         <persistenceAdapter>
           <kahaDB journalMaxFileLength="32mb" />
         </persistenceAdapter>
       </filteredKahaDB>
     </filteredPersistenceAdapters>
   </mKahaDB>
 </persistenceAdapter>
 {code}
 Note perDestination=true. 
 Using that configuration and sending a message to a JMS queue whose name is 
 longer than 50 characters, this destination's messages won't be loaded 
 eagerly upon a restart of the broker. As a result that destination does not 
 show up in JMX. 
 Only when a producer or consumer connects to this destination, this 
 destination gets loaded from kahadb as this broker log output confirms
 {noformat}
 INFO | KahaDB is version 4
 INFO | Recovering from the journal ...
 INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
 {noformat}
 This log output is written after the broker had completely started up. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4480) mkahadb with perDestination=true lazily loads kahadb journal files after startup

2013-04-22 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13637999#comment-13637999
 ] 

Torsten Mielke commented on AMQ-4480:
-

This seems to be caused by the default file and dir name lengths used when 
converting dests to filenames. In code these are defined in 
activemq-broker/src/main/java/org/apache/activemq/util/IOHelper.java as 

{code:title=IOHelper.java}
static {
  MAX_DIR_NAME_LENGTH = Integer.getInteger("MaximumDirNameLength", 200);
  MAX_FILE_NAME_LENGTH = Integer.getInteger("MaximumFileNameLength", 64);
}
{code}


*Possible workarounds:*

- Don't use perDestination=true with mKahaDB

- Don't use destination names > 50 characters.

- Pass the JVM option -DMaximumFileNameLength=150 to the broker JVM. 
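
For example (assuming the standard bin/activemq start script, which picks up extra JVM 
flags via the ACTIVEMQ_OPTS environment variable):

{code}
export ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -DMaximumFileNameLength=150"
bin/activemq start
{code}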



 mkahadb with perDestination=true lazily loads kahadb journal files after 
 startup
 --

 Key: AMQ-4480
 URL: https://issues.apache.org/jira/browse/AMQ-4480
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0, 5.8.0
Reporter: Torsten Mielke

 Using the following mKahaDB config:
 {code:xml}
 <persistenceAdapter>
   <mKahaDB directory="${activemq.data}/kahadb">
     <filteredPersistenceAdapters>
       <filteredKahaDB perDestination="true">
         <persistenceAdapter>
           <kahaDB journalMaxFileLength="32mb" />
         </persistenceAdapter>
       </filteredKahaDB>
     </filteredPersistenceAdapters>
   </mKahaDB>
 </persistenceAdapter>
 {code}
 Note perDestination=true. 
 Using that configuration and sending a message to a JMS queue whose name is 
 longer than 50 characters, this destination's messages won't be loaded 
 eagerly upon a restart of the broker. As a result that destination does not 
 show up in JMX. 
 Only when a producer or consumer connects to this destination, this 
 destination gets loaded from kahadb as this broker log output confirms
 {noformat}
 INFO | KahaDB is version 4
 INFO | Recovering from the journal ...
 INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
 {noformat}
 This log output is written after the broker had completely started up. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (AMQ-4480) mkahadb with perDestination=true lazily loads kahadb journal files after startup

2013-04-22 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13637999#comment-13637999
 ] 

Torsten Mielke edited comment on AMQ-4480 at 4/22/13 1:44 PM:
--

This seems to be caused by the default file and dir name lengths used when 
converting dests to filenames. In code these are defined in 
activemq-broker/src/main/java/org/apache/activemq/util/IOHelper.java as 

{code:title=IOHelper.java}
static {
  MAX_DIR_NAME_LENGTH = Integer.getInteger("MaximumDirNameLength", 200);
  MAX_FILE_NAME_LENGTH = Integer.getInteger("MaximumFileNameLength", 64);
}
{code}


*Possible workarounds:*

- Don't use perDestination=true with mKahaDB

- Don't use destination names > 50 characters.

- Pass the JVM option -DMaximumFileNameLength=150 to the broker JVM (or any 
other value > 64 that holds the full destination name plus another 14 characters 
that get prepended when mapping the destination to a kahadb folder). 



  was (Author: tmielke):
This seems to be caused by the default file and dir name lengths used when 
converting dests to filenames. In code these are defined in 
activemq-broker/src/main/java/org/apache/activemq/util/IOHelper.java as 

{code:title=IOHelper.java}
static {
  MAX_DIR_NAME_LENGTH = Integer.getInteger("MaximumDirNameLength", 200);
  MAX_FILE_NAME_LENGTH = Integer.getInteger("MaximumFileNameLength", 64);
}
{code}


*Possible workarounds:*

- Don't use perDestination=true with mKahaDB

- Don't use destination names > 50 characters.

- Pass the JVM option -DMaximumFileNameLength=150 to the broker JVM. 


  
 mkahadb with perDestination=true lazily loads kahadb journal files after 
 startup
 --

 Key: AMQ-4480
 URL: https://issues.apache.org/jira/browse/AMQ-4480
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.7.0, 5.8.0
Reporter: Torsten Mielke

 Using the following mKahaDB config:
 {code:xml}
 <persistenceAdapter>
   <mKahaDB directory="${activemq.data}/kahadb">
     <filteredPersistenceAdapters>
       <filteredKahaDB perDestination="true">
         <persistenceAdapter>
           <kahaDB journalMaxFileLength="32mb" />
         </persistenceAdapter>
       </filteredKahaDB>
     </filteredPersistenceAdapters>
   </mKahaDB>
 </persistenceAdapter>
 {code}
 Note perDestination=true. 
 Using that configuration and sending a message to a JMS queue whose name is 
 longer than 50 characters, this destination's messages won't be loaded 
 eagerly upon a restart of the broker. As a result that destination does not 
 show up in JMX. 
 Only when a producer or consumer connects to this destination, this 
 destination gets loaded from kahadb as this broker log output confirms
 {noformat}
 INFO | KahaDB is version 4
 INFO | Recovering from the journal ...
 INFO | Recovery replayed 1 operations from the journal in 0.0010 seconds.
 {noformat}
 This log output is written after the broker had completely started up. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4465) Rethink replayWhenNoConsumers solution

2013-04-10 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4465:
---

 Summary: Rethink replayWhenNoConsumers solution
 Key: AMQ-4465
 URL: https://issues.apache.org/jira/browse/AMQ-4465
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Broker
Affects Versions: 5.8.0
Reporter: Torsten Mielke


I would like to start a discussion about the way we allow messages to be 
replayed back to the original broker in a broker network, i.e. setting 
replayWhenNoConsumers=true.

This discussion is based on the blog post 
http://tmielke.blogspot.de/2012/03/i-have-messages-on-queue-but-they-dont.html
but I will outline the full story here again. 


Consider a network of two brokers A and B. 
Broker A has a producer that sends one msg to queue Test.in. Broker B has a 
consumer connected so the msg is transferred to broker B. Let's assume the 
consumer disconnects from B *before* it consumes the msg and reconnects to 
broker A. If broker B has replayWhenNoConsumers=true, the message will be 
replayed back to broker A. 
If that replay happens in a short time frame, the cursor will mark the replayed 
msg as a duplicate and won't dispatch it. To overcome this, one needs to set 
enableAudit=false on the policyEntry for the destination, as sketched below. 
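
For reference, a hedged configuration sketch of that combination (the queue name is illustrative; actual policy entries would need to match the real destinations):

{code:xml}
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- disable the cursor audit so replayed messages are not dropped as duplicates -->
      <policyEntry queue="Test.in" enableAudit="false">
        <networkBridgeFilterFactory>
          <!-- allow messages to be replayed back over the bridge when no local consumers exist -->
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
{code}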

This has a consequence as it disables duplicate detection in the cursor. 
External JMS producers will still be blocked from sending duplicates thanks to 
the duplicate detection built into the persistence adapter. 
However, you can still get duplicate messages over the network bridge. With 
enableAudit=false these duplicates will now happily be added to the cursor. If 
the same consumer receives the duplicate message, it will likely detect the 
duplicate. However if the duplicate message is dispatched to a different 
consumer, it won't be detected but will be processed by the application.

For many use cases it's important not to receive duplicate messages, so the above 
setup (replayWhenNoConsumers=true and enableAudit=false) becomes a problem.

There is the additional option of specifying auditNetworkProducers=true on 
the transport connector but that's very likely going to have consequences as 
well. With auditNetworkProducers=true we will now detect duplicates over the 
network bridge, so if there is a network glitch while the message is replayed 
back on the bridge to broker A and broker B tries to resend the message again, 
it will be detected as a duplicate on broker A. This is good.

However, let's assume the consumer now disconnects from broker A *after* the 
message was replayed back from broker B to broker A but *before* the consumer 
actually received the message. The consumer then reconnects to broker B again. 
The replayed message is on broker A now. Broker B registers a new demand for 
this message (due to the consumer reconnecting) and broker A will pass on the 
message to broker B again. However due to auditNetworkProducers=true broker B 
will treat the resent message as a duplicate and very likely not accept it (or 
even worse simply drop the message - not sure how exactly it will behave). 

So the message is stuck again and won't be dispatched to the consumer on broker 
B. 
The networkTTL setting will further have an effect on this scenario and so will 
have other broker topologies like a full mesh.

It seems to me that 
- When allowing replayWhenNoConsumers=true you may receive duplicate messages 
unless you also set auditNetworkProducers=true, which has consequences as well.
- If consumers reconnect to a different broker each time, you may 
end up with msgs stuck on a broker that won't get dispatched. 
- Ideally you want sticky consumers, i.e. they reconnect to the same broker if 
possible in order to avoid replaying back messages. This implies that you don't 
want to use randomize=true on failover urls. I don't think we recommend this in 
any docs.
- The network ttl will potentially never be high enough and the message may be 
stuck on a particular broker as the consumer may have reconnected to another 
broker in the network.

I am sure there are more sides to this discussion. I just wanted to capture 
what gtully and I found when discussing this problem. 



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4366) PooledConnectionFactory closes connections that are in use

2013-04-09 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13626330#comment-13626330
 ] 

Torsten Mielke commented on AMQ-4366:
-

I am not aware of any side effects when applying the workaround and setting 
idleTimeout=0. 
As a consequence sessions won't be invalidated just because they were idle for 
a specific period of time. But that should generally not be a problem. 
I had a customer testing this idleTimeout=0 in their integration test env and 
it did not cause any problems. 

Also, it's my understanding that you generally should not lose any messages due 
to the "The Session is closed" error. 
Typically the session is checked for validity right *before* sending the 
message. Your app code of course needs to handle the error and should not assume 
that the msg was sent.




 PooledConnectionFactory closes connections that are in use
 --

 Key: AMQ-4366
 URL: https://issues.apache.org/jira/browse/AMQ-4366
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-pool
Affects Versions: 5.7.0, 5.8.0
Reporter: Petr Janata
Assignee: Timothy Bish
 Fix For: 5.9.0

 Attachments: poolConClose.diff


 {{PooledConnectionFactory}} closes connections that are still referenced and 
 should not be closed. Happens only when connection idle or expire time 
 passes. Calling {{createConnection}} after that time will invalidate the 
 connection and all previously obtained {{Sessions}} will behave as closed.
 Due to default 30 second idle timeout, it is likely not to cause problems 
 when:
 * connection is continually in use
 * all {{PooledConnection}} s are borrowed at startup
 Client with session whose connection was prematurely closed will see similar 
 stacktrace:
 {noformat}
 javax.jms.IllegalStateException: The Session is closed
 at 
 org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:731)
 at 
 org.apache.activemq.ActiveMQSession.configureMessage(ActiveMQSession.java:719)
 at 
 org.apache.activemq.ActiveMQSession.createBytesMessage(ActiveMQSession.java:316)
 at 
 org.apache.activemq.pool.PooledSession.createBytesMessage(PooledSession.java:168)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4366) PooledConnectionFactory closes connections that are in use

2013-04-02 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13619630#comment-13619630
 ] 

Torsten Mielke commented on AMQ-4366:
-

A proper workaround for any 5.7 and 5.8 versioned clients seems to be to 
disable the session idle timeout on the ConnectionFactory using 
PooledConnectionFactory.setIdleTimeout(0).
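
A minimal usage sketch of this workaround (broker URL is illustrative):

{code:java}
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class IdleTimeoutWorkaround {
    public static void main(String[] args) throws Exception {
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        pooled.setIdleTimeout(0); // 0 disables the 30 second default idle eviction
        Connection con = pooled.createConnection();
        con.start();
        // sessions created from this connection are no longer at risk of being
        // invalidated underneath the application by the idle check
        con.close();
        pooled.stop();
    }
}
{code}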


 PooledConnectionFactory closes connections that are in use
 --

 Key: AMQ-4366
 URL: https://issues.apache.org/jira/browse/AMQ-4366
 Project: ActiveMQ
  Issue Type: Bug
  Components: activemq-pool
Affects Versions: 5.7.0, 5.8.0
Reporter: Petr Janata
Assignee: Timothy Bish
 Fix For: 5.9.0

 Attachments: poolConClose.diff


 {{PooledConnectionFactory}} closes connections that are still referenced and 
 should not be closed. Happens only when connection idle or expire time 
 passes. Calling {{createConnection}} after that time will invalidate the 
 connection and all previously obtained {{Sessions}} will behave as closed.
 Due to default 30 second idle timeout, it is likely not to cause problems 
 when:
 * connection is continually in use
 * all {{PooledConnection}}s are borrowed at startup
 Client with session whose connection was prematurely closed will see similar 
 stacktrace:
 {noformat}
 javax.jms.IllegalStateException: The Session is closed
 at 
 org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:731)
 at 
 org.apache.activemq.ActiveMQSession.configureMessage(ActiveMQSession.java:719)
 at 
 org.apache.activemq.ActiveMQSession.createBytesMessage(ActiveMQSession.java:316)
 at 
 org.apache.activemq.pool.PooledSession.createBytesMessage(PooledSession.java:168)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4212) Broker may be unable to recover durable topic subscription from the kahadb journal

2013-01-17 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13556387#comment-13556387
 ] 

Torsten Mielke commented on AMQ-4212:
-

@Tim, don't have a unit test yet but presumably can create one. It will have to 
wait until February though as I am currently on vacation.

 Broker may be unable to recover durable topic subscription from the kahadb 
 journal
 --

 Key: AMQ-4212
 URL: https://issues.apache.org/jira/browse/AMQ-4212
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.7.0
 Environment: kahadb and durable topic subscribers
Reporter: Torsten Mielke

 KahaDB is supposed to recover its index from the journal *completely*. 
 Such recovery can be enforced by stopping the broker, deleting the db.data 
 index file and restarting the broker. 
 The recovery process may however not be able to recover inactive durable 
 topic subscriptions. 
 This is because the kahadb cleanup task will not consider any active 
 subscription entries in the journal files when marking journal files for 
 deletion. 
 E.g. if durable sub info was written to the journal file db-1.log but 
 kahadb has already rolled over to writing to db-2.log, the cleanup task may 
 delete db-1.log (in case all msgs in db-1.log got acked). The durable sub 
 however is still alive. 
 When stopping the broker this durable sub info is still present in the index 
 file and will be restored at broker restart.
 If however the index file gets deleted in order to enforce a recovery of the 
 index from the journal, then the broker has lost the information about this 
 durable sub.
 The broker is therefore not able to recover its state fully from the journal 
 files.
 If the durable subscriber remains inactive (i.e. does not reconnect to broker 
 immediately after broker restart), it may miss messages as the broker has no 
 knowledge of this durable sub. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4212) Broker may be unable to recover durable topic subscription from the kahadb journal

2012-12-07 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4212:
---

 Summary: Broker may be unable to recover durable topic 
subscription from the kahadb journal
 Key: AMQ-4212
 URL: https://issues.apache.org/jira/browse/AMQ-4212
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.7.0
 Environment: kahadb and durable topic subscribers
Reporter: Torsten Mielke


KahaDB is supposed to recover its index from the journal *completely*. 
Such recovery can be enforced by stopping the broker, deleting the db.data 
index file and restarting the broker. 

The recovery process may however not be able to recover inactive durable topic 
subscriptions. 
This is because the kahadb cleanup task will not consider any active 
subscription entries in the journal files when marking journal files for 
deletion. 

E.g. if durable sub info was written to the journal file db-1.log but 
kahadb has already rolled over to writing to db-2.log, the cleanup task may 
delete db-1.log (in case all msgs in db-1.log got acked). The durable sub however 
is still alive. 
When stopping the broker this durable sub info is still present in the index 
file and will be restored at broker restart.
If however the index file gets deleted in order to enforce a recovery of the 
index from the journal, then the broker has lost the information about this 
durable sub.
The broker is therefore not able to recover its state fully from the journal 
files.

If the durable subscriber remains inactive (i.e. does not reconnect to broker 
immediately after broker restart), it may miss messages as the broker has no 
knowledge of this durable sub. 
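
For illustration, a small standalone client that registers such an inactive durable subscription (broker URL, client id and topic name are arbitrary):

{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class InactiveDurableSub {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection con = cf.createConnection();
        con.setClientID("durable-client");              // client id is required for durable subs
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("TEST.TOPIC");
        session.createDurableSubscriber(topic, "sub1"); // registers the durable subscription
        con.close();                                    // subscription is now inactive but should survive restarts
    }
}
{code}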






--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (AMQ-4000) Durable subscription not getting unregistered on networked broker

2012-08-29 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444017#comment-13444017
 ] 

Torsten Mielke commented on AMQ-4000:
-

org.apache.activemq.advisory.AdvisoryBroker does not override the method 
public void removeSubscription(ConnectionContext context, 
RemoveSubscriptionInfo info);

of class org.apache.activemq.broker.BrokerFilter.

It's the AdvisoryBroker that is responsible for creating and firing the advisory 
message to inform other brokers in the network that a durable subscription got 
removed. 
So AdvisoryBroker needs to override removeSubscription() accordingly, along the lines of the sketch below. 
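
A rough sketch of what such an override could look like; the advisory topic name and the fireAdvisory(...) helper used here are assumptions for illustration, not the actual fix:

{code:java}
// Sketch only, inside org.apache.activemq.advisory.AdvisoryBroker:
@Override
public void removeSubscription(ConnectionContext context, RemoveSubscriptionInfo info) throws Exception {
    super.removeSubscription(context, info);
    // Hypothetical: publish an advisory so network bridges on remote brokers
    // learn that the durable subscription (clientId + subscriptionName) is gone.
    ActiveMQTopic advisoryTopic = new ActiveMQTopic("ActiveMQ.Advisory.DurableSubRemoved"); // illustrative name
    fireAdvisory(context, advisoryTopic, info); // assumed helper for publishing advisories
}
{code}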


 Durable subscription not getting unregistered on networked broker
 -

 Key: AMQ-4000
 URL: https://issues.apache.org/jira/browse/AMQ-4000
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: network of brokers, durable topic subscriptions.
Reporter: Torsten Mielke
  Labels: durable_subscription, networks
 Attachments: JUnitTest.patch


 In a network of two brokers, a durable subscription is correctly propagated 
 across to the remote broker. However when the consumer unsubscribes from the 
 durable subscription again, it is only removed on the local broker but not on 
 the remote broker. The remote broker keeps its durable subscription alive.
 As a consequence messages sent to the topic destination on the remote broker 
 for which the durable subscriptions existed, are passed on to the local 
 broker, although there is no active subscription on the local broker. The 
 local broker will discard these msgs but unnecessary traffic has already 
 occurred on the network bridge.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (AMQ-4000) Durable subscription not getting unregistered on networked broker

2012-08-28 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-4000:
---

 Summary: Durable subscription not getting unregistered on 
networked broker
 Key: AMQ-4000
 URL: https://issues.apache.org/jira/browse/AMQ-4000
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: network of brokers, durable topic subscriptions.
Reporter: Torsten Mielke


In a network of two brokers, a durable subscription is correctly propagated 
across to the remote broker. However when the consumer unsubscribes from the 
durable subscription again, it is only removed on the local broker but not on 
the remote broker. The remote broker keeps its durable subscription alive.

As a consequence messages sent to the topic destination on the remote broker 
for which the durable subscriptions existed, are passed on to the local broker, 
although there is no active subscription on the local broker. The local broker 
will discard these msgs but unnecessary traffic has already occurred on the 
network bridge.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (AMQ-4000) Durable subscription not getting unregistered on networked broker

2012-08-28 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-4000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-4000:


Attachment: JUnitTest.patch

Attaching patch file containing JUnit test but no fix yet. 


 Durable subscription not getting unregistered on networked broker
 -

 Key: AMQ-4000
 URL: https://issues.apache.org/jira/browse/AMQ-4000
 Project: ActiveMQ
  Issue Type: Bug
Affects Versions: 5.6.0
 Environment: network of brokers, durable topic subscriptions.
Reporter: Torsten Mielke
  Labels: durable_subscription, networks
 Attachments: JUnitTest.patch


 In a network of two brokers, a durable subscription is correctly propagated 
 across to the remote broker. However when the consumer unsubscribes from the 
 durable subscription again, it is only removed on the local broker but not on 
 the remote broker. The remote broker keeps its durable subscription alive.
 As a consequence messages sent to the topic destination on the remote broker 
 for which the durable subscriptions existed, are passed on to the local 
 broker, although there is no active subscription on the local broker. The 
 local broker will discard these msgs but unnecessary traffic has already 
 occurred on the network bridge.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-13 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke reassigned AMQ-3965:
---

Assignee: Torsten Mielke

 Expired msgs not getting acked to broker causing consumer to fill up its 
 prefetch and not getting more msgs.
 

 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: optimizeDispatch
 Attachments: AMQ-3956.patch, 
 OptimizeAcknowledgeWithExpiredMsgsTest.java, testcase.tgz


 It is possible to get a consumer stalled and not receiving any more messages 
 when using optimizeAcknowledge.
 Let me illustrate in an example (JUnit test attached).
 Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
 The broker's queue contains 105 msg. The first 45 msgs have a very low expiry 
 time, the remaining don't expiry. 
 So the first 100 msgs get dispatched to the consumer (due to prefetch=100). 
 Out of these the first 45 msgs do not get dispatched to consumer code because 
 their expiry has elapsed by the time that are handled in the client. 
 {code:title=ActiveMQMessageConsumer.java}
 public void dispatch(MessageDispatch md) {
 MessageListener listener = this.messageListener.get();
 try {
 [...]
 synchronized (unconsumedMessages.getMutex()) {
 if (!unconsumedMessages.isClosed()) {
 if (this.info.isBrowser() || 
 !session.connection.isDuplicate(this, md.getMessage())) {
 if (listener != null && unconsumedMessages.isRunning()) {
 ActiveMQMessage message = 
 createActiveMQMessage(md);
 beforeMessageIsConsumed(md);
 try {
 boolean expired = message.isExpired();
 if (!expired) {
 listener.onMessage(message);
 }
 afterMessageIsConsumed(md, expired);
 {code}
 listener.onMessage() above is not called as the msg has expired. 
 However it will calls into afterMessagesIsConsumed()
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
   [...]  
   if (messageExpired) {
 synchronized (deliveredMessages) {
 deliveredMessages.remove(md);
 }
 stats.getExpiredMessageCount().increment();
 ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
 {code}
 and will remove the expired msg from the deliveredMessages list. It then 
 calls into ackLater(). 
 However ackLater() only fires an ack back to the broker when the number of 
 unsent acks has reached 50% of the prefetch value.
 {code:title=ActiveMQMessageConsumer.java}
  private void ackLater(MessageDispatch md, byte ackType) throws JMSException {
 [...]
 if ((0.5 * info.getPrefetchSize()) <= (deliveredCounter - additionalWindowSize)) {
 session.sendAck(pendingAck);
 {code}
 In our example it has not reached that mark (only 45 expired msgs, i.e. 45%). 
 So the first 45 msgs, which expired before being dispatched, did not cause an 
 ack being sent to the broker.
 Now the next 55 messages get processed. These don't have an expiry so they 
 get dispatched to consumer code. 
 After dispatching each msg to the registered application code, we call into 
 afterMessageIsConsumed() but this time executing a different branch as the 
 msgs are not expired
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
 [...]
 else if (isAutoAcknowledgeEach()) {
 if (deliveryingAcknowledgements.compareAndSet(false, true)) {
 synchronized (deliveredMessages) {
 if (!deliveredMessages.isEmpty()) {
 if (optimizeAcknowledge) {
 ackCounter++;
 if (ackCounter >= (info.getPrefetchSize() * .65) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
 MessageAck ack = 
 makeAckForAllDeliveredMessages(MessageAck.STANDARD_ACK_TYPE);
 if (ack != null) {
 deliveredMessages.clear();
 ackCounter = 0;
 

[jira] [Resolved] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-13 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-3965.
-

   Resolution: Fixed
Fix Version/s: 5.7.0

Resolved by this 
[commit|https://fisheye6.atlassian.com/changelog/activemq?cs=1371722].

 Expired msgs not getting acked to broker causing consumer to fill up its 
 prefetch and not getting more msgs.
 

 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke
Assignee: Torsten Mielke
  Labels: optimizeDispatch
 Fix For: 5.7.0

 Attachments: AMQ-3956.patch, 
 OptimizeAcknowledgeWithExpiredMsgsTest.java, testcase.tgz


 It is possible to get a consumer stalled and not receiving any more messages 
 when using optimizeAcknowledge.
 Let me illustrate in an example (JUnit test attached).
 Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
 The broker's queue contains 105 msg. The first 45 msgs have a very low expiry 
 time, the remaining don't expiry. 
 So the first 100 msgs get dispatched to the consumer (due to prefetch=100). 
 Out of these the first 45 msgs do not get dispatched to consumer code because 
 their expiry has elapsed by the time that are handled in the client. 
 {code:title=ActiveMQMessageConsumer.java}
 public void dispatch(MessageDispatch md) {
 MessageListener listener = this.messageListener.get();
 try {
 [...]
 synchronized (unconsumedMessages.getMutex()) {
 if (!unconsumedMessages.isClosed()) {
 if (this.info.isBrowser() || 
 !session.connection.isDuplicate(this, md.getMessage())) {
 if (listener != null && unconsumedMessages.isRunning()) {
 ActiveMQMessage message = 
 createActiveMQMessage(md);
 beforeMessageIsConsumed(md);
 try {
 boolean expired = message.isExpired();
 if (!expired) {
 listener.onMessage(message);
 }
 afterMessageIsConsumed(md, expired);
 {code}
 listener.onMessage() above is not called as the msg has expired. 
 However it will calls into afterMessagesIsConsumed()
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
   [...]  
   if (messageExpired) {
 synchronized (deliveredMessages) {
 deliveredMessages.remove(md);
 }
 stats.getExpiredMessageCount().increment();
 ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
 {code}
 and will remove the expired msg from the deliveredMessages list. It then 
 calls into ackLater(). 
 However ackLater() only fires an ack back to the broker when the number of 
 unsent acks has reached 50% of the prefetch value.
 {code:title=ActiveMQMessageConsumer.java}
  private void ackLater(MessageDispatch md, byte ackType) throws JMSException {
 [...]
 if ((0.5 * info.getPrefetchSize()) <= (deliveredCounter - additionalWindowSize)) {
 session.sendAck(pendingAck);
 {code}
 In our example it has not reached that mark (only 45 expired msgs, i.e. 45%). 
 So the first 45 msgs, which expired before being dispatched, did not cause an 
 ack being sent to the broker.
 Now the next 55 messages get processed. These don't have an expiry so they 
 get dispatched to consumer code. 
 After dispatching each msg to the registered application code, we call into 
 afterMessageIsConsumed() but this time executing a different branch as the 
 msgs are not expired
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
 [...]
 else if (isAutoAcknowledgeEach()) {
 if (deliveryingAcknowledgements.compareAndSet(false, true)) {
 synchronized (deliveredMessages) {
 if (!deliveredMessages.isEmpty()) {
 if (optimizeAcknowledge) {
 ackCounter++;
 if (ackCounter >= (info.getPrefetchSize() * .65) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
 MessageAck ack = 
 makeAckForAllDeliveredMessages(MessageAck.STANDARD_ACK_TYPE);
 if (ack 

[jira] [Updated] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-10 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3965:


Attachment: AMQ-3956.patch

Attaching possible patch plus JUnit test. 
Would ask for review and if accepted I can commit the code to trunk. 

 Expired msgs not getting acked to broker causing consumer to fill up its 
 prefetch and not getting more msgs.
 

 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke
  Labels: optimizeDispatch
 Attachments: AMQ-3956.patch, 
 OptimizeAcknowledgeWithExpiredMsgsTest.java, testcase.tgz


 It is possible to get a consumer stalled and not receiving any more messages 
 when using optimizeAcknowledge.
 Let me illustrate in an example (JUnit test attached).
 Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
 The broker's queue contains 105 msg. The first 45 msgs have a very low expiry 
 time, the remaining don't expiry. 
 So the first 100 msgs get dispatched to the consumer (due to prefetch=100). 
 Out of these the first 45 msgs do not get dispatched to consumer code because 
 their expiry has elapsed by the time that are handled in the client. 
 {code:title=ActiveMQMessageConsumer.java}
 public void dispatch(MessageDispatch md) {
 MessageListener listener = this.messageListener.get();
 try {
 [...]
 synchronized (unconsumedMessages.getMutex()) {
 if (!unconsumedMessages.isClosed()) {
 if (this.info.isBrowser() || 
 !session.connection.isDuplicate(this, md.getMessage())) {
 if (listener != null && unconsumedMessages.isRunning()) {
 ActiveMQMessage message = 
 createActiveMQMessage(md);
 beforeMessageIsConsumed(md);
 try {
 boolean expired = message.isExpired();
 if (!expired) {
 listener.onMessage(message);
 }
 afterMessageIsConsumed(md, expired);
 {code}
 listener.onMessage() above is not called as the msg has expired. 
 However it will calls into afterMessagesIsConsumed()
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
   [...]  
   if (messageExpired) {
 synchronized (deliveredMessages) {
 deliveredMessages.remove(md);
 }
 stats.getExpiredMessageCount().increment();
 ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
 {code}
 and will remove the expired msg from the deliveredMessages list. It then 
 calls into ackLater(). 
 However ackLater() only fires an ack back to the broker when the number of 
 unsent acks has reached 50% of the prefetch value.
 {code:title=ActiveMQMessageConsumer.java}
  private void ackLater(MessageDispatch md, byte ackType) throws JMSException {
 [...]
 if ((0.5 * info.getPrefetchSize()) <= (deliveredCounter - additionalWindowSize)) {
 session.sendAck(pendingAck);
 {code}
 In our example it has not reached that mark (only 45 expired msgs, i.e. 45%). 
 So the first 45 msgs, which expired before being dispatched, did not cause an 
 ack being sent to the broker.
 Now the next 55 messages get processed. These don't have an expiry so they 
 get dispatched to consumer code. 
 After dispatching each msg to the registered application code, we call into 
 afterMessageIsConsumed() but this time executing a different branch as the 
 msgs are not expired
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
 [...]
 else if (isAutoAcknowledgeEach()) {
 if (deliveryingAcknowledgements.compareAndSet(false, true)) {
 synchronized (deliveredMessages) {
 if (!deliveredMessages.isEmpty()) {
 if (optimizeAcknowledge) {
 ackCounter++;
 if (ackCounter >= (info.getPrefetchSize() * .65) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
 MessageAck ack = 
 makeAckForAllDeliveredMessages(MessageAck.STANDARD_ACK_TYPE);
 if (ack != null) {
 

[jira] [Commented] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-09 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431945#comment-13431945
 ] 

Torsten Mielke commented on AMQ-3965:
-

We thought that the following fix could do the job 
{code:title=ActiveMQMessageConsumer.java} 
private void afterMessageIsConsumed(MessageDispatch md, boolean messageExpired) 
throws JMSException {
[...]
if (messageExpired) {
synchronized (deliveredMessages) {
deliveredMessages.remove(md);
}
stats.getExpiredMessageCount().increment();
ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
} else {
stats.onMessage();
if (session.getTransacted()) {
// Do nothing.
} else if (isAutoAcknowledgeEach()) {
if (deliveryingAcknowledgements.compareAndSet(false, true)) {
synchronized (deliveredMessages) {
if (!deliveredMessages.isEmpty()) {
if (optimizeAcknowledge) {
ackCounter++;

// AMQ-3965 - this alone does not fix it.
float threshold = (float) 
info.getPrefetchSize() * (float) 0.65;
if (optimizeAcknowledge && pendingAck != null && (ackCounter + deliveredCounter) >= (threshold)) {
session.sendAck(pendingAck);
pendingAck = null;
deliveredCounter = 0;
}
if (ackCounter >= (threshold) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
MessageAck ack = 
makeAckForAllDeliveredMessages(MessageAck.STANDARD_ACK_TYPE);
if (ack != null) {
deliveredMessages.clear();
ackCounter = 0;
session.sendAck(ack);
optimizeAckTimestamp = 
System.currentTimeMillis();
}
}
{code} 

but that extra code 
{code} 
// AMQ-3965 - this alone does not fix it.
float threshold = (float) info.getPrefetchSize() * (float) 0.65;
if (optimizeAcknowledge && pendingAck != null && (ackCounter + deliveredCounter) >= (threshold)) {
  session.sendAck(pendingAck);
  pendingAck = null;
  deliveredCounter = 0;
} 

{code} alone is not enough. Let me explain why: 

Suppose a prefetch of 100. Consumer receives 56 normal msgs. So ackCounter is 
at 56, no ack sent back to broker yet. It then receives 44 msgs that expire on 
consumer before dispatch. So deliveredCounter=44 and ackCounter=56. In 
afterMessageIsConsumed() it does not go into the proposed code for the expired 
msgs, only for normal msgs. So for the last 44 expired msgs there is no trigger 
fired to send an ack to the broker. The result is a hanging consumer that does 
not receive any more msgs. Problem not fixed. A sketch of the missing trigger is shown below. 
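
For illustration, a hedged sketch of the kind of additional trigger that would be needed in the expired branch of afterMessageIsConsumed() (this is not the change that was eventually committed):

{code:java}
// Sketch only, expired branch of ActiveMQMessageConsumer.afterMessageIsConsumed():
if (messageExpired) {
    synchronized (deliveredMessages) {
        deliveredMessages.remove(md);
    }
    stats.getExpiredMessageCount().increment();
    ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
    // Additional trigger (sketch): also flush the pending ack once expired plus
    // consumed dispatches together reach the optimize-ack threshold, so a run of
    // expired messages cannot silently exhaust the prefetch window.
    float threshold = info.getPrefetchSize() * 0.65f;
    if (optimizeAcknowledge && pendingAck != null
            && (ackCounter + deliveredCounter) >= threshold) {
        session.sendAck(pendingAck);
        pendingAck = null;
        deliveredCounter = 0;
    }
}
{code}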

 Expired msgs not getting acked to broker causing consumer to fill up its 
 prefetch and not getting more msgs.
 

 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke
  Labels: optimizeDispatch
 Attachments: OptimizeAcknowledgeWithExpiredMsgsTest.java, testcase.tgz


 It is possible to get a consumer stalled and not receiving any more messages 
 when using optimizeAcknowledge.
 Let me illustrate in an example (JUnit test attached).
 Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
 The broker's queue contains 105 msg. The first 45 msgs have a very low expiry 
 time, the remaining don't expiry. 
 So the first 100 msgs get dispatched to the consumer (due to prefetch=100). 
 Out of these the first 45 msgs do not get dispatched to consumer code because 
 their expiry has elapsed by the time that are handled in the client. 
 {code:title=ActiveMQMessageConsumer.java}
 public void dispatch(MessageDispatch md) {
 MessageListener listener = this.messageListener.get();
 try {
 [...]
 synchronized (unconsumedMessages.getMutex()) {
 if (!unconsumedMessages.isClosed()) {
 if (this.info.isBrowser() || 
 !session.connection.isDuplicate(this, md.getMessage())) {
if (listener != null && unconsumedMessages.isRunning()) 

[jira] [Created] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-08 Thread Torsten Mielke (JIRA)
Torsten Mielke created AMQ-3965:
---

 Summary: Expired msgs not getting acked to broker causing consumer 
to fill up its prefetch and not getting more msgs.
 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke



It is possible to get a consumer stalled and not receiving any more messages 
when using optimizeAcknowledge.
Let me illustrate in an example (JUnit test attached).

Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
The broker's queue contains 105 msgs. The first 45 msgs have a very low expiry 
time, the remaining don't expire. 

So the first 100 msgs get dispatched to the consumer (due to prefetch=100). Out 
of these the first 45 msgs do not get dispatched to consumer code because their 
expiry has elapsed by the time they are handled in the client. 

{code:title=ActiveMQMessageConsumer.java}
public void dispatch(MessageDispatch md) {
MessageListener listener = this.messageListener.get();
try {
[...]
synchronized (unconsumedMessages.getMutex()) {
if (!unconsumedMessages.isClosed()) {
if (this.info.isBrowser() || 
!session.connection.isDuplicate(this, md.getMessage())) {
if (listener != null && unconsumedMessages.isRunning()) 
{
ActiveMQMessage message = createActiveMQMessage(md);
beforeMessageIsConsumed(md);
try {
boolean expired = message.isExpired();
if (!expired) {
listener.onMessage(message);
}
afterMessageIsConsumed(md, expired);
{code}

listener.onMessage() above is not called as the msg has expired. 
However it still calls into afterMessageIsConsumed()

{code:title=ActiveMQMessageConsumer.java}
private void afterMessageIsConsumed(MessageDispatch md, boolean 
messageExpired) throws JMSException {
  [...]  
  if (messageExpired) {
synchronized (deliveredMessages) {
deliveredMessages.remove(md);
}
stats.getExpiredMessageCount().increment();
ackLater(md, MessageAck.DELIVERED_ACK_TYPE);

{code}

and will remove the expired msg from the deliveredMessages list. It then calls 
into ackLater(). 
However ackLater() only fires an ack back to the broker when the number of 
unsent acks has reached 50% of the prefetch value.

{code:title=ActiveMQMessageConsumer.java}
 private void ackLater(MessageDispatch md, byte ackType) throws JMSException {
[...]
if ((0.5 * info.getPrefetchSize()) <= (deliveredCounter - additionalWindowSize)) {
session.sendAck(pendingAck);
{code}

In our example it has not reached that mark (only 45 expired msgs, i.e. 45%). 
So the first 45 msgs, which expired before being dispatched, did not cause an 
ack being sent to the broker.

Now the next 55 messages get processed. These don't have an expiry so they get 
dispatched to consumer code. 
After dispatching each msg to the registered application code, we call into 
afterMessageIsConsumed() but this time executing a different branch as the msgs 
are not expired

{code:title=ActiveMQMessageConsumer.java}
private void afterMessageIsConsumed(MessageDispatch md, boolean messageExpired) 
throws JMSException {
[...]
else if (isAutoAcknowledgeEach()) {
if (deliveryingAcknowledgements.compareAndSet(false, true)) {
synchronized (deliveredMessages) {
if (!deliveredMessages.isEmpty()) {
if (optimizeAcknowledge) {
ackCounter++;
if (ackCounter >= (info.getPrefetchSize() * .65) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
MessageAck ack = 
makeAckForAllDeliveredMessages(MessageAck.STANDARD_ACK_TYPE);
if (ack != null) {
deliveredMessages.clear();
ackCounter = 0;
session.sendAck(ack);
optimizeAckTimestamp = 
System.currentTimeMillis();
}
}
{code}

with optimizeAcknowledge=true we only send an ack back to the broker if either 
optimizeAcknowledgeTimeOut has elapsed or the ackCounter has reached 65% of the 
prefetch (100). 
The timeout will not have kicked in. The ackCounter will be at 55 after 
processing the last of 

[jira] [Commented] (AMQ-3965) Expired msgs not getting acked to broker causing consumer to fill up its prefetch and not getting more msgs.

2012-08-08 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431146#comment-13431146
 ] 

Torsten Mielke commented on AMQ-3965:
-

A possible fix for this may be to not hold back acks for expired messages (as 
currently done by calling ackLater()) but to ack any expired messages
straight away.
This however will cause more acks to be written back to the broker, but only in 
case of expired messages.

Perhaps there is a better solution that has less of an overhead? 
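
A rough sketch of that idea; the ack type and the MessageAck constructor used here are illustrative assumptions:

{code:java}
// Sketch only, expired branch of ActiveMQMessageConsumer.afterMessageIsConsumed():
if (messageExpired) {
    synchronized (deliveredMessages) {
        deliveredMessages.remove(md);
    }
    stats.getExpiredMessageCount().increment();
    // Acknowledge the expired dispatch right away instead of deferring via
    // ackLater(), so the broker can free the prefetch slot immediately.
    session.sendAck(new MessageAck(md, MessageAck.STANDARD_ACK_TYPE, 1));
}
{code}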

 Expired msgs not getting acked to broker causing consumer to fill up its 
 prefetch and not getting more msgs.
 

 Key: AMQ-3965
 URL: https://issues.apache.org/jira/browse/AMQ-3965
 Project: ActiveMQ
  Issue Type: Bug
  Components: JMS client
Affects Versions: 5.6.0
Reporter: Torsten Mielke
  Labels: optimizeDispatch

 It is possible to get a consumer stalled and not receiving any more messages 
 when using optimizeAcknowledge.
 Let me illustrate in an example (JUnit test attached).
 Suppose a consumer with optimizeAcknowledge and a prefetch of 100 msgs.
 The broker's queue contains 105 msg. The first 45 msgs have a very low expiry 
 time, the remaining don't expiry. 
 So the first 100 msgs get dispatched to the consumer (due to prefetch=100). 
 Out of these the first 45 msgs do not get dispatched to consumer code because 
 their expiry has elapsed by the time that are handled in the client. 
 {code:title=ActiveMQMessageConsumer.java}
 public void dispatch(MessageDispatch md) {
 MessageListener listener = this.messageListener.get();
 try {
 [...]
 synchronized (unconsumedMessages.getMutex()) {
 if (!unconsumedMessages.isClosed()) {
 if (this.info.isBrowser() || 
 !session.connection.isDuplicate(this, md.getMessage())) {
 if (listener != null && unconsumedMessages.isRunning()) {
 ActiveMQMessage message = 
 createActiveMQMessage(md);
 beforeMessageIsConsumed(md);
 try {
 boolean expired = message.isExpired();
 if (!expired) {
 listener.onMessage(message);
 }
 afterMessageIsConsumed(md, expired);
 {code}
 listener.onMessage() above is not called as the msg has expired. 
 However it will calls into afterMessagesIsConsumed()
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
   [...]  
   if (messageExpired) {
 synchronized (deliveredMessages) {
 deliveredMessages.remove(md);
 }
 stats.getExpiredMessageCount().increment();
 ackLater(md, MessageAck.DELIVERED_ACK_TYPE);
 {code}
 and will remove the expired msg from the deliveredMessages list. It then 
 calls into ackLater(). 
 However ackLater() only fires an ack back to the broker when the number of 
 unsent acks has reached 50% of the prefetch value.
 {code:title=ActiveMQMessageConsumer.java}
  private void ackLater(MessageDispatch md, byte ackType) throws JMSException {
 [...]
 if ((0.5 * info.getPrefetchSize()) <= (deliveredCounter - additionalWindowSize)) {
 session.sendAck(pendingAck);
 {code}
 In our example it has not reached that mark (only 45 expired msgs, i.e. 45%). 
 So the first 45 msgs, which expired before being dispatched, did not cause an 
 ack being sent to the broker.
 Now the next 55 messages get processed. These don't have an expiry so they 
 get dispatched to consumer code. 
 After dispatching each msg to the registered application code, we call into 
 afterMessageIsConsumed() but this time executing a different branch as the 
 msgs are not expired
 {code:title=ActiveMQMessageConsumer.java}
 private void afterMessageIsConsumed(MessageDispatch md, boolean 
 messageExpired) throws JMSException {
 [...]
 else if (isAutoAcknowledgeEach()) {
 if (deliveryingAcknowledgements.compareAndSet(false, true)) {
 synchronized (deliveredMessages) {
 if (!deliveredMessages.isEmpty()) {
 if (optimizeAcknowledge) {
 ackCounter++;
 if (ackCounter >= (info.getPrefetchSize() * .65) || (optimizeAcknowledgeTimeOut > 0 && System.currentTimeMillis() >= (optimizeAckTimestamp + optimizeAcknowledgeTimeOut))) {
 MessageAck ack = 
 

[jira] [Commented] (AMQ-3557) Performance of consumption with JDBC persistance and Microsoft SQL Server

2012-07-17 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13416048#comment-13416048
 ] 

Torsten Mielke commented on AMQ-3557:
-

There was recently an improvement made to the JDBC persistence adapter to 
remove an unneeded synchronization point. See the last comment of AMQ-2868. 
This fix improves performance of the JDBC persistence adapter in case of using 
multiple consumers. 


 Performance of consumption with JDBC persistance and Microsoft SQL Server
 -

 Key: AMQ-3557
 URL: https://issues.apache.org/jira/browse/AMQ-3557
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store, Performance Test
Affects Versions: 5.4.3, 5.5.0, 5.5.1
 Environment: Microsoft SQL Server 2005, Debian Linux, 
Reporter: Nicholas Rahn
  Labels: jdbc, performance, sqlserver
 Attachments: activemq.xml


 We are trying to upgrade our ActiveMQ installation and have run into some 
 performance issues. I'll attach our activemq.xml file to this bug.
 I've setup a fresh SQLServer database for our upgrade tests and using the 
 example Ant tools in the distribution, I've populated a persistent queue with 
 1,000,000 messages. I then consume those messages using the example Ant 
 consumption script. The producing side works fine. However the performance of 
 the consumption side is extremely poor. To consume just 10,000 of those 
 messages takes over 5 minutes.
 The consumer will pause for 4-5 seconds every 200 messages. This is easily 
 visible in the output of the Ant script. We have also traced the DB to see 
 what is happening there and have found that the findNextMessagesStatement 
 takes 4-5 seconds every time it is executed. The statement's ID parameter is 
 increased by 200 every time it is executed.  We also noticed the use of the 
 SET ROWCOUNT 1 statement setting the maximum number of rows returned 
 from a query at 1. We also traced previous versions of ActiveMQ and found 
 that SET ROWCOUNT was used much more often, with much smaller values (often 
 10, 20 or 30).
 We have also tested the same setup with version 5.4.0 and did not have the 
 same issues. Consumption speeds with 5.4.0 were normal, with no pauses. 
 Version 5.4.3 did have the problem, however. So there seems to be a 
 regression somewhere between 5.4.0 and 5.4.3 (also affects 5.5.0 and later).
 Please let me know if you need more information, including the database 
 traces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (AMQ-3904) InactivityMonitor doc

2012-06-29 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke reassigned AMQ-3904:
---

Assignee: Torsten Mielke

 InactivityMonitor doc
 -

 Key: AMQ-3904
 URL: https://issues.apache.org/jira/browse/AMQ-3904
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Documentation
Reporter: Pat Fox
Assignee: Torsten Mielke
Priority: Minor
  Labels: documentation
 Attachments: InactivityMonitor.html


 I created a doc for the InactivityMonitor. Perhaps it could be added to the 
 activemq docs? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (AMQ-3904) InactivityMonitor doc

2012-06-29 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13403942#comment-13403942
 ] 

Torsten Mielke commented on AMQ-3904:
-

Many thanks to Pat Fox for providing this documentation. 
It has been added to the ActiveMQ documentation set and is also linked from 
some of the transport reference pages. 
https://cwiki.apache.org/confluence/display/ACTIVEMQ/ActiveMQ+InactivityMonitor



 InactivityMonitor doc
 -

 Key: AMQ-3904
 URL: https://issues.apache.org/jira/browse/AMQ-3904
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Documentation
Reporter: Pat Fox
Assignee: Torsten Mielke
Priority: Minor
  Labels: documentation
 Attachments: InactivityMonitor.html


 I created a doc for the InactivityMonitor. Perhaps it could be added to the 
 activemq docs? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (AMQ-3904) InactivityMonitor doc

2012-06-29 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke resolved AMQ-3904.
-

Resolution: Fixed

https://cwiki.apache.org/confluence/display/ACTIVEMQ/ActiveMQ+InactivityMonitor

Give it a few hours to appear on the live site.


 InactivityMonitor doc
 -

 Key: AMQ-3904
 URL: https://issues.apache.org/jira/browse/AMQ-3904
 Project: ActiveMQ
  Issue Type: Improvement
  Components: Documentation
Reporter: Pat Fox
Assignee: Torsten Mielke
Priority: Minor
  Labels: documentation
 Attachments: InactivityMonitor.html


 I created a doc for the InactivityMonitor. Perhaps it could be added to the 
 activemq docs? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (AMQ-3506) Access to ConnectionPool.createSession needs to be synchronized

2011-09-20 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3506:


Attachment: AMQ-3506.patch

Attaching possible fix and corresponding JUnit test

 Access to ConnectionPool.createSession needs to be synchronized 
 

 Key: AMQ-3506
 URL: https://issues.apache.org/jira/browse/AMQ-3506
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0
 Environment: activemq-pool, PooledConnectionFactory with 
 maximumActive=1 and blockIfSessionPoolIsFull=true (default behavior)
Reporter: Torsten Mielke
  Labels: activemq-pool, maximumActive, sessionPool
 Fix For: 5.6.0

 Attachments: AMQ-3506.patch

   Original Estimate: 3h
  Remaining Estimate: 3h

 When configuring a PooledConnectionFactory with maximumActive=1 and 
 blockIfSessionPoolIsFull=true (default behavior for latter config) it is 
 possible that multiple threads that concurrently try to use the same JMS 
 connection to create a new session might create more sessions than the 
 configured maximumActive limit.
 That's because the call to ConnectionPool.createSession() is not synchronized 
 and if multiple threads try to call this method concurrently (on the same 
 underlying JMS connection) then the if-condition in 
 {code:java}
 SessionKey key = new SessionKey(transacted, ackMode);
 SessionPool pool = cache.get(key);
 if (pool == null) {
   pool = createSessionPool(key);
   cache.put(key, pool);
 }
 {code}
 will evaluate to true for *all* threads and they all end up creating their 
 own sessionPool using the same SessionKey properties. 
 Access to the if-condition needs to be synchronized so that only one session 
 pool gets created. That will ensure that not more than the configured 
 maximumActive number of sessions can get created. 
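
For illustration, a hedged sketch of the kind of synchronization being asked for (the attached AMQ-3506.patch may differ):

{code:java}
// Sketch only, ConnectionPool.createSession(): guard the check-then-create race.
SessionKey key = new SessionKey(transacted, ackMode);
SessionPool pool;
synchronized (cache) {
    pool = cache.get(key);
    if (pool == null) {
        pool = createSessionPool(key);
        cache.put(key, pool);
    }
}
{code}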

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (AMQ-3503) Too many open files for db log

2011-09-16 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13106065#comment-13106065
 ] 

Torsten Mielke commented on AMQ-3503:
-

You can get the latest snapshot version from 
[here|https://repository.apache.org/content/groups/snapshots-group/org/apache/activemq/apache-activemq/5.6-SNAPSHOT/].

 Too many open files for db log
 --

 Key: AMQ-3503
 URL: https://issues.apache.org/jira/browse/AMQ-3503
 Project: ActiveMQ
  Issue Type: Bug
  Components: Message Store
Affects Versions: 5.5.0
 Environment: Redhat 5.7.  Default conf file used for startup
Reporter: Michael Black
Priority: Blocker

 lsof | grep apache | grep data/localhost | wc -l
 Shows constantly increasing number of open files. Had 1016 open when it died.
 ulimit is set at 1024.
 Docs say files are supposed to be removed when no longer needed.
 We're running 3 producers and 3 consumers so no messages should be left in 
 the queue.
 At the point of dying we have put in around 2 billion messages.
 2011-09-16 00:18:21,148 | ERROR | I/O error | 
 org.apache.activemq.broker.region.cursors.FilePendingMessageCursor | 
 Queue:MCNA
 java.io.FileNotFoundException: 
 /usr/local/apache-activemq-5.5.0/data/localhost/tmp_storage/db-1987.log (Too 
 many open files)
   at java.io.RandomAccessFile.open(Native Method)
   at java.io.RandomAccessFile.init(RandomAccessFile.java:212)
   at org.apache.kahadb.journal.DataFile.openRandomAccessFile(DataFile.java:70)
   at 
 org.apache.kahadb.journal.DataFileAccessor.init(DataFileAccessor.java:49)
   at 
 org.apache.kahadb.journal.DataFileAccessorPool$Pool.openDataFileReader(DataFileAccessorPool.java:53)
   at 
 org.apache.kahadb.journal.DataFileAccessorPool.openDataFileAccessor(DataFileAccessorPool.java:139)
   at org.apache.kahadb.journal.Journal.read(Journal.java:598)
   at 
 org.apache.activemq.store.kahadb.plist.PListStore.getPayload(PListStore.java:337)
   at org.apache.activemq.store.kahadb.plist.PList.getNext(PList.java:316)
   at 
 org.apache.activemq.broker.region.cursors.FilePendingMessageCursor$DiskIterator.next(FilePendingMessageCursor.java:500)
   at 
 org.apache.activemq.broker.region.cursors.FilePendingMessageCursor$DiskIterator.next(FilePendingMessageCursor.java:473)
   at 
 org.apache.activemq.broker.region.cursors.FilePendingMessageCursor.next(FilePendingMessageCursor.java:293)
   at 
 org.apache.activemq.broker.region.cursors.StoreQueueCursor.next(StoreQueueCursor.java:135)
   at 
 org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:1714)
   at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:1932)
   at org.apache.activemq.broker.region.Queue.iterate(Queue.java:1440)
   at 
 org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:104)
   at 
 org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:42)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (AMQ-3492) enhance maven-activemq-perf-plugin to send specific message loaded from a file

2011-09-09 Thread Torsten Mielke (JIRA)
enhance maven-activemq-perf-plugin to send specific message loaded from a file
--

 Key: AMQ-3492
 URL: https://issues.apache.org/jira/browse/AMQ-3492
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Performance Test
Affects Versions: 5.5.0
 Environment: maven-activemq-perf-plugin
Reporter: Torsten Mielke
 Fix For: 5.6.0
 Attachments: AMQ-3492.patch

For load testing a particular application (with JMS Consumer) it will often be 
required to send a fixed message to the broker. Otherwise consumers might 
reject the message.
Currently the maven-activemq-perf-plugin does not support sending messages with 
a particular content. 

I propose to improve this plugin so that the producer can be configured to load 
a message from a file that is then used for the load test. 


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (AMQ-3492) enhance maven-activemq-perf-plugin to send specific message loaded from a file

2011-09-09 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3492:


Attachment: AMQ-3492.patch

Attaching a patch based on 5.6-SNAPSHOT version. 

 enhance maven-activemq-perf-plugin to send specific message loaded from a file
 --

 Key: AMQ-3492
 URL: https://issues.apache.org/jira/browse/AMQ-3492
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Performance Test
Affects Versions: 5.5.0
 Environment: maven-activemq-perf-plugin
Reporter: Torsten Mielke
  Labels: maven-activemq-perf-plugin
 Fix For: 5.6.0

 Attachments: AMQ-3492.patch

   Original Estimate: 3h
  Remaining Estimate: 3h

 For load testing a particular application (with JMS Consumer) it will often 
 be required to send a fixed message to the broker. Otherwise consumers might 
 reject the message.
 Currently the maven-activemq-perf-plugin does not support sending messages 
 with a particular content. 
 I propose to improve this plugin so that the producer can be configured to 
 load a message from a file that is then used for the load test. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (AMQ-3482) Make PooledConnectionFactory's sessionPool non-blocking in case its full.

2011-08-31 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3482:


Attachment: AMQ-3482.patch

Attaching a possible patch, including a JUnit test.
This patch allows the behavior of the session pool to be configured once it is 
full, but changes the default behavior to throw a javax.jms.JMSException: Pool 
exhausted when the pool is full (previous versions simply block).

The behavior is controlled by the new API 
PooledConnectionFactory.setBlockIfSessionPoolIsFull(boolean block) and defaults 
to false (don't block but raise an exception).
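
A minimal client-side sketch of the behavior described above; the broker URL 
and pool size are placeholders, not part of the patch:

{code:java}
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class SessionPoolExample {
    public static void main(String[] args) throws Exception {
        // Broker URL and pool size are placeholders.
        PooledConnectionFactory pooled = new PooledConnectionFactory(
                new ActiveMQConnectionFactory("tcp://localhost:61616"));
        pooled.setMaximumActive(10);               // size of the session pool
        pooled.setBlockIfSessionPoolIsFull(false); // proposed new default

        Connection connection = pooled.createConnection();
        connection.start();
        try {
            // Once the pool is exhausted this now fails fast instead of blocking forever.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.close();
        } catch (JMSException poolExhausted) {
            // react to the full pool: retry later or propagate the error upwards
        } finally {
            connection.close();
        }
    }
}
{code}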


 Make PooledConnectionFactory's sessionPool non-blocking in case it's full.
 -

 Key: AMQ-3482
 URL: https://issues.apache.org/jira/browse/AMQ-3482
 Project: ActiveMQ
  Issue Type: Improvement
 Environment: PooledConnectionFactory
Reporter: Torsten Mielke
 Attachments: AMQ-3482.patch


 When using the PooledConnectionFactory, it internally caches the JMS sessions. 
 This is done using a commons-pool pool. 
 The number of sessions to be pooled is controlled by the maximumActive 
 property of the PooledConnectionFactory. 
 Right now, when the session pool is full, any further call to 
 Connection.getSession() will block until a session is available from the pool.
 Depending on whether a session is ever returned to the pool, this call might 
 potentially block forever.
 IMHO this is not the best default behavior. Less experienced users might 
 believe the JMS client is hung or suffering from a bug if it simply does not 
 return. There is currently no warning logged that this call will block, so no 
 indication of the full session pool is given. 
 I propose to change this default behavior and raise a JMSException 
 in case the session pool is full and no further Session can be created.
 The underlying commons-pool class 
 org.apache.commons.pool.impl.GenericObjectPoolFactory can easily be configured 
 to raise an exception rather than blocking. 
 This will indicate to JMS clients that the session pool is full and allow them 
 to take appropriate action (retry later, or propagate the error upwards).
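
For reference, a minimal commons-pool 1.x sketch of the fail-fast exhaustion 
behavior referred to above; the pooled objects are only stand-ins for JMS 
sessions:

{code:java}
import java.util.NoSuchElementException;

import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

public class PoolExhaustionSketch {
    public static void main(String[] args) throws Exception {
        // The pooled Objects are only stand-ins for JMS sessions.
        GenericObjectPool pool = new GenericObjectPool(new BasePoolableObjectFactory() {
            @Override
            public Object makeObject() {
                return new Object();
            }
        });
        pool.setMaxActive(1);
        // Fail fast instead of blocking when all pooled objects are borrowed.
        pool.setWhenExhaustedAction(GenericObjectPool.WHEN_EXHAUSTED_FAIL);

        Object first = pool.borrowObject();
        try {
            pool.borrowObject(); // throws NoSuchElementException ("Pool exhausted")
        } catch (NoSuchElementException expected) {
            // the caller learns immediately that the pool is full
        } finally {
            pool.returnObject(first);
        }
    }
}
{code}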

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (AMQ-3430) activemq-web: SessionPool.returnSession() should discard sessions that are closed.

2011-08-01 Thread Torsten Mielke (JIRA)
activemq-web: SessionPool.returnSession() should discard sessions that are 
closed. 
---

 Key: AMQ-3430
 URL: https://issues.apache.org/jira/browse/AMQ-3430
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0, 5.6.0
Reporter: Torsten Mielke
 Fix For: 5.6.0


In the activemq-web project, SessionPool.returnSession() does not check whether 
the session is still open. As long as the session isn't null, it's returned to 
the pool.
At least one customer reported a problem when using the web console for 
browsing a queue, where the session was already closed. 

{noformat}
javax.jms.IllegalStateException: The Session is closed
at org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:722)
at org.apache.activemq.ActiveMQSession.createQueue(ActiveMQSession.java:1141)
at org.apache.activemq.web.QueueBrowseQuery.getQueue(QueueBrowseQuery.java:65)
at org.apache.activemq.web.QueueBrowseQuery.createBrowser(QueueBrowseQuery.java:91)
at org.apache.activemq.web.QueueBrowseQuery.getBrowser(QueueBrowseQuery.java:54)
at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at javax.el.BeanELResolver.getValue(BeanELResolver.java:62)
...
{noformat}

It is not clear what triggered the closure of the session; however, once it is 
closed it should not be returned to the pool but discarded. If it's not 
discarded, the pool will always return the closed session and any invocation on 
the session throws an exception. Restarting the broker is the only remedy.

 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (AMQ-3430) activemq-web: SessionPool.returnSession() should discard sessions that are closed.

2011-08-01 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13073485#comment-13073485
 ] 

Torsten Mielke commented on AMQ-3430:
-

full stack trace of the error reads:

{noformat}
javax.jms.IllegalStateException: The Session is closed
at org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:722)
at org.apache.activemq.ActiveMQSession.createQueue(ActiveMQSession.java:1141)
at org.apache.activemq.web.QueueBrowseQuery.getQueue(QueueBrowseQuery.java:65)
at org.apache.activemq.web.QueueBrowseQuery.createBrowser(QueueBrowseQuery.java:91)
at org.apache.activemq.web.QueueBrowseQuery.getBrowser(QueueBrowseQuery.java:54)
at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at javax.el.BeanELResolver.getValue(BeanELResolver.java:62)
at javax.el.CompositeELResolver.getValue(CompositeELResolver.java:54)
at org.apache.el.parser.AstValue.getValue(AstValue.java:123)
at org.apache.el.ValueExpressionImpl.getValue(ValueExpressionImpl.java:186)
at org.apache.jasper.runtime.PageContextImpl.proprietaryEvaluate(PageContextImpl.java:935)
at org.apache.jsp.browse_jsp._jspx_meth_jms_005fforEachMessage_005f0(browse_jsp.java:169)
at org.apache.jsp.browse_jsp._jspService(browse_jsp.java:103)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:377)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:83)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.activemq.web.SessionFilter.doFilter(SessionFilter.java:45)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:81)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:118)
at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)
{noformat}

 activemq-web: SessionPool.returnSession() should discard sessions that are 
 closed. 
 ---

 Key: AMQ-3430
 URL: https://issues.apache.org/jira/browse/AMQ-3430
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0, 5.6.0
Reporter: Torsten Mielke
  Labels: SessionPool, activemq-web-console,
 Fix For: 5.6.0


 In activemq.web project, SessionPool.returnSession() does not check if the 
 session is still open. As long as the session isn't null, its returned back 
 

[jira] [Updated] (AMQ-3430) activemq-web: SessionPool.returnSession() should discard sessions that are closed.

2011-08-01 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3430:


Attachment: AMQ-3430.patch

Attaching a possible fix, including some logging, for SessionPool.java.
It was necessary to add a public method ActiveMQSession.isClosed() for checking 
whether a session is closed.
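
A rough sketch of the check the patch adds to returnSession(); field names, 
signature details and logging differ in the actual AMQ-3430.patch:

{code:java}
// Rough sketch only; see AMQ-3430.patch for the real SessionPool changes.
public void returnSession(ActiveMQSession session) {
    if (session == null) {
        return;
    }
    if (session.isClosed()) {
        // A closed session must never go back into the pool, otherwise every
        // subsequent borrower receives the same unusable session.
        LOG.debug("Discarding closed session instead of returning it to the pool");
        return;
    }
    sessions.add(session); // hypothetical pool field, stands in for the existing one
}
{code}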



 activemq-web: SessionPool.returnSession() should discard sessions that are 
 closed. 
 ---

 Key: AMQ-3430
 URL: https://issues.apache.org/jira/browse/AMQ-3430
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0, 5.6.0
Reporter: Torsten Mielke
  Labels: SessionPool, activemq-web-console,
 Fix For: 5.6.0

 Attachments: AMQ-3430.patch


 In activemq.web project, SessionPool.returnSession() does not check if the 
 session is still open. As long as the session isn't null, its returned back 
 to the pool.
 At least one customer reported a problem when using the web console for 
 browsing a queue, where the session was already closed. 
 {noformat}
 javax.jms.IllegalStateException: The Session is closed
 at org.apache.activemq.ActiveMQSession.checkClosed(ActiveMQSession.java:722)
 at org.apache.activemq.ActiveMQSession.createQueue(ActiveMQSession.java:1141)
 at org.apache.activemq.web.QueueBrowseQuery.getQueue(QueueBrowseQuery.java:65)
 at org.apache.activemq.web.QueueBrowseQuery.createBrowser(QueueBrowseQuery.java:91)
 at org.apache.activemq.web.QueueBrowseQuery.getBrowser(QueueBrowseQuery.java:54)
 at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at javax.el.BeanELResolver.getValue(BeanELResolver.java:62)
 ...
 {noformat}
 Not sure what triggered the closure of the session, however once it is closed 
 it should not be returned to the pool but be discarded. If its not discarded, 
 then the pool will always return the closed session and any invocations on 
 the session return an exception. Restarting the broker is the only remedy.
  

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (AMQ-3425) Unable to delete a queue via web console

2011-07-29 Thread Torsten Mielke (JIRA)
Unable to delete a queue via web console


 Key: AMQ-3425
 URL: https://issues.apache.org/jira/browse/AMQ-3425
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0, 5.6.0
 Environment: web console, default configuration
Reporter: Torsten Mielke


The following steps make it impossible to delete a queue via the web console 
admin interface:
- start ActiveMQ with the default configuration (where the web console and 
sample Camel route are deployed)
- open the web console at http://localhost:8161/admin and click on Queues
- for the only queue, example.A, press Browse
- go back in your browser and now try to delete the queue using the Delete link
- it raises "Exception occurred while processing this request, check the 
log for more information!"

The AMQ log contains:
{noformat}
java.lang.UnsupportedOperationException: Possible CSRF attack
at org.apache.activemq.web.handler.BindingBeanNameUrlHandlerMapping.getHandlerInternal(BindingBeanNameUrlHandlerMapping.java:58)
at org.springframework.web.servlet.handler.AbstractHandlerMapping.getHandler(AbstractHandlerMapping.java:184)
at org.springframework.web.servlet.DispatcherServlet.getHandler(DispatcherServlet.java:945)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:753)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:693)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:527)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1216)
at org.apache.activemq.web.AuditFilter.doFilter(AuditFilter.java:59)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:83)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:81)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
at com.opensymphony.module.sitemesh.filter.PageFilter.parsePage(PageFilter.java:118)
at com.opensymphony.module.sitemesh.filter.PageFilter.doFilter(PageFilter.java:52)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1187)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:421)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:493)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:225)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:930)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:358)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:866)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:456)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:113)
at org.eclipse.jetty.server.Server.handle(Server.java:351)
at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:594)
at org.eclipse.jetty.server.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:1042)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:549)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:211)
at org.eclipse.jetty.server.HttpConnection.handle(HttpConnection.java:424)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:506)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:436)
at 

[jira] [Commented] (AMQ-2455) Need a facility to retry jms connections to a foreign provider by the ActiveMQ JMS bridge.

2011-07-21 Thread Torsten Mielke (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13068869#comment-13068869
 ] 

Torsten Mielke commented on AMQ-2455:
-

I can confirm that this issue isn't fixed in 5.5 either. 
I have run some tests based on http://activemq.apache.org/jms-to-jms-bridge.html, 
bridging two AMQ brokers.
If the remote broker is stopped (the one the JMS bridge connects to), the 
JMSBridge is not closed in the local broker; JMX still lists it. 
When the remote broker is restarted, connections to the remote broker are not 
re-established and the JMS bridge does not get refreshed. 
As a result, messages don't flow between the two brokers.
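
For reference, the bridge setup used in these tests looks roughly like the 
following sketch; bean ids, broker URLs and queue names are placeholders:

{code:xml}
<!-- Rough sketch of a JMS-to-JMS bridge configuration as described on
     http://activemq.apache.org/jms-to-jms-bridge.html;
     bean ids, broker URLs and queue names are placeholders. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localBroker">
  <jmsBridgeConnectors>
    <jmsQueueConnector outboundQueueConnectionFactory="#remoteFactory">
      <outboundQueueBridges>
        <outboundQueueBridge outboundQueueName="bridge.test.queue"/>
      </outboundQueueBridges>
    </jmsQueueConnector>
  </jmsBridgeConnectors>
</broker>

<bean id="remoteFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="tcp://remotehost:61616"/>
</bean>
{code}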



 Need a facility to retry jms connections to a foreign provider by the 
 ActiveMQ JMS bridge.
 --

 Key: AMQ-2455
 URL: https://issues.apache.org/jira/browse/AMQ-2455
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
 Environment: Debian Lenny.  ActiveMQ 5.2.  OpenJMS-0.7.7-beta-1
Reporter: Billy Buzzard
Assignee: Rob Davies
 Fix For: 5.4.0

 Attachments: bridge-reconnect.patch, test.zip


 I followed an example 
 (http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx?display=Print)
  showing how to set up a bridge between OpenJMS and ActiveMQ.  The bridge 
 seems to work perfectly until I stop then restart OpenJMS while leaving 
 ActiveMQ running.  Once I restart OpenJMS I try sending a message from it to 
 ActiveMQ, but ActiveMQ doesn't receive it until I stop and restart ActiveMQ.  
 I can recreate the exact same problem by starting ActiveMQ first and then 
 OpenJMS.  After a little more reading it looks like failover should fix this 
 problem, but I tried it and it didn't work.  I submitted a question to 
 ActiveMQ and Gary Tully responded and told me there is currently no facility 
 to retry jms connections to a foreign provider by the ActiveMQ JMS bridge.
 Assuming that remote end-points may not be using ActiveMQ then I would think 
 this would be a very important feature to have.
 Here's a link to our conversation: 
 http://www.nabble.com/How-to-configure-failover-for-jmsBridgeConnector-td25909047.html#a25918800
 The conversation also contains an attachment showing my configuration file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (AMQ-2455) Need a facility to retry jms connections to a foreign provider by the ActiveMQ JMS bridge.

2011-07-21 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke reopened AMQ-2455:
-


Reopening the case as the issue is not fixed.

 Need a facility to retry jms connections to a foreign provider by the 
 ActiveMQ JMS bridge.
 --

 Key: AMQ-2455
 URL: https://issues.apache.org/jira/browse/AMQ-2455
 Project: ActiveMQ
  Issue Type: New Feature
  Components: Broker
 Environment: Debian Lenny.  ActiveMQ 5.2.  OpenJMS-0.7.7-beta-1
Reporter: Billy Buzzard
Assignee: Rob Davies
 Fix For: 5.4.0

 Attachments: bridge-reconnect.patch, test.zip


 I followed an example 
 (http://www.codeproject.com/KB/docview/jms_to_jms_bridge_activem.aspx?display=Print)
  showing how to set up a bridge between OpenJMS and ActiveMQ.  The bridge 
 seems to work perfectly until I stop then restart OpenJMS while leaving 
 ActiveMQ running.  Once I restart OpenJMS I try sending a message from it to 
 ActiveMQ, but ActiveMQ doesn't receive it until I stop and restart ActiveMQ.  
 I can recreate the exact same problem by starting ActiveMQ first and then 
 OpenJMS.  After a little more reading it looks like failover should fix this 
 problem, but I tried it and it didn't work.  I submitted a question to 
 ActiveMQ and Gary Tully responded and told me there is currently no facility 
 to retry jms connections to a foreign provider by the ActiveMQ JMS bridge.
 Assuming that remote end-points may not be using ActiveMQ then I would think 
 this would be a very important feature to have.
 Here's a link to our conversation: 
 http://www.nabble.com/How-to-configure-failover-for-jmsBridgeConnector-td25909047.html#a25918800
 The conversation also contains an attachment showing my configuration file.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (AMQ-3375) stomp consumer might not receive all msgs of a queue

2011-06-23 Thread Torsten Mielke (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Mielke updated AMQ-3375:


Attachment: stomp-testcase.tgz

Attaching a test case in JUnit format. It may take a number of test runs before 
this test case fails (10 < x < 20).
For a description of the test case, read the class documentation in 
src/test/resources/org/apache/activemq/transport/stomp/StompVirtualTopicTest.java.

The test case uses an embedded broker, but I have the feeling the problem is 
more easily reproduced using an external broker with the config 
src/test/resources/StompVirtualTopicTest.xml. It also makes it easier to attach 
jconsole.

To run the test case, simply call mvn test.
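
For context, the virtual-topic naming the test relies on (described in the issue 
below) maps roughly to the following STOMP frames; bodies and extra headers are 
omitted and ^@ marks the frame terminator:

{noformat}
SEND
destination:/topic/VirtualTopic.Foo

message body
^@

SUBSCRIBE
destination:/queue/Consumer.A.VirtualTopic.Foo
ack:auto

^@
{noformat}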

 stomp consumer might not receive all msgs of a queue
 

 Key: AMQ-3375
 URL: https://issues.apache.org/jira/browse/AMQ-3375
 Project: ActiveMQ
  Issue Type: Bug
  Components: Broker
Affects Versions: 5.5.0
 Environment: stomp consumer on virtual destination 
 Tested on Ubuntu 10.10 and MacOSX using java 6.
Reporter: Torsten Mielke
  Labels: stomp, virtualTopic
 Attachments: stomp-testcase.tgz


 Have a test case that connects a Java stomp consumer to a virtual destination 
 queue and consumes a fixed number of msgs.
 During the test I noticed that the consumer does not always receive the full 
 number of msgs.
 Instead the receive times out although the JMX QueueSize property is greater 
 than 0. However, when trying to browse the queue using JMX, it returns null, 
 despite the fact that not all msgs got dequeued yet (dispatch and dequeue 
 counter < enqueue counter).
 So far I reproduced this with a stomp producer/consumer only. The producer 
 writes msgs to a virtual topic VirtualTopic.Foo and the consumer takes msgs 
 off the Consumer.A.VirtualTopic.Foo queue. Using JMX I noticed all msgs got 
 moved from the virtual topic to the queue (reflected by the JMX enqueue 
 counter), but not all msgs got consumed.
 So it seems the broker lost some msgs on the way. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



