[ https://issues.apache.org/activemq/browse/AMQ-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Richard Yarger reopened AMQ-1918:
---------------------------------
I applied my test scenario to apache-activemq-5.3-20090113.084327-5.
The behavior was actually worse.
I still got a negative pending count on the queue, and I was unable to consume
from or produce to queue1.
The queue was left with 131 messages that consumer1 would not consume, even
after I stopped and restarted the consumer.
I ran my producers and no messages were added to queue1.
I restarted the broker and the 131 messages were then consumed.
The following error was in the log:
ERROR Service - Async error occurred: javax.jms.JMSException: Unmatched acknowledege:
MessageAck {commandId = 839, responseRequired = false, ackType = 2,
consumerId = ID:vibes-richyarger-1501-1231882347001-0:0:2:1, firstMessageId = null,
lastMessageId = ID:vibes-richyarger-3948-1231881217090-0:5200:1:1:1,
destination = queue://test.queue.1,
transactionId = TX:ID:vibes-richyarger-1501-1231882347001-0:0:138, messageCount = 1};
Could not find Message-ID ID:vibes-richyarger-3948-1231881217090-0:5200:1:1:1
in dispatched-list (end of ack)
javax.jms.JMSException: Unmatched acknowledege:
MessageAck {commandId = 839, responseRequired = false, ackType = 2,
consumerId = ID:vibes-richyarger-1501-1231882347001-0:0:2:1, firstMessageId = null,
lastMessageId = ID:vibes-richyarger-3948-1231881217090-0:5200:1:1:1,
destination = queue://test.queue.1,
transactionId = TX:ID:vibes-richyarger-1501-1231882347001-0:0:138, messageCount = 1};
Could not find Message-ID ID:vibes-richyarger-3948-1231881217090-0:5200:1:1:1
in dispatched-list (end of ack)
    at org.apache.activemq.broker.region.PrefetchSubscription.assertAckMatchesDispatched(PrefetchSubscription.java:439)
    at org.apache.activemq.broker.region.PrefetchSubscription.acknowledge(PrefetchSubscription.java:192)
    at org.apache.activemq.broker.region.AbstractRegion.acknowledge(AbstractRegion.java:377)
    at org.apache.activemq.broker.region.RegionBroker.acknowledge(RegionBroker.java:462)
    at org.apache.activemq.broker.TransactionBroker.acknowledge(TransactionBroker.java:194)
    at org.apache.activemq.broker.BrokerFilter.acknowledge(BrokerFilter.java:74)
    at org.apache.activemq.broker.BrokerFilter.acknowledge(BrokerFilter.java:74)
    at org.apache.activemq.broker.MutableBrokerFilter.acknowledge(MutableBrokerFilter.java:85)
    at org.apache.activemq.broker.TransportConnection.processMessageAck(TransportConnection.java:458)
    at org.apache.activemq.command.MessageAck.visit(MessageAck.java:205)
    at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:305)
    at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:179)
    at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:68)
    at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:143)
    at org.apache.activemq.transport.InactivityMonitor.onCommand(InactivityMonitor.java:206)
    at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:84)
    at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:203)
    at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:185)
    at java.lang.Thread.run(Thread.java:595)
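
For anyone reading the trace: the exception comes from the broker's check that every
acknowledged message id is present in the subscription's dispatched list. The snippet
below is a simplified, hypothetical sketch of that bookkeeping (class and method names
are mine, not the actual PrefetchSubscription code); it only illustrates why an ack for
a message the broker never dispatched to that consumer gets rejected this way.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a subscription that only accepts acks for messages it
// remembers dispatching, roughly the shape of check behind the
// "Unmatched acknowledege ... in dispatched-list" error logged above.
public class DispatchedListSketch {

    private final Deque<String> dispatched = new ArrayDeque<>();

    // Broker side: record every message id handed to the consumer.
    public void onDispatch(String messageId) {
        dispatched.addLast(messageId);
    }

    // Broker side: a standard ack must name a dispatched message;
    // otherwise it is "unmatched" and the whole ack is rejected.
    public void acknowledge(String lastMessageId) {
        if (!dispatched.contains(lastMessageId)) {
            throw new IllegalStateException(
                "Unmatched acknowledge: could not find " + lastMessageId + " in dispatched list");
        }
        // Drop the acked message and the entries dispatched before it
        // (cumulative removal here is a simplification).
        while (!dispatched.isEmpty() && !dispatched.removeFirst().equals(lastMessageId)) {
            // earlier entries are treated as covered by the later ack
        }
    }

    public static void main(String[] args) {
        DispatchedListSketch sub = new DispatchedListSketch();
        sub.onDispatch("ID:example-1:1:1:1");
        sub.acknowledge("ID:example-1:1:1:1"); // fine: this id was dispatched
        sub.acknowledge("ID:example-1:1:1:2"); // throws: never dispatched to this consumer
    }
}

The sketch is only meant to show what the broker means by "dispatched-list" in the
error above, not to explain how the mismatch arises.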
> AbstractStoreCursor.size gets out of synch with Store size and blocks consumers
> -------------------------------------------------------------------------------
>
> Key: AMQ-1918
> URL: https://issues.apache.org/activemq/browse/AMQ-1918
> Project: ActiveMQ
> Issue Type: Bug
> Components: Message Store
> Affects Versions: 5.1.0
> Reporter: Richard Yarger
> Assignee: Rob Davies
> Priority: Critical
> Fix For: 5.3.0
>
> Attachments: activemq.xml, testAMQMessageStore.zip, testdata.zip
>
>
> In version 5.1.0, we are seeing our queue consumers stop consuming for no
> reason.
> We have a staged queue environment and we occasionally see one queue display
> negative pending message counts that hang around -x, rise to -x+n gradually
> and then fall back to -x abruptly. The messages are building up and being
> processed in bunches, but it's not easy to see because the counts are negative.
> We see this behavior in the messages coming out of the system. Outbound
> messages come out in bunches and are synchronized with the queue pending
> count dropping to -x.
> This issue does not happen ALL of the time. It happens about once a week and
> the only way to fix it is to bounce the broker. It doesn't happen to the same
> queue every time, so it is not our consuming code.
> Although we don't have a reproducible scenario, we have been able to debug
> the issue in our test environment.
> We traced the problem to the cached store size in the AbstractStoreCursor.
> This value becomes 0 or negative and prevents the AbstractStoreCursor from
> retrieving more messages from the store (see AbstractStoreCursor.fillBatch()
> and the sketch after this quoted description).
> We have seen the size value go lower than -1000.
> We have also forced it to fix itself by sending in n+1 messages. Once the
> size goes above zero, the cached value is refreshed and things work OK again.
> Unfortunately, during low volume times it could be hours before those n+1
> messages are received, so our message latency can rise.... :(
> I have attached our broker config.
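
To make the quoted description concrete, below is a simplified, hypothetical sketch of
the failure mode (class, field, and method names are illustrative, not the real
AbstractStoreCursor code): a cursor that trusts a cached size counter stops filling its
dispatch batch once that counter drifts to zero or below, even though messages are still
sitting in the store, and only enough new sends (the n+1 workaround described above)
push the counter positive again.

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical cursor that caches the store size instead of asking the store.
// If the cached counter drifts negative, fillBatch() stops loading messages
// that are really there, and consumers appear to hang until new sends push
// the counter back above zero.
public class CachedSizeCursorSketch {

    private final Queue<String> store = new ArrayDeque<>(); // stand-in for the persistent store
    private final Queue<String> batch = new ArrayDeque<>(); // in-memory dispatch batch
    private int cachedSize;                                  // can drift away from store.size()

    public void addMessage(String id) {
        store.add(id);
        cachedSize++; // new sends are what eventually push a negative counter above zero
    }

    // Mirrors the guard described in the report: nothing is fetched from the
    // store while the cached size is <= 0, regardless of what the store holds.
    private void fillBatch() {
        if (cachedSize <= 0) {
            return; // consumers starve here when the counter is wrong
        }
        while (batch.size() < 10 && !store.isEmpty()) {
            batch.add(store.poll());
        }
    }

    public String next() {
        if (batch.isEmpty()) {
            fillBatch();
        }
        String id = batch.poll();
        if (id != null) {
            cachedSize--;
        }
        return id;
    }

    public static void main(String[] args) {
        CachedSizeCursorSketch cursor = new CachedSizeCursorSketch();
        cursor.addMessage("m1");
        cursor.cachedSize = -3;            // simulate the drift (values below -1000 were reported)
        System.out.println(cursor.next()); // null: m1 is stuck in the store
        // Sending enough extra messages pushes cachedSize above zero again:
        cursor.addMessage("m2");
        cursor.addMessage("m3");
        cursor.addMessage("m4");
        cursor.addMessage("m5");
        System.out.println(cursor.next()); // now m1 is finally dispatched
    }
}

The sketch does not show how the real counter goes negative; it only shows why, once it
has, the queue sits on pending messages until enough new traffic arrives, which matches
the low-volume latency described above.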