[
https://issues.apache.org/activemq/browse/AMQ-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_40739
]
Rainer Klute commented on AMQ-1490:
-----------------------------------
The good news is: Yes, it works! Great! Thanks, Rob!
The bad news is: Some test cases now take a tremendous amount of heap space. My
former maximum heap setting of -Xmx256M was insufficient; with -Xmx1024M the
test application runs. This gave me cause to investigate that behaviour more
closely, so I had jconsole watch the memory usage. You'll find the result in the
attachment AMQ-1490_memory-001.png. I did some drawing on that screenshot to
demarcate the test cases. The wall-clock times on the X axis can be correlated
with the detailed test case protocol in AMQ-1490_result-001.txt.
The scenario is that two producers each send 100,000 messages to a topic and
one consumer reads them all. The test cases differ in whether the producing and
consuming sessions are transactional or not.
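To make the scenario concrete, here is a minimal, hedged sketch of one of the
problematic combinations (transacted producers, non-transacted consumer). The
class name, thread structure, destination name and broker URL are illustrative
only; the actual code is in the attached ActiveMQ_Testcases.jar.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQTopic;

public class Amq1490ScenarioSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        final Connection connection = factory.createConnection();
        connection.start();
        final Topic topic = new ActiveMQTopic("AMQ1490.test");

        // Consumer thread: non-transacted session with auto-acknowledge.
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer c = session.createConsumer(topic);
                    while (c.receive(10000) != null) {
                        // count or otherwise process the message
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });

        // Producer thread (the real test runs two of these in parallel):
        // transacted session, committing after every send.
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                    MessageProducer p = session.createProducer(topic);
                    for (int i = 0; i < 100000; i++) {
                        p.send(session.createTextMessage("message " + i));
                        session.commit();
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });

        consumer.start();
        producer.start();
        producer.join();
        consumer.join();
        connection.close();
    }
}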
Interesting results:
* The enormous memory consumption only occurs if the producers operate
transactionally and the consumer does not.
* Test case 0001 (transactional producers) starts out consuming a lot of
memory. However, after a mark-and-sweep garbage collection things stay moderate.
* The execution times vary widely: there's a factor of 7.7 between the fastest
and the slowest run.
* The fastest test cases are those without transactional producers.
> Deadlocks (with JUnit tests)
> ----------------------------
>
> Key: AMQ-1490
> URL: https://issues.apache.org/activemq/browse/AMQ-1490
> Project: ActiveMQ
> Issue Type: Bug
> Components: Broker
> Affects Versions: 5.0.0
> Environment: Linux
> Reporter: Rainer Klute
> Assignee: Rob Davies
> Fix For: 5.0.0
>
> Attachments: ActiveMQ_Testcases.jar, ActiveMQ_Testcases.jar,
> ActiveMQ_Testcases.jar, ActiveMQ_Testcases.jar, AMQ-1490_memory-001.png
>
>
> For some time now there have been various bug reports about ActiveMQ
> "blocking", "not receiving messages", "running into a deadlock" etc. Since I
> encountered such deadlocks now and then, too, I eventually wrote up a JUnit
> testing scenario for this stuff. I found out that deadlocks can be quite
> easily reproduced. The symptoms are that the producer thread is sending or
> committing while the consumer thread is receiving or committing - and neither of
> them can advance. One of the threads is always stuck in a blocking queue.
> Here's a sample output of my testing class:
> An ActiveMQ deadlock has been discovered. The following threads seem to be
> involved:
> Thread "producer" is inactive since 16 seconds after 358719 status changes.
> The current status is COMMITTING
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
>
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1889)
> java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:317)
>
> org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:40)
>
> org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:76)
>
> org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1172)
> org.apache.activemq.TransactionContext.commit(TransactionContext.java:259)
> org.apache.activemq.ActiveMQSession.commit(ActiveMQSession.java:494)
> de.rainer_klute.activemq.ProducerThread.run(ProducerThread.java:162)
> Thread "consumer" is inactive since 16 seconds after 1807 status changes.
> The current status is RECEIVING
> java.lang.Object.wait(Native Method)
> java.lang.Object.wait(Object.java:485)
>
> org.apache.activemq.MessageDispatchChannel.dequeue(MessageDispatchChannel.java:75)
>
> org.apache.activemq.ActiveMQMessageConsumer.dequeue(ActiveMQMessageConsumer.java:404)
>
> org.apache.activemq.ActiveMQMessageConsumer.receive(ActiveMQMessageConsumer.java:452)
>
> org.apache.activemq.ActiveMQMessageConsumer.receive(ActiveMQMessageConsumer.java:504)
> de.rainer_klute.activemq.ConsumerThread.run(ConsumerThread.java:183)
> The following factors seem to increase the probability of a deadlock:
> * small values for memoryUsage (see the configuration sketch after this list)
> * using a transacted session in the consumer (not always necessary but it "helps")
> * many messages in the persistence store (to be achieved via a long delay
> before the consumer starts to read messages)
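> To make the first factor concrete, here is a minimal, hedged sketch of setting
> a small memoryUsage limit on an embedded broker. The limit value and the
> connector URL are illustrative only and not taken from the attached test cases.
>
> import org.apache.activemq.broker.BrokerService;
>
> public class SmallMemoryBrokerSketch {
>     public static void main(String[] args) throws Exception {
>         // Embedded broker with a deliberately small memory limit, roughly the
>         // kind of setting that makes the blocking easier to reproduce.
>         BrokerService broker = new BrokerService();
>         broker.setPersistent(true);
>         broker.getSystemUsage().getMemoryUsage().setLimit(1024 * 1024); // 1 MB
>         broker.addConnector("tcp://localhost:61616");
>         broker.start();
>     }
> }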