[
https://issues.apache.org/activemq/browse/AMQ-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rob Davies resolved AMQ-2475.
-----------------------------
Resolution: Fixed
Fix Version/s: 5.4.0
Deadlock fixed by svn revisions 881313 and 881340
> If tmp message store fills up, broker can deadlock while producers wait on
> disk space and consumers wait on acks
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: AMQ-2475
> URL: https://issues.apache.org/activemq/browse/AMQ-2475
> Project: ActiveMQ
> Issue Type: Bug
> Components: Broker, Message Store, Transport
> Affects Versions: 5.3.0
> Environment: Tested on Windows XP with JDK 1.6.0_13, but fairly sure
> it will be an issue on all platforms
> Reporter: Martin Murphy
> Assignee: Rob Davies
> Fix For: 5.4.0
>
> Attachments: activemq.xml, hangtest.zip, Queue.java,
> Queue.patchfile.txt, Topic.java, Topic.patchfile.txt, TopicSubscription.java,
> TopicSubscription.patchfile.txt
>
>
> I will attach a simple project that shows this. In the test the tmp space is
> set to 32 MB and two threads are created. One thread constantly produces
> 1KB messages and the other consumes them, but sleeps for 100ms per message;
> note that producer flow control is turned off as well. The goal here is to
> ensure that the producers block while the consumers read the rest of the
> messages from the broker and catch up, which in turn frees up the disk space
> and allows the producer to send more messages. This config means that you can
> bound the broker based on disk space rather than memory usage.
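> Roughly, the attached test boils down to the following sketch (the broker URL
> and topic name here are placeholders; the 32 MB temp store limit, the prefetch
> of 10 and producerFlowControl="false" are assumed to come from the attached
> activemq.xml):
> {code}
> import javax.jms.*;
> import org.apache.activemq.ActiveMQConnectionFactory;
>
> public class HangTestSketch {
>     public static void main(String[] args) throws Exception {
>         ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
>         final Connection connection = factory.createConnection();
>         connection.start();
>
>         // Consumer thread: reads from the topic but sleeps 100ms per message,
>         // so it falls behind and the broker spools messages to the temp store.
>         Thread consumer = new Thread(new Runnable() {
>             public void run() {
>                 try {
>                     Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>                     MessageConsumer cons = session.createConsumer(session.createTopic("HANG.TEST"));
>                     while (true) {
>                         cons.receive();
>                         Thread.sleep(100);
>                     }
>                 } catch (Exception e) {
>                     e.printStackTrace();
>                 }
>             }
>         });
>
>         // Producer thread: constantly sends 1KB messages; once the 32 MB temp
>         // store is full, the send should block until the consumer catches up.
>         Thread producer = new Thread(new Runnable() {
>             public void run() {
>                 try {
>                     Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>                     MessageProducer prod = session.createProducer(session.createTopic("HANG.TEST"));
>                     byte[] payload = new byte[1024];
>                     while (true) {
>                         BytesMessage msg = session.createBytesMessage();
>                         msg.writeBytes(payload);
>                         prod.send(msg);
>                     }
>                 } catch (JMSException e) {
>                     e.printStackTrace();
>                 }
>             }
>         });
>
>         consumer.start(); // subscribe first so the topic messages are retained
>         Thread.sleep(1000);
>         producer.start();
>         producer.join();
>     }
> }
> {code}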
> Unfortunately in this test, which uses topics, the broker has to lock the
> matched list while it adds the message it is reading in from the producer.
> That list is an abstraction from the Topic's point of view, so the Topic
> doesn't realize that the add may block on the file system:
> {code}
> public void add(MessageReference node) throws Exception { //... snip ...
>     if (maximumPendingMessages != 0) {
>         synchronized (matchedListMutex) { // We have this mutex
>             matched.addMessageLast(node); // ends up waiting for space
>             // NOTE - be careful about the slaveBroker!
>             if (maximumPendingMessages > 0) {
> {code}
> Meanwhile the consumer is sending acknowledgements for the 10 messages it
> just read in (the configured prefetch) from the same topic. Since the acks
> also modify the same list in the topic, this call waits as well on the mutex
> held to service the producer:
> {code}
> private void dispatchMatched() throws IOException {
>     synchronized (matchedListMutex) { // never gets past here
>         if (!matched.isEmpty() && !isFull()) {
> {code}
> This is a fairly classic deadlock. The trick now is how to resolve it, given
> that the topic isn't aware that its list may need to wait for the file system
> to clean up.
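> For what it's worth, the same shape can be reproduced in a few lines of plain
> Java (illustration only, not ActiveMQ code and not the attached patches: the
> bounded queue stands in for the temp store and the lock for matchedListMutex):
> {code}
> import java.util.concurrent.ArrayBlockingQueue;
> import java.util.concurrent.BlockingQueue;
>
> public class ClassicDeadlock {
>     static final Object mutex = new Object();                                      // ~ matchedListMutex
>     static final BlockingQueue<byte[]> store = new ArrayBlockingQueue<byte[]>(32); // ~ bounded temp store
>
>     public static void main(String[] args) {
>         Thread producer = new Thread(new Runnable() {
>             public void run() {
>                 while (true) {
>                     synchronized (mutex) {             // holds the mutex...
>                         try {
>                             store.put(new byte[1024]); // ...then blocks when the store is full
>                         } catch (InterruptedException e) {
>                             return;
>                         }
>                     }
>                 }
>             }
>         });
>         Thread consumer = new Thread(new Runnable() {
>             public void run() {
>                 while (true) {
>                     synchronized (mutex) { // needs the same mutex to drain the store,
>                         store.poll();      // so it can never free up the space
>                     }
>                     try {
>                         Thread.sleep(100); // slow consumer, as in the test
>                     } catch (InterruptedException e) {
>                         return;
>                     }
>                 }
>             }
>         });
>         producer.start();
>         consumer.start(); // both threads eventually end up stuck for good
>     }
> }
> {code}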
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.