[ https://issues.apache.org/activemq/browse/AMQ-2475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=55218#action_55218 ]
Dominic Tootell commented on AMQ-2475:
--------------------------------------
I've run some more tests on the patch I uploaded yesterday and came across a
small issue with sendFailIfNoSpace="true". The ResourceAllocationException
was only thrown if the producer noticed the out-of-space condition before
the message was added to the cursor. However, there was a slight chance that
space was available when producer 1 checked, only for that space to be eaten
by producer 2. Producer 1 would then be stuck in the waiting-for-space loop
and never throw a ResourceAllocationException.
I have added checks within the waiting-for-space loop so that a
ResourceAllocationException is thrown there as well when sendFailIfNoSpace="true".
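In rough terms the added check looks something like this (a simplified sketch
rather than the literal patch; the wrapper class and names such as
waitForSpaceOrFail are just for illustration):
{code}
// Sketch of the idea only, not the exact patch code.
import javax.jms.ResourceAllocationException;

import org.apache.activemq.usage.SystemUsage;

public class WaitForSpaceSketch {

    private final SystemUsage usage;

    public WaitForSpaceSketch(SystemUsage usage) {
        this.usage = usage;
    }

    /**
     * Waits for the temp store to free up space. If sendFailIfNoSpace is set,
     * the check is made inside the wait loop as well, so a producer that
     * loses the race for the last of the space still gets a
     * ResourceAllocationException instead of blocking indefinitely.
     */
    public void waitForSpaceOrFail() throws Exception {
        while (!usage.getTempUsage().waitForSpace(1000)) {
            if (usage.isSendFailIfNoSpace()) {
                throw new ResourceAllocationException("Usage Manager Temp Store is Full");
            }
        }
    }
}
{code}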
I shall update the patches attached yesterday to reflect this.
apologies,
/dom
I'm also currently running the test against a normal persistent queue to make
sure all is OK with that; I'll comment back once the run has finished.
> If tmp message store fills up, broker can deadlock while producers wait on
> disk space and consumers wait on acks
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: AMQ-2475
> URL: https://issues.apache.org/activemq/browse/AMQ-2475
> Project: ActiveMQ
> Issue Type: Bug
> Components: Broker, Message Store, Transport
> Affects Versions: 5.3.0
> Environment: Tested on Windows XP with JDK 1.6.0_13, but fairly sure
> it will be an issue on all platforms
> Reporter: Martin Murphy
> Assignee: Rob Davies
> Attachments: activemq.xml, hangtest.zip, Queue.java,
> Queue.patchfile.txt, Topic.java, Topic.patchfile.txt, TopicSubscription.java,
> TopicSubscription.patchfile.txt
>
>
> I will attach a simple project that shows this. In the test the tmp space is
> set to 32 MB and two threads are created. One thread constantly produces
> 1KB messages and the other consumes them, but sleeps for 100ms; note that
> producer flow control is turned off as well. The goal here is to ensure that
> the producers block while the consumers read the rest of the messages from
> the broker and catch up, which in turn frees up the disk space and allows the
> producer to send more messages. This config means that you can bound the
> broker based on disk space rather than memory usage.
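> For illustration, the relevant parts of the attached activemq.xml correspond
> roughly to the following programmatic setup (a sketch only, using the standard
> BrokerService API; the attached activemq.xml is what the test actually uses):
> {code}
> import org.apache.activemq.broker.BrokerService;
> import org.apache.activemq.broker.region.policy.PolicyEntry;
> import org.apache.activemq.broker.region.policy.PolicyMap;
> import org.apache.activemq.usage.SystemUsage;
> import org.apache.activemq.usage.TempUsage;
>
> public class BoundedBrokerSketch {
>
>     public static BrokerService createBroker() throws Exception {
>         BrokerService broker = new BrokerService();
>
>         // Bound the broker by disk: cap the temp (non-persistent) store at 32 MB.
>         SystemUsage systemUsage = new SystemUsage();
>         TempUsage tempUsage = new TempUsage();
>         tempUsage.setLimit(32L * 1024 * 1024);
>         systemUsage.setTempUsage(tempUsage);
>         broker.setSystemUsage(systemUsage);
>
>         // Turn producer flow control off so producers are throttled by the
>         // temp store filling up rather than by per-destination memory limits.
>         PolicyEntry policy = new PolicyEntry();
>         policy.setProducerFlowControl(false);
>         PolicyMap policyMap = new PolicyMap();
>         policyMap.setDefaultEntry(policy);
>         broker.setDestinationPolicy(policyMap);
>
>         broker.start();
>         return broker;
>     }
> }
> {code}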
> Unfortunately, in this test using topics, while the broker is reading in the
> message from the producer it has to lock the matched list it is adding the
> message to. This is abstracted away from the Topic's point of view, so it
> doesn't realize that the file-backed list may block waiting on the file system.
> {code}
> public void add(MessageReference node) throws Exception { //... snip ...
>     if (maximumPendingMessages != 0) {
>         synchronized (matchedListMutex) { // We have this mutex
>             matched.addMessageLast(node); // ends up waiting for space
>             // NOTE - be careful about the slaveBroker!
>             if (maximumPendingMessages > 0) {
> {code}
> Meanwhile the consumer is sending acknowledgements for the 10 messages it
> just read in (the configured prefetch) from the same topic, but since the acks
> also modify the same list in the topic, this thread waits as well on the mutex
> held to service the producer:
> {code}
> private void dispatchMatched() throws IOException {
>     synchronized (matchedListMutex) { // never gets past here.
>         if (!matched.isEmpty() && !isFull()) {
> {code}
> This is a fairly classic deadlock. The trick now is how to resolve it, given
> that the topic isn't aware that its list may need to wait for the file system
> to clean up.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.