Thanks. So you can confirm that consumer threads are no longer blocked during retries in 5.1? And is there any documentation on the retry mechanism in 5.1 that I could read up on?
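Just so we're talking about the same thing: by "retry mechanism" I mean the client-side RedeliveryPolicy set on the connection factory, along the lines of the sketch below. (The class name and the concrete delay/retry values are just placeholders for illustration, not settings taken from this thread.)

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfigSketch {

    // Builds a connection factory whose consumers retry failed messages
    // with an exponentially growing delay instead of a fixed one.
    public static ActiveMQConnectionFactory createFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000L);  // wait 1s before the first retry
        policy.setUseExponentialBackOff(true);    // grow the delay on each attempt
        policy.setMaximumRedeliveries(6);         // then give up and send to the DLQ

        factory.setRedeliveryPolicy(policy);
        return factory;
    }
}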
rajdavies wrote:
>
> I'd certainly try 5.1
>
> cheers,
>
> Rob
>
> On 6 Jun 2008, at 14:05, Demian Mrakovich wrote:
>
>> Hi, bumping this one... Has this (I'd have to say major) issue been
>> addressed in newer versions of ActiveMQ?
>>
>> Jason Rosenberg wrote:
>>>
>>> All,
>>>
>>> I have been experimenting with using exponentialBackOff as a
>>> redelivery policy. I have multiple parallel consumers. I'm using
>>> AMQ 4.1.1, using Spring's DefaultMessageListener, within Tomcat 6.
>>>
>>> I've noticed that when a message consumer has a failure and throws
>>> an exception, and the message gets scheduled for retry, that
>>> particular consumer thread gets tied to the redelivery and doesn't
>>> process any more messages until it has gone through the full
>>> redelivery schedule for the error-prone message. So, with the
>>> exponentialBackOff mode, it's easy to set it to wait 10 minutes
>>> before redelivering a failed message, and in that time that
>>> consumer will be blocked.
>>>
>>> If the consumer has a prefetch greater than 1, this means other
>>> messages which might be perfectly valid for processing are blocked
>>> until the troubled message goes through its full redelivery cycle
>>> (which could be infinite!).
>>>
>>> I'm wondering if there's an alternate way to configure things, so
>>> that a message marked for redelivery goes back on the queue to be
>>> scheduled by the next available consumer (and not until its next
>>> scheduled redelivery time has arrived).
>>>
>>> If I have a queue with multiple consumers, and perhaps even a small
>>> percentage of the messages are corrupt and force multiple retries,
>>> it could eventually happen that all consumers are blocked, waiting
>>> idly to retry the message at the top of its list, even though other
>>> messages are readily available. This would seem to be the case even
>>> when the prefetch limit is 1.
>>>
>>> Thoughts?
>>>
>>> Jason
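And in case it helps anyone else who finds this thread: the prefetch limit Jason mentions can be lowered per connection, which at least caps how many messages a retrying consumer is sitting on (it doesn't unblock the thread itself, as he points out). A minimal sketch, again with placeholder names and values:

import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchSketch {

    // With queuePrefetch=1 each consumer holds at most one unacknowledged
    // message, so a consumer stuck in a redelivery delay isn't also
    // holding a prefetch buffer full of perfectly good messages.
    public static ActiveMQConnectionFactory createFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getPrefetchPolicy().setQueuePrefetch(1);
        return factory;
    }
}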