[
https://issues.apache.org/activemq/browse/AMQ-2155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=51363#action_51363
]
Brian Moran commented on AMQ-2155:
----------------------------------
This appears to be related to activemq.prefetchSize. Messages are also
being dispatched out of order.
With the default prefetchSize (1000, if the online docs are correct), all 200
queued messages fit inside the first consumer's prefetch buffer, so the broker
presumably dispatches the entire backlog to consumer #1 before consumer #2 ever
connects. Using the code provided in this report, I do the following (a
workaround sketch follows step 10):
1) Clear the queues
2) Start one consumer
3) Queue 200 messages with a 1-second delay each (TestModel(200,1) in the example)
4) Start the 2nd consumer
5) --- I observe that the 2nd consumer sees NO messages while the first
consumer continues to process messages. EXPECTED BEHAVIOR: the 2nd consumer
should receive a share of the messages.
6) Queue 50 messages with a 2-second delay each (TestModel(50,2) in the example)
7) --- I observe that the 1st consumer is still working on the original set of
messages. The 2nd consumer starts to process the newly queued 2-second
messages, processing them out of order. It processes only 1/2 of the
messages that were queued (this 1/2 ratio of processed to queued was observed
repeatedly for various numbers of queued messages).
8) When the 2nd consumer stops, queue 4 messages with a 3-second delay each
9) Observe that the 2nd consumer processes 2 of the 3-second-delay messages,
then stops. Also observe that consumer #1 is still processing 1-second
messages. A listing of the messages remaining in the queue shows gaps in the
message sequence where messages have been removed and processed:
Message ID                                         Persistence  Priority  Redelivered  Timestamp
ID:shine.local-54798-1240240213473-4:11:-1:1:1432  Persistent   0         false        2009-04-23 09:56:46:037 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1433  Persistent   0         false        2009-04-23 09:56:46:039 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1434  Persistent   0         false        2009-04-23 09:57:30:533 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1436  Persistent   0         false        2009-04-23 09:57:30:535 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1438  Persistent   0         false        2009-04-23 09:57:30:536 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1440  Persistent   0         false        2009-04-23 09:57:30:541 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1442  Persistent   0         false        2009-04-23 09:57:30:542 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1444  Persistent   0         false        2009-04-23 09:57:30:582 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1446  Persistent   0         false        2009-04-23 09:57:30:582 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1448  Persistent   0         false        2009-04-23 09:57:30:582 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1450  Persistent   0         false        2009-04-23 09:57:30:583 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1452  Persistent   0         false        2009-04-23 09:57:30:583 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1454  Persistent   0         false        2009-04-23 09:57:30:596 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1456  Persistent   0         false        2009-04-23 09:57:30:597 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1458  Persistent   0         false        2009-04-23 09:57:30:598 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1460  Persistent   0         false        2009-04-23 09:57:30:600 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1462  Persistent   0         false        2009-04-23 09:57:30:600 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1464  Persistent   0         false        2009-04-23 09:57:30:601 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1466  Persistent   0         false        2009-04-23 09:57:30:605 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1468  Persistent   0         false        2009-04-23 09:57:30:605 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1470  Persistent   0         false        2009-04-23 09:57:30:605 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1472  Persistent   0         false        2009-04-23 09:57:30:606 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1474  Persistent   0         false        2009-04-23 09:57:30:606 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1476  Persistent   0         false        2009-04-23 09:57:30:606 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1478  Persistent   0         false        2009-04-23 09:57:30:614 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1480  Persistent   0         false        2009-04-23 09:57:30:615 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1482  Persistent   0         false        2009-04-23 09:57:30:616 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1484  Persistent   0         false        2009-04-23 10:00:21:369 PDT
ID:shine.local-54798-1240240213473-4:11:-1:1:1486  Persistent   0         false        2009-04-23 10:00:21:373 PDT
10) Observe that consumer #1 eventually processes the remaining 2-second and
3-second messages.
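If prefetch is indeed the issue, the STOMP client can lower it per
subscription: ActiveMQ honors an activemq.prefetchSize header on the
SUBSCRIBE frame. Below is a minimal, untested Python sketch of a consumer
that subscribes with a prefetch of 1 and client ack, speaking raw STOMP 1.0
over a socket. It does not reproduce the Ruby client from this report; the
broker address, credentials, and queue name are placeholder assumptions.

import socket

HOST, PORT = "localhost", 61613   # assumed broker address, not from this report
QUEUE = "/queue/test"             # assumed queue name, not from this report

def frame(command, headers, body=""):
    """Serialize a STOMP 1.0 frame: command, headers, blank line, body, NUL."""
    head = "\n".join(f"{k}:{v}" for k, v in headers.items())
    return f"{command}\n{head}\n\n{body}\x00".encode()

sock = socket.create_connection((HOST, PORT))
sock.sendall(frame("CONNECT", {"login": "", "passcode": ""}))

# Subscribe with client ack and a prefetch of 1: the broker pushes at most
# one unacknowledged message to this consumer, leaving the rest of the
# backlog available for other consumers.
sock.sendall(frame("SUBSCRIBE", {
    "destination": QUEUE,
    "ack": "client",
    "activemq.prefetchSize": "1",
}))

buf = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buf += data
    while b"\x00" in buf:                      # frames are NUL-delimited
        raw, _, buf = buf.partition(b"\x00")
        text = raw.decode().lstrip("\n")
        if not text.startswith("MESSAGE"):
            continue                           # skip CONNECTED, RECEIPT, etc.
        header_block = text.split("\n\n", 1)[0]
        headers = dict(line.split(":", 1)
                       for line in header_block.split("\n")[1:] if ":" in line)
        # ... handle the message body here ...
        # Client ack: acknowledge so the broker dispatches the next message.
        sock.sendall(frame("ACK", {"message-id": headers["message-id"]}))

With a prefetch of 1, each consumer holds at most one unacknowledged message,
so a consumer added mid-backlog should start receiving work immediately, at
the cost of one broker round-trip per message.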
> ActiveMQ with STOMP producer, multiple consumers -- ActiveMQ doesn't
> distribute messages to additional consumers.
> -----------------------------------------------------------------------------------------------------------------
>
> Key: AMQ-2155
> URL: https://issues.apache.org/activemq/browse/AMQ-2155
> Project: ActiveMQ
> Issue Type: Bug
> Components: Broker
> Affects Versions: 5.1.0, 5.2.0
> Environment: Mac OS X; Rails 2.1.1; Ruby 1.8.6
> Reporter: Brian Moran
> Priority: Critical
> Attachments: stomplog-1.log
>
>
> Using the stock ActiveMQ 5.2.0 install, with a very small producer and
> consumer (see source code here:
> http://groups.google.com/group/activemessaging-discuss/browse_thread/thread/0b45ae6f030a13af#),
> using client :ack mode:
> Adding additional consumers to a queue has no effect on message
> distribution -- only the originally connected consumers receive the messages
> UNTIL the queue is empty or the original message consumers disconnect. One
> would LIKE to be able to add additional consumers for a queue and have the
> broker distribute messages to those consumers -- this does NOT happen in
> 5.1.0 or 5.2.0.
> ALSO, when there are multiple consumers attached to the broker, performance
> can degrade so that only one or two consumers are being sent messages. THIS
> may be related, perhaps not.
> ATTACHED is a dump of the STOMP TRANSPORT trace.