We have a finite set of message groups: 20, numbered 0-19.
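For context, here is a minimal sketch of how a fixed set of 20 groups might be assigned. ActiveMQ message groups are keyed by the JMSXGroupID header, and a producer typically derives that header from a stable hash of some business key. The hashing scheme and key name below are assumptions for illustration, not our actual code:

```python
import zlib

NUM_GROUPS = 20  # groups 0-19, as in our setup


def group_id_for(key: str) -> str:
    """Map a business key to one of the 20 message groups.

    ActiveMQ dispatches all messages carrying the same JMSXGroupID
    to the same consumer; crc32 gives a hash that is stable across
    process restarts (unlike Python's built-in hash of strings).
    """
    group = zlib.crc32(key.encode("utf-8")) % NUM_GROUPS
    return str(group)
```

The producer would then set this value as the JMSXGroupID string property on each message before sending it.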

We are monitoring the consumers via JMX. At one time we were having issues
with memory leaks and GC behavior, but those have since been resolved. Now
everything looks good on the consumer side.

Processing time varies from message to message, roughly in the range of 50
to 150 ms, but nothing ever takes a really long time. I am not seeing signs
that the consumers fail to process messages; rather, they simply stop
receiving them. However, the logs do often indicate that a message is in
transit... I will have to find one for the exact wording.

The consumers usually start when there are already messages in the queue. In
our testing we have often started a consumer with well over 100k messages
queued, and it will usually churn through them without issue, which can
sometimes take an hour or two. Ideally the consumers would remain running
indefinitely; we only stop and restart them for updates and because of this
problem.

This is what is perplexing: the consumers are operating under their heaviest
load at those times, and we never saw an issue then. When the normal
incoming message rate is low enough, we do not usually see a pause; it is
when the rate of messages flowing into AMQ increases that we see the
problem. In other words, it seems to be related more to the rate at which
messages flow into AMQ than to the rate at which AMQ dispatches them to the
consumers.

As I said in the original post, when we have two completely separate
consumer applications processing from the same AMQ instance, they pause and
resume together. It does not seem possible that the consumers could
simultaneously run into the same memory/resource issue. What they have in
common is the AMQ instance, the database server, and Redis, but the latter
two are also shared by other consumers that are not affected at the same
time.



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/AMQ-pauses-sending-to-consumers-tp4701242p4701267.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.