Can you post your test code? I would like to give it a whirl and investigate.
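In the meantime, here's roughly the shape of harness I'd compare against. Just a sketch, not your code: the broker URL (tcp://localhost:61616), the topic name, the non-persistent delivery mode and the "sentAt" timestamp property are all assumptions on my part, and it drives a single topic whereas your test presumably spans the ~1000 topics mentioned below.

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicLoadProducer {
    public static void main(String[] args) throws Exception {
        // Assumed broker URL and topic name; substitute whatever your test uses.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("slow.consumer.test"));
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        long id = 0;
        long windowStart = System.currentTimeMillis();
        long sentInWindow = 0;
        while (true) {
            // messages are simple incrementing ids, as in your test
            TextMessage message = session.createTextMessage(Long.toString(id++));
            // carry the send time so the consumer can compute end-to-end latency
            message.setLongProperty("sentAt", System.currentTimeMillis());
            producer.send(message);
            sentInWindow++;

            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {
                System.out.printf("Sending message: %d, %d msg/s%n", id, sentInWindow);
                sentInWindow = 0;
                windowStart = now;
            }
        }
    }
}

It sends flat out rather than at a fixed ~5000 msg/s, but the per-second reporting is the same shape as your producer output.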
On 19 December 2012 14:19, benj <apache....@benandi.com> wrote:
> Hi
>
> Sorry - I should have said that I did try preFetch="1", and I also updated
> the constantPendingMessageLimitStrategy limit to 0.
>
> Here's my config:
>
> <policyEntry topic=">" topicPrefetch="1" alwaysRetroactive="true">
>   <subscriptionRecoveryPolicy>
>     <lastImageSubscriptionRecoveryPolicy/>
>   </subscriptionRecoveryPolicy>
>   <pendingMessageLimitStrategy>
>     <constantPendingMessageLimitStrategy limit="0"/>
>   </pendingMessageLimitStrategy>
>   <messageEvictionStrategy>
>     <oldestMessageEvictionStrategy/>
>   </messageEvictionStrategy>
> </policyEntry>
>
> But between the producer and consumer you can see a queue building up, and
> the latency increasing second by second:
>
> Producer, sending approx 5000 msgs/s:
> Sending message: 3944, 3944 msg/s
> Sending message: 8302, 4358 msg/s
> Sending message: 13471, 5169 msg/s
> Sending message: 18763, 5292 msg/s
> Sending message: 24212, 5449 msg/s
> Sending message: 29331, 5119 msg/s
> Sending message: 35003, 5672 msg/s
> Sending message: 40666, 5663 msg/s
> Sending message: 46207, 5541 msg/s
> Sending message: 51914, 5707 msg/s
>
> Consumer, receiving only 500-1000 msgs/s, latency increasing, no messages
> being discarded:
> Received message: 1525, 1526 msgs/s, latency 21 < 197 < 439
> Received message: 3289, 1764 msgs/s, latency 350 < 624 < 937
> Received message: 4586, 1297 msgs/s, latency 948 < 1240 < 1595
> Received message: 6390, 1804 msgs/s, latency 1591 < 1983 < 2239
> Received message: 7947, 1557 msgs/s, latency 2235 < 2485 < 2784
> Received message: 8958, 1011 msgs/s, latency 3102 < 3345 < 3625
> Received message: 9464, 506 msgs/s, latency 4422 < 4451 < 4503
> Received message: 9970, 506 msgs/s, latency 5014 < 5061 < 5121
> Received message: 10470, 500 msgs/s, latency 5751 < 5812 < 5873
> Received message: 10971, 501 msgs/s, latency 6409 < 6466 < 6527
>
> Each row is a 1-second sampling period.
> Messages are simple incrementing ids.
> Latency is given as min < average < max, in milliseconds.
>
> So I'm kind of at the end of things to try to get this working... but I
> hope there's something simple I'm missing.
>
> Thanks again
>
> Ben
>
> From: gtully [via ActiveMQ]
> [mailto:ml-node+s2283324n4660890...@n4.nabble.com]
> Sent: 19 December 2012 13:34
> To: benj
> Subject: Re: ActiveMQ slow consumer policy
>
> > 1000 topics with a prefetch of 100 is 100,000 pending messages in memory.
> >
> > To get really eager discarding, try:
> >
> > <policyEntry topic=">" topicPrefetch="1">
> >   <subscriptionRecoveryPolicy>
> >     <lastImageSubscriptionRecoveryPolicy/>
> >   </subscriptionRecoveryPolicy>
> >   <pendingMessageLimitStrategy>
> >     <constantPendingMessageLimitStrategy limit="0"/>
> >   </pendingMessageLimitStrategy>
> >   ...
> >
> > so that it won't keep any messages pending when the consumer is not ready
> > to consume. This will reduce the dispatch workload on the topic and ensure
> > fast consumers get all messages while slow consumers get gaps.
> >
> > The lastImageSubscriptionRecoveryPolicy is only for retroactive
> > returning/new consumers; they get the current last state when they connect.
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/ActiveMQ-slow-consumer-policy-tp4660859p4660900.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.

--
http://redhat.com
http://blog.garytully.com
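P.S. To pair with the producer sketch above, a consumer sketch under the same assumptions (broker URL, topic name and the "sentAt" property are placeholders). It prints per-second throughput and min < avg < max latency in the same shape as your output, plus a count of gaps in the incrementing ids; once the limit="0" pending-message limit is really discarding for a slow subscriber, you'd expect that gaps counter to climb instead of the latency.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicLoadConsumer {
    public static void main(String[] args) throws Exception {
        // Assumed broker URL and topic name, matching the producer sketch.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createTopic("slow.consumer.test"));

        long lastId = -1;
        long received = 0;
        long gaps = 0;
        long minLatency = Long.MAX_VALUE, maxLatency = 0, totalLatency = 0;
        long windowStart = System.currentTimeMillis();

        while (true) {
            Message message = consumer.receive();
            long id = Long.parseLong(((TextMessage) message).getText());
            long latency = System.currentTimeMillis() - message.getLongProperty("sentAt");

            if (lastId >= 0 && id != lastId + 1) {
                gaps++; // ids were skipped: the broker discarded messages for this slow subscriber
            }
            lastId = id;
            received++;
            minLatency = Math.min(minLatency, latency);
            maxLatency = Math.max(maxLatency, latency);
            totalLatency += latency;

            Thread.sleep(1); // artificial per-message delay so this consumer stays slower than the producer

            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {
                System.out.printf("Received message: %d, %d msgs/s, latency %d < %d < %d, gaps %d%n",
                        id, received, minLatency, totalLatency / received, maxLatency, gaps);
                received = 0;
                gaps = 0;
                minLatency = Long.MAX_VALUE;
                maxLatency = 0;
                totalLatency = 0;
                windowStart = now;
            }
        }
    }
}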