I think I’m in a weird edge case caused by a potential bug / design flaw.

I have a Java daemon that needs to process tasks with as much throughput as
possible.  It’s a thread-per-task model, with each box running one thread per
session and consumer.

This is required per ActiveMQ/JMS:

http://activemq.apache.org/multiple-consumers-on-a-queue.html
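
For concreteness, each worker thread looks roughly like the sketch below.
This is just an illustration of the model, not my actual code; the queue
name ("tasks") and the task handler are made up:

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    // Rough sketch of the thread-per-session-and-consumer model.
    public class ConsumerThread implements Runnable {

        private final Connection connection;

        public ConsumerThread(Connection connection) {
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                // each thread gets its own Session and MessageConsumer,
                // per the multiple-consumers-on-a-queue page above
                Session session =
                    connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                MessageConsumer consumer =
                    session.createConsumer(session.createQueue("tasks"));

                while (!Thread.currentThread().isInterrupted()) {
                    Message message = consumer.receive(1000);
                    if (message == null) continue;
                    processTask(message);     // the actual (CPU-heavy) work
                    message.acknowledge();
                }
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }

        private void processTask(Message message) {
            // placeholder for the real task processing
        }
    }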

The problem comes with prefetch sizing.

Since each consumer has its own prefetch window, I have to be careful with
how it’s sized.

Too high and I get slow consumers that prefetch too many messages and just
sit on them, preventing anything else from consuming them.

So if I had a 50k prefetch (silly, but just as an example) and a queue with
50k messages, a single consumer would prefetch all of them, thereby
preventing any other consumer from getting work.
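
For reference, I’m tuning the prefetch roughly like this (the broker URL
and the numbers are placeholders, not a recommendation):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.ActiveMQPrefetchPolicy;

    public class PrefetchConfig {

        // Connection-wide prefetch tuning via the connection factory.
        public static ActiveMQConnectionFactory factoryWithPrefetch(int queuePrefetch) {
            ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
            ActiveMQPrefetchPolicy policy = new ActiveMQPrefetchPolicy();
            policy.setQueuePrefetch(queuePrefetch);  // too high = hoarding, too low = starvation
            factory.setPrefetchPolicy(policy);
            return factory;
        }

        // Prefetch can also be set per destination with a destination option, e.g.:
        //   session.createConsumer(session.createQueue("tasks?consumer.prefetchSize=10"));
    }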

Too low and I *think* what’s happening is that the ActiveMQ connection
can’t service the queues fast enough to keep them prefetched (warm).  This
gives me a cyclical performance pattern that looks like a sine wave.  On
startup we prefetch everything, but then CPU spikes while processing my
tasks and the ActiveMQ connection gets choked.  Then the tasks finish up,
but nothing is prefetched, so they have to wait for ActiveMQ.  But by then
ActiveMQ has enough local CPU to get around to prefetching again.

… so one strategy could be (I think) to use prefetch eviction, which I
think is what this page is about:

http://activemq.apache.org/slow-consumer-handling.html

Am I right?  It’s hard to tell, though.
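
If I’m reading that page right, it’s describing the broker-side
abort-slow-consumer strategy, which I think would be configured something
like the sketch below (shown against an embedded broker; the class names
are my best guess at the policy API and the thresholds are invented):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.AbortSlowConsumerStrategy;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class SlowConsumerBrokerConfig {

        // Abort consumers that stay slow too long; normally configured in
        // activemq.xml, sketched here in Java.  The values are guesses.
        public static BrokerService brokerWithSlowConsumerHandling() throws Exception {
            AbortSlowConsumerStrategy strategy = new AbortSlowConsumerStrategy();
            strategy.setCheckPeriod(30000);      // how often to check for slow consumers
            strategy.setMaxSlowDuration(60000);  // abort after being slow this long
            strategy.setAbortConnection(false);  // abort the consumer, not the connection

            PolicyEntry entry = new PolicyEntry();
            entry.setQueue(">");                 // apply to all queues
            entry.setSlowConsumerStrategy(strategy);

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(entry);

            BrokerService broker = new BrokerService();
            broker.setDestinationPolicy(policyMap);
            return broker;
        }
    }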

This would be resolved if JMS handled threading better; I wouldn’t need so
many consumers.  Another idea is to use ONE thread for all the ActiveMQ
message handling, dispatch the messages to local workers, and then ack them
on that same thread (rough sketch below).  That seems like a pain, though.
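
Roughly what I have in mind: one thread owns the session, does all the
receives and acks, and the workers only ever touch the message payload.
All of the names, the batch size, and the pool size below are made up:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import javax.jms.Connection;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    public class SingleThreadDispatcher implements Runnable {

        private static final int BATCH = 100;

        private final Connection connection;
        private final ExecutorService workers = Executors.newFixedThreadPool(16);

        public SingleThreadDispatcher(Connection connection) {
            this.connection = connection;
        }

        @Override
        public void run() {
            try {
                Session session =
                    connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                MessageConsumer consumer =
                    session.createConsumer(session.createQueue("tasks"));

                while (!Thread.currentThread().isInterrupted()) {
                    List<Future<?>> inFlight = new ArrayList<>();
                    Message last = null;

                    // pull a batch off the consumer and dispatch to local workers
                    for (int i = 0; i < BATCH; i++) {
                        Message message = consumer.receive(1000);
                        if (message == null) break;
                        last = message;
                        inFlight.add(workers.submit(() -> processTask(message)));
                    }

                    // wait for the batch, then ack on this same thread;
                    // CLIENT_ACKNOWLEDGE covers everything received so far
                    for (Future<?> f : inFlight) f.get();
                    if (last != null) last.acknowledge();
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        private void processTask(Message message) {
            // placeholder for the real task
        }
    }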

And this is yet another ActiveMQ complexity I have to swallow in my app…

Hopefully ActiveMQ 6.0 can address some of these issues with JMS 2.0
goodness… but I haven’t looked into it enough to know for sure.


-- 

Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
