Bruce, thanks again for all of your answers; the light is starting to dawn on
me.
The information about thread pools is, I think, what I will need.

To clarify my problem (hopefully answering your questions):

The general problem I am trying to solve is asynchronous communication.
The external client wants to drop off an HTTP message and not wait for a
reply (other than an acknowledgement that ServiceMix received it).
The main reason for this is that the backend is a CPU/time-consuming process
and the client does not want to wait for the processing to finish.
This final backend process is a ServiceMix component.

Our solution for this was to put a JMS queue in the middle of the process,
so when the client posts a message, ServiceMix can simply enqueue it and
return to the client (very quickly).
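
To make that enqueue-and-return step concrete, here is a rough plain-JMS
sketch of what the first hop does. This is not our actual servicemix-http /
servicemix-jms endpoint configuration; the broker URL tcp://localhost:61616
and the queue name backend.work are just placeholder names for illustration.

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class EnqueueAndReturn {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL and queue name, for illustration only.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("backend.work");
            MessageProducer producer = session.createProducer(queue);
            // Persistent delivery so the message survives a broker restart
            // while it waits for the slow backend.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);

            TextMessage message =
                    session.createTextMessage("<payload from http client/>");
            producer.send(message);
            // At this point the HTTP side can reply to the client;
            // the heavy processing happens later, off this thread.
        } finally {
            connection.close();
        }
    }
}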

This queue is then serviced by a ServiceMix JMS consumer endpoint, which
reads messages off the queue and sends them to the final ServiceMix endpoint
for processing.

Our concern is that if clients enqueue too many messages too quickly, then
ServiceMix will try to process too many of them at once.  This will overrun
the CPU on the single ServiceMix JVM and leave no resources for the HTTP
consumer to do its job.

We thought we could alleviate this in two ways (probably a combination of
both):
1) Limit the number of messages any one ServiceMix instance will work on at a
time (it looks like this can be done using the thread pools; see the sketch
after this list).
2) Have another ServiceMix instance running on a separate box/JVM that is
looking at the same queues and processing them (which looks like it can be
achieved by clustering ActiveMQ).
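
On point 1, here is a rough sketch of the throttling idea in plain JMS rather
than the ServiceMix ThreadPool configuration itself (same placeholder broker
URL and queue name as above; the pool size of 4 and queue bound of 10 are
arbitrary example numbers):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class BoundedBackendConsumer {
    public static void main(String[] args) throws Exception {
        // Keep the broker from pushing a large backlog to this consumer:
        // with queuePrefetch=1 the broker hands over one message at a time,
        // so another JVM reading the same queue can share the load.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

        // At most 4 backend jobs in flight on this JVM; tune to the CPU budget.
        ExecutorService workers = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                new ThreadPoolExecutor.CallerRunsPolicy());

        Connection connection = factory.createConnection();
        connection.start();
        Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("backend.work");
        MessageConsumer consumer = session.createConsumer(queue);

        while (true) {
            final Message message = consumer.receive();
            workers.execute(new Runnable() {
                public void run() {
                    try {
                        String body = ((TextMessage) message).getText();
                        // hand "body" to the slow backend here
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

The CallerRunsPolicy means that when the pool is saturated, the receiving
thread runs the job itself, which naturally slows the rate at which messages
are pulled off the queue.  And since a JMS queue is point-to-point, a second
JVM running the same consumer (point 2) simply competes for whatever messages
this one has not yet taken.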


Is there another solution to the async problem and/or to limiting the
resources any single endpoint can use within ServiceMix?

I may also be totally off base; if so, please let me know.

Thanks again
-Kevin