I need to process events from thousands of logical streams (discriminated by
request.body.partitionKey), in order within each stream. That is, events from
a particular stream must be processed in strict order, while events from
different streams can be processed concurrently.
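To make the per-stream ordering concrete: as long as every event of a given stream is always routed to the same pipeline, a single-threaded pipeline preserves order automatically. A minimal sketch of such a mapping (class and method names are mine, purely illustrative):

```java
public class PartitionRouter {
    // Deterministically map a partitionKey to one of N pipelines so that every
    // event of a given stream lands on the same pipeline, preserving its order.
    public static int pipelineFor(String partitionKey, int pipelines) {
        // floorMod keeps the result non-negative even for negative hashCodes
        return Math.floorMod(partitionKey.hashCode(), pipelines);
    }
}
```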

To accomplish this in a multi-threaded fashion, I create 23 pipelines (one
thread each) which receive events from a sticky load balancer (single
thread):



The load balancer endpoint has a queue of about 70K events, while each
pipeline's queue holds 10K.
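For reference, a route along these lines might look like the following sketch (endpoint names, queue sizes, and the simple() expression are my assumptions, not taken from my actual code). Note that Camel's SEDA producer only blocks on a full queue when blockWhenFull=true is set; the default is to throw an IllegalStateException:

```java
// Illustrative sketch only -- a sticky load balancer fanning out to
// SEDA pipeline queues, with one single-threaded consumer per pipeline.
from("direct:events")
    .loadBalance()
        // sticky: the same partitionKey always goes to the same pipeline
        .sticky(simple("${body.partitionKey}"))
        .to("seda:pipeline1?size=10000&blockWhenFull=true",
            "seda:pipeline2?size=10000&blockWhenFull=true",
            // ... through ...
            "seda:pipeline23?size=10000&blockWhenFull=true");

from("seda:pipeline1?size=10000").process(new MyEventProcessor());
// ... one consumer route per pipeline ...
```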

During heavy processing I reach a condition where one pipeline's queue is
saturated and cannot accept any more data.  When this happens and the sticky
load balancer has an event for that pipeline, it blocks trying to put the
data into the queue, waiting for space to become available.  In the meantime,
other pipelines may be running empty, but since the main route is blocked
they are not getting additional events to process.
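The head-of-line blocking boils down to put() vs. offer() on a bounded queue. A small demonstration with plain java.util.concurrent (the class and the tiny capacities are mine, standing in for the 10K SEDA queues):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class FullQueueDemo {
    // A timed offer() on an already-full queue fails fast, whereas put()
    // would block the single dispatcher thread indefinitely.
    public static boolean offerToFullQueue() throws InterruptedException {
        ArrayBlockingQueue<String> saturated = new ArrayBlockingQueue<>(1);
        saturated.put("event-1");   // queue is now at capacity
        return saturated.offer("event-2", 10, TimeUnit.MILLISECONDS);
    }

    // Meanwhile an idle pipeline queue would accept work immediately.
    public static boolean offerToIdleQueue() {
        ArrayBlockingQueue<String> idle = new ArrayBlockingQueue<>(1);
        return idle.offer("event-2");
    }
}
```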

This causes the CPU to be underutilized and drops my performance significantly.

So, what I was thinking of doing (unless there is a better solution) is to
"peek" ahead in the queue for an event destined for a different pipeline,
pluck it out, and deliver it to a pipeline that is not blocked.
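One way to sketch this goal without actually reaching past the head of a shared queue (which is hard to do safely) is to make the dispatcher itself non-blocking: when a pipeline's queue is full, park the event in a small per-pipeline FIFO overflow buffer and keep dispatching. The buffer is always drained before any newer event for that pipeline is offered, so per-stream order still holds. A minimal sketch in plain java.util.concurrent (all names are mine, not Camel API; in practice the overflow buffer would need a bound):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NonBlockingDispatcher {
    private final BlockingQueue<String>[] pipelines;
    private final Deque<String>[] overflow;   // parked events, oldest first

    @SuppressWarnings("unchecked")
    public NonBlockingDispatcher(int n, int capacity) {
        pipelines = new BlockingQueue[n];
        overflow = new Deque[n];
        for (int i = 0; i < n; i++) {
            pipelines[i] = new ArrayBlockingQueue<>(capacity);
            overflow[i] = new ArrayDeque<>();
        }
    }

    // Never blocks: a full pipeline gets its event parked instead.
    public void dispatch(int pipeline, String event) {
        drain(pipeline);   // older parked events must go first to keep order
        if (!overflow[pipeline].isEmpty() || !pipelines[pipeline].offer(event)) {
            overflow[pipeline].addLast(event);
        }
    }

    // Move parked events into the pipeline queue while there is room.
    public void drain(int pipeline) {
        Deque<String> parked = overflow[pipeline];
        while (!parked.isEmpty() && pipelines[pipeline].offer(parked.peekFirst())) {
            parked.pollFirst();
        }
    }

    public BlockingQueue<String> queue(int pipeline) { return pipelines[pipeline]; }
    public int parkedCount(int pipeline) { return overflow[pipeline].size(); }
}
```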

Is there something like this already available in Camel?  Should I be using
a different pattern (other than sticky load balancer)?  What would be the
most efficient way of doing this?

In short, I want all 23 pipelines to have work to do, and the load balancer
not to get stuck when one pipeline's queue is full.

I could, of course, add ActiveMQ to the mix, or write the queues to disk;
however, that would add latency to the processing, so I would like to keep
the queues in memory.

Thanks
-AP_




--
View this message in context: 
http://camel.465427.n5.nabble.com/How-to-implement-a-peek-ahead-skip-processing-in-queue-tp5774068.html
Sent from the Camel - Users mailing list archive at Nabble.com.
