Hi Marc,

Did you get an answer to this question? I'm wondering how to keep multiple 
consumers on a single machine running long-running tasks without blocking 
each other due to the shared executor pool. I've noticed that I cannot 
consume two queues in parallel if one of the queues gets tied up with 
long-running blocking calls.

AK

On Tuesday, August 6, 2013 12:33:00 PM UTC-4, Marc Limotte wrote:
>
> Hi,
>
> I have a few related questions having to do with parallelism and 
> performance of the consumer:
>
> 1. 
> When using prefetch of 1 (because I have a worker pool and I want 
> fair-load balancing), will the prefetch setting restrict each consumer to 
> processing 1 message at a time, even if there are multiple threads in the 
> subscriber pool?
>
> I create a channel with prefetch 1:
>   (lb/qos ch 1)
>
> Declare a queue and bind using that channel:
>   (lq/declare ch queue-name :exclusive false :auto-delete false)
>   (lq/bind ch my-q "amq.direct" :routing-key my-q)
>
> And subscribe a consumer:
>   (lc/subscribe ch my-q my-handler)
>
> I would expect that each thread in the executor pool used by subscribe 
> would fetch one message, so that multiple messages may be processed on a 
> single node at once.  I'm afraid that it's only processing one message per 
> node, though.  What should I expect?
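>
> For what it's worth, one variant I've considered (just a rough sketch -- 
> I'm assuming langohr.channel is aliased as lch here) is to subscribe 
> several consumers, each on its own channel, so that up to n messages can 
> be in flight on one node at once:
>
>   ;; sketch: one channel (and its own prefetch) per consumer
>   (dotimes [_ n]
>     (let [ch (lch/open conn)]
>       (lb/qos ch 1)
>       (lc/subscribe ch my-q my-handler)))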
>
> 2. 
> Looking through the source code for lc/subscribe, it looks like it uses a 
> default executor pool for the consumers.  I'm not sure, but I believe the 
> number of threads in the default pool is based on the number of cores in 
> the machine.  Is it possible to configure the pool that is created for 
> this?  In particular, I'd like to create more threads for one of my 
> consumers, because it does some I/O and I believe we would get better 
> throughput with more simultaneous requests.
>
> Tracing through the code, I think I found what I want here, with the 
> "executor" setting given to: com.novemberain.langohr.Connection#init.  Am I 
> on the right track?
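>
> Something like this is what I had in mind (a sketch only -- I'm assuming 
> the connection settings accept a custom ExecutorService under an 
> :executor key, and that langohr.core is aliased as rmq):
>
>   (import '(java.util.concurrent Executors))
>   ;; assumption: pass a larger fixed pool for the I/O-heavy consumer
>   (def conn (rmq/connect {:executor (Executors/newFixedThreadPool 16)}))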
>
> 3. 
> Finally, I have a use case where nodes in a worker pool have some caching, 
> so I'd like to preferentially route a message to a particular node in the 
> worker pool based on a set of keys.  Are there any examples of the best 
> approach for this scenario?  A naive approach might be to create one 
> worker per key hash -- e.g., if I took the keys and hashed them together 
> to come up with a single digit 0-9, I could then create 10 workers and 
> statically route each of 0-9 to one of those 10.
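>
> Concretely, that naive scheme might look something like this (the bucket 
> fn and "worker." routing-key naming are hypothetical):
>
>   ;; hash the key set into one of 10 static buckets and use the
>   ;; bucket as the routing key
>   (defn bucket [ks] (mod (hash (vec ks)) 10))
>   (lb/publish ch "amq.direct" (str "worker." (bucket [k1 k2])) payload)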
>
> But this has a lot of drawbacks: load balancing, failover, and the dev/ops 
> effort to set up and coordinate.  I haven't totally thought it through, but 
> I think I want some sort of affinity-based routing.  I'm hoping there is 
> some example or best practice for how to do this with RabbitMQ.
>
> What I'm using: 
>
>    - Langohr 1.1.0
>    - Ubuntu Lucid 
>    - java version "1.6.0_27"
>    OpenJDK Runtime Environment (IcedTea6 1.12.5) 
>    (6b27-1.12.5-0ubuntu0.10.04.1)
>    OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
>
>
> I hope this isn't too much all at once.  Thanks for your help and for an 
> excellent client library.
>  
> marc
> The Climate Corporation
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"clojure-rabbitmq" group.
