I don't know if this is the right group to post to, but I couldn't
find any general Scala or Scala/Actors group, and this is the most
active Scala group I know of.

I have a server that consumes AMQP messages and handles them in
dedicated workers (Actors). The design is this:

1. One dispatcher, which is an event-based actor (nested reacts):
   a. First it waits for an event from a worker saying that the worker is ready.
   b. When a worker is ready, it uses a nested react to get an AMQP message
and forwards it to that worker:
react {
   case WorkerReady(worker: Actor) => react {
      // forward the next AMQP message to the ready worker, ack it, then loop
      case msg => worker ! msg; channelHandler ! 'ack; act()
   }
}
2. A few workers that are thread-based actors (each one sends WorkerReady to
the dispatcher; see the sketch after this list).
3. The RabbitMQ Java library; handleDelivery (from basicConsume) runs on the
library's Connection thread and sends the message to the dispatcher (also
sketched below).
4. There is also one thread-based actor per dispatcher that handles the
AMQP channel, but it's not important here (it receives channelHandler ! 'ack).
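
In case it helps, here is roughly what the worker and consumer side look
like (simplified and renamed; Delivery is just a placeholder for my real
message class, doWork stands for the actual processing, and I'm assuming
the DefaultConsumer base class from the RabbitMQ Java client):

import scala.actors.Actor
import com.rabbitmq.client.{AMQP, Channel, DefaultConsumer, Envelope}

case class WorkerReady(worker: Actor)
case class Delivery(deliveryTag: Long, body: Array[Byte])  // placeholder payload

// 2. Thread-based worker: announces readiness, then blocks in receive,
//    so it keeps its own thread instead of going through the event-based pool.
class Worker(dispatcher: Actor) extends Actor {
  def act() {
    while (true) {
      dispatcher ! WorkerReady(this)     // "I can take one message"
      receive {
        case d: Delivery => doWork(d)    // actual processing omitted
      }
    }
  }
  private def doWork(d: Delivery) { /* ... */ }
}

// 3. RabbitMQ consumer: handleDelivery runs on the connection thread and
//    only forwards the delivery to the dispatcher, nothing else.
class DispatchingConsumer(channel: Channel, dispatcher: Actor)
    extends DefaultConsumer(channel) {
  override def handleDelivery(consumerTag: String, envelope: Envelope,
                              properties: AMQP.BasicProperties, body: Array[Byte]) {
    dispatcher ! Delivery(envelope.getDeliveryTag, body)
  }
}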

When I run a load test, I see that the number of threads used by the
dispatcher keeps growing (up to 200 so far) and performance goes down.
Most of them are always in the "blocked" state.
The dispatcher's actor queue should never hold more than a few messages,
because I use QoS = 5 on the channel and send the ack only when a worker
picks up a message, and I have only a few workers.
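
For reference, the channel setup behind that QoS/ack claim is essentially
this (sketch only; the queue name is made up, and it reuses the
DispatchingConsumer sketched above):

import scala.actors.Actor
import com.rabbitmq.client.Channel

object ChannelSetup {
  // QoS = 5: at most 5 unacked messages in flight per consumer.
  // autoAck = false: the ack is sent only after a worker takes the message
  // (the channelHandler actor is the one that ends up calling
  // channel.basicAck(deliveryTag, false) when the dispatcher sends it 'ack).
  def setup(channel: Channel, dispatcher: Actor) {
    channel.basicQos(5)
    channel.basicConsume("work-queue", false,
                         new DispatchingConsumer(channel, dispatcher))
  }
}
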
When I look at the heap, I can see that more than 80% (up to 95%) of it is
"scala.actors.FJTaskRunner$VolatileTaskRef" objects, and their number is
growing quickly (over 1,000,000 right now).

What could be causing so many threads to be spawned, and why are there so
many 'scala.actors.FJTaskRunner$VolatileTaskRef' objects (the GC doesn't
collect them)? Is there a way to limit the size of the thread pool used by
event-based actors? And is there a way to monitor the size of an actor's queue?
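
(To make the thread pool question concrete: my assumption is that the pool
can be capped with the actors.corePoolSize / actors.maxPoolSize system
properties, set before the first actor starts, but I haven't verified that
this is the supported way, so please correct me if it isn't:)

object Main {
  def main(args: Array[String]) {
    // My assumption: the actors scheduler reads these properties once at
    // startup, so they must be set before the first actor is created, or be
    // passed on the JVM command line as
    //   -Dactors.corePoolSize=4 -Dactors.maxPoolSize=8
    System.setProperty("actors.corePoolSize", "4")
    System.setProperty("actors.maxPoolSize", "8")
    // ... start dispatcher, workers and the AMQP consumer here ...
  }
}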
