Re: [akka-user] Re: Cluster actors and parallelism

2016-08-26 Thread Patrik Nordwall
You could have only one routee per node that creates child actors and
delegates the actual work to them.
Another flexible and dynamic approach is to use Distributed Pub Sub as a
"router": http://doc.akka.io/docs/akka/2.4/scala/distributed-pub-sub.html

On Thu, Aug 25, 2016 at 11:20 PM, Cosmin Marginean 
wrote:

> Thanks Patrik. I was hoping there would be a more flexible (and less
> hardcoded way to do this)
>
> On 25 Aug 2016, at 18:41, Patrik Nordwall 
> wrote:
>
> Start more worker actors on each node, each with a different name, let's
> say worker1, worker2, worker3. Then in the config you define all 3 in the
> paths
>
> "paths": ["/user/worker1", "/user/worker2", "/user/worker3"]
>
> /Patrik
>
> On Thu, Aug 25, 2016 at 4:41 PM, Cosmin Marginean 
> wrote:
>
>> Thanks, I will try to do further diagnosis, however this is a last
>> resort. I believe I would like to understand if this kind of use case is
>> something that Akka would natively be delivering in one form or another and
>> / or if I'm missing a trick in terms of correct router/etc that I'm using
>> here.
>>
>> Thank you
>> Cosmin
>>
>> On Thursday, August 25, 2016 at 3:14:38 PM UTC+1, Justin du coeur wrote:
>>>
>>> Smells like the problem is somewhere in the router?  I mean, if you're
>>> only processing one message at a time, that suggests that you've only got
>>> one instance of the worker Actor running, instead of the 100 that it's
>>> trying to declare.
>>>
>>> I don't use routers, so I can't speak to this config, but personally I
>>> would turn on a ton of logging, to see how many worker Actors are actually
>>> being created and how the requests are being routed to them...
>>>
>>> On Thu, Aug 25, 2016 at 9:17 AM, Cosmin Marginean 
>>> wrote:
>>>
 Hi Muthu

 I've explored these but they're not exactly what I'm after. The use
 case is as follows: we have let's say 5 nodes, and 3 of them serve as
 "workers". These 3 should be processing a series of events/messages in
 parallel.

 We thus want some "load balancing" (consistent hashing is rigid and not
 suited for this IMO) whereby if we send 90 messages, they get (reasonably)
 equally distributed between the 3 nodes (~30 each for example).

 Going further, we want on each of the 3 worker nodes to have a level of
 parallel processing, so each node would be able to process 30 messages in
 parallel let's say, and thus making all of this process (almost) fully
 parallelised in this example.

 What happens now with one node is that we are sending a few thousand
 messages and only one message is processed at a time (single threaded
 like). This is something I couldn't figure out how to overcome.

 I've also configured the dispatcher to parallelise manically, so that
 is clearly "not it". Below the complete config (seed nodes etc is added
 dynamically from somewhere else)

 {
   "main": {
     "akka": {
       "remote": {
         "log-remote-lifecycle-events": "on"
       },
       "cluster": {
         "auto-down-unreachable-after": "10s"
       },
       "actor": {
         "provider": "akka.cluster.ClusterActorRefProvider",
         "default-dispatcher": {
           "type": "Dispatcher",
           "executor": "fork-join-executor",
           "fork-join-executor": {
             "parallelism-min": 16,
             "parallelism-factor": 32,
             "parallelism-max": 512
           },
           "throughput": 1
         },
         "deployment": {
           "/frontend/backend": {
             "router": "round-robin-group",
             "nr-of-instances": 100,
             "routees": {
               "paths": ["/user/worker"]
             },
             "cluster": {
               "enabled": "on",
               "allow-local-routees": "off",
               "use-role": "worker"
             }
           }
         }
       }
     }
   }
 }


 Thank you
 Cosmin

 On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran
 Kothandaraman wrote:
>
> Hi Cosmin,
>
> Are these what you are looking for
>
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Con
> sistentHashingPool_and_ConsistentHashingGroup OR
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Bro
> adcastPool_and_BroadcastGroup
>
> Regards
> Muthu
>
>
>

Re: [akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Cosmin Marginean
Thanks Patrik. I was hoping there would be a more flexible (and less hardcoded) 
way to do this.

> On 25 Aug 2016, at 18:41, Patrik Nordwall  wrote:
> 
> Start more worker actors on each node, each with a different name, let's say 
> worker1, worker2, worker3. Then in the config you define all 3 in the paths
> 
> "paths": ["/user/worker1", "/user/worker2", "/user/worker3"]
> 
> /Patrik
> 
>> On Thu, Aug 25, 2016 at 4:41 PM, Cosmin Marginean  
>> wrote:
>> Thanks, I will try to do further diagnosis, however this is a last resort. I 
>> believe I would like to understand if this kind of use case is something 
>> that Akka would natively be delivering in one form or another and / or if 
>> I'm missing a trick in terms of correct router/etc that I'm using here.
>> 
>> Thank you
>> Cosmin
>> 
>>> On Thursday, August 25, 2016 at 3:14:38 PM UTC+1, Justin du coeur wrote:
>>> Smells like the problem is somewhere in the router?  I mean, if you're only 
>>> processing one message at a time, that suggests that you've only got one 
>>> instance of the worker Actor running, instead of the 100 that it's trying 
>>> to declare.
>>> 
>>> I don't use routers, so I can't speak to this config, but personally I 
>>> would turn on a ton of logging, to see how many worker Actors are actually 
>>> being created and how the requests are being routed to them...
>>> 
 On Thu, Aug 25, 2016 at 9:17 AM, Cosmin Marginean  
 wrote:
 Hi Muthu
 
 I've explored these but they're not exactly what I'm after. The use case 
 is as follows: we have let's say 5 nodes, and 3 of them serve as 
 "workers". These 3 should be processing a series of events/messages in 
 parallel.
 
 We thus want some "load balancing" (consistent hashing is rigid and not 
 suited for this IMO) whereby if we send 90 messages, they get (reasonably) 
 equally distributed between the 3 nodes (~30 each for example).
 
 Going further, we want on each of the 3 worker nodes to have a level of 
 parallel processing, so each node would be able to process 30 messages in 
 parallel let's say, and thus making all of this process (almost) fully 
 parallelised in this example.
 
 What happens now with one node is that we are sending a few thousand 
 messages and only one message is processed at a time (single threaded 
 like). This is something I couldn't figure out how to overcome.
 
 I've also configured the dispatcher to parallelise manically, so that is 
 clearly "not it". Below the complete config (seed nodes etc is added 
 dynamically from somewhere else)
 
 {
   "main": {
     "akka": {
       "remote": {
         "log-remote-lifecycle-events": "on"
       },
       "cluster": {
         "auto-down-unreachable-after": "10s"
       },
       "actor": {
         "provider": "akka.cluster.ClusterActorRefProvider",
         "default-dispatcher": {
           "type": "Dispatcher",
           "executor": "fork-join-executor",
           "fork-join-executor": {
             "parallelism-min": 16,
             "parallelism-factor": 32,
             "parallelism-max": 512
           },
           "throughput": 1
         },
         "deployment": {
           "/frontend/backend": {
             "router": "round-robin-group",
             "nr-of-instances": 100,
             "routees": {
               "paths": ["/user/worker"]
             },
             "cluster": {
               "enabled": "on",
               "allow-local-routees": "off",
               "use-role": "worker"
             }
           }
         }
       }
     }
   }
 }
 
 Thank you
 Cosmin
 
> On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran 
> Kothandaraman wrote:
> Hi Cosmin, 
> 
> Are these what you are looking for 
> 
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#ConsistentHashingPool_and_ConsistentHashingGroup
>  OR
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#BroadcastPool_and_BroadcastGroup
> 
> Regards
> Muthu
> 
> 
> 
> 
>> On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:
>> Hello everyone
>> 
>> We have a classic scenario with a cluster with 2 tiers where one is a 
>> "worker" that we offload heavy processing to. We wired Akka clustering 
>> and have the following setup for a remote 

Re: [akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Patrik Nordwall
Start more worker actors on each node, each with a different name, let's
say worker1, worker2, worker3. Then in the config you define all 3 in the
paths

"paths": ["/user/worker1", "/user/worker2", "/user/worker3"]

/Patrik
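
As a quick sketch of what that looks like at startup on each worker node (assuming a Worker actor class like the one the /user/worker path already points at; the system name is just a placeholder):

import akka.actor.{ActorSystem, Props}

val system = ActorSystem("ClusterSystem")

// Registers /user/worker1, /user/worker2 and /user/worker3,
// matching the "paths" list in the router deployment above.
(1 to 3).foreach { i =>
  system.actorOf(Props[Worker], s"worker$i")
}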

On Thu, Aug 25, 2016 at 4:41 PM, Cosmin Marginean 
wrote:

> Thanks, I will try to do further diagnosis, however this is a last resort.
> I believe I would like to understand if this kind of use case is something
> that Akka would natively be delivering in one form or another and / or if
> I'm missing a trick in terms of correct router/etc that I'm using here.
>
> Thank you
> Cosmin
>
> On Thursday, August 25, 2016 at 3:14:38 PM UTC+1, Justin du coeur wrote:
>>
>> Smells like the problem is somewhere in the router?  I mean, if you're
>> only processing one message at a time, that suggests that you've only got
>> one instance of the worker Actor running, instead of the 100 that it's
>> trying to declare.
>>
>> I don't use routers, so I can't speak to this config, but personally I
>> would turn on a ton of logging, to see how many worker Actors are actually
>> being created and how the requests are being routed to them...
>>
>> On Thu, Aug 25, 2016 at 9:17 AM, Cosmin Marginean 
>> wrote:
>>
>>> Hi Muthu
>>>
>>> I've explored these but they're not exactly what I'm after. The use case
>>> is as follows: we have let's say 5 nodes, and 3 of them serve as "workers".
>>> These 3 should be processing a series of events/messages in parallel.
>>>
>>> We thus want some "load balancing" (consistent hashing is rigid and not
>>> suited for this IMO) whereby if we send 90 messages, they get (reasonably)
>>> equally distributed between the 3 nodes (~30 each for example).
>>>
>>> Going further, we want on each of the 3 worker nodes to have a level of
>>> parallel processing, so each node would be able to process 30 messages in
>>> parallel let's say, and thus making all of this process (almost) fully
>>> parallelised in this example.
>>>
>>> What happens now with one node is that we are sending a few thousand
>>> messages and only one message is processed at a time (single threaded
>>> like). This is something I couldn't figure out how to overcome.
>>>
>>> I've also configured the dispatcher to parallelise manically, so that is
>>> clearly "not it". Below the complete config (seed nodes etc is added
>>> dynamically from somewhere else)
>>>
>>> {
>>> "main": {
>>> "akka": {
>>> "remote": {
>>> "log-remote-lifecycle-events": "on"
>>> },
>>> "cluster": {
>>> "auto-down-unreachable-after": "10s"
>>> },
>>> "actor": {
>>> "provider": "akka.cluster.ClusterActorRefProvider",
>>> "default-dispatcher": {
>>> "type": "Dispatcher",
>>> "executor": "fork-join-executor",
>>> "fork-join-executor": {
>>> "parallelism-min": 16,
>>> "parallelism-factor": 32,
>>> "parallelism-max": 512
>>> },
>>> "throughput": 1
>>> },
>>> "deployment": {
>>> "/frontend/backend": {
>>> "router": "round-robin-group",
>>> "nr-of-instances": 100,
>>> "routees": {
>>> "paths": ["/user/worker"]
>>> },
>>> "cluster": {
>>> "enabled": "on",
>>> "allow-local-routees": "off",
>>> "use-role": "worker"
>>> }
>>> }
>>> }
>>> }
>>> }
>>> }
>>> }
>>>
>>>
>>> Thank you
>>> Cosmin
>>>
>>> On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran
>>> Kothandaraman wrote:

 Hi Cosmin,

 Are these what you are looking for

 http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Con
 sistentHashingPool_and_ConsistentHashingGroup OR
 http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Bro
 adcastPool_and_BroadcastGroup

 Regards
 Muthu




 On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:
>
> Hello everyone
>
> We have a classic scenario with a cluster with 2 tiers where one is a
> "worker" that we offload heavy processing to. We wired Akka clustering and
> have the following setup for a remote actor that is to be executed only on
> the worker tier.
>
> "/frontend/backend": {
> "router": "round-robin-group",
> "routees": {
> "paths": ["/user/worker"]
> },
> "cluster": {
> "enabled": "on",
> "allow-local-routees": "off",
> "use-role": 

Re: [akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Cosmin Marginean
Thanks, I will try to do further diagnosis, though that is a last resort. 
Mainly I would like to understand whether this kind of use case is something 
Akka delivers natively in one form or another, and/or whether I'm missing a 
trick in the router configuration I'm using here.

Thank you
Cosmin

On Thursday, August 25, 2016 at 3:14:38 PM UTC+1, Justin du coeur wrote:
>
> Smells like the problem is somewhere in the router?  I mean, if you're 
> only processing one message at a time, that suggests that you've only got 
> one instance of the worker Actor running, instead of the 100 that it's 
> trying to declare.
>
> I don't use routers, so I can't speak to this config, but personally I 
> would turn on a ton of logging, to see how many worker Actors are actually 
> being created and how the requests are being routed to them...
>
> On Thu, Aug 25, 2016 at 9:17 AM, Cosmin Marginean  > wrote:
>
>> Hi Muthu
>>
>> I've explored these but they're not exactly what I'm after. The use case 
>> is as follows: we have let's say 5 nodes, and 3 of them serve as "workers". 
>> These 3 should be processing a series of events/messages in parallel.
>>
>> We thus want some "load balancing" (consistent hashing is rigid and not 
>> suited for this IMO) whereby if we send 90 messages, they get (reasonably) 
>> equally distributed between the 3 nodes (~30 each for example).
>>
>> Going further, we want on each of the 3 worker nodes to have a level of 
>> parallel processing, so each node would be able to process 30 messages in 
>> parallel let's say, and thus making all of this process (almost) fully 
>> parallelised in this example.
>>
>> What happens now with one node is that we are sending a few thousand 
>> messages and only one message is processed at a time (single threaded 
>> like). This is something I couldn't figure out how to overcome.
>>
>> I've also configured the dispatcher to parallelise manically, so that is 
>> clearly "not it". Below the complete config (seed nodes etc is added 
>> dynamically from somewhere else)
>>
>> {
>> "main": {
>> "akka": {
>> "remote": {
>> "log-remote-lifecycle-events": "on"
>> },
>> "cluster": {
>> "auto-down-unreachable-after": "10s"
>> },
>> "actor": {
>> "provider": "akka.cluster.ClusterActorRefProvider",
>> "default-dispatcher": {
>> "type": "Dispatcher",
>> "executor": "fork-join-executor",
>> "fork-join-executor": {
>> "parallelism-min": 16,
>> "parallelism-factor": 32,
>> "parallelism-max": 512
>> },
>> "throughput": 1
>> },
>> "deployment": {
>> "/frontend/backend": {
>> "router": "round-robin-group",
>> "nr-of-instances": 100,
>> "routees": {
>> "paths": ["/user/worker"]
>> },
>> "cluster": {
>> "enabled": "on",
>> "allow-local-routees": "off",
>> "use-role": "worker"
>> }
>> }
>> }
>> }
>> }
>> }
>> }
>>
>>
>> Thank you
>> Cosmin
>>
>> On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran 
>> Kothandaraman wrote:
>>>
>>> Hi Cosmin, 
>>>
>>> Are these what you are looking for 
>>>
>>>
>>> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#ConsistentHashingPool_and_ConsistentHashingGroup
>>>  
>>> OR
>>>
>>> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#BroadcastPool_and_BroadcastGroup
>>>
>>> Regards
>>> Muthu
>>>
>>>
>>>
>>>
>>> On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:

 Hello everyone

 We have a classic scenario with a cluster with 2 tiers where one is a 
 "worker" that we offload heavy processing to. We wired Akka clustering and 
 have the following setup for a remote actor that is to be executed only on 
 the worker tier.

 "/frontend/backend": {
 "router": "round-robin-group",
 "routees": {
 "paths": ["/user/worker"]
 },
 "cluster": {
 "enabled": "on",
 "allow-local-routees": "off",
 "use-role": "worker"
 }
 }


 This works fine and message gets processed in the worker accordingly, 
 however we're interested to understand how to control the parallelism at 
 that level. More precisely, we'd want each worker node to process a series 
 of messages in parallel rather than one at a time as it does now.
 Any ideas?

 Thank you
 Cosmin


Re: [akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Justin du coeur
Smells like the problem is somewhere in the router?  I mean, if you're only
processing one message at a time, that suggests that you've only got one
instance of the worker Actor running, instead of the 100 that it's trying
to declare.

I don't use routers, so I can't speak to this config, but personally I
would turn on a ton of logging, to see how many worker Actors are actually
being created and how the requests are being routed to them...
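
A minimal sketch of turning that logging on (these are standard Akka debug settings layered on top of the existing config; the system name "ClusterSystem" is just a placeholder):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Lifecycle logging prints every actor creation/restart/stop at DEBUG level,
// which makes it easy to count how many worker actors actually start.
val debugConf = ConfigFactory.parseString(
  """
  akka.loglevel = "DEBUG"
  akka.actor.debug.lifecycle = on
  akka.actor.debug.unhandled = on
  """)

val system = ActorSystem("ClusterSystem", debugConf.withFallback(ConfigFactory.load()))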

On Thu, Aug 25, 2016 at 9:17 AM, Cosmin Marginean 
wrote:

> Hi Muthu
>
> I've explored these but they're not exactly what I'm after. The use case
> is as follows: we have let's say 5 nodes, and 3 of them serve as "workers".
> These 3 should be processing a series of events/messages in parallel.
>
> We thus want some "load balancing" (consistent hashing is rigid and not
> suited for this IMO) whereby if we send 90 messages, they get (reasonably)
> equally distributed between the 3 nodes (~30 each for example).
>
> Going further, we want on each of the 3 worker nodes to have a level of
> parallel processing, so each node would be able to process 30 messages in
> parallel let's say, and thus making all of this process (almost) fully
> parallelised in this example.
>
> What happens now with one node is that we are sending a few thousand
> messages and only one message is processed at a time (single threaded
> like). This is something I couldn't figure out how to overcome.
>
> I've also configured the dispatcher to parallelise manically, so that is
> clearly "not it". Below the complete config (seed nodes etc is added
> dynamically from somewhere else)
>
> {
> "main": {
> "akka": {
> "remote": {
> "log-remote-lifecycle-events": "on"
> },
> "cluster": {
> "auto-down-unreachable-after": "10s"
> },
> "actor": {
> "provider": "akka.cluster.ClusterActorRefProvider",
> "default-dispatcher": {
> "type": "Dispatcher",
> "executor": "fork-join-executor",
> "fork-join-executor": {
> "parallelism-min": 16,
> "parallelism-factor": 32,
> "parallelism-max": 512
> },
> "throughput": 1
> },
> "deployment": {
> "/frontend/backend": {
> "router": "round-robin-group",
> "nr-of-instances": 100,
> "routees": {
> "paths": ["/user/worker"]
> },
> "cluster": {
> "enabled": "on",
> "allow-local-routees": "off",
> "use-role": "worker"
> }
> }
> }
> }
> }
> }
> }
>
>
> Thank you
> Cosmin
>
> On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran
> Kothandaraman wrote:
>>
>> Hi Cosmin,
>>
>> Are these what you are looking for
>>
>> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Con
>> sistentHashingPool_and_ConsistentHashingGroup OR
>> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#Bro
>> adcastPool_and_BroadcastGroup
>>
>> Regards
>> Muthu
>>
>>
>>
>>
>> On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:
>>>
>>> Hello everyone
>>>
>>> We have a classic scenario with a cluster with 2 tiers where one is a
>>> "worker" that we offload heavy processing to. We wired Akka clustering and
>>> have the following setup for a remote actor that is to be executed only on
>>> the worker tier.
>>>
>>> "/frontend/backend": {
>>> "router": "round-robin-group",
>>> "routees": {
>>> "paths": ["/user/worker"]
>>> },
>>> "cluster": {
>>> "enabled": "on",
>>> "allow-local-routees": "off",
>>> "use-role": "worker"
>>> }
>>> }
>>>
>>>
>>> This works fine and message gets processed in the worker accordingly,
>>> however we're interested to understand how to control the parallelism at
>>> that level. More precisely, we'd want each worker node to process a series
>>> of messages in parallel rather than one at a time as it does now.
>>> Any ideas?
>>>
>>> Thank you
>>> Cosmin
>>>

[akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Cosmin Marginean
Hi Muthu

I've explored these but they're not exactly what I'm after. The use case is 
as follows: we have, let's say, 5 nodes, and 3 of them serve as "workers". 
These 3 should process a series of events/messages in parallel.

We thus want some "load balancing" (consistent hashing is rigid and not 
suited for this IMO) whereby if we send 90 messages, they get (reasonably) 
equally distributed between the 3 nodes (~30 each for example).

Going further, we want each of the 3 worker nodes to have a level of 
parallel processing, so each node would be able to process, say, 30 messages 
in parallel, making the whole process (almost) fully parallelised in this 
example.

What happens now with one node is that we send a few thousand messages and 
only one message is processed at a time (as if single-threaded). This is 
something I couldn't figure out how to overcome.

I've also configured the dispatcher to parallelise aggressively, so that is 
clearly "not it". Below is the complete config (seed nodes etc. are added 
dynamically elsewhere):

{
  "main": {
    "akka": {
      "remote": {
        "log-remote-lifecycle-events": "on"
      },
      "cluster": {
        "auto-down-unreachable-after": "10s"
      },
      "actor": {
        "provider": "akka.cluster.ClusterActorRefProvider",
        "default-dispatcher": {
          "type": "Dispatcher",
          "executor": "fork-join-executor",
          "fork-join-executor": {
            "parallelism-min": 16,
            "parallelism-factor": 32,
            "parallelism-max": 512
          },
          "throughput": 1
        },
        "deployment": {
          "/frontend/backend": {
            "router": "round-robin-group",
            "nr-of-instances": 100,
            "routees": {
              "paths": ["/user/worker"]
            },
            "cluster": {
              "enabled": "on",
              "allow-local-routees": "off",
              "use-role": "worker"
            }
          }
        }
      }
    }
  }
}


Thank you
Cosmin

On Thursday, August 25, 2016 at 1:59:54 PM UTC+1, Muthukumaran 
Kothandaraman wrote:
>
> Hi Cosmin, 
>
> Are these what you are looking for 
>
>
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#ConsistentHashingPool_and_ConsistentHashingGroup
>  
> OR
>
> http://doc.akka.io/docs/akka/snapshot/scala/routing.html#BroadcastPool_and_BroadcastGroup
>
> Regards
> Muthu
>
>
>
>
> On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:
>>
>> Hello everyone
>>
>> We have a classic scenario with a cluster with 2 tiers where one is a 
>> "worker" that we offload heavy processing to. We wired Akka clustering and 
>> have the following setup for a remote actor that is to be executed only on 
>> the worker tier.
>>
>> "/frontend/backend": {
>> "router": "round-robin-group",
>> "routees": {
>> "paths": ["/user/worker"]
>> },
>> "cluster": {
>> "enabled": "on",
>> "allow-local-routees": "off",
>> "use-role": "worker"
>> }
>> }
>>
>>
>> This works fine and message gets processed in the worker accordingly, 
>> however we're interested to understand how to control the parallelism at 
>> that level. More precisely, we'd want each worker node to process a series 
>> of messages in parallel rather than one at a time as it does now.
>> Any ideas?
>>
>> Thank you
>> Cosmin
>>
>



[akka-user] Re: Cluster actors and parallelism

2016-08-25 Thread Muthukumaran Kothandaraman
Hi Cosmin, 

Are these what you are looking for?

http://doc.akka.io/docs/akka/snapshot/scala/routing.html#ConsistentHashingPool_and_ConsistentHashingGroup
 
OR
http://doc.akka.io/docs/akka/snapshot/scala/routing.html#BroadcastPool_and_BroadcastGroup

Regards
Muthu




On Thursday, 25 August 2016 14:50:13 UTC+5:30, Cosmin Marginean wrote:
>
> Hello everyone
>
> We have a classic scenario with a cluster with 2 tiers where one is a 
> "worker" that we offload heavy processing to. We wired Akka clustering and 
> have the following setup for a remote actor that is to be executed only on 
> the worker tier.
>
> "/frontend/backend": {
> "router": "round-robin-group",
> "routees": {
> "paths": ["/user/worker"]
> },
> "cluster": {
> "enabled": "on",
> "allow-local-routees": "off",
> "use-role": "worker"
> }
> }
>
>
> This works fine and message gets processed in the worker accordingly, 
> however we're interested to understand how to control the parallelism at 
> that level. More precisely, we'd want each worker node to process a series 
> of messages in parallel rather than one at a time as it does now.
> Any ideas?
>
> Thank you
> Cosmin
>
