Hmm.  In that case, that's not Cluster Sharding -- but it's also not
common, because it can be challenging to get right (without inconsistencies
between replicas), and I suspect it will *usually* result in worse
throughput, since you're introducing a lot of PubSub traffic and
replicating the processing of events.  I'd recommend sanity-checking
whether it's actually cheaper to do it this way once everything is taken
into account.

Assuming so, then no, this is just plain uncommon.  But you might want to
take a look at the relatively recent CRDT support (Akka Distributed Data),
which is the closest common cognate I can think of to this sort of
approach...
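
For concreteness, here's a minimal (untested) sketch of the Distributed
Data flavor of this -- the counter type, key name, and string protocol are
all made up for illustration:

    import akka.actor._
    import akka.cluster.Cluster
    import akka.cluster.ddata._
    import akka.cluster.ddata.Replicator._

    // A PNCounter CRDT converges across all nodes without the nodes
    // having to replay each other's events.
    class SharedCounter extends Actor {
      val replicator = DistributedData(context.system).replicator
      implicit val cluster = Cluster(context.system)
      val CounterKey = PNCounterKey("hits")

      def receive = {
        case "increment" =>
          replicator ! Update(CounterKey, PNCounter(), WriteLocal)(_ + 1)
        case "read" =>
          // Replicator replies to this actor with GetSuccess/NotFound.
          replicator ! Get(CounterKey, ReadLocal, request = Some(sender()))
        case g @ GetSuccess(CounterKey, Some(replyTo: ActorRef)) =>
          replyTo ! g.get(CounterKey).value
        case NotFound(CounterKey, Some(replyTo: ActorRef)) =>
          replyTo ! 0
      }
    }

The Replicator gossips the CRDT state for you, so each node reads its
local replica and writes converge eventually -- which is roughly the
"replicated everywhere, no rerouting" shape you're describing, minus the
hand-rolled PubSub.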

On Wed, Jun 1, 2016 at 9:25 PM, kraythe <kray...@gmail.com> wrote:

> They are updated from a DistributedPubSub topic. Each of them gets the
> update messages and then recalculates the data they need to maintain
> locally. So essentially they all function as independent actors, isolated
> from each other, knowing nothing about each other. This object is one of
> the most hammered in our system, so we need it to scale horizontally, and
> the object can possibly hold a HUGE state (which is unavoidable), on the
> order of 40 MB. So it's cheaper to update them all independently. When an
> update comes in, it is published to a topic that the actors listen to; if
> they care about that particular update, they deal with it. If it's an
> update not within their scope, they ignore it. The supervisor just manages
> the objects by key so that you can find the right one to interrogate for
> read ops.
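>
> Roughly, the subscription side looks like this (a simplified, untested
> sketch; the actor and message names are changed from our real code):
>
>     import akka.actor._
>     import akka.cluster.pubsub.DistributedPubSub
>     import akka.cluster.pubsub.DistributedPubSubMediator.{Subscribe, SubscribeAck}
>
>     // Stand-in update message; the real one carries our domain data.
>     case class DataUpdate(key: String, payload: Any)
>
>     class ReplicaActor(key: String) extends Actor {
>       // Every node's copy of this actor subscribes to the same topic.
>       val mediator = DistributedPubSub(context.system).mediator
>       mediator ! Subscribe("updates", self)
>
>       def receive = {
>         case SubscribeAck(_) =>
>           // subscription to the topic is confirmed
>         case DataUpdate(`key`, payload) =>
>           recalculate(payload)  // update within our scope
>         case DataUpdate(_, _) =>
>           // not our key: ignore
>       }
>
>       def recalculate(payload: Any): Unit = {
>         // recompute the locally maintained state from the update
>       }
>     }
>
> The publishing side just sends Publish("updates", DataUpdate(...)) to the
> mediator, and every subscribed replica sees it.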
>
> -- Robert
>
> On Wednesday, June 1, 2016 at 5:52:19 PM UTC-5, Ryan Tanner wrote:
>>
>> How are you coordinating state between logically-equal actors on
>> different physical nodes?
>>
>> On Wednesday, June 1, 2016 at 3:24:57 PM UTC-6, kraythe wrote:
>>>
>>> So the reason I didn't think this was cluster sharding is that I
>>> actually want these supervisor actors (and their supervised children) to be
>>> REPLICATED on every node (they handle user requests). Basically, if there is
>>> an actor with key 10, I want one actor with key 10 per node. I didn't want
>>> the messages getting rerouted to another node. So if I have one of these
>>> actors running on every node, how can I do that with sharding? I'd imagine I
>>> would have to be fancy with the shard id, but I have no idea how.
>>>
>>> -- Robert
>>>
>>> On Wednesday, June 1, 2016 at 3:42:16 PM UTC-5, Konrad Malawski wrote:
>>>>
>>>>
>>>> I was working on a supervisor that lazy creates actors based on some
>>>> key and then will forward messages to that actor.
>>>>
>>>> That's Cluster Sharding :-)
>>>>
>>>> http://doc.akka.io/docs/akka/snapshot/scala/cluster-sharding.html
>>>>
>>>> Technically you can use it on one node too, yeah.
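>>>>
>>>> A minimal sketch of wiring it up (untested; "MyEntity", the envelope,
>>>> and the shard count are placeholders, and `system` is your ActorSystem):
>>>>
>>>>     import akka.actor._
>>>>     import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}
>>>>
>>>>     // Placeholder entity and a message envelope carrying the routing key.
>>>>     class MyEntity extends Actor {
>>>>       def receive = { case msg => /* handle per-key messages here */ }
>>>>     }
>>>>     case class Envelope(key: Long, msg: Any)
>>>>
>>>>     val extractEntityId: ShardRegion.ExtractEntityId = {
>>>>       case Envelope(key, msg) => (key.toString, msg)
>>>>     }
>>>>     val extractShardId: ShardRegion.ExtractShardId = {
>>>>       case Envelope(key, _) => (key % 100).toString  // 100 shards, say
>>>>     }
>>>>
>>>>     val region: ActorRef = ClusterSharding(system).start(
>>>>       typeName = "MyEntity",
>>>>       entityProps = Props[MyEntity],
>>>>       settings = ClusterShardingSettings(system),
>>>>       extractEntityId = extractEntityId,
>>>>       extractShardId = extractShardId)
>>>>
>>>> Then you send `region ! Envelope(10, msg)` and sharding routes it to
>>>> the single instance with key 10, wherever in the cluster it lives.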
>>>>
>>>>
>>>> Happy hAkking!
>>>> --
>>>> Konrad `ktoso` Malawski
>>>> Akka <http://akka.io> @ Lightbend <http://lightbend.com>
>>>>
>>>>
