Yeah, that occurred to me; the complication is that their supervisor is the 
cluster sharding system itself. 

On Sunday, June 12, 2016 at 12:36:57 PM UTC-5, Guido Medina wrote:
>
> You could create a compressed list of Integers and send one message to 
> their supervisor; the children's supervisor then passes that message on to 
> each actor ID. Assuming your IDs come from a sequence,
> if you have from 1 to 1M, that can be expressed as [1,20], [25,50], [52], 
> [55,100], where an array with a single element denotes a single ID, 
> otherwise an inclusive range.
>
> I created a data structure for Integer ranges for these caches; it is 
> backed by a List of int[] from FastUtil.
> I'll be a good Samaritan and share it with you: 
> https://gist.github.com/guidomedina/307a7f76d19602c9de9884b2cba79277
>
> Google's Guava has a similar data structure but mine has a primitive 
> specialization.
>
> HTH,
>
> Guido.
>
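The range encoding Guido describes above can be sketched in plain Java, independent of FastUtil. The class and method names here are hypothetical; this is only a minimal illustration of collapsing a sorted ID list into inclusive `{start, end}` pairs:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: compress a sorted list of distinct IDs into inclusive
// ranges, so a contiguous run costs two ints instead of one per ID.
public class IntRanges {
    public static List<int[]> compress(int[] sortedIds) {
        List<int[]> ranges = new ArrayList<>();
        if (sortedIds.length == 0) return ranges;
        int start = sortedIds[0], end = sortedIds[0];
        for (int i = 1; i < sortedIds.length; i++) {
            if (sortedIds[i] == end + 1) {
                end = sortedIds[i];          // extend the current run
            } else {
                ranges.add(new int[]{start, end});
                start = end = sortedIds[i];  // begin a new run
            }
        }
        ranges.add(new int[]{start, end});
        return ranges;
    }

    public static void main(String[] args) {
        int[] ids = {1, 2, 3, 25, 26, 52};
        for (int[] r : compress(ids)) {
            System.out.println(r[0] == r[1]
                ? "[" + r[0] + "]"
                : "[" + r[0] + "," + r[1] + "]");
        }
        // prints: [1,3] then [25,26] then [52]
    }
}
```

A message carrying a few hundred such pairs is far cheaper on the wire than a million individual IDs.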
> On Sunday, June 12, 2016 at 5:54:47 PM UTC+1, kraythe wrote:
>>
>> I don't necessarily want to drop location transparency. I am just 
>> concerned because I have millions of actors across the cluster, and they 
>> have to be started at some point in order to get messages. I also need to 
>> send the same message, at times, to a ton of them, potentially in the 
>> millions. I am concerned about how that will perform if I, say, do 
>> something like: 
>>
>> public void doSomething(final Set<Integer> ids) {
>>     final ActorRef actorRef =
>>         ClusterSharding.get(actorSystem).shardRegion(SHARD_REGION);
>>     ids.forEach(id -> actorRef.tell(new MyActor.GetState(id), getSelf()));
>> }
>>
>>
>> You see my concern? If that ID list is massive, did I just wire 1M 
>> messages across the network? Is that scalable? Even if the message is just 
>> the ID, I am concerned. Now it could be that I am barking at the moon here 
>> and Akka is designed for this. I am converting the system off a Hazelcast 
>> executor-based system where a call like that would be lethal; we would 
>> have to find some craftier way to do it, like sending a key ID that all of 
>> those 1M actors would have and letting them figure out whether the message 
>> is for them, or finding the messages locally. 
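One way to reduce the fan-out described above is to group the IDs by the shard they map to and send a single batch message per shard, rather than one tell per ID. The sketch below is library-free and hypothetical: `NUM_SHARDS` and the `id % NUM_SHARDS` shard function stand in for whatever message extractor the real cluster-sharding setup uses.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: collapse per-ID sends into one batch per shard.
public class BatchByShard {
    static final int NUM_SHARDS = 100;  // assumption for illustration

    // Group IDs by their (assumed) shard so each shard gets one message
    // carrying all of its IDs instead of one message per ID.
    static Map<Integer, List<Integer>> groupByShard(Set<Integer> ids) {
        return ids.stream()
                  .collect(Collectors.groupingBy(id -> id % NUM_SHARDS));
    }

    public static void main(String[] args) {
        Set<Integer> ids = IntStream.rangeClosed(1, 1_000_000)
                                    .boxed()
                                    .collect(Collectors.toCollection(HashSet::new));
        Map<Integer, List<Integer>> batches = groupByShard(ids);
        // A million individual tells collapse into NUM_SHARDS batches.
        System.out.println(batches.size());  // prints: 100
    }
}
```

Each batch would then be delivered as one message to the region hosting that shard, and the local supervisor would fan it out to its children in-process.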
>>
>> Another reason I am wondering about location is that with Hazelcast it 
>> is more efficient if the object is accessed locally. We have been bitten 
>> by that before, and some of the actors are managing entities in the cache. 
>> Now I could do it without all of this and see whether I hit the same 
>> problem I had when I implemented it in Hazelcast on the assumption of 
>> "location transparency". 
>>
>> As usual your advice is appreciated.
>>
>> On Sunday, June 12, 2016 at 10:32:32 AM UTC-5, Justin du coeur wrote:
>>>
>>> On Sat, Jun 11, 2016 at 2:14 PM, kraythe <[email protected]> wrote:
>>>
>>>> It doesn't seem very specific to my problem scope, so I am wondering if 
>>>> there is a better way to do this that I have missed.
>>>>
>>>
>>> The crux of the problem here is that you're fighting one of the central 
>>> design tenets of Akka: location transparency.  I wouldn't call that the 
>>> whole point of Akka -- there are a lot of benefits to the system -- but the 
>>> notion that an Actor might be local or might be remote is at the heart of 
>>> Akka's scalability story, and is *especially* central to Cluster Sharding.  
>>> Nearly everybody just works with that, and it works fine in most cases: 
>>> most times when multiple Actors really need to be local to each other are 
>>> closely-enough coupled that they have a direct parent-child relationship 
>>> anyway, so they *are* local unless stated otherwise.  It's very rare to 
>>> care where Sharded entities live, and that assumption is central to 
>>> Sharding.  (Keeping in mind that an "entity" is sometimes an entire troupe 
>>> of closely-related Actors, and the "entity" is just a top-level router.)
>>>
>>> This really is the common thread to a lot of the problems you're having, 
>>> and why things that seem to you like they should be common, aren't -- since 
>>> large-scale location transparency is pretty core to Sharding, it's unusual 
>>> to be trying to manage locality on the large scale the way you're doing.  
>>> Normally I would say that your system needs a redesign, because it doesn't 
>>> feel quite "Sharding-ish" at the large architectural level.  But I 
>>> understand that you're working within hard legacy constraints.
>>>
>>> So I'm not sure what to say here.  Creating your own ShardRegion variant 
>>> (which I suspect requires forking Cluster Sharding) seems possible, and it 
>>> might be the way to go since your requirements are unusual.  But keep in mind 
>>> that that code's pretty battle-tested, and dealing with all the edge 
>>> conditions can be challenging.  In particular, rebalancing and restarting 
>>> the entities in the environment you're describing sounds a bit iffy.
>>>
>>> Honestly, though, before doing that, I might well just scrap Cluster 
>>> Sharding per se and roll it myself.  Much of the complexity of Cluster 
>>> Sharding is managing location transparency -- if you don't *want* that (and 
>>> it kinda sounds like you don't), the problem isn't that difficult.  I 
>>> actually had built my own sharding for the first 2-3 years of Querki, when 
>>> I was only running on one node and the official Cluster Sharding hadn't 
>>> been written yet -- it was less than two hundred lines of code for a simple 
>>> version.  You can look at an old release of Querki 
>>> <https://github.com/jducoeur/Querki/blob/v.0.10.6.3/querki/app/querki/spaces/SpaceManager.scala>
>>>  
>>> for an example -- basically, you just have a central manager Actor per 
>>> node, which deals with creating the entities and routing messages to them. 
>>>  (Possibly with some custom routers to make things more efficient, although 
>>> I never got to the point of caring.)
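The per-node manager pattern Justin describes can be sketched without any Akka dependency. Everything below is a hypothetical stand-in: in real code `Entity` would be a child `ActorRef` and `route()` a `tell()`, but the shape — one manager per node that creates entities on demand and routes by ID — is the point:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a roll-your-own shard: a single manager owns its entities,
// creates them lazily on first message, and routes by integer ID.
public class EntityManager {
    // Stand-in for a child actor; it just records delivered messages.
    static class Entity implements Consumer<String> {
        final StringBuilder log = new StringBuilder();
        public void accept(String msg) { log.append(msg).append(';'); }
    }

    private final Map<Integer, Entity> entities = new HashMap<>();

    // Create-on-demand, then deliver: the heart of a simple shard region.
    public void route(int id, String msg) {
        entities.computeIfAbsent(id, ignored -> new Entity()).accept(msg);
    }

    public int liveEntities() {
        return entities.size();
    }

    public static void main(String[] args) {
        EntityManager mgr = new EntityManager();
        mgr.route(42, "GetState");
        mgr.route(42, "Update");
        mgr.route(7, "GetState");
        System.out.println(mgr.liveEntities());  // prints: 2
    }
}
```

What this sketch deliberately omits is everything Cluster Sharding adds on top: passivation, rebalancing, and recovery after node failure, which is exactly the battle-tested part warned about above.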
>>>
>>> That's not necessarily the ideal solution -- again, Cluster Sharding has 
>>> an awful lot of in-the-field experience that you'd have to relearn for your 
>>> own.  Personally, the release of the official solution was a godsend, 
>>> because it fit my requirements beautifully.  (I was one of the people 
>>> loudly boosting the idea at the time.)  But it's possible that reinventing 
>>> the wheel might be easier than trying to work around its core tenets, in 
>>> your current case...
>>>
>>

-- 
Read the docs: http://akka.io/docs/
Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
