On Sat, Jun 11, 2016 at 2:14 PM, kraythe <[email protected]> wrote:

> It doesnt seem very specific to my problem scope so I am wondering if
> there is a better way to do this that I have missed.
>

The crux of the problem here is that you're fighting one of the central
design tenets of Akka: location transparency.  I wouldn't call that the
whole point of Akka -- there are a lot of benefits to the system -- but the
notion that an Actor might be local or might be remote is at the heart of
Akka's scalability story, and is *especially* central to Cluster Sharding.
Nearly everybody just works with that, and it works fine in most cases: most
of the time, Actors that really need to be local to each other are closely
enough coupled that they have a direct parent-child relationship anyway, so
they *are* local unless stated otherwise.  It's very rare to care where
Sharded entities live, and that assumption is central to Sharding.  (Keep in
mind that an "entity" is sometimes an entire troupe of closely-related
Actors, with the "entity" itself just a top-level router.)

This really is the common thread to a lot of the problems you're having,
and why things that seem to you like they should be common, aren't -- since
large-scale location transparency is pretty core to Sharding, it's unusual
to be trying to manage locality on the large scale the way you're doing.
Normally I would say that your system needs a redesign, because it doesn't
feel quite "Sharding-ish" at the large architectural level.  But I
understand that you're working within hard legacy constraints.

So I'm not sure what to say here.  Creating your own ShardRegion variant
(which I suspect requires forking Cluster Sharding) seems possible, and it
might be the way to go, since your requirements are unusual.  But keep in mind
that that code's pretty battle-tested, and dealing with all the edge
conditions can be challenging.  In particular, rebalancing and restarting
the entities in the environment you're describing sounds a bit iffy.

Honestly, though, before doing that, I might well just scrap Cluster
Sharding per se and roll it myself.  Much of the complexity of Cluster
Sharding is managing location transparency -- if you don't *want* that (and
it kinda sounds like you don't), the problem isn't that difficult.  I
had actually built my own sharding for the first 2-3 years of Querki, when
I was only running on one node and the official Cluster Sharding hadn't
been written yet -- it was less than two hundred lines of code for a simple
version.  You can look at an old release of Querki
<https://github.com/jducoeur/Querki/blob/v.0.10.6.3/querki/app/querki/spaces/SpaceManager.scala>
for an example -- basically, you just have a central manager Actor per
node, which deals with creating the entities and routing messages to them.
 (Possibly with some custom routers to make things more efficient, although
I never got to the point of caring.)
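To make the shape of that concrete, here's a deliberately Akka-free sketch of
the "central manager per node" pattern: the manager lazily creates one entity
per id and routes messages to it.  In real Akka code the entities would be
child Actors created with context.actorOf and the routing would live in the
manager's receive block; the names here (Manager, Entity, route) are
illustrative, not taken from Querki.

```scala
import scala.collection.mutable

// Stand-in for an entity Actor: just records the messages it's handled.
class Entity(val id: String) {
  val received = mutable.Buffer[String]()
  def handle(msg: String): Unit = received += msg
}

// The per-node manager: owns the id -> entity table, creates entities on
// demand, and forwards messages -- the essence of simple local "sharding".
class Manager {
  private val entities = mutable.Map[String, Entity]()

  def route(id: String, msg: String): Unit =
    entities.getOrElseUpdate(id, new Entity(id)).handle(msg)

  def entityCount: Int = entities.size
  def entity(id: String): Option[Entity] = entities.get(id)
}
```

With only one node, there's no location to be transparent about, which is why
this stays so small compared to real Cluster Sharding (no coordinator, no
rebalancing, no handoff).
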

That's not necessarily the ideal solution -- again, Cluster Sharding has an
awful lot of in-the-field experience that you'd have to relearn for your
own.  Personally, the release of the official solution was a godsend,
because it fit my requirements beautifully.  (I was one of the people
loudly boosting the idea at the time.)  But it's possible that reinventing
the wheel might be easier than trying to work around its core tenets, in
your current case...
