Thanks for all the hard work that has gone into Akka.  I have a question 
regarding processorId in the context of a sharded set of processors:

Based on the Akka persistence documentation, "Overriding processorId is the 
recommended way to generate stable identifiers".  So if I have:


   1. An EventsourcedProcessor that is representing a Customer, say 
   CustomerProcessor
   2. 2 nodes in my cluster
   3. 10 Shards
   4. 100 unique CustomerProcessors each representing a different customer
   
What should the implementation of processorId be?  Looking through the 
ClusterSharding code, I see that in the ShardRegion class, deliverMessage 
calls the idExtractor and then uses the extracted id as the name of the 
child actor it looks up.  I also know that the path of the actor (minus 
the host & port information) is used as the default processorId.

If a shard serves multiple aggregates, shouldn't the shard id be part 
of the processor id?  

I'm missing how the shard region is associated with the processorId, or is 
it?  My understanding is that there can be only one instance of a Processor 
with a given processorId running in the cluster, so I have to believe the 
two are associated in some way.  Or is it up to the creator of the 
CustomerProcessor to make sure the name is unique, something like 
processorId + " " + cmd.id in the idExtractor implementation?
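
To make the question concrete, here is roughly what I have in mind. The 
CustomerCommand envelope, the 10-shard hashing scheme, and the "customer-" 
prefix are just placeholders of mine, not anything from the Akka API:

```scala
// Hypothetical command envelope; customerId is the stable aggregate id.
final case class CustomerCommand(customerId: String, payload: Any)

// idExtractor: the entry id handed to ShardRegion is the customer id,
// which (as I read deliverMessage) becomes the child actor's name.
val idExtractor: PartialFunction[Any, (String, Any)] = {
  case cmd: CustomerCommand => (cmd.customerId, cmd.payload)
}

// shardResolver: hash the customer id into one of the 10 shards.
def shardResolver(msg: Any): String = msg match {
  case cmd: CustomerCommand =>
    (math.abs(cmd.customerId.hashCode) % 10).toString
}

// Inside CustomerProcessor I would then write, roughly:
//   override def processorId = "customer-" + self.path.name
// since self.path.name is the id the idExtractor produced. My question
// is whether that alone is stable and unique cluster-wide, or whether
// the shard id from shardResolver must be folded in as well.
```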

TIA for the assistance.

Regards,

Todd
