Hello Patrik,

thank you for looking into this.
But I think I was wrong about the retry interval:
The Retry message handler first checks whether the coordinator is Some, and 
only then calls requestShardBufferHomes. Therefore the remaining ShardRegion 
on node A first has to receive a MemberRemoved cluster event, so that it 
updates its oldest-member information and sends the Register message to the 
correct cluster node. Only when the ShardRegion receives the RegisterAck from 
the coordinator is requestShardBufferHomes called - so a longer retry 
interval will have no effect. (I hope I got it right this time :)
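To illustrate what I mean, here is a hypothetical, Akka-free model of that behaviour (this is not the actual akka-contrib source; all names are illustrative): the handler only reaches requestShardBufferHomes once the coordinator reference is Some, so extra retry ticks before registration do nothing.

```scala
// Hypothetical model of the Retry gating described above.
final class RegionModel(var coordinator: Option[String]) {
  var homeRequests = 0
  // Corresponds to one tick of the Retry timer.
  def retry(): Unit = coordinator match {
    case Some(_) => homeRequests += 1 // requestShardBufferHomes would run here
    case None    => ()                // buffered messages just stay buffered
  }
}
```

However many times retry() fires before registration, no ShardHomes are requested; only after the RegisterAck (modelled here by setting coordinator) does a retry tick have any effect.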
So to fix this, I think the coordinator's state has to be updated while it is 
being replayed, or right after replay has finished (and that would reduce the 
number of cases in which messages can get lost by one :)
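To make the idea concrete, a minimal sketch of the filtering step (again hypothetical and Akka-free; the names are illustrative, not the real akka-contrib API): after replay, keep only regions whose node is still a cluster member, and treat the rest as already terminated.

```scala
// Hypothetical sketch: split the recovered region addresses into those that
// should be watched (node still a member) and those that should be persisted
// as ShardRegionTerminated (node already removed from the cluster).
def splitRecoveredRegions(
    regionAddresses: Set[String],  // region node addresses from replayed state
    memberAddresses: Set[String]   // addresses of the current cluster members
): (Set[String], Set[String]) =
  regionAddresses.partition(memberAddresses.contains)
```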

thanks again,

michael


On Tuesday, 29 July 2014 11:36:04 UTC+2, Patrik Nordwall wrote:
>
> Hi Michael,
>
> I get your point. I'm not sure your workaround is correct for all 
> scenarios. I will follow up on this next week. Perhaps we can improve this, 
> but there are other cases in which messages can be lost, so reliable 
> delivery must in any case be added on top where it is needed.
>
> /Patrik
>
> On 29 Jul 2014, at 10:24, delasoul <[email protected]> wrote:
>
> Hello,
>
> take, for example, a two-node cluster: the ShardCoordinator runs on node B, 
> and sharded actors run on node A and node B.
> When node B goes down, the coordinator is started on node A and its state is 
> replayed, which means the state still holds outdated information about the 
> ShardRegion (with its ShardIds) of the removed node B.
> In the meantime, the ShardRegion on node A buffers all incoming messages 
> until it is able to re-register with the new coordinator. It then requests 
> the ShardHomes for the ShardIds formerly hosted on node B, but since the 
> coordinator still believes these ShardIds live on node B, it returns that 
> stale information, and the ShardRegion on node A forwards the messages to 
> node B, where they will fail...
> When the coordinator finishes replaying, it watches all ShardRegions, 
> including the removed ShardRegion on node B, so it will eventually get a 
> Terminated message for that actor - but unfortunately that message arrives 
> too late if the ShardRegion's retry interval is set too low.
> To make this more predictable, we added a filter to the ShardCoordinator's 
> AfterRecover message handler:
>
> case AfterRecover ⇒
>   // Only watch regions whose node is still a member of the cluster;
>   // for regions on removed nodes, persist a ShardRegionTerminated event
>   // so the recovered state no longer points at dead ShardRegions.
>   val currentMembers = Cluster(context.system).state.members
>   persistentState.regions.foreach { case (a, _) ⇒
>     if (currentMembers.exists(_.address == a.path.address))
>       context.watch(a)
>     else {
>       persist(ShardRegionTerminated(a)) { evt ⇒
>         persistentState = persistentState.updated(evt)
>       }
>     }
>   }
>
>
> Could this (or a better solution) be added to the ShardCoordinator?
>
> michael
>
>  -- 
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ: 
> http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> --- 
> You received this message because you are subscribed to the Google Groups 
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.
>
>
