Hello,

(I have tested with Akka 2.3.0 and akka-persistence-mongo-casbah 0.4-SNAPSHOT, 
running the BlogApp instances in separate processes.) 
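
For completeness, these are the build dependencies I am assuming for this 
setup; the mongo plugin's group id is written from memory, so treat the exact 
coordinates as an assumption:

    libraryDependencies ++= Seq(
      "com.typesafe.akka"  %% "akka-contrib"                  % "2.3.0", // cluster sharding lives here in 2.3
      "com.typesafe.akka"  %% "akka-persistence-experimental" % "2.3.0",
      "com.github.ddevore" %% "akka-persistence-mongo-casbah" % "0.4-SNAPSHOT"
    )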

I can confirm points 1 and 4, but neither has any influence on how the 
application works: the dead-letter and gating messages always appear when 
starting the first seed node, yet everything works fine once the other nodes 
join the cluster.
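
(As an aside on point 1: the startup noise can at least be silenced with the 
two settings the log messages themselves name. A minimal sketch, merged with 
the regular configuration:

    import com.typesafe.config.ConfigFactory

    // Turn off dead-letter logging; both keys are quoted verbatim in the
    // log output in the original post below.
    val quietConfig = ConfigFactory.parseString("""
      akka.log-dead-letters = off
      akka.log-dead-letters-during-shutdown = off
    """).withFallback(ConfigFactory.load())

Pass quietConfig to the ActorSystem instead of the plain loaded config.)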
I don't see duplicate key exceptions or OOM errors. However, when I stop the 
first seed node, after a while the remaining nodes start to fail with a 
NoSuchElementException on every shard lookup (see the exception log at the 
end of this post). Judging by the stack trace, it is thrown while the newly 
started ShardCoordinator singleton replays the old coordinator's events in 
receiveRecover. As said, this only happens when stopping the first seed node; 
if I stop the second seed node, or other BlogApps started with port 0, and 
then restart them in any order, everything works fine.
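
For reference, this is roughly how I start the nodes; a minimal sketch of the 
usual activator-style BlogApp main, with port values from my local setup, so 
treat the details as approximate:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object BlogApp {
      def main(args: Array[String]): Unit = {
        // "2551" = first seed node, "2552" = second seed node,
        // "0" = let the OS pick a random port (the extra BlogApps above)
        val port = if (args.isEmpty) "0" else args(0)
        val config = ConfigFactory
          .parseString(s"akka.remote.netty.tcp.port=$port")
          .withFallback(ConfigFactory.load())
        ActorSystem("ClusterSystem", config)
      }
    }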

hth,

michael

[ERROR] [04/01/2014 09:37:46.131] [ClusterSystem-akka.actor.default-dispatcher-16] [akka://ClusterSystem/user/sharding/AuthorListingCoordinator/singleton] key not found: Actor[akka://ClusterSystem/user/sharding/AuthorListing#-1841768893]
java.util.NoSuchElementException: key not found: Actor[akka://ClusterSystem/user/sharding/AuthorListing#-1841768893]
        at scala.collection.MapLike$class.default(MapLike.scala:228)
        at scala.collection.AbstractMap.default(Map.scala:58)
        at scala.collection.MapLike$class.apply(MapLike.scala:141)
        at scala.collection.AbstractMap.apply(Map.scala:58)
        at akka.contrib.pattern.ShardCoordinator$Internal$State.updated(ClusterSharding.scala:1055)
        at akka.contrib.pattern.ShardCoordinator$$anonfun$receiveRecover$1.applyOrElse(ClusterSharding.scala:1162)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at akka.persistence.Eventsourced$$anonfun$1.applyOrElse(Eventsourced.scala:196)
        at akka.persistence.Recovery$State$$anonfun$processPersistent$1.apply(Recovery.scala:31)
        at akka.persistence.Recovery$State$$anonfun$processPersistent$1.apply(Recovery.scala:31)
        at akka.persistence.Recovery$State$class.withCurrentPersistent(Recovery.scala:42)
        at akka.persistence.Recovery$$anon$1.withCurrentPersistent(Recovery.scala:105)
        at akka.persistence.Recovery$State$class.processPersistent(Recovery.scala:31)
        at akka.persistence.Recovery$$anon$1.processPersistent(Recovery.scala:105)
        at akka.persistence.Recovery$$anon$1.aroundReceive(Recovery.scala:110)
        at akka.persistence.Recovery$class.aroundReceive(Recovery.scala:242)
        at akka.contrib.pattern.ShardCoordinator.akka$persistence$Eventsourced$$super$aroundReceive(ClusterSharding.scala:1132)
        at akka.persistence.Eventsourced$$anon$1.aroundReceive(Eventsourced.scala:29)
        at akka.persistence.Eventsourced$class.aroundReceive(Eventsourced.scala:172)
        at akka.contrib.pattern.ShardCoordinator.aroundReceive(ClusterSharding.scala:1132)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)



 

On Monday, 31 March 2014 16:10:44 UTC+2, Raman Gupta wrote:
>
> I am experimenting with the cluster sharding activator, but am having lots 
> of issues with it. I have tried updating the activator to 2.3.1, but to no 
> avail (and other issues show up, such as the one described here: 
> https://www.assembla.com/spaces/akka/simple_planner#/ticket:3967).
>
> Problems noticed so far:
>
> 1) 100% of the time, the activator sends a lot of messages to the 
> ClusterSystem deadLetters on startup of the seed node. Here is one example:
>
> [INFO] [03/31/2014 09:37:00.654] [ClusterSystem-akka.actor.default-dispatcher-2] [akka://ClusterSystem/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] to Actor[akka://ClusterSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
> [ ... many more akka.cluster.InternalClusterAction$InitJoin$ messages ... ]
> [INFO] [03/31/2014 09:37:05.518] [ClusterSystem-akka.actor.default-dispatcher-14] [akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess] Message [akka.dispatch.sysmsg.Terminate] from Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] to Actor[akka://ClusterSystem/system/cluster/core/daemon/firstSeedNodeProcess#-438400827] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
> [... JOINING and Up message...]
> [INFO] [03/31/2014 09:37:06.516] [ClusterSystem-akka.actor.default-dispatcher-16] [akka://ClusterSystem/user/sharding/AuthorListingCoordinator/singleton] Message [akka.contrib.pattern.ShardCoordinator$Internal$Register] from Actor[akka://ClusterSystem/user/sharding/AuthorListing#1471529820] to Actor[akka://ClusterSystem/user/sharding/AuthorListingCoordinator/singleton] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
> [INFO] [03/31/2014 09:37:06.516] [ClusterSystem-akka.actor.default-dispatcher-16] [akka://ClusterSystem/user/sharding/PostCoordinator/singleton] Message [akka.contrib.pattern.ShardCoordinator$Internal$Register] from Actor[akka://ClusterSystem/user/sharding/Post#589187748] to Actor[akka://ClusterSystem/user/sharding/PostCoordinator/singleton] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
>
> 2) Using the default shared LevelDB journal configuration, sometimes (but 
> not always) when the Bot node is started, the seed node goes nuts:
>
> [INFO] [03/31/2014 09:46:00.768] [ClusterSystem-akka.actor.default-dispatcher-3] [Cluster(akka://ClusterSystem)] Cluster Node [akka.tcp://[email protected]:2551] - Leader is moving node [akka.tcp://[email protected]:50327] to [Up]
> Uncaught error from thread [ClusterSystem-akka.remote.default-remote-dispatcher-24] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
> Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-17] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
> Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-28] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
> Uncaught error from thread [ClusterSystem-akka.actor.default-dispatcher-29] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[ClusterSystem]
> [...keeps going forever...]
> ^CJava HotSpot(TM) 64-Bit Server VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGINT to handler- the VM may need to be forcibly terminated
>
> 3) When it is working, the shared LevelDB journal seems to work reasonably 
> well (except for the SPOF on the first node). However, when I change to 
> either one of the MongoDB replicated journals in contrib and test various 
> combinations of node failures, things go nuts with DuplicateKeyExceptions 
> (looping infinitely), OutOfMemoryErrors, and other weirdness. I know these 
> are early implementations, but the similarity of the failures across the 
> two different journal implementations makes me think the problems may lie 
> not with the journals themselves but with akka-persistence.
>
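
(For context on the SPOF mentioned in point 3: the activator starts the 
shared LevelDB store on the first node only and points every node's journal 
at it, so persistence for the whole cluster dies with that node. Roughly what 
the startup code does (written from memory, so details are approximate):

    import akka.actor._
    import akka.pattern.ask
    import akka.persistence.journal.leveldb.{SharedLeveldbJournal, SharedLeveldbStore}
    import akka.util.Timeout
    import scala.concurrent.duration._

    def startupSharedJournal(system: ActorSystem, startStore: Boolean, path: ActorPath): Unit = {
      // The shared store actor runs on the first seed node only (the SPOF).
      if (startStore) system.actorOf(Props[SharedLeveldbStore], "store")
      // Every node resolves that single store and registers it as its journal.
      import system.dispatcher
      implicit val timeout = Timeout(15.seconds)
      (system.actorSelection(path) ? Identify(None)).onSuccess {
        case ActorIdentity(_, Some(ref)) => SharedLeveldbJournal.setStore(ref, system)
        case _ =>
          system.log.error("Shared journal not started at {}", path)
          system.shutdown()
      }
    }
)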
> 4) When restarting the Bot node, there are lots of WARNings about unknown 
> UIDs (the following message keeps repeating for Bots that have been shut 
> down -- i.e. the node never appears to be actually removed from the 
> cluster, even after the entire cluster is restarted):
>
> [WARN] [03/31/2014 10:01:40.280] [ClusterSystem-akka.remote.default-remote-dispatcher-5] [Remoting] Association to [akka.tcp://[email protected]:50327] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
>
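
(On point 4: I can confirm these warnings as well. The gating repeats because 
the stale address is never downed. A possible workaround sketch; the host and 
port here are made up for illustration:

    import akka.actor.{ActorSystem, Address}
    import akka.cluster.Cluster

    val system = ActorSystem("ClusterSystem")
    // Explicitly mark the stopped Bot's node as DOWN so the leader removes
    // it from the membership instead of gating/quarantining it forever.
    Cluster(system).down(Address("akka.tcp", "ClusterSystem", "127.0.0.1", 50327))

Alternatively, setting akka.cluster.auto-down-unreachable-after automates the 
downing, with the usual split-brain caveats.)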
> Has anyone else done any experimentation with akka-cluster-sharding?
>
> Regards,
> Raman
>
>
