Hi Patrik,
We solved the issue. The only catch is that we have to mention the seed nodes
in the configuration. We added the snippet below to the existing configuration:
cluster {
  seed-nodes = [
    "akka.tcp://[email protected]:4001"
  ]
  auto-down-unreachable-after = 20s
}
I thought that for a single-node cluster to work, only
akka.cluster.ClusterActorRefProvider
was required in the configuration, so we did not include seed nodes.
In fact, for a cluster to come up, both the ClusterActorRefProvider and the
seed nodes (for a single node, pointing at itself) are required. The
configuration must include both.
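For reference, a minimal single-node setup along these lines might look as
follows (hostname and port are placeholders taken from the snippet above, not
prescriptive; the seed entry points at the node itself):

```hocon
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"

  remote.netty.tcp.hostname = 127.0.0.1  # placeholder address
  remote.netty.tcp.port     = 4001

  cluster {
    # A single-node cluster seeds itself: the entry below must match this
    # node's own actor system name, hostname, and port.
    seed-nodes = ["akka.tcp://[email protected]:4001"]
  }
}
```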
The front end and master are in one cluster; the worker node is in a different cluster.
Many thanks for guiding us through solving this issue. It helped a lot.
-Prakhyat M M
> On Wed, Sep 3, 2014 at 3:22 PM, Prakhyat Mallikarjun <[email protected]>
> wrote:
>
> Patrik,
>
> Bingo...you got me.
>
> Alright, then I know what is wrong. You are using cluster sharding on the
> worker nodes. That is not done in the activator template.
> [Prakhyat]Yes. You are correct.
>
> The WorkerSystem is not part of the cluster, perhaps not part of any
> cluster, or do you have a separate cluster for all worker nodes?
> [Prakhyat] The WorkerSystem is not part of the cluster. The workers
> connect to the master via ClusterClient. If you look at the worker
> configuration, there is no configuration for cluster or seed nodes. It is
> the bare minimum configuration:
> akka {
>   actor.provider = "akka.cluster.ClusterActorRefProvider"
>   extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]
>   remote.netty.tcp.port = 4001
>   remote.netty.tcp.hostname = 127.0.0.1
> }
>
> contact-points = [
>   "akka.tcp://[email protected]:2551"
>   // Master nodes info
> ]
>
> As I understand from the configuration, the worker is not part of any
> cluster. Correct me if I am wrong.
>
> Do you see the WorkerSystem nodes as Up? Probably not, and then there is
> no coordinator for the worker nodes, and thereby no progress.
> [Prakhyat] I will check on this; I have to look at the logs.
>
> If my guess is correct, you can solve this by making sure that the worker
> nodes join a separate worker cluster.
> [Prakhyat] Are you asking us to try putting the worker nodes in a separate
> cluster? I will try this. In any case, please guide me on the right way.
>
> You should also consider whether you have a good reason not to include the
> worker nodes in the same cluster as the master/frontend nodes.
> [Prakhyat] We have not tried the workers in the same cluster as the master
> and front end. If the worker is in the same cluster as the master, is
> ClusterClient still required? We thought that to use ClusterClient, the
> workers should be in a different cluster than the master. Correct me if I
> am wrong.
>
> Will you have more than 1000 worker nodes? Are they untrusted or unstable?
> [Prakhyat] Yes, workers can go beyond 1000. They are trusted.
>
> If you answer yes to any of these questions, then you might have a good
> reason not to include the worker nodes in the same cluster; otherwise I
> think you should simplify the architecture by using one single cluster. In
> that case you would not use ClusterClient, but instead use
> DistributedPubSub for similar things.
> [Prakhyat] Let me know from the above points, what steps are recommended.
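To make the single-cluster alternative concrete, here is a rough sketch
against akka-contrib 2.3's DistributedPubSub API. The Master/Worker actors
and the Work message are illustrative placeholders, not code from the
template:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.contrib.pattern.{DistributedPubSubExtension, DistributedPubSubMediator}

case class Work(id: String) // placeholder message type

class Master extends Actor {
  // Register with the local mediator so Send("/user/master", ...) from any
  // node in the (single) cluster can reach this actor.
  DistributedPubSubExtension(context.system).mediator ! DistributedPubSubMediator.Put(self)
  def receive = { case Work(id) => println(s"got work $id") }
}

class Worker extends Actor {
  val mediator = DistributedPubSubExtension(context.system).mediator
  def receive = {
    case w: Work =>
      // Replaces ClusterClient: deliver to a registered /user/master
      // somewhere in the same cluster.
      mediator ! DistributedPubSubMediator.Send("/user/master", w, localAffinity = false)
  }
}
```

With the workers in the same cluster as the master, the
ClusterReceptionistExtension and the contact-points configuration would no
longer be needed.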
>
>
> If you want to use cluster sharding on the worker nodes, they must be part
> of some cluster. You could make them part of the same cluster as the
> master/frontend, or you can use a separate cluster for the worker nodes.
>
> Both these options mean that you will have a large cluster (1000+ nodes).
> Even though we have tested large clusters
> <http://typesafe.com/blog/running-a-2400-akka-nodes-cluster-on-google-compute-engine>
> with good results, there is a point at which there will be too much
> activity/failure and the cluster will no longer be stable enough. Then you
> must split the cluster into smaller separate clusters. I don't know if it
> makes sense for you to run a few hundred worker nodes in such a cluster,
> and have many such clusters.
>
> /Patrik
>
>
>
>
> -Prakhyat M M
>
>
> On Wed, Sep 3, 2014 at 2:26 PM, Prakhyat Mallikarjun <[email protected]>
> wrote:
>
> Hi Patrik,
>
> Thanks.
>
> We are working on top of the activator template, which we later modified to
> use the Cassandra plugin. We see that all the nodes in the cluster are up
> and running. The front end is able to pass web requests to the master, the
> master is able to store work, and the workers are able to pull work from
> the master.
>
> We modified the template locally to integrate with Cassandra. The
> integration works; no exceptions are seen.
>
> The one big change we made is that the workers do the processing and
> persist state via sharded PersistentActors (implemented as single writers,
> as we are using a DDD/CQRS approach).
>
> Requests from the browser come to the front end. Request flow:
> Browser -> FrontEnd -> Master -> Worker -> Sharded Single Writers
> (PersistentActor)
>
> The worker node connects to the master using ClusterClient (the same as in
> the Typesafe template).
>
> The front end, master, and worker are deployed on different nodes. The
> actor system of the front end and master is the same, i.e. "ClusterSystem".
> The actor system of the worker is different, i.e. "WorkerSystem".
>
> The workers pull work from the master and, on successful processing,
> persist state via the sharded single writers.
> Note: Cluster Sharding is initiated with the actor system "WorkerSystem";
> the worker actors and Cluster Sharding share that same actor system.
>
> The master node and worker node share the same Cassandra persistence store.
> The front end node doesn't have any persistence.
>
> The master and front end configurations are as-is from the Typesafe
> template. Below is the configuration of the worker:
>
> akka {
>   //actor.provider = "akka.remote.RemoteActorRefProvider"
>   actor.provider = "akka.cluster.ClusterActorRefProvider"
>
>   extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]
>   loglevel = DEBUG
>
>   persistence {
>     journal.plugin = "cassandra-journal"
>     journal.leveldb.native = off
>   }
> }
>
> contact-points = [
>   "akka.tcp://[email protected]:2551"
>   // Master nodes info
> ]
>
> cassandra-journal {
>   class = "akka.persistence.journal.cassandra.CassandraJournal"
>   contact-points = ["172.26.145.251"]
>
>   # Name of the keyspace to be created/used by the journal
>   keyspace = "akka"
>
>   # Name of the table to be created/used by the journal
>   table = "messages"
>
>   # Replication factor to use when creating a keyspace
>   replication-factor = 1
>
>   # Write consistency level
>   write-consistency = "QUORUM"
>
>   # Read consistency level
>   read-consistency = "QUORUM"
>
>   # Maximum number of entries per partition (= columns per row).
>   # Must not be changed after table creation (currently not checked).
>   max-partition-size = 5000000
>
>   # Maximum size of result set
>   max-result-size = 50001
>
>   # Dispatcher for fetching and replaying messages
>   replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
> }
>
> When the worker sends a message to the shard, only the log line below is
> displayed:
> [2014 12:46:46.465] [WorkerSystem-akka.actor.default-dispatcher-16]
> [akka.tcp://[email protected]:4001/user/sharding/LedgerProcessor]
> Request shard [0] home
>
> The idExtractor block is not called. The persistent actor doesn't receive
> the message, nor does the sent message go to the dead letter queue.
>
> -Prakhyat M M
>
> On Wednesday, 3 September 2014 13:56:49 UTC+5:30, Patrik Nordwall wrote:
>
>
>
>
> On Wed, Sep 3, 2014 at 9:29 AM, Asha <[email protected]> wrote:
>
>
> Hi Patrik,
>
> I didn't get where to put idExtractor.isDefinedAt(msg).
>
>
> That is something provided by Scala's PartialFunction.
>
>
>
> As you suggested, I have added akka.loglevel=DEBUG in the conf file. I am
> getting the line below in the logs:
> [2014 12:46:46.465] [WorkerSystem-akka.actor.default-dispatcher-16]
> [akka.tcp://[email protected]:4001/user/sharding/LedgerProcessor]
> Request shard [0] home
> After that, there are no logs.
>
> Below is my conf file.
>
> akka {
>   //actor.provider = "akka.remote.RemoteActorRefProvider"
>   actor.provider = "akka.cluster.ClusterActorRefProvider"
>
>   remote.netty.tcp.port = 4001
>   remote.netty.tcp.hostname = 127.0.0.1
>
>   extensions = ["akka.contrib.pattern.ClusterReceptionistExtension"]
>   loglevel = DEBUG
>
>   persistence {
>     journal.plugin = "cassandra-journal"
>     journal.leveldb.native = off
>   }
> }
>
> contact-points = [
>   "akka.tcp://[email protected]:2551"
>   // Master nodes info
> ]
>
> cassandra-journal {
>   class = "akka.persistence.journal.cassandra.CassandraJournal"
>   contact-points = ["172.26.145.251"]
>
>   # Name of the keyspace to be created/used by the journal
>   keyspace = "akka"
>
>   # Name of the table to be created/used by the journal
>   table = "messages"
>
>   # Replication factor to use when creating a keyspace
>   replication-factor = 1
>
>   # Write consistency level
>   write-consistency = "QUORUM"
>
>   # Read consistency level
>   read-consistency = "QUORUM"
>
>   # Maximum number of entries per partition (= columns per row).
>   # Must not be changed after table creation (currently not checked).
>   max-partition-size = 5000000
>
>   # Maximum size of result set
>   max-result-size = 50001
>
>   # Dispatcher for fetching and replaying messages
>   replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
> }
>
> I am not using akka.cluster.seed-nodes to connect to the master. From this
> conf I am getting the master's contact point, and using the ClusterClient I
> am able to connect to the master and communicate.
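For readers following along, here is a sketch of how such a contact-points
list is typically consumed with akka-contrib 2.3's ClusterClient. The address
is the one from the config above; the exact wiring in the poster's code is
not shown, so treat the paths as illustrative:

```scala
import akka.actor.ActorSystem
import akka.contrib.pattern.ClusterClient

val system = ActorSystem("WorkerSystem")

// Each contact point resolves to the receptionist actor that the
// ClusterReceptionistExtension starts on the master node.
val initialContacts = Set(
  system.actorSelection("akka.tcp://[email protected]:2551/user/receptionist"))

val client = system.actorOf(ClusterClient.props(initialContacts), "clusterClient")

// Messages are then tunneled into the cluster, e.g.:
// client ! ClusterClient.Send("/user/master/active", msg, localAffinity = false)
```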
>
>
> Do you join in some other way then? Do you see (in the logs) that the
> cluster nodes are Up?
>
> Have you tried the activator tutorial
> <http://typesafe.com/activator/template/akka-distributed-workers> and got
> that working? Then change the journal plugin to Cassandra; if it fails
> after that, something is wrong with your setup of the Cassandra journal
> plugin.
>
> /Patrik
>
>
>
> Please let me know what I missed in the conf file.
>
> Thanks
> Asha
>
>
>
>
> On Wednesday, 3 September 2014 12:11:36 UTC+5:30, Patrik Nordwall wrote:
>
> Hi Asha and Prakhyat (you will not receive help faster by double-posting),
>
> First it checks that the message is valid with idExtractor.isDefinedAt(msg).
> That will not invoke the prints in your idExtractor.
> Then it uses the shardResolver, which is what you see in the logs.
> Before creating the entry, or delegating to the right node, it must get
> information from the coordinator. It is probably this step that is not
> successful.
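To illustrate the isDefinedAt point in plain Scala (no Akka needed): the
check only matches the pattern and never executes the case body, so a println
inside the body does not fire during the check:

```scala
object IsDefinedAtDemo extends App {
  val idExtractor: PartialFunction[Any, (String, Any)] = {
    case s: String =>
      println("case body invoked") // not printed during the isDefinedAt check
      (s, s)
  }

  // Only the pattern is tested; the body (and its println) is skipped.
  println(idExtractor.isDefinedAt("ledger-1")) // prints: true

  // Applying the function does run the body.
  val (id, _) = idExtractor("ledger-1")        // prints: case body invoked
  println(id)                                  // prints: ledger-1
}
```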
>
> You will find more information about what is going on by enabling
> akka.loglevel=DEBUG and looking at the log messages from the
> "/user/sharding" actors.
>
> Regards,
> Patrik
>
>
> On Tue, Sep 2, 2014 at 8:02 PM, Asha <[email protected]> wrote:
>
>
> Hi Martynas,
>
> I am using the Cassandra plugin for persistence. Below are the
> configuration changes in application.conf for persistence:
>
> akka {
>   persistence {
>     journal.plugin = "cassandra-journal"
>   }
> }
>
> cassandra-journal {
>   contact-points = ["xxx.xxxx.xxx.xxx"]
> }
>
> For this I am using the jars below:
> akka-persistence-cassandra_2.11-0.3.3.jar
> akka-persistence-experimental_2.10-2.3.4.jar
> cassandra-driver-core-2.0.3.jar
>
> I am not using LevelDB.
>
> Thanks
> Asha
>
>
> On Tuesday, 2 September 2014 19:02:22 UTC+5:30, Martynas Mickevičius wrote:
>
> Hi Asha,
>
> I would guess that the persistence system fails to initialize. What
> persistence plugin and settings do you use? If you are prototyping and
> using LevelDB, you either need to provide the native LevelDB libraries or
> use the Java LevelDB implementation by adding the following to your
> application.conf file:
>
> akka.persistence.journal.leveldb.native = off
>
> Just remember to select a plugin
> <http://akka.io/community/#journal-plugins> for a distributed journal
> when going to production.
>
>
> On Mon, Sep 1, 2014 at 1:11 PM, Asha <[email protected]> wrote:
>
> I think I have given too little information.
>
> Here is a detailed description.
>
> I am using the Akka pull pattern in my project. I am using
> akka-contrib_2.10-2.3.4.jar and akka-persistence-experimental_2.10-2.3.4.jar.
> The front end, master, and worker all run on different nodes. The actor
> system of the front end and master is "ClusterSystem".
> The actor system of the worker is "WorkerSystem". Requests from the browser
> come to the front end.
> Request flow:
> Browser -> FrontEnd -> Master -> Worker
>
> The worker is able to pull work from the master and process it. I want to
> persist the state after processing the work. I am using a single writer.
> The shard for the persistent actor is started when the node comes up. For
> persisting the state, I am hitting the shard region of the persistent
> actor. It is able to resolve the shard,
> but fails to resolve the persistent actor (the idExtractor() is not
> executed).
>
> Below is the code.
>
> val idExtractor: ShardRegion.IdExtractor = {
>   case created: LedgerCreated =>
>     println("%%%%%%%%% IdExtractor : " + created.ledgerId.id.toString)
>     (created.ledgerId.id.toString, created)
>   case edited: LedgerEdited => (edited.ledgerId.id.toString, edited)
>   case deleted: LedgerDeleted => (deleted.ledgerId.id.toString, deleted)
> }
>
> val shardResolver: ShardRegion.ShardResolver = msg => msg match {
>   case created: LedgerCreated =>
>     println("%%%%%%%%% ShardResolver : " + (created.ledgerId.id % shards))
>     (created.ledgerId.id % shards).toString
>   case edited: LedgerEdited => (edited.ledgerId.id % shards).toString
>   case deleted: LedgerDeleted => (deleted.ledgerId.id % shards).toString
> }
>
> I am getting the log written in the shardResolver, but the log written in
> the idExtractor is not coming.
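For context, these extractors plug into ClusterSharding roughly as sketched
below against the akka-contrib 2.3 API. LedgerProcessor's Props and the
extractor values are assumed from the poster's code and are not defined here:

```scala
import akka.actor.{ActorSystem, Props}
import akka.contrib.pattern.ClusterSharding

val system = ActorSystem("WorkerSystem")

// Registers the entry type and returns the shard region ActorRef. Note that
// the region can only allocate shards once this node is Up in a cluster,
// because the shard coordinator runs as a cluster singleton.
val region = ClusterSharding(system).start(
  typeName      = "LedgerProcessor",
  entryProps    = Some(Props[LedgerProcessor]),
  idExtractor   = idExtractor,
  shardResolver = shardResolver)

// Messages go to the region, which first resolves the shard (shardResolver,
// logged as "Request shard [N] home") and only then the entry (idExtractor):
// region ! LedgerCreated(...)
```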
>
> Please help me to solve this problem.
>
>
>
>
> On Monday, 1 September 2014 15:00:16 UTC+5:30, Asha wrote:
>
> Hi,
>
> I am using the Akka pull pattern in my project. A request from the browser
> lands on the Spray layer, which passes the request to the master, and the
> master passes it on to a worker.
> In the worker I am hitting the shard region. When I hit the shard region it
> is able to resolve the shard but fails to execute the idExtractor. It does
> not throw any error.
>
> I am not sure whether the issue is with the jar or with my code. Please
> help me resolve this problem.
>
> Thanks
> Asha
>
>
> --
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> ---
> You received this message because you are subscribed to the Google Groups
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.
>
>
>
>
> --
> Martynas Mickevičius
> Typesafe <http://typesafe.com/> – Reactive
> <http://www.reactivemanifesto.org/> Apps on the JVM
>
>
>
>
>
> --
>
> Patrik Nordwall
> Typesafe <http://typesafe.com/> - Reactive apps on the JVM
> Twitter: @patriknw
>
>
>
>
>
>
>
>
>
>
>
>
> ...
--
>>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>>>>>>> Check the FAQ:
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.