Can you run Cluster Sharding on a single node, or is that not possible? I even
tried the default LevelDB plugin for journaling, and no dice. My most
recent Akka config is below. Thanks a bunch for your reply; this one really
has me stumped. I have tried three dozen things and read every doc and Google
link I can get my hands on three times.
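For context, here is roughly how I am bringing the node up and starting the shard region. This is a simplified sketch against Akka 2.4.x; `UserActor`, the `ActorSystem` name, and the extractors are stand-ins for my real code:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.Cluster
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

// Stand-in for my real entity actor.
class UserActor extends Actor {
  def receive = { case msg => sender() ! msg }
}

object SingleNodeSharding extends App {
  val system = ActorSystem("application")
  val cluster = Cluster(system)

  // With seed-nodes empty, the node must join itself for the cluster to form
  // (otherwise the sharding coordinator never starts).
  cluster.join(cluster.selfAddress)

  // Hypothetical extractors; the real ones depend on the message protocol.
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case msg @ (id: String, _) => (id, msg)
  }
  val extractShardId: ShardRegion.ExtractShardId = {
    case (id: String, _) => (math.abs(id.hashCode) % 10).toString
  }

  val region = ClusterSharding(system).start(
    typeName = "User",
    entityProps = Props[UserActor],
    settings = ClusterShardingSettings(system),
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)
}
```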
akka {
log-dead-letters-during-shutdown = off
extensions = [
"com.romix.akka.serialization.kryo.KryoSerializationExtension$",
"akka.cluster.metrics.ClusterMetricsExtension"
]
actor {
provider = "akka.cluster.ClusterActorRefProvider"
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
// FIXME define bindings in code for config.
kryo = "com.romix.akka.serialization.kryo.KryoSerializer"
}
# See documentation: https://github.com/romix/akka-kryo-serialization
kryo {
type = "graph"
idstrategy = "automatic"
buffer-size = 4096
max-buffer-size = -1
use-manifests = false
post-serialization-transformations = "off"
kryo-custom-serializer-init = "distributed.serialization.SerializationConfigUtil"
implicit-registration-logging = true
kryo-trace = false
}
# default dispatcher used by Play
default-dispatcher {
# This will be used if you have set "executor = "fork-join-executor""
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 8
# The parallelism factor is used to determine thread pool size using the
# following formula: ceil(available processors * factor). Resulting size is
# then bounded by the parallelism-min and parallelism-max values.
parallelism-factor = 4.0
# Max number of threads to cap factor-based parallelism number to
parallelism-max = 64
# Set to "FIFO" to use queue-like peeking mode which "poll", or "LIFO" to use
# stack-like peeking mode which "pop".
task-peeking-mode = "FIFO"
}
}
}
# See http://doc.akka.io/docs/akka/snapshot/general/configuration.html#config-akka-remote
remote {
log-remote-lifecycle-events = off
enabled-transports = ["akka.remote.netty.tcp"]
netty.tcp {
# This causes the server to select a random available port in local mode and
# is important for running multiple nodes on the same machine. It is
# overridden in environments.
port = 0
}
}
cluster {
// FIXME can't use static seed-nodes config!
seed-nodes = []
metrics {
enabled = on
native-library-extract-folder = ${user.dir}/target/native
}
# auto downing is NOT safe for production deployments.
# you may want to use it during development, read more about it in the docs.
# auto-down-unreachable-after = 10s
# Settings for the ClusterShardingExtension
sharding {
# The extension creates a top level actor with this name in top level system
# scope, e.g. '/system/sharding'
guardian-name = sharding
# Specifies that entities run on cluster nodes with a specific role. If the
# role is not specified (or empty) all nodes in the cluster are used.
role = ""
# When this is set to 'on' the active entity actors will automatically be
# restarted upon Shard restart, i.e. if the Shard is started on a different
# ShardRegion due to rebalance or crash.
remember-entities = off
# If the coordinator can't store state changes it will be stopped and started
# again after this duration, with an exponential back-off of up to 5 times
# this duration.
coordinator-failure-backoff = 5 s
# The ShardRegion retries registration and shard location requests to the
# ShardCoordinator with this interval if it does not reply.
retry-interval = 2 s
# Maximum number of messages that are buffered by a ShardRegion actor.
buffer-size = 100000
# Timeout of the shard rebalancing process.
handoff-timeout = 60 s
# Time given to a region to acknowledge it's hosting a shard.
shard-start-timeout = 10 s
# If the shard is remembering entities and can't store state changes, it will
# be stopped and then started again after this duration. Any messages sent to
# an affected entity may be lost in this process.
shard-failure-backoff = 10 s
# If the shard is remembering entities and an entity stops itself without
# using passivate, the entity will be restarted after this duration or when
# the next message for it is received, whichever occurs first.
entity-restart-backoff = 10 s
# Rebalance check is performed periodically with this interval.
rebalance-interval = 10 s
# Absolute path to the journal plugin configuration entity that is to be used
# for the internal persistence of ClusterSharding. If not defined, the default
# journal plugin is used. Note that this is not related to persistence used by
# the entity actors.
journal-plugin-id = ""
# Absolute path to the snapshot plugin configuration entity that is to be used
# for the internal persistence of ClusterSharding. If not defined, the default
# snapshot plugin is used. Note that this is not related to persistence used
# by the entity actors.
snapshot-plugin-id = ""
# Determines how the coordinator stores its state. Valid values are
# "persistence" and "ddata". The "ddata" mode is experimental, since it
# depends on the experimental module akka-distributed-data-experimental.
state-store-mode = "persistence"
# The shard saves persistent snapshots after this number of persistent
# events. Snapshots are used to reduce recovery times.
snapshot-after = 1000
# Settings for the default shard allocation strategy
least-shard-allocation-strategy {
# Threshold of how large the difference between most and least number of
# allocated shards must be to begin the rebalancing.
rebalance-threshold = 10
# The number of ongoing rebalancing processes is limited to this number.
max-simultaneous-rebalance = 3
}
# Timeout of waiting for the initial distributed state (the initial state will
# be queried again if the timeout is hit). Only used for
# state-store-mode = "ddata".
waiting-for-state-timeout = 5 s
# Timeout of waiting for an update of the distributed state (the update will
# be retried if the timeout is hit). Only used for state-store-mode = "ddata".
updating-state-timeout = 5 s
# Settings for the coordinator singleton. Same layout as
# akka.cluster.singleton. The "role" of the singleton configuration is not
# used. The singleton role will be the same as "akka.cluster.sharding.role".
coordinator-singleton = ${akka.cluster.singleton}
# The id of the dispatcher to use for ClusterSharding actors. If not
# specified, the default dispatcher is used. If specified, you need to define
# the settings of the actual dispatcher. The dispatcher for the entity actors
# is defined by the user-provided Props, i.e. this dispatcher is not used for
# the entity actors.
use-dispatcher = ""
}
}
persistence {
journal {
# Absolute path to the journal plugin configuration entry used by persistent
# actor or view by default. Persistent actor or view can override the
# `journalPluginId` method in order to rely on a different journal plugin.
# plugin = "akka-persistence-sql-async.journal"
plugin = "akka.persistence.journal.leveldb"
leveldb-shared.store {
# DO NOT USE 'native = off' IN PRODUCTION !!!
native = off
dir = "target/shared-journal"
}
# List of journal plugins to start automatically. Use "" for the default
# journal plugin.
auto-start-journals = ["akka.persistence.journal.leveldb"]
}
snapshot-store {
# Absolute path to the snapshot plugin configuration entry used by persistent
# actor or view by default. Persistent actor or view can override the
# `snapshotPluginId` method in order to rely on a different snapshot plugin.
# It is not mandatory to specify a snapshot store plugin. If you don't use
# snapshots you don't have to configure it. Note that Cluster Sharding is
# using snapshots, so if you use Cluster Sharding you need to define a
# snapshot store plugin.
# plugin = "akka-persistence-sql-async.snapshot-store"
plugin = "akka.persistence.snapshot-store.local"
local.dir = "target/snapshots"
# List of snapshot stores to start automatically. Use "" for the default
# snapshot store.
auto-start-snapshot-stores = ["akka.persistence.snapshot-store.local"]
}
# https://github.com/okumin/akka-persistence-sql-async
# journal.plugin = "akka-persistence-sql-async.journal"
# snapshot-store.plugin = "akka-persistence-sql-async.snapshot-store"
}
}
//akka-persistence-sql-async {
// journal.class = "akka.persistence.journal.sqlasync.MySQLAsyncWriteJournal"
// snapshot-store.class = "akka.persistence.snapshot.sqlasync.MySQLSnapshotStore"
//
// user = ${db.default.username}
// password = ${db.default.password}
// url = ${db.default.url}
// max-pool-size = 4
// wait-queue-capacity = 10000
//
// metadata-table-name = "akka_persistence_metadata"
// journal-table-name = "akka_persistence_journal"
// snapshot-table-name = "akka_persistence_snapshots"
//}
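For reference, the single-node seed configuration I had tried earlier (before removing the seed nodes) looked roughly like this. The system name `application` here is a placeholder; it has to match the actual ActorSystem name:

```hocon
akka.remote.netty.tcp {
  hostname = "127.0.0.1"
  port = 2551
}
akka.cluster.seed-nodes = [
  "akka.tcp://[email protected]:2551"
]
```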
Thanks for your time.
On Sunday, June 12, 2016 at 9:57:29 AM UTC-5, Justin du coeur wrote:
>
> Well, the problem *looks* to me like the cluster isn't joining up at all.
> So if you don't do auto join, then you need to set that up
> programmatically. (Don't recall offhand how that works -- the seed-node
> approach is usual.)
>
> On Sat, Jun 11, 2016 at 5:38 PM, kraythe <[email protected]> wrote:
>
>> What if I don't want any auto join at all? I took out the seed nodes and I
>> get the same problem.
>>
>>
>> On Saturday, June 11, 2016 at 2:24:10 PM UTC-5, Rafał Siwiec wrote:
>>>
>>> akka.remote.netty.tcp.port - instead of 0 you should use 2551.
>>
--
>>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>>>>>>> Check the FAQ:
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.