Hi,
Following is my app.conf. I am losing almost 50% of the messages. Is it because of the ClusterRouterPool?
I am running it on a 2-node cluster (each node is an m4.xlarge machine).
There are two actors (Producer and Consumer; more details further below). Here is the configuration:
akka {
  loglevel = "INFO"
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  log-dead-letters-during-shutdown = off
  log-dead-letters = 0
  stdout-loglevel = "INFO"
  log-config-on-start = off
  daemonic = off
  jvm-exit-on-fatal-error = on

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    unstarted-push-timeout = 100s

    default-mailbox {
      mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
      mailbox-push-timeout-time = 20s
    }

    default-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 16
        parallelism-factor = 16
        parallelism-max = 64
      }
      throughput = 1
    }

    producer-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 16
        parallelism-factor = 4.0
        parallelism-max = 64
      }
      throughput = 1
    }

    consumer-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 4
        parallelism-factor = 4.0
        parallelism-max = 4
      }
      throughput = 1
    }
  }

  remote {
    log-remote-lifecycle-events = on
    netty.tcp {
      hostname = "0.0.0.0"
      port = 2551
    }
  }

  extensions = [
    "akka.contrib.pattern.DistributedPubSubExtension"
  ]

  cluster {
    seed-nodes = [
      "akka.tcp://[email protected]:2551",
      "akka.tcp://[email protected]:2552"
    ]
    auto-down-unreachable-after = 30s
  }
}
akka.contrib.cluster.pub-sub {
  name = myappPubSubMediator
  role = ""
  routing-logic = round-robin
  gossip-interval = 1s
  removed-time-to-live = 120s
}

spray.httpx {
  compact-json-printing = true
}

spray.can {
  server {
    pipelining-limit = 20
    reaping-cycle = infinite
    request-chunk-aggregation-limit = 0
    stats-support = off
    remote-address-header = on
  }
  client {
    max-connections = 64
    max-retries = 5
    parsing = ${spray.can.parsing}
  }
  parsing {
    max-uri-length = 4k
    max-response-reason-length = 64
    max-header-name-length = 64
    max-header-value-length = 8k
    max-header-count = 64
    max-content-length = 512m
    max-chunk-ext-length = 512
    max-chunk-size = 4m
    illegal-header-warnings = off
  }
}
This is how I am creating the cluster router pool for both actors. The snippet is the body of a small helper method, so instances, isLocal, dispatcher, name and the actor type T are parameters:

  system.actorOf(
    ClusterRouterPool(
      AdaptiveLoadBalancingPool(SystemLoadAverageMetricsSelector),
      ClusterRouterPoolSettings(
        totalInstances      = instances * 64,
        maxInstancesPerNode = instances,
        allowLocalRoutees   = isLocal,
        useRole             = None)
    ).props(Props[T]).withDispatcher(dispatcher),
    name)
instances above is 16.
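For completeness, this is roughly how I invoke it for the two pools. createRouterPool is just a placeholder name for the helper method quoted above, Producer and Consumer stand for the two actor classes, and the dispatcher paths follow from the akka.actor section of the config:

  // Hypothetical name for the helper shown above, i.e. something like:
  //   def createRouterPool[T <: Actor: ClassTag](instances: Int, isLocal: Boolean,
  //                                              dispatcher: String, name: String): ActorRef
  val producers = createRouterPool[Producer](instances = 16, isLocal = true,
    dispatcher = "akka.actor.producer-dispatcher", name = "producer")

  val consumers = createRouterPool[Consumer](instances = 16, isLocal = true,
    dispatcher = "akka.actor.consumer-dispatcher", name = "consumer")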
Following are the versions:

  "io.spray"          %% "spray-can"     % "1.3.2",
  "io.spray"          %% "spray-routing" % "1.3.2",
  "io.spray"          %% "spray-testkit" % "1.3.2" % "test",
  "io.spray"          %% "spray-client"  % "1.3.2",
  "io.spray"          %% "spray-json"    % "1.2.6",
  "com.typesafe.akka" %% "akka-actor"    % "2.3.6",
  "com.typesafe.akka" %% "akka-cluster"  % "2.3.6",
  "com.typesafe.akka" %% "akka-contrib"  % "2.3.6",
  "com.typesafe.akka" %% "akka-slf4j"    % "2.3.6",
  "com.typesafe.akka" %% "akka-testkit"  % "2.3.6" % "test"
- Two actors: Producer and Consumer.
-- Sending is done in the following way:

   val clusterClient = system.actorOf(
     ClusterClient.props(actorSet, Duration.create(10, TimeUnit.SECONDS)))

   clusterClient ! ClusterClient.Send(String.format("/user/%s", target),
     message, localAffinity = localAffinity)

-- Throughput is about 500 messages per second.
-- Messages are being lost at random; it is tough to find any pattern.
-- I can't find anything in the logs.
-- I even tried a bounded mailbox together with "automatic-back-pressure-handling = on" to make sending blocking, but I am still losing messages (roughly what I tried is sketched after this list).
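This is approximately the bounded-mailbox variant I tried; the capacity and push-timeout values here are only illustrative, not necessarily the exact numbers I used:

  # Sketch of the bounded-mailbox attempt; capacity/timeout values are examples.
  akka.actor.default-mailbox {
    mailbox-type = "akka.dispatch.BoundedMailbox"
    mailbox-capacity = 1000
    mailbox-push-timeout-time = 10s
  }

  spray.can.server {
    automatic-back-pressure-handling = on
  }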