Thanks for the update.
On Mon, Jan 9, 2017 at 2:49 AM, Jordan Messec wrote:
> Upon digging further it appears that in my case the issue is garbage
> collection pauses. I was initially misled because in many cases, the pauses
> occurring immediately following the hiccups in
Upon digging further, it appears that in my case the issue is garbage
collection pauses. I was initially misled because in many cases the pauses
occurring immediately following the hiccups in outgoing heartbeats were
very short. However, immediately subsequent pauses are sometimes
significantly longer.
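A minimal sketch of confirming GC pauses from inside the JVM by polling the
collector MXBeans (the object name and poll interval here are illustrative,
not from the original thread):

    import java.lang.management.ManagementFactory
    import scala.collection.JavaConverters._

    object GcPauseProbe extends App {
      // Poll cumulative GC counts/times; a large jump in collection time
      // between polls lines up with a pause seen in the heartbeat logs.
      while (true) {
        ManagementFactory.getGarbageCollectorMXBeans.asScala.foreach { gc =>
          println(s"${gc.getName}: count=${gc.getCollectionCount} timeMs=${gc.getCollectionTime}")
        }
        Thread.sleep(5000)
      }
    }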
It looks like the dispatcher configuration *is* actually being honored. The
dispatcher reported by logging is the dispatcher used by the *logger*. By
printing the dispatcher from within the actor itself, using
"context.dispatcher", I can see that the configuration is getting applied.
I am unable to get the dispatcher configuration to be applied. After
running the application and noticing that the problem persisted, I added a
tag to see the thread name in my logging lines, and noticed that actors
configured to use non-default dispatchers are still using the default.
Even the
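For reference, a minimal sketch of wiring an actor onto a non-default
dispatcher, reusing the DispatcherProbe sketch above (the dispatcher id and
pool size are illustrative; the HOCON would normally live in
application.conf):

    import akka.actor.{ActorSystem, Props}
    import com.typesafe.config.ConfigFactory

    object DispatcherWiring extends App {
      val config = ConfigFactory.parseString(
        """
        my-blocking-dispatcher {
          type = Dispatcher
          executor = "thread-pool-executor"
          thread-pool-executor.fixed-pool-size = 16
          throughput = 1
        }
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("probe", config)
      // Defining the dispatcher in config is not enough by itself: the actor
      // must select it with withDispatcher (or a matching deployment section).
      system.actorOf(Props[DispatcherProbe].withDispatcher("my-blocking-dispatcher"))
    }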
Thank you, Patrik. As you suggested might be the case, the added
configuration was not adequate to solve the problem. I have followed the
advice on the dispatchers documentation page and set up our actors that
have blocking behavior to use either a pinned dispatcher or, in the case of
a router
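A minimal sketch of the pinned-dispatcher variant (the dispatcher id and the
BlockingWorker actor are illustrative; a pinned dispatcher gives each actor
its own dedicated thread, so a blocked actor cannot starve anyone else):

    import akka.actor.{Actor, ActorSystem, Props}
    import com.typesafe.config.ConfigFactory

    class BlockingWorker extends Actor {
      def receive = { case work: Runnable => work.run() } // blocks its own thread only
    }

    object PinnedExample extends App {
      val config = ConfigFactory.parseString(
        """
        blocking-pinned-dispatcher {
          type = PinnedDispatcher
          executor = "thread-pool-executor"
        }
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("pinned", config)
      system.actorOf(Props[BlockingWorker].withDispatcher("blocking-pinned-dispatcher"))
    }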
That is an excellent analysis, Jordan. The verbose-heartbeat-logging is
useful for exactly this kind of debugging. You need to find out why NODE-1
was "paused". You said that you might be doing some blocking activity in
your actors. I strongly recommend that you eliminate such blocking or
assign a dedicated dispatcher to such actors.
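One common shape for the dedicated-dispatcher option (dispatcher id as in
the sketches above; the Runnable message is illustrative): hand blocking
work to a separate execution context so the default dispatcher, which also
drives cluster heartbeats, is never blocked.

    import akka.actor.Actor
    import scala.concurrent.Future

    class BlockingDelegator extends Actor {
      // Looked up by id; the dispatcher must exist in configuration.
      private val blockingEc =
        context.system.dispatchers.lookup("my-blocking-dispatcher")

      def receive = {
        case work: Runnable =>
          // The actor returns to its mailbox immediately; the blocking call
          // runs on the dedicated pool instead of the default dispatcher.
          Future(work.run())(blockingEc)
      }
    }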
Hi Jordan,
It looks very similar to the issue we are facing, with the difference that
we are not able to recover from the UNREACHABLE mark, probably because the
cluster specs are different: in our scenario we have 3 cluster singletons
and we use auto-downing.
Cheers,
Francesco
On 4 January 2017
Here is an update:
I moved to Akka 2.4.16 and still encountered the problem.
Therefore, I turned on "akka.cluster.debug.verbose-heartbeat-logging = on".
This allowed me to notice that when nodes started entering UNREACHABLE
status from each other, *outgoing* heartbeat messages (the
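A sketch of turning that on programmatically (the two lines of HOCON would
normally go in application.conf; the system name is illustrative, and the
loglevel setting is included on the assumption that the heartbeat lines are
emitted at debug level):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object HeartbeatDebug extends App {
      val config = ConfigFactory.parseString(
        """
        akka.loglevel = DEBUG
        akka.cluster.debug.verbose-heartbeat-logging = on
        """).withFallback(ConfigFactory.load())

      // Heartbeat send/receive lines now appear in the log, which makes it
      // possible to tell whether the sender or the receiver paused.
      val system = ActorSystem("ClusterSystem", config)
    }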
Thank you for your response and time. I have updated to version 2.4.16 and
have Akka debug logging enabled. I will keep an eye on this and update as
appropriate.
On Saturday, December 17, 2016 at 3:28:22 AM UTC-8, √ wrote:
>
> Hi!
>
> Update to most recent version and report back.
Hi!
Update to most recent version and report back.
--
Cheers,
√
On Dec 17, 2016 08:20, "Jordan Messec" wrote:
> Hello, I am struggling with a problem I have spent days trying to resolve.
> I was hoping someone here may have some input that could help me look in
> the right
Hello, I am struggling with a problem I have spent days trying to resolve.
I was hoping someone here may have some input that could help me look in
the right direction.
I am running a small cluster with 3 nodes. Two nodes reside on one machine,
while the third resides on a separate machine.
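For a topology like that, the per-node configuration typically looks
something like this sketch (system name, addresses, and ports are all
illustrative; the two co-located nodes just need distinct ports):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object ClusterNode extends App {
      // Illustrative addresses: nodes 1 and 2 share 10.0.0.1, node 3 is remote.
      val config = ConfigFactory.parseString(
        """
        akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
        akka.remote.netty.tcp.hostname = "10.0.0.1"
        akka.remote.netty.tcp.port = 2551  // second node on this box: 2552
        akka.cluster.seed-nodes = [
          "akka.tcp://ClusterSystem@10.0.0.1:2551",
          "akka.tcp://ClusterSystem@10.0.0.1:2552",
          "akka.tcp://ClusterSystem@10.0.0.2:2551"
        ]
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("ClusterSystem", config)
    }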