I don't know about you, but I dislike mysteries. Could you start the actor
system with the config

  akka.log-config-on-start = on

and see what it logs for the replay-filter mode?
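For reference, here is a minimal way to force that flag on at startup if
you'd rather not touch application.conf (a sketch; the object and system
names are placeholders):

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  object StartWithConfigLogging extends App {
    // Overlay the debug flag on top of whatever application.conf provides;
    // the full resolved configuration is then logged when the system starts.
    val config = ConfigFactory
      .parseString("akka.log-config-on-start = on")
      .withFallback(ConfigFactory.load())
    val system = ActorSystem("example", config)
  }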
/Patrik
On Tue, Aug 9, 2016 at 7:37 PM, Eric Swenson wrote:
> Just checked again. No override of that config
Just checked again. No override of that config parameter. Yet these were,
indeed, logged as errors, not warnings. -- Eric
On Monday, August 8, 2016 at 11:28:33 PM UTC-7, Patrik Nordwall wrote:
> On Mon, Aug 8, 2016 at 10:06 PM, Eric Swenson wrote:
>> We have not
On Mon, Aug 8, 2016 at 10:06 PM, Eric Swenson wrote:
> We have not changed the default mode=repair-by-discard-old config value
> for the replay-filter. Should we? — Eric
>
Then I can't understand how you can get the ERROR shown in the first
message in this thread.
We have not changed the default mode=repair-by-discard-old config value for
the replay-filter. Should we?
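For context, that setting sits under the journal fallback section of the
persistence configuration; as far as I can tell from reference.conf it is
read from here:

  akka.persistence.journal-plugin-fallback.replay-filter {
    # what to do when the journal detects events from multiple writers
    # (i.e. multiple incarnations of a persistent actor) for one
    # persistenceId during replay
    mode = repair-by-discard-old   # alternatives: fail, warn, off
  }

— Eric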
> On Aug 7, 2016, at 9:47 AM, Patrik Nordwall wrote:
>
> It's typically caused by multiple persistent actors with the same
> persistenceId running at
Great, thanks for the update.
On Sun, Aug 7, 2016 at 8:08 PM, Eric Swenson wrote:
> Thanks, Patrik. That is precisely what happened. I had been using
> auto-down-unreachable-after, and while this appeared to work fine in the
> normal "rolling update" mode of deploying nodes to our
Thanks, Patrik. That is precisely what happened. I had been using
auto-down-unreachable-after, and while this appeared to work fine in the
normal "rolling update" mode of deploying nodes to our cluster, it had
split-brain effects when there were transient cases of unreachability. I
have
It's typically caused by multiple persistent actors with the same
persistenceId running at the same time, e.g. because there was a network
split and your cluster was split into two separate clusters, thereby
starting multiple persistent actors. That is why we so strongly recommend
against using automatic downing in production.
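If you want to double-check what a node is running with, this is the
relevant cluster setting (off is the default; any duration value enables
automatic downing):

  akka.cluster.auto-down-unreachable-after = off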
I'm getting this error consistently now, and don't know why it is
happening or what to do about it. I form the persistenceId this way:

  override def persistenceId: String =
    self.path.parent.parent.name + "-" + self.path.name

So I don't see how I could have two persisters with the same
persistenceId.
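For completeness, the surrounding actor is roughly this shape (simplified,
names hypothetical):

  import akka.persistence.PersistentActor

  class Order extends PersistentActor {
    // Derived from the actor path, so any node that creates an actor at
    // the same path computes the identical persistenceId.
    override def persistenceId: String =
      self.path.parent.parent.name + "-" + self.path.name

    override def receiveCommand: Receive = {
      case evt => persist(evt) { _ => () }
    }
    override def receiveRecover: Receive = {
      case _ => // rebuild state from replayed events here
    }
  }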
Sure, but I'm a contributor newbie. It looks like you have to approve my
ticket (#20394) etc. before I should do anything more.
On 26 April 2016 at 09:09, Patrik Nordwall wrote:
> sounds good, would you like to open a pull request?
>
> On Tue, Apr 26, 2016 at 9:57 AM,
sounds good, would you like to open a pull request?
On Tue, Apr 26, 2016 at 9:57 AM, Tim Pigden wrote:
> Hi Patrik
> If that's the intent of the warning, would it not be a good idea to make
> it a little more explicit? For example add "check persistence ids" to the
> code
Hi Patrik
If that's the intent of the warning, would it not be a good idea to make it
a little more explicit? For example, add "check persistence ids" to the
warning message, or drop a useful comment line into the source code where
the warning is emitted.
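Something along these lines, purely as a sketch of wording (not the actual
Akka source; seqNr and writerUuid are illustrative variable names):

  // hypothetical, more explicit wording at the emit site:
  log.warning(
    "Invalid replayed event [sequenceNr={}] from writer [{}] - this " +
      "typically means two persistent actors were running with the same " +
      "persistenceId; check how your persistence ids are generated",
    seqNr, writerUuid)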
Tim
On 26 April 2016 at 06:39, Patrik Nordwall wrote:
Yes, that is the purpose of that warning. That is something you must avoid
in production systems.
/Patrik
On Mon, Apr 25, 2016 at 6:40 PM, Tim Pigden wrote:
> problem due to error in naming of persistenceIds leading to 2 persistors
> with same id
>
> On Monday, April 25, 2016
The problem was due to an error in naming of persistenceIds, leading to two
persistors with the same id.
On Monday, April 25, 2016 at 11:34:21 AM UTC+1, Tim Pigden wrote:
>
> Hi
> I'm getting this message. I'm probably doing something wrong but any idea
> what that might be? I know what messages I'm persisting