Roland,

I am wondering how that can happen. When the new node comes in, I was under 
the impression that cluster sharding would rebalance: the actors would be 
shut down on the old node and started on the new node. Do I have to do 
something specific other than a cluster.join() to make this happen? Your 
advice is much appreciated.
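
To be concrete, all I do on the new node is roughly this (a sketch; the 
system name, seed address, and sharding setup are placeholders for my real 
ones):

```scala
import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster

val system = ActorSystem("app")

// Join the cluster via an existing seed node (host/port are placeholders).
Cluster(system).join(Address("akka.tcp", "app", "10.77.21.34", 2551))
```

After that I expect the shard regions already started on this node to 
receive shards from the rebalance without any further calls on my part.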

-- Robert

On Friday, November 18, 2016 at 10:58:57 AM UTC-6, rkuhn wrote:
>
> Hi Robert,
>
> I cannot comment on whether mysql-async has issues, but assuming that it 
> does not, this would point towards having two actors with the quoted 
> persistenceId active at the same time.
>
> Regards,
>
> Roland
>
> On 18 Nov 2016, at 17:49, kraythe <kra...@gmail.com> wrote:
>
> Greetings, I am currently using the community MySQL plugin 
> ("com.github.mauricio" 
> %% "mysql-async" % "0.2.16") for Akka persistence. However, when I 
> perform a rolling restart I get exceptions like the following: 
>
> Nov 18 10:11:20 vtest-app01 application-9001.log:  2016-11-18 16:11:19 
>> +0000 - [ERROR] - [PersistentShardCoordinator] 
>> akka.tcp://app@10.77.21.34:2551/system/sharding/UserActivityActorCoordinator/singleton/coordinator
>>  
>> -  Failed to persist event type 
>> [akka.cluster.sharding.ShardCoordinator$Internal$ShardRegionRegistered] 
>> with sequence number [837] for persistenceId 
>> [/sharding/UserActivityActorCoordinator].
>> Nov 18 10:11:20 vtest-app01 application-9001.log: 
>>  com.github.mauricio.async.db.mysql.exceptions.MySQLException: Error 1062 - 
>> #23000 - Duplicate entry '3-837' for key 'PRIMARY'
>> Nov 18 10:11:20 vtest-app01 application-9001.log:   at 
>> com.github.mauricio.async.db.mysql.MySQLConnection.onError(MySQLConnection.scala:124)
>> Nov 18 10:11:20 vtest-app01 application-9001.log:   at 
>> com.github.mauricio.async.db.mysql.codec.MySQLConnectionHandler.channelRead0(MySQLConnectionHandler.scala:105)
>> Nov 18 10:11:20 vtest-app01 application-9001.log:   at 
>> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>
>
> Now I grant that this may be because of the persistence plugin having 
> issues, but I am wondering if there is something else I am doing wrong in 
> the shutdown. The steps I use to shut down the rolling node are: 
>
> 1. Send each shard region a graceful shutdown message and wait for them 
> all to terminate.
> 2. Send all the top-level actors a PoisonPill.
> 3. Issue a cluster leave and wait for the member to be removed.
> 4. After a 10-second delay, terminate the actor system.
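>
> In code the sequence looks roughly like this (a sketch; `region`, 
> `topLevelActors`, and `system` stand in for my real references):
>
> ```scala
> import akka.actor.PoisonPill
> import akka.cluster.Cluster
> import akka.cluster.sharding.ShardRegion
> import scala.concurrent.duration._
>
> // 1. Ask each shard region to hand off its shards and stop.
> //    (I deathwatch the regions and wait for Terminated before moving on.)
> region ! ShardRegion.GracefulShutdown
>
> // 2. Stop the top-level actors.
> topLevelActors.foreach(_ ! PoisonPill)
>
> // 3. Leave the cluster and wait until this member is removed.
> val cluster = Cluster(system)
> cluster.leave(cluster.selfAddress)
> cluster.registerOnMemberRemoved {
>   // 4. After a 10-second delay, terminate the actor system.
>   system.scheduler.scheduleOnce(10.seconds)(system.terminate())(system.dispatcher)
> }
> ```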
>
> Is there another persistence plugin that would potentially suit my needs 
> better? I would prefer to store the journal and snapshots in our main 
> SQL-based RDBMS if possible, but I am open to options. 
>
> Thanks
>
> -- 
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ: 
> http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> --- 
> You received this message because you are subscribed to the Google Groups 
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to akka-user+...@googlegroups.com.
> To post to this group, send email to akka...@googlegroups.com.
> Visit this group at https://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.
>
>
>
