[ https://issues.apache.org/jira/browse/ASTERIXDB-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ian Maxon reassigned ASTERIXDB-2311:
------------------------------------

    Assignee: Ian Maxon

> After restart, "Failed to redo" exception was generated.
> --------------------------------------------------------
>
>                 Key: ASTERIXDB-2311
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2311
>             Project: Apache AsterixDB
>          Issue Type: Bug
>            Reporter: Taewoo Kim
>            Assignee: Ian Maxon
>            Priority: Major
>         Attachments: nc-4.log
>
>
> During Cloudberry's real-time tweet ingestion, I found an issue in the application that feeds tweets to AsterixDB, so I stopped that process. I also stopped the feed itself, started the Cloudberry instance, and saw the following error message.
> {code:java}
> 21:58:33.129 [Executor-6:4] ERROR org.apache.asterix.app.replication.message.RegistrationTasksResponseMessage - Failed during startup task
> java.lang.IllegalStateException: Failed to redo
> at org.apache.asterix.app.nc.RecoveryManager.redo(RecoveryManager.java:784) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.app.nc.RecoveryManager.startRecoveryRedoPhase(RecoveryManager.java:368) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.app.nc.RecoveryManager.replayPartitionsLogs(RecoveryManager.java:178) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.app.nc.RecoveryManager.startLocalRecovery(RecoveryManager.java:170) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.app.nc.task.LocalRecoveryTask.perform(LocalRecoveryTask.java:45) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.app.replication.message.RegistrationTasksResponseMessage.handle(RegistrationTasksResponseMessage.java:62) [asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.messaging.NCMessageBroker.lambda$receivedMessage$3(NCMessageBroker.java:100) [asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at org.apache.asterix.messaging.NCMessageBroker$$Lambda$70/727538728.run(Unknown Source) [asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0]
> at java.lang.Thread.run(Thread.java:744) [?:1.8.0]
> Caused by: org.apache.hyracks.api.exceptions.HyracksDataException: HYR0033: Inserting duplicate keys into the primary storage
> at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:55) ~[hyracks-api-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.insert(LSMBTree.java:213) ~[hyracks-storage-am-lsm-btree-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.modify(LSMBTree.java:164) ~[hyracks-storage-am-lsm-btree-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.modify(LSMHarness.java:482) ~[hyracks-storage-am-lsm-common-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.forceModify(LSMHarness.java:422) ~[hyracks-storage-am-lsm-common-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.forceInsert(LSMTreeIndexAccessor.java:176) ~[hyracks-storage-am-lsm-common-0.3.4-SNAPSHOT.jar:0.3.4-SNAPSHOT]
> at org.apache.asterix.app.nc.RecoveryManager.redo(RecoveryManager.java:774) ~[asterix-app-0.9.4-SNAPSHOT.jar:0.9.4-SNAPSHOT]
> ... 12 more{code}
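> For context, the redo phase of crash recovery is expected to be idempotent: a logged insert should be replayed only if its LSN is newer than what the on-disk component already reflects; otherwise the replayed insert collides with a key that was already persisted before the crash, which is exactly the HYR0033 duplicate-key failure above. Below is a minimal sketch of that guard under those assumptions; the names (RedoGuard, redoInsert, componentMaxLsn) are illustrative and this is not the actual RecoveryManager code:
> {code:java}
> // Hypothetical sketch of the LSN guard that makes redo idempotent.
> // Names here (IndexAccessor, RedoGuard, componentMaxLsn) are illustrative,
> // not the actual org.apache.asterix.app.nc.RecoveryManager API.
> interface IndexAccessor {
>     void forceInsert(byte[] tuple) throws Exception;
> }
>
> final class RedoGuard {
>     /**
>      * Replay an insert only when the log record is newer than everything
>      * already persisted in the index. If the persisted LSN is stale, or
>      * this check is skipped, redo re-inserts a key that is already on
>      * disk and the primary index rejects it as a duplicate.
>      */
>     static void redoInsert(long logRecordLsn, long componentMaxLsn,
>                            IndexAccessor accessor, byte[] tuple) throws Exception {
>         if (logRecordLsn <= componentMaxLsn) {
>             return; // effect already on disk; redo must be a no-op
>         }
>         accessor.forceInsert(tuple);
>     }
> }
> {code}
> Read through that sketch, the trace above says the replayed insert found its key already present in the primary LSM B-tree, i.e., the logged work had already been persisted before the restart.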



