[
https://issues.apache.org/jira/browse/HDFS-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896871#comment-16896871
]
Stephen O'Donnell commented on HDFS-14557:
------------------------------------------
Of the test failures, the concerning one is
hadoop.hdfs.qjournal.server.TestJournalNodeSync, but I don't think this error
is related to the change. The test passes locally and fails here with the
stack trace below, which indicates something is wrong with the storage it was using:
{code}
2019-07-31 01:59:54,943 [Listener at localhost/10142] ERROR hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(2070)) - Test resulted in an unexpected exit 1: java.io.IOException: All the storage failed while writing properties to VERSION file
    at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:480)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:414)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:431)
    at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:485)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:427)
Caused by: java.io.IOException: All the storage failed while writing properties to VERSION file
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeAll(NNStorage.java:1163)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.updateStorageVersion(FSImage.java:1101)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:909)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:287)
    at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:182)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:913)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:333)
    at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:464)
    ... 4 more
{code}
> JournalNode error: Can't scan a pre-transactional edit log
> ----------------------------------------------------------
>
> Key: HDFS-14557
> URL: https://issues.apache.org/jira/browse/HDFS-14557
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: ha
> Affects Versions: 2.6.0
> Reporter: Wei-Chiu Chuang
> Assignee: Stephen O'Donnell
> Priority: Major
> Attachments: HDFS-14557.001.patch
>
>
> We saw the following error in JournalNodes a few times before.
> {noformat}
> 2016-09-22 12:44:24,505 WARN org.apache.hadoop.hdfs.server.namenode.FSImage: Caught exception after scanning through 0 ops from /data/1/dfs/current/edits_inprogress_0000000000000661942 while determining its valid length. Position was 761856
> java.io.IOException: Can't scan a pre-transactional edit log.
>     at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LegacyReader.scanOp(FSEditLogOp.java:4592)
>     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanNextOp(EditLogFileInputStream.java:245)
>     at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.scanEditLog(EditLogFileInputStream.java:355)
>     at org.apache.hadoop.hdfs.server.namenode.FileJournalManager$EditLogFile.scanLog(FileJournalManager.java:551)
>     at org.apache.hadoop.hdfs.qjournal.server.Journal.scanStorageForLatestEdits(Journal.java:193)
>     at org.apache.hadoop.hdfs.qjournal.server.Journal.<init>(Journal.java:153)
>     at org.apache.hadoop.hdfs.qjournal.server.JournalNode.getOrCreateJournal(JournalNode.java:90)
> {noformat}
> The edit file was corrupt, and one possible culprit of this error is a full
> disk. The JournalNode cannot recover on its own and must be resynced manually
> from the other JournalNodes.
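As a side note on diagnosing this state, here is a minimal sketch, assuming a transactional edit log segment begins with a 4-byte layout version (a negative integer for transactional logs): it peeks at the header of a suspect edits_inprogress_* file. A segment zero-filled by a full disk reads back 0 here, which the legacy reader path then reports as "Can't scan a pre-transactional edit log". The class name and default path below are illustrative only, not part of the patch.
{code}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Hedged diagnostic sketch: read the first int of an edit log segment and
// report whether it looks like a valid transactional layout version.
public class EditLogHeaderCheck {
  public static void main(String[] args) throws IOException {
    // Example path; point this at the suspect edits_inprogress_* file.
    String path = args.length > 0 ? args[0]
        : "/data/1/dfs/current/edits_inprogress_0000000000000661942";
    try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
      int layoutVersion = in.readInt();
      if (layoutVersion >= 0) {
        // Transactional edit logs store a negative layout version; 0 strongly
        // suggests the header was never written or was zero-filled.
        System.out.println(path + ": layout version " + layoutVersion
            + " is not a valid transactional header; the segment is likely"
            + " zero-filled or truncated.");
      } else {
        System.out.println(path + ": layout version " + layoutVersion
            + " looks like a normal transactional segment header.");
      }
    }
  }
}
{code}
If the check reports a non-negative layout version, the segment is a candidate for the corruption described above, and the JournalNode would need to be resynced from a healthy peer as noted in the description.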