[ https://issues.apache.org/jira/browse/HDFS-14043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675964#comment-16675964 ]
Hudson commented on HDFS-14043:
-------------------------------
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15363 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/15363/])
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.
(inigoiri: rev f3296501e09fa7f1e81548dfcefa56f20fe337ca)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
> Tolerate corrupted seen_txid file
> ---------------------------------
>
> Key: HDFS-14043
> URL: https://issues.apache.org/jira/browse/HDFS-14043
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, namenode
> Affects Versions: 2.9.2, 3.1.2, 2.9.3
> Reporter: Lukas Majercak
> Assignee: Lukas Majercak
> Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3
>
> Attachments: HDFS-14043.001.patch, HDFS-14043.002.patch,
> HDFS-14043.003.patch
>
>
> We already tolerate IOExceptions when reading the seen_txid file from the
> namenode's storage directories: we take the maximum txid across all the
> *readable* directories. We should extend this tolerance to the case where the
> file is corrupted. Currently, PersistentLongFile.readFile throws a
> NumberFormatException in that case and the whole NameNode crashes.
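
For context, here is a minimal sketch of how such tolerance could look in a
PersistentLongFile.readFile-style reader. It assumes the simplest approach of
converting the parse failure into an IOException so that the existing
IOException handling (taking the maximum seen_txid across readable storage
directories) also covers corruption; the actual HDFS-14043 patch may differ in
detail, and the class name below is only illustrative.

{code:java}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class PersistentLongFileSketch {

  /**
   * Read a long from the given file, returning defaultVal if the file does
   * not exist. A corrupted (empty or non-numeric) payload is surfaced as an
   * IOException rather than a NumberFormatException, so callers that already
   * skip unreadable directories also skip corrupted ones.
   */
  public static long readFile(File file, long defaultVal) throws IOException {
    long val = defaultVal;
    if (file.exists()) {
      try (BufferedReader br = new BufferedReader(new InputStreamReader(
          new FileInputStream(file), StandardCharsets.UTF_8))) {
        String line = br.readLine();
        if (line == null) {
          // Empty file: treat it the same way as an unreadable file.
          throw new IOException("Empty file: " + file);
        }
        try {
          val = Long.parseLong(line.trim());
        } catch (NumberFormatException nfe) {
          // Corrupted content: report it as an IOException instead of
          // letting the NumberFormatException propagate and crash the caller.
          throw new IOException("Corrupted value in file " + file, nfe);
        }
      }
    }
    return val;
  }
}
{code}

The point of mapping corruption onto IOException is that the caller-side
behavior described above (taking the maximum txid of the readable directories)
does not need to change.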
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)