[ https://issues.apache.org/jira/browse/HDFS-384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189278#comment-13189278 ]
Yoram Arnon commented on HDFS-384:
----------------------------------
I thought it did...
If it doesn't, then this issue is still valid, but there are now three
approaches to solving it:
1. enhance fsck to do this
2. manual editing of the image/edits file(s) (the current approach, tedious
when the image/edits are large)
3. the original proposed solution of ignoring an error by special operator
request on startup (see the sketch below)
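
To illustrate option 3, here is a minimal sketch of what "skip a bad entry only on explicit operator request" could look like. This is not the actual FSEditLog/FSImage code; the class name, the length-prefixed record framing, and the ignoreCorruptEntries flag are all hypothetical, purely for illustration:

{code:java}
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

/**
 * Illustrative sketch only -- not HDFS code. Reads length-prefixed records
 * from an edits-like stream and, only when the operator has explicitly
 * opted in, skips entries that fail to load instead of aborting startup.
 */
public class LenientEditsLoader {

    private final boolean ignoreCorruptEntries; // set only by explicit operator request

    public LenientEditsLoader(boolean ignoreCorruptEntries) {
        this.ignoreCorruptEntries = ignoreCorruptEntries;
    }

    /** Returns the number of entries skipped because they were unreadable. */
    public int load(DataInputStream in) throws IOException {
        int skipped = 0;
        while (true) {
            int length;
            try {
                length = in.readInt();      // hypothetical length-prefixed framing
            } catch (EOFException eof) {
                break;                      // clean end of stream
            }
            try {
                if (length < 0) {
                    throw new IOException("bad record length: " + length);
                }
                byte[] record = new byte[length];
                in.readFully(record);
                applyRecord(record);        // replay one edit against the namespace
            } catch (IOException e) {
                if (!ignoreCorruptEntries) {
                    throw e;                // default: fail startup, as the namenode does today
                }
                skipped++;                  // operator opted in: warn and continue
                System.err.println("Skipping corrupt entry: " + e.getMessage());
            }
        }
        return skipped;
    }

    private void applyRecord(byte[] record) throws IOException {
        // placeholder: apply the decoded edit to the in-memory namespace
    }
}
{code}

The key point is that the default path is unchanged: any error still aborts startup. Skipping only happens when the operator passes the (hypothetical) flag, so normal operation still surfaces corruption loudly.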
> optionally ignore a bad entry in namenode state when starting up
> ----------------------------------------------------------------
>
> Key: HDFS-384
> URL: https://issues.apache.org/jira/browse/HDFS-384
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Yoram Arnon
> Assignee: Sameer Paranjpye
>
> if the namenode state (fsimage, edits) contains a bad entry, the namenode
> refuses to start.
> Normally that's a good thing, alerting the administrator that something's
> corrupted.
> An option to ignore those entries would be useful for recovering from such a
> condition. Anyone but a hard-core developer would be helpless in the face of
> a corruption like that, and would prefer to lose a couple of records and keep
> running rather than be down or have to remove the entire state of dfs.