[ https://issues.apache.org/jira/browse/HADOOP-4995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12662248#action_12662248 ]

dhruba borthakur commented on HADOOP-4995:
------------------------------------------

I agree with Konstantin. The best tool to verify that the image is good is to 
run the namenode. The only caveat is that running the namenode actually merges 
the fsimage and edits log. 

It would help if there were a way to start the namenode with a "-checkimage" 
or similar parameter; in that case, the namenode would just load both the 
fsimage and the edits log and then exit.
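
In the meantime, the existing "start a namenode on the image" check can be 
automated crudely by launching a throwaway namenode against a scratch copy of 
the backup (so the startup-time fsimage/edits merge touches the copy, not the 
backup) and treating "still running after a grace period" as evidence that the 
image loaded. The sketch below is only an illustration of that idea: the 
HADOOP_CONF_DIR path, the grace period, and the assumption that the scratch 
config's dfs.name.dir already points at a copy of the backup are all made up 
for the example, not part of this issue.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Crude automation sketch: start a throwaway namenode on a COPY of a
 * backed-up name directory and treat "still alive after a grace period"
 * as evidence that the fsimage and edits loaded.  Paths and the grace
 * period are illustrative assumptions only.
 */
public class FsImageBackupCheck {

  public static void main(String[] args) throws Exception {
    // Scratch conf dir whose dfs.name.dir points at a copy of the backup,
    // so the merge on startup never touches the backup itself.
    String confDir = "/tmp/nn-check/conf";   // hypothetical path
    long gracePeriodMs = 120 * 1000L;        // arbitrary grace period

    // Assumes the working directory is the Hadoop installation root.
    ProcessBuilder pb = new ProcessBuilder("bin/hadoop", "namenode");
    pb.environment().put("HADOOP_CONF_DIR", confDir);
    pb.redirectErrorStream(true);
    Process nn = pb.start();

    // Drain the namenode's output so the child cannot block on a full pipe.
    final BufferedReader out =
        new BufferedReader(new InputStreamReader(nn.getInputStream()));
    Thread drainer = new Thread() {
      public void run() {
        try {
          String line;
          while ((line = out.readLine()) != null) {
            System.out.println("[namenode] " + line);
          }
        } catch (IOException ignored) {
        }
      }
    };
    drainer.setDaemon(true);
    drainer.start();

    Thread.sleep(gracePeriodMs);

    try {
      // exitValue() throws if the process is still alive.
      int code = nn.exitValue();
      System.out.println("fsimage check FAILED: namenode exited with " + code);
      System.exit(1);
    } catch (IllegalThreadStateException stillRunning) {
      System.out.println("fsimage check PASSED: namenode stayed up; stopping it");
      nn.destroy();
    }
  }
}

A "-checkimage" mode would make this kind of workaround unnecessary: the 
namenode could load the image and edits, report success or failure through its 
exit code, and stop there.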

> Offline Namenode fsImage verification
> -------------------------------------
>
>                 Key: HADOOP-4995
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4995
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Brian Bockelman
>
> Currently, there is no way to verify that a copy of the fsImage is not 
> corrupt.  I propose that we should have an offline tool that loads the 
> fsImage into memory to see if it is usable.  This will allow us to automate 
> backup testing to some extent.
> One can start a namenode process on the fsImage to see if it can be loaded, 
> but this is not easy to automate.
> For production use of HDFS, it is highly desirable both to have checkpoints 
> and to have some idea that those checkpoints are valid!  No one wants to see 
> the day when they restore from a backup only to find that the fsImage in the 
> backup wasn't usable.
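
As a first cut at the offline tool asked for above, a checker could at least 
confirm that the image header parses before attempting anything deeper. The 
sketch below assumes the on-disk layout of this era, where the file is 
understood to begin with a 4-byte layout version (a negative int) followed by 
the 4-byte namespaceID; check FSImage.java in your release before trusting it, 
and note that it is far shallower than actually loading the namespace into 
memory.

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

/**
 * Very shallow offline sanity check of an fsimage file.  It only reads the
 * header fields assumed above; it is NOT a substitute for loading the
 * namespace the way the namenode does.
 */
public class FsImageHeaderCheck {

  public static void main(String[] args) throws IOException {
    if (args.length != 1) {
      System.err.println("usage: FsImageHeaderCheck <path-to-fsimage>");
      System.exit(2);
    }
    DataInputStream in = new DataInputStream(
        new BufferedInputStream(new FileInputStream(args[0])));
    try {
      int layoutVersion = in.readInt();  // assumed first field: layout version
      int namespaceId = in.readInt();    // assumed second field: namespaceID
      if (layoutVersion >= 0) {
        System.out.println("suspicious layout version " + layoutVersion
            + ": probably not an fsimage, or corrupt");
        System.exit(1);
      }
      System.out.println("layout version " + layoutVersion + ", namespaceID "
          + namespaceId + ": header looks plausible");
    } finally {
      in.close();
    }
  }
}

Anything beyond the header means walking the serialized namespace, which is 
exactly what loading the image in a namenode (or a "-checkimage" mode) already 
does, so a real tool would likely reuse that code path rather than reparse the 
format by hand.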
