[
https://issues.apache.org/jira/browse/HDFS-1800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13030811#comment-13030811
]
Eli Collins commented on HDFS-1800:
-----------------------------------
bq. Not sure which question you're referring to?
I was referring to your question in the comment above the call to
sd.clearDirectory ("does this still make sense with 1073?"). I'm not sure we
want to remove the files in a failed storage directory. This doesn't pertain to
this patch; I just saw the comment, so we can address it on the jira that adds
the comment.
bq. I have HDFS-1815 filed to track this. The fact that no unit tests are
failing due to this bug means we need to add some real test cases that do
upgrade (perhaps check in a storage dir from a few recent versions).
Great. Awesome to see this code get so much new test coverage.
Agreed, the other comments are more relevant to HDFS-1893; we can discuss them there.
Could you post an updated patch?
> Extend image checksumming to function with multiple fsimage files
> -----------------------------------------------------------------
>
> Key: HDFS-1800
> URL: https://issues.apache.org/jira/browse/HDFS-1800
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: name-node
> Affects Versions: Edit log branch (HDFS-1073)
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Fix For: Edit log branch (HDFS-1073)
>
> Attachments: hdfs-1800-prelim.txt, hdfs-1800.txt, hdfs-1800.txt
>
>
> HDFS-903 added the MD5 checksum of the fsimage to the VERSION file in each
> image directory. This allows the NameNode to verify that the fsimage didn't get
> corrupted or accidentally replaced on disk.
> With HDFS-1073, there may be multiple fsimage_N files in a storage directory
> corresponding to different checkpoints. So having a single MD5 in the VERSION
> file won't suffice. Instead we need to store an MD5 per image file.
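> A rough, illustrative sketch of the per-image-file idea (this is not the code from
> the attached patch; the class and method names below are made up) could look like:
>
>   import java.io.File;
>   import java.io.FileInputStream;
>   import java.io.FilenameFilter;
>   import java.io.IOException;
>   import java.io.InputStream;
>   import java.security.DigestInputStream;
>   import java.security.MessageDigest;
>   import java.security.NoSuchAlgorithmException;
>   import java.util.HashMap;
>   import java.util.Map;
>
>   public class PerImageChecksums {
>
>     /** Compute the MD5 of a single image file as a lowercase hex string. */
>     static String md5Of(File imageFile) throws IOException, NoSuchAlgorithmException {
>       MessageDigest md = MessageDigest.getInstance("MD5");
>       InputStream in = new DigestInputStream(new FileInputStream(imageFile), md);
>       try {
>         byte[] buf = new byte[64 * 1024];
>         while (in.read(buf) != -1) {
>           // reading the stream drives the digest
>         }
>       } finally {
>         in.close();
>       }
>       StringBuilder hex = new StringBuilder();
>       for (byte b : md.digest()) {
>         hex.append(String.format("%02x", b));
>       }
>       return hex.toString();
>     }
>
>     /** Map each fsimage_N file in a storage directory to its own MD5. */
>     static Map<String, String> checksumImages(File storageDir)
>         throws IOException, NoSuchAlgorithmException {
>       Map<String, String> digests = new HashMap<String, String>();
>       File[] images = storageDir.listFiles(new FilenameFilter() {
>         public boolean accept(File dir, String name) {
>           return name.startsWith("fsimage_");
>         }
>       });
>       if (images != null) {
>         for (File img : images) {
>           digests.put(img.getName(), md5Of(img));
>         }
>       }
>       return digests;
>     }
>   }
>
> The per-file digests could then be recorded keyed by image file name (or alongside
> each image file) rather than as a single MD5 entry in VERSION.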
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira