[
https://issues.apache.org/jira/browse/HADOOP-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12624055#action_12624055
]
Tsz Wo (Nicholas), SZE commented on HADOOP-3941:
------------------------------------------------
bq. This patch does not implement checksums for HDFS files, right?
You are correct. The patch only throws a "not supported" exception for HDFS.
bq. Do you plan to generate MD5s for HDFS files too? For HDFS, does it make
sense to create a checksum from the blk*.meta files, since the size of the
meta file is much smaller than the size of the data file?
No, computing a plain MD5 over the whole file may not be efficient for large
files; I think we need a distributed file-digest algorithm for HDFS. Yes, one
way is to compute MD5 over the meta files, which would reduce the overhead
dramatically. I will probably implement an MD5-over-CRC32 digest for HDFS.
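To illustrate the idea (just a rough sketch, not the patch itself): CRC32 each
fixed-size chunk, much as the datanode already does for the block meta files,
and run MD5 over those CRC bytes instead of over the data. The chunk size and
class/method names below are made up.
{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.zip.CRC32;

public class Md5OverCrc32 {
  // Hypothetical chunk size; HDFS checksums data in fixed-size chunks.
  private static final int BYTES_PER_CRC = 512;

  /** Fill buf as far as possible; return the number of bytes read (0 at EOF). */
  private static int readChunk(InputStream in, byte[] buf) throws IOException {
    int off = 0;
    while (off < buf.length) {
      int n = in.read(buf, off, buf.length - off);
      if (n < 0) break;
      off += n;
    }
    return off;
  }

  /** MD5 over the CRC32 of every chunk, instead of over the raw data. */
  public static byte[] digest(InputStream in) throws Exception {
    MessageDigest md5 = MessageDigest.getInstance("MD5");
    CRC32 crc = new CRC32();
    byte[] buf = new byte[BYTES_PER_CRC];
    int n;
    while ((n = readChunk(in, buf)) > 0) {
      crc.reset();
      crc.update(buf, 0, n);
      long c = crc.getValue();
      // Feed the four CRC bytes (big-endian) into the MD5 digest.
      md5.update(new byte[] {
          (byte) (c >>> 24), (byte) (c >>> 16), (byte) (c >>> 8), (byte) c });
    }
    return md5.digest();
  }

  public static void main(String[] args) throws Exception {
    try (InputStream in = new FileInputStream(args[0])) {
      StringBuilder hex = new StringBuilder();
      for (byte b : digest(in)) {
        hex.append(String.format("%02x", b));
      }
      System.out.println(hex);
    }
  }
}
{code}
A truly distributed version would let each datanode digest the CRCs of its own
blocks and have the client combine the per-block digests, but that is beyond
this sketch.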
> Extend FileSystem API to return file-checksums/file-digests
> -----------------------------------------------------------
>
> Key: HADOOP-3941
> URL: https://issues.apache.org/jira/browse/HADOOP-3941
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Reporter: Tsz Wo (Nicholas), SZE
> Attachments: 3941_20080818.patch, 3941_20080819.patch,
> 3941_20080819b.patch
>
>
> Suppose we have two files in two locations (possibly two clusters) and the
> two files have the same size. How can we tell whether their contents are
> the same?
> Currently, the only way is to read both files and compare their contents.
> This is a very expensive operation if the files are huge.
> So, we would like to extend the FileSystem API to support returning
> file-checksums/file-digests.
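For illustration, here is a hypothetical client that uses such an extension to
compare two files without reading them; the actual method name and checksum
type added by the patch may differ from what is shown here.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompareByChecksum {
  public static void main(String[] args) throws Exception {
    Path src = new Path(args[0]);  // e.g. a file on cluster A
    Path dst = new Path(args[1]);  // e.g. a file on cluster B

    Configuration conf = new Configuration();
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);

    // Fetch and compare checksums instead of reading both files.
    FileChecksum srcSum = srcFs.getFileChecksum(src);  // assumed API
    FileChecksum dstSum = dstFs.getFileChecksum(dst);

    boolean same = srcSum != null && srcSum.equals(dstSum);
    System.out.println(same
        ? "contents match"
        : "contents differ or checksum unavailable");
  }
}
{code}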
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.