[
https://issues.apache.org/jira/browse/HADOOP-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623837#action_12623837
]
Tsz Wo (Nicholas), SZE commented on HADOOP-3941:
------------------------------------------------
Below is a summary of the options for the default getFileChecksum() implementation.
The first three were mentioned before; I added the fourth.
# no implementation; declare it abstract
# return null
# throw a "not supported" IOException
# if the algorithm is MD5, return an MD5FileChecksum; otherwise, do #2 or #3
However, the MD5 approach in #4 may not be efficient for HDFS, since it has to read
the entire file.
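For illustration, a rough sketch of #4 as a default implementation inside
org.apache.hadoop.fs.FileSystem could look like the following. The MD5FileChecksum
constructor and the fallback behavior shown here are assumptions for the sketch,
not taken from the attached patches:
{code}
// Rough sketch of option #4 inside org.apache.hadoop.fs.FileSystem.
// MD5FileChecksum and its byte[] constructor are assumed for illustration.
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public FileChecksum getFileChecksum(Path f) throws IOException {
  final MessageDigest md5;
  try {
    md5 = MessageDigest.getInstance("MD5");
  } catch (NoSuchAlgorithmException e) {
    // fall back to option #2: no checksum available
    return null;
  }
  InputStream in = open(f);
  try {
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = in.read(buf)) > 0) {
      // the whole file is streamed through the digest -- expensive for HDFS
      md5.update(buf, 0, n);
    }
  } finally {
    in.close();
  }
  return new MD5FileChecksum(md5.digest());  // hypothetical wrapper class
}
{code}
This makes the cost concrete: every byte of the file crosses the wire to the
client just to produce the digest, which is exactly why a datanode-side or
block-level checksum would be preferable for HDFS.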
> Extend FileSystem API to return file-checksums/file-digests
> -----------------------------------------------------------
>
> Key: HADOOP-3941
> URL: https://issues.apache.org/jira/browse/HADOOP-3941
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Reporter: Tsz Wo (Nicholas), SZE
> Attachments: 3941_20080818.patch, 3941_20080819.patch,
> 3941_20080819b.patch
>
>
> Suppose we have two files in two locations (possibly two clusters) and these
> two files have the same size. How could we tell whether their contents are the
> same?
> Currently, the only way is to read both files and compare their contents. This
> is a very expensive operation if the files are huge.
> So, we would like to extend the FileSystem API to support returning
> file-checksums/file-digests.
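For illustration, once such an API exists, a cross-cluster comparison might look
roughly like the sketch below. The method name getFileChecksum() and the
FileChecksum equality semantics are assumptions based on this proposal, not a
finalized API:
{code}
// Illustrative only: comparing two files in different clusters by checksum.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumCompare {
  public static boolean sameContent(Path src, Path dst, Configuration conf)
      throws IOException {
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    FileChecksum cs1 = srcFs.getFileChecksum(src);
    FileChecksum cs2 = dstFs.getFileChecksum(dst);
    // If either file system cannot produce a checksum (e.g. option #2 above),
    // we cannot decide without reading the data; treat as not comparable here.
    if (cs1 == null || cs2 == null) {
      return false;
    }
    return cs1.equals(cs2);
  }
}
{code}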