[ https://issues.apache.org/jira/browse/HADOOP-3941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12623799#action_12623799 ]
Doug Cutting commented on HADOOP-3941:
--------------------------------------
Why not have the default implementation of getFileChecksum() throw the
"unsupported operation" exception so that we don't have duplicated code in
every subclass? Also, should this really throw an exception or return null? I
would guess that most applications would want to handle this not as an
exceptional condition somewhere higher on the stack, but rather explicitly
where getFileChecksum() is called, so perhaps null would be better.
Do you intend to implement this for HDFS here, or as a separate issue?
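A minimal sketch of the design being suggested: the base class supplies one default getFileChecksum() that returns null, subclasses without checksum support inherit it unchanged, and callers test for null at the call site instead of catching an exception higher on the stack. The class and method shapes here are simplified stand-ins, not the actual Hadoop FileSystem API:

```java
// Simplified stand-in for a file-checksum value type (hypothetical shape).
abstract class FileChecksum {
    public abstract String getAlgorithmName();
    public abstract byte[] getBytes();
}

// Sketch of the proposal: the base class returns null by default, so
// subclasses that cannot compute a checksum need no duplicated code.
abstract class FileSystemSketch {
    /** Returns the file checksum, or null if this filesystem has none. */
    public FileChecksum getFileChecksum(String path) {
        return null;  // default: checksums unsupported
    }
}

// A filesystem with no checksum support inherits the default unchanged.
class LocalFsSketch extends FileSystemSketch { }

public class ChecksumDemo {
    public static void main(String[] args) {
        FileSystemSketch fs = new LocalFsSketch();
        FileChecksum sum = fs.getFileChecksum("/tmp/a.txt");
        // The "unsupported" case is handled explicitly where
        // getFileChecksum() is called, not as an exception.
        if (sum == null) {
            System.out.println("no checksum available");
        }
    }
}
```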
> Extend FileSystem API to return file-checksums/file-digests
> -----------------------------------------------------------
>
> Key: HADOOP-3941
> URL: https://issues.apache.org/jira/browse/HADOOP-3941
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Reporter: Tsz Wo (Nicholas), SZE
> Attachments: 3941_20080818.patch, 3941_20080819.patch
>
>
> Suppose we have two files in two locations (possibly two clusters) and these
> two files have the same size. How could we tell whether their contents are
> the same?
> Currently, the only way is to read both files and compare their contents.
> This is a very expensive operation if the files are huge.
> So, we would like to extend the FileSystem API to support returning
> file-checksums/file-digests.