[
https://issues.apache.org/jira/browse/HDFS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440235#comment-13440235
]
Kihwal Lee commented on HDFS-3177:
----------------------------------
bq. append additionally requires read permission. I think it is an unacceptable
incompatible change.
How about allowing getBlockLocations() for both read and write? The block
tokens will carry the permission (in the access mode), so permissions won't be
violated on the DN. I would rather get it done in this jira.
> Allow DFSClient to find out and use the CRC type being used for a file.
> -----------------------------------------------------------------------
>
> Key: HDFS-3177
> URL: https://issues.apache.org/jira/browse/HDFS-3177
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node, hdfs client
> Affects Versions: 0.23.0
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Fix For: 2.1.0-alpha, 3.0.0
>
> Attachments: hdfs-3177-after-hadoop-8239-8240.patch.txt,
> hdfs-3177-after-hadoop-8239.patch.txt, hdfs-3177-branch2-trunk.patch.txt,
> hdfs-3177-branch2-trunk.patch.txt, hdfs-3177.patch,
> hdfs-3177-with-hadoop-8239-8240.patch.txt,
> hdfs-3177-with-hadoop-8239-8240.patch.txt,
> hdfs-3177-with-hadoop-8239-8240.patch.txt,
> hdfs-3177-with-hadoop-8239.patch.txt
>
>
> To support HADOOP-8060, DFSClient should be able to find out the checksum
> type being used for files in hdfs.
> In my prototype, DataTransferProtocol was extended to include the checksum
> type in the blockChecksum() response. DFSClient uses it in getFileChecksum()
> to determine the checksum type. Also, append() can be configured to use the
> existing checksum type instead of the configured one.
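The append behavior described in the prototype can be sketched in plain Java. None of these names are the real HDFS classes (the actual change extends the blockChecksum() response in DataTransferProtocol); the sketch only models the decision: when the checksum type of the existing file is discoverable, a configured client reuses it rather than its locally configured type.

```java
// Hypothetical sketch of the checksum-type selection described above;
// enum values mirror Hadoop's CRC variants, but the classes are illustrative.
enum ChecksumType { CRC32, CRC32C, NULL }

class ChecksumPolicy {
    // For append(), prefer the file's existing checksum type (as reported
    // by the DataNode in the blockChecksum() response) when the client is
    // configured to reuse it; otherwise fall back to the configured type.
    static ChecksumType forAppend(ChecksumType existing,
                                  ChecksumType configured,
                                  boolean reuseExisting) {
        return (reuseExisting && existing != ChecksumType.NULL)
                ? existing
                : configured;
    }
}
```

For example, appending to a CRC32C file with a client configured for CRC32 would keep CRC32C when reuse is enabled, avoiding a mixed-checksum file.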
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira