[
https://issues.apache.org/jira/browse/HDFS-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438373#comment-13438373
]
Kihwal Lee commented on HDFS-3177:
----------------------------------
{quote}
-1 core tests. The patch failed these unit tests in
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.ha.TestZKFailoverController
org.apache.hadoop.hdfs.TestPersistBlocks
{quote}
These are known issues; see HADOOP-8591 and HDFS-3811.
> Allow DFSClient to find out and use the CRC type being used for a file.
> -----------------------------------------------------------------------
>
> Key: HDFS-3177
> URL: https://issues.apache.org/jira/browse/HDFS-3177
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node, hdfs client
> Affects Versions: 0.23.0
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Fix For: 2.1.0-alpha, 3.0.0
>
> Attachments: hdfs-3177-after-hadoop-8239-8240.patch.txt,
> hdfs-3177-after-hadoop-8239.patch.txt, hdfs-3177.patch,
> hdfs-3177-with-hadoop-8239-8240.patch.txt,
> hdfs-3177-with-hadoop-8239.patch.txt
>
>
> To support HADOOP-8060, DFSClient should be able to find out the checksum
> type being used for files in HDFS.
> In my prototype, DataTransferProtocol was extended to include the checksum
> type in the blockChecksum() response. DFSClient uses it in getFileChecksum()
> to determine the checksum type. Also, append() can be configured to use the
> existing checksum type instead of the configured one.
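The mechanism described above can be sketched in plain Java. This is a minimal illustration, not the actual Hadoop classes: `ChecksumType`, `BlockChecksumResponse`, and `typeForAppend` are hypothetical stand-ins for `DataChecksum.Type`, the extended `blockChecksum()` reply, and the client-side append decision, respectively.

```java
// Sketch only: models how a checksum type carried back in the
// blockChecksum() response lets the client reuse the file's existing
// checksum type on append instead of its locally configured one.
import java.util.Objects;

public class ChecksumTypeSketch {
    // Hypothetical stand-in for DataChecksum.Type in Hadoop.
    enum ChecksumType { NULL, CRC32, CRC32C }

    // Hypothetical stand-in for the extended blockChecksum() reply.
    static class BlockChecksumResponse {
        final byte[] md5;
        final ChecksumType checksumType; // new field reported by the datanode
        BlockChecksumResponse(byte[] md5, ChecksumType type) {
            this.md5 = md5;
            this.checksumType = Objects.requireNonNull(type);
        }
    }

    // Client-side choice: if configured to honor the file's existing type
    // on append, use what the datanode reported; otherwise fall back to
    // the client's configured type.
    static ChecksumType typeForAppend(BlockChecksumResponse resp,
                                      ChecksumType configured,
                                      boolean useExistingOnAppend) {
        return useExistingOnAppend ? resp.checksumType : configured;
    }

    public static void main(String[] args) {
        // File was written with CRC32C; client is configured for CRC32.
        BlockChecksumResponse resp =
            new BlockChecksumResponse(new byte[16], ChecksumType.CRC32C);
        System.out.println(typeForAppend(resp, ChecksumType.CRC32, true));
        System.out.println(typeForAppend(resp, ChecksumType.CRC32, false));
    }
}
```

With the reported type in hand, an appending writer avoids mixing CRC32 and CRC32C checksums within one file, which is what makes the end-to-end getFileChecksum() comparison meaningful.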