[
https://issues.apache.org/jira/browse/HDFS-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon updated HDFS-2130:
------------------------------
Resolution: Fixed
Fix Version/s: 0.23.1
0.24.0
Release Note: The default checksum algorithm used on HDFS is now CRC32C.
Data written by previous versions of Hadoop remains readable, for backwards compatibility.
Hadoop Flags: Reviewed
Status: Resolved (was: Patch Available)
Committed to branch-0.23 for 0.23.1 (since .0 has already branched). Committed to
trunk.
> Switch default checksum to CRC32C
> ---------------------------------
>
> Key: HDFS-2130
> URL: https://issues.apache.org/jira/browse/HDFS-2130
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs client
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Fix For: 0.24.0, 0.23.1
>
> Attachments: hdfs-2130.txt, hdfs-2130.txt, hdfs-2130.txt
>
>
> Once the other subtasks of HDFS-2080 are complete, CRC32C will be a much
> more efficient checksum algorithm than CRC32, so we should change the
> default checksum to CRC32C.
> However, in order to continue to support append against blocks created with
> the old checksum, we will need to implement some kind of handshaking in the
> write pipeline.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira