[ https://issues.apache.org/jira/browse/HADOOP-7444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HADOOP-7444:
--------------------------------
Resolution: Fixed
Fix Version/s: 0.23.0
Hadoop Flags: [Reviewed]
Status: Resolved (was: Patch Available)
Committed to trunk; thanks for the review, Nicholas. I agree that the current code
in DataChecksum is already specific to 32-bit checksums.
> Add Checksum API to verify and calculate checksums "in bulk"
> ------------------------------------------------------------
>
> Key: HADOOP-7444
> URL: https://issues.apache.org/jira/browse/HADOOP-7444
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Fix For: 0.23.0
>
> Attachments: hadoop-7444.txt, hadoop-7444.txt, hadoop-7444.txt
>
>
> Currently, the various checksum types only provide the capability to
> calculate the checksum of a range of a byte array. For HDFS-2080, it's
> advantageous to provide an API that, given a buffer with some number of
> "checksum chunks", can either calculate or verify the checksums of all of the
> chunks. For example, given a 4KB buffer and a 512-byte chunk size, it would
> calculate or verify 8 CRC32s in one call.
> This allows efficient JNI-based checksum implementations since the cost of
> crossing the JNI boundary is amortized across many computations.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira