[ https://issues.apache.org/jira/browse/HADOOP-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14065649#comment-14065649 ]

Todd Lipcon commented on HADOOP-10778:
--------------------------------------

Using crcutil will get us to 1 cycle per byte. But the implementation I linked 
to above actually gets us to 0.73 cycles/byte for the chunked case, because 
we can compute the checksum of each chunk in parallel (on different ALUs) 
and don't need any kind of "combine" operation at the end.

Either way, I think optimizing CRC32 further is mostly a waste of time, given 
that CRC32C is far faster (0.13 cycles/byte) and has no real downsides.
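For readers comparing the two polynomials: the sketch below checksums the same buffer with plain CRC32 and with CRC32C (the Castagnoli polynomial, which maps to a dedicated instruction on SSE4.2 x86 and ARMv8 hardware, hence the speed gap cited above). It uses java.util.zip.CRC32C, which only exists in JDK 9+; at the time of this comment Hadoop shipped its own org.apache.hadoop.util.PureJavaCrc32C with the same Checksum interface. The class name CrcCompare is made up for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

public class CrcCompare {
    // java.util.zip.CRC32: the polynomial discussed in this issue.
    static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    // CRC32C (Castagnoli): same API, different polynomial; this is the
    // variant that hardware CRC instructions accelerate.
    static long crc32c(byte[] data) {
        CRC32C c = new CRC32C();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        // "123456789" is the standard CRC check input:
        // CRC32 -> 0xcbf43926, CRC32C -> 0xe3069283.
        byte[] check = "123456789".getBytes(StandardCharsets.US_ASCII);
        System.out.printf("CRC32  = %08x%n", crc32(check));
        System.out.printf("CRC32C = %08x%n", crc32c(check));
    }
}
```

The two checksums are incompatible on the wire, so switching an existing file format from CRC32 to CRC32C is a format change, not a drop-in swap.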

> Use NativeCrc32 only if it is faster
> ------------------------------------
>
>                 Key: HADOOP-10778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10778
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: util
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: c10778_20140702.patch
>
>
> From the benchmark post in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-6560?focusedCommentId=14044060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044060],
>  NativeCrc32 is slower than java.util.zip.CRC32 for Java 7 and above when 
> bytesPerChecksum > 512.



--
This message was sent by Atlassian JIRA
(v6.2#6252)