[ 
https://issues.apache.org/jira/browse/HADOOP-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054053#comment-14054053
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-10778:
----------------------------------------------

> ... Have you had a chance to try it on eg a Sandy Bridge server running RHEL 
> 6?

No.  Could you help run it?

> The new work that James Thomas is doing with native CRC avoids the above 
> problem by chunking the CRC calculation into smaller chunks – the same trick 
> that the JVM uses when memcpying large byte[] arrays. ...

Sounds like a good idea.  Do you think the same trick could be used with 
java.util.zip.CRC32?
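A minimal sketch of what that might look like with java.util.zip.CRC32: instead of one update() call over the whole buffer, split the calculation into fixed-size pieces so each call stays short.  The class name, chunk size, and helper below are hypothetical, not part of the HADOOP-10778 patch.

```java
import java.util.zip.CRC32;

public class ChunkedCrc32 {
    // Hypothetical chunk size; the JVM uses a similar bound when
    // memcpying large byte[] arrays, per the comment above.
    private static final int CHUNK_SIZE = 512;

    // Compute the CRC of data by feeding it to CRC32 in small chunks.
    // CRC32 is stateful, so chunked updates yield the same final value
    // as a single update over the whole array.
    public static long crc32Chunked(byte[] data) {
        CRC32 crc = new CRC32();
        for (int off = 0; off < data.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, data.length - off);
            crc.update(data, off, len);
        }
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] data = new byte[4096];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        // Sanity check: chunked result matches a single whole-buffer update.
        CRC32 whole = new CRC32();
        whole.update(data, 0, data.length);
        System.out.println(crc32Chunked(data) == whole.getValue());
    }
}
```

Whether this helps performance-wise for the pure-Java path is an open question; the chunking mainly matters for the native path, where long JNI calls are the problem.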

> Use NativeCrc32 only if it is faster
> ------------------------------------
>
>                 Key: HADOOP-10778
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10778
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: util
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: c10778_20140702.patch
>
>
> From the benchmark post in [this 
> comment|https://issues.apache.org/jira/browse/HDFS-6560?focusedCommentId=14044060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044060],
>  NativeCrc32 is slower than java.util.zip.CRC32 for Java 7 and above when 
> bytesPerChecksum > 512.



--
This message was sent by Atlassian JIRA
(v6.2#6252)