[
https://issues.apache.org/jira/browse/HADOOP-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14053908#comment-14053908
]
Colin Patrick McCabe commented on HADOOP-10778:
-----------------------------------------------
I like the idea of having a microbenchmark which can compare the different
implementations. I'm not comfortable selecting the implementation to use at
runtime with some microbenchmark, because it means that behavior may be
nondeterministic. For example, if a GC pause hits while we're running the benchmark, or
another process uses a bunch of CPU, we might choose the wrong implementation.
As you guys know, most of our users on x86 just use CRC32C (not CRC32) with
hardware acceleration, and we're not going to beat that from Java or C. Let's
either get rid of the fallback mechanism or add a configuration option to make
it optional.
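To make the concern concrete, a runtime probe of the kind being discussed might look like the sketch below (class and method names are hypothetical, not Hadoop code). It times `java.util.zip.CRC32` against `java.util.zip.CRC32C` (the latter is available since Java 9) over the same buffer. Because each timing is a single wall-clock sample, a GC pause or CPU contention during either loop can flip the comparison:

```java
import java.util.Random;
import java.util.zip.CRC32;
import java.util.zip.CRC32C;
import java.util.zip.Checksum;

public class CrcProbe {
    // Hypothetical one-shot probe: measures how long each Checksum
    // implementation takes over the same data. A single wall-clock
    // sample like this is exactly what a GC pause or a noisy neighbor
    // process can skew, leading to a nondeterministic choice.
    static long timeChecksum(Checksum c, byte[] data, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            c.reset();
            c.update(data, 0, data.length);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        byte[] data = new byte[64 * 1024];
        new Random(42).nextBytes(data);
        int iters = 1000;

        // Warm up so the JIT compiles the hot loops before measuring.
        timeChecksum(new CRC32(), data, iters);
        timeChecksum(new CRC32C(), data, iters);

        long crc32Ns = timeChecksum(new CRC32(), data, iters);
        long crc32cNs = timeChecksum(new CRC32C(), data, iters);
        System.out.println("CRC32:  " + crc32Ns / 1_000_000 + " ms");
        System.out.println("CRC32C: " + crc32cNs / 1_000_000 + " ms");
        System.out.println("winner: " + (crc32cNs < crc32Ns ? "CRC32C" : "CRC32"));
    }
}
```

A configuration option, as suggested above, sidesteps this entirely: the choice is made once by the operator instead of re-derived from a noisy measurement at startup.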
> Use NativeCrc32 only if it is faster
> ------------------------------------
>
> Key: HADOOP-10778
> URL: https://issues.apache.org/jira/browse/HADOOP-10778
> Project: Hadoop Common
> Issue Type: Improvement
> Components: util
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Attachments: c10778_20140702.patch
>
>
> From the benchmark post in [this
> comment|https://issues.apache.org/jira/browse/HDFS-6560?focusedCommentId=14044060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14044060],
> NativeCrc32 is slower than java.util.zip.CRC32 for Java 7 and above when
> bytesPerChecksum > 512.
--
This message was sent by Atlassian JIRA
(v6.2#6252)