[
https://issues.apache.org/jira/browse/HADOOP-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon updated HADOOP-5598:
--------------------------------
Attachment: TestCrc32Performance.java
hadoop-5598-hybrid.txt
Attached is a new version that is a hybrid implementation. For writes smaller
than a threshold it calculates the CRC32 in pure Java. Above the threshold it
uses the java.util.zip implementation, whose result it folds back in lazily
using a crc32_combine ported from zlib.
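Roughly, the write path is shaped like the sketch below. To be clear, this is
not the attached patch: the class name, THRESHOLD value, and helper names are
made up, and crc32Combine() is written as a plain O(len) zero-extension just
to keep the sketch self-contained, whereas the real port uses zlib's
O(log len) GF(2) matrix trick.

import java.util.zip.CRC32;

// Rough sketch of the hybrid scheme described above -- not the attached patch.
public class HybridCrc32Sketch {
  private static final int THRESHOLD = 512;   // assumed cut-over point

  // Standard reflected CRC-32 lookup table for the pure-Java path.
  private static final int[] TABLE = new int[256];
  static {
    for (int n = 0; n < 256; n++) {
      int c = n;
      for (int k = 0; k < 8; k++)
        c = (c & 1) != 0 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
      TABLE[n] = c;
    }
  }

  private int crc = 0;                        // CRC of everything folded so far
  private final CRC32 zipCrc = new CRC32();   // native (JNI) path for large writes
  private long zipLen = 0;                    // bytes pending in zipCrc

  public void update(byte[] b, int off, int len) {
    if (len < THRESHOLD) {
      fold();                                 // preserve stream order before going pure-Java
      crc = pureJavaUpdate(crc, b, off, len);
    } else {
      zipCrc.update(b, off, len);             // one JNI crossing amortized over len bytes
      zipLen += len;
    }
  }

  public long getValue() {
    fold();
    return crc & 0xffffffffL;
  }

  // Lazily combine the pending native segment into the running CRC.
  private void fold() {
    if (zipLen > 0) {
      crc = crc32Combine(crc, (int) zipCrc.getValue(), zipLen);
      zipCrc.reset();
      zipLen = 0;
    }
  }

  // Table-driven pure-Java CRC-32 of b[off..off+len), continuing from crc.
  private static int pureJavaUpdate(int crc, byte[] b, int off, int len) {
    int c = ~crc;
    for (int i = off; i < off + len; i++)
      c = TABLE[(c ^ b[i]) & 0xff] ^ (c >>> 8);
    return ~c;
  }

  // CRC of the concatenation of two streams, given their individual CRCs and
  // the length of the second. Here it just feeds len2 zero bytes through the
  // register seeded with crc1; zlib's crc32_combine gets the same effect in
  // O(log len2) with GF(2) matrix squaring.
  private static int crc32Combine(int crc1, int crc2, long len2) {
    int c = crc1;
    for (long i = 0; i < len2; i++)
      c = TABLE[c & 0xff] ^ (c >>> 8);
    return c ^ crc2;
  }
}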
On the old TestCrc32Performance benchmark, this version was always faster than
or as fast as the java.util.zip implementation. I added a new benchmark that
sizes the writes randomly; on that one the hybrid version is awful in certain
cases, since it spends most of its time in crc32_combine. For this hybrid
model to work, there will need to be some kind of hysteresis when switching
between implementations, so that it doesn't pay for a crc32_combine on every
small/large transition.
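To illustrate the kind of hysteresis I mean (just a sketch, not in the
attached patch; SWITCH_AFTER is an invented knob): stick with whichever
implementation is currently active until several consecutive writes land on
the other side of the threshold, so a mixed-size stream only pays for
crc32_combine when it actually changes engines.

// Illustrative only -- not part of the attached patch.
class EngineChooser {
  static final int THRESHOLD = 512;     // assumed size cut-over, as above
  static final int SWITCH_AFTER = 8;    // invented: consecutive writes before switching

  private boolean useNative = false;    // which implementation is currently active
  private int runLength = 0;            // consecutive writes preferring the other side

  /** Returns true if this write should go to the native (java.util.zip) path. */
  boolean chooseNative(int len) {
    boolean prefersNative = len >= THRESHOLD;
    if (prefersNative == useNative) {
      runLength = 0;                    // the write agrees with the current engine
    } else if (++runLength >= SWITCH_AFTER) {
      useNative = prefersNative;        // enough evidence: switch engines; the caller
      runLength = 0;                    // pays a single crc32_combine right here
    }
    return useNative;
  }
}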
If someone has Java 1.6 update 14 handy, I'd be interested to see whether the
new array-bounds-check elimination optimization makes the pure Java
implementation fast enough to replace java.util.zip's entirely.
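For anyone who wants to try: something along these lines would do it. This is
not the attached TestCrc32Performance.java; the buffer size, write sizes, and
iteration count are arbitrary, and the pure-Java Checksum from the patch would
get dropped in where CRC32 is constructed.

import java.util.Random;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Quick-and-dirty timing harness in the spirit of the attached benchmark.
public class Crc32Timing {
  static long timeNanos(Checksum sum, byte[] buf, int writeSize, int iters) {
    long start = System.nanoTime();
    for (int i = 0; i < iters; i++) {
      sum.reset();
      for (int off = 0; off < buf.length; off += writeSize) {
        sum.update(buf, off, Math.min(writeSize, buf.length - off));
      }
      sum.getValue();
    }
    return System.nanoTime() - start;
  }

  public static void main(String[] args) {
    byte[] buf = new byte[1 << 20];                 // 1MB of arbitrary data
    new Random(42).nextBytes(buf);
    int[] writeSizes = { 1, 64, 512, 4096, buf.length };
    for (int ws : writeSizes) {
      long ns = timeNanos(new CRC32(), buf, ws, 100);
      System.out.println("java.util.zip.CRC32, " + ws + "-byte writes: "
          + (ns / 1000000L) + " ms");
      // e.g. timeNanos(new PureJavaCrc32(), buf, ws, 100) for the patch's impl
    }
  }
}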
> Implement a pure Java CRC32 calculator
> --------------------------------------
>
> Key: HADOOP-5598
> URL: https://issues.apache.org/jira/browse/HADOOP-5598
> Project: Hadoop Core
> Issue Type: Improvement
> Components: dfs
> Reporter: Owen O'Malley
> Assignee: Todd Lipcon
> Attachments: crc32-results.txt, hadoop-5598-hybrid.txt,
> hadoop-5598.txt, TestCrc32Performance.java, TestCrc32Performance.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spend a
> long time in CRC calculation. In particular, it spent 5 seconds in CRC
> calculation out of a total of 6 seconds for the write. I suspect that it is
> the Java-JNI border that is causing us grief.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.