[
https://issues.apache.org/jira/browse/HADOOP-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731750#action_12731750
]
Scott Carey commented on HADOOP-6148:
-------------------------------------
Interesting results.
Todd, were you using -server on the 32-bit version? On Windows the client VM
is the default, on Linux the server VM is, and on a Mac 32-bit means Java 1.5.
In this benchmark, the Mac Java 1.5 32-bit JVM shows equal performance for the
previous version and the InnerOnly version -- but both are slower than native
except for sizes of 128 and under -- so it's a somewhat useless test for the
expected usage of this code.
I wasn't able to run against the newest 32-bit Sun JVM on a Linux server, or
to try out -client. There is no -client for the 64-bit Sun JVMs.
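For reference, a tiny helper along the lines below (the class name is just
illustrative, not part of the attached benchmark) confirms which VM variant
and data model a given run actually used, which takes the guesswork out of
comparing numbers across platforms:

    // Prints enough JVM details to tell a Client from a Server HotSpot VM
    // and a 32-bit from a 64-bit install. Class name is hypothetical.
    public class PrintVmInfo {
      public static void main(String[] args) {
        // e.g. "Java HotSpot(TM) Client VM" vs "Java HotSpot(TM) Server VM"
        System.out.println("java.vm.name        = "
            + System.getProperty("java.vm.name"));
        System.out.println("java.version        = "
            + System.getProperty("java.version"));
        System.out.println("os.name / os.arch   = "
            + System.getProperty("os.name") + " / "
            + System.getProperty("os.arch"));
        // Sun-specific property: "32" or "64"
        System.out.println("sun.arch.data.model = "
            + System.getProperty("sun.arch.data.model"));
      }
    }

Running it once with -client and once with -server on the same install shows
immediately which benchmark results are actually comparable.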
> Implement a pure Java CRC32 calculator
> --------------------------------------
>
> Key: HADOOP-6148
> URL: https://issues.apache.org/jira/browse/HADOOP-6148
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Owen O'Malley
> Assignee: Todd Lipcon
> Attachments: benchmarks20090714.txt, benchmarks20090715.txt,
> crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt,
> hadoop-5598.txt, hadoop-5598.txt, hdfs-297.txt, PureJavaCrc32.java,
> PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java,
> PureJavaCrc32New.java, PureJavaCrc32NewInner.java, PureJavaCrc32NewLoop.java,
> TestCrc32Performance.java, TestCrc32Performance.java,
> TestCrc32Performance.java, TestCrc32Performance.java, TestPureJavaCrc32.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spending a
> long time in crc calculation. In particular, it was spending 5 seconds in crc
> calculation out of a total of 6 for the write. I suspect that it is the
> java-jni border that is causing us grief.
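For readers skimming the thread: the general shape of a pure-Java CRC-32
calculator is a table-lookup loop over the input bytes, sketched below using
the standard reflected polynomial 0xEDB88320. This is illustrative only (the
class name is made up); the attached PureJavaCrc32.java variants are the
actual implementations being benchmarked.

    import java.util.zip.Checksum;

    // Minimal table-driven CRC-32 sketch (standard reflected polynomial).
    // Not the patch itself -- just the general shape of a pure-Java version.
    public class TableDrivenCrc32 implements Checksum {
      private static final int[] TABLE = new int[256];
      static {
        for (int i = 0; i < 256; i++) {
          int c = i;
          for (int j = 0; j < 8; j++) {
            c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
          }
          TABLE[i] = c;
        }
      }

      private int crc = 0xFFFFFFFF;   // running CRC in bit-inverted form

      public void update(int b) {
        crc = (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];
      }

      public void update(byte[] b, int off, int len) {
        for (int i = off; i < off + len; i++) {
          crc = (crc >>> 8) ^ TABLE[(crc ^ b[i]) & 0xFF];
        }
      }

      public long getValue() {
        return (~crc) & 0xFFFFFFFFL;  // final XOR, as unsigned 32-bit value
      }

      public void reset() {
        crc = 0xFFFFFFFF;
      }
    }

Its output can be cross-checked against java.util.zip.CRC32 on a few test
buffers, since both compute the same standard CRC-32.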
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.