[ https://issues.apache.org/jira/browse/HADOOP-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12731657#action_12731657 ]

Tsz Wo (Nicholas), SZE commented on HADOOP-6148:
------------------------------------------------

I am using Sun JDK on Windows.
{noformat}
bash-3.2$ java -version
java version "1.6.0_13"
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)
{noformat}

> Implement a pure Java CRC32 calculator
> --------------------------------------
>
>                 Key: HADOOP-6148
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6148
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Owen O'Malley
>            Assignee: Todd Lipcon
>         Attachments: benchmarks20090714.txt, benchmarks20090715.txt, 
> crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt, 
> hadoop-5598.txt, hadoop-5598.txt, hdfs-297.txt, PureJavaCrc32.java, 
> PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java, 
> TestCrc32Performance.java, TestCrc32Performance.java, 
> TestCrc32Performance.java, TestPureJavaCrc32.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spend a 
> long time in CRC calculation: 5 seconds out of a 6-second write. I suspect 
> that it is the Java-JNI boundary that is causing us grief.
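The core of a pure-Java CRC32 is the classic single-table byte-at-a-time loop over the reflected polynomial 0xEDB88320, the same CRC that java.util.zip.CRC32 computes. The sketch below is a minimal illustration of that technique, not the attached PureJavaCrc32.java (which may use a larger table or unrolling for speed); the class name SimpleCrc32 is made up for this example.

```java
import java.util.zip.Checksum;

/**
 * Minimal table-driven CRC-32 (reflected polynomial 0xEDB88320).
 * A sketch of the pure-Java approach; every byte costs one table
 * lookup instead of a Java->JNI crossing per update() call.
 */
public class SimpleCrc32 implements Checksum {
  private static final int[] TABLE = new int[256];
  static {
    // Precompute the CRC of every possible byte value.
    for (int n = 0; n < 256; n++) {
      int c = n;
      for (int k = 0; k < 8; k++) {
        c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
      }
      TABLE[n] = c;
    }
  }

  private int crc = 0xFFFFFFFF;  // standard initial value

  public void update(int b) {
    crc = (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];
  }

  public void update(byte[] b, int off, int len) {
    for (int i = off; i < off + len; i++) {
      update(b[i]);
    }
  }

  public long getValue() {
    return (~crc) & 0xFFFFFFFFL;  // final complement, as unsigned
  }

  public void reset() {
    crc = 0xFFFFFFFF;
  }
}
```

Implementing java.util.zip.Checksum lets such a class drop in wherever java.util.zip.CRC32 is used today; correctness can be checked against the native CRC32 (both should give 0xCBF43926 for the ASCII string "123456789", the standard CRC-32 check value).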

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.