[ https://issues.apache.org/jira/browse/HADOOP-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HADOOP-5598:
--------------------------------

    Attachment: hadoop-5598.txt

I managed to coerce Java into cooperating with int-sized variables, and it's significantly faster now: only about 5-10% slower than the built-in CRC32 for large buffer sizes.

> Implement a pure Java CRC32 calculator
> --------------------------------------
>
>                 Key: HADOOP-5598
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5598
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Owen O'Malley
>            Assignee: Todd Lipcon
>         Attachments: crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt, hadoop-5598.txt, hadoop-5598.txt, TestCrc32Performance.java, TestCrc32Performance.java
>
>
> We've seen a reducer writing 200 MB to HDFS with replication = 1 spend a long time in CRC calculation. In particular, it spent 5 seconds in CRC calculation out of a total of 6 seconds for the write. I suspect that the Java-JNI boundary is causing us grief.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
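For readers following along, here is a minimal sketch of the general technique the comment describes: a table-driven CRC-32 whose running state lives in an int rather than a long, so the inner loop needs no 64-bit arithmetic or masking. This is an illustration under those assumptions, not the attached patch; the class name PureJavaCrc32Sketch is hypothetical.

{code:java}
import java.util.zip.CRC32;

// Sketch of a table-driven pure-Java CRC-32 (standard reflected
// polynomial 0xEDB88320) that keeps its state in an int instead of
// a long, avoiding 64-bit masking in the hot loop.
public final class PureJavaCrc32Sketch {

  private static final int[] TABLE = new int[256];
  static {
    // Precompute the byte-at-a-time lookup table.
    for (int i = 0; i < 256; i++) {
      int c = i;
      for (int k = 0; k < 8; k++) {
        c = ((c & 1) != 0) ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
      }
      TABLE[i] = c;
    }
  }

  private int crc = 0xFFFFFFFF;  // the whole state fits in an int

  public void update(byte[] buf, int off, int len) {
    int c = crc;  // keep the hot state in a local variable
    for (int i = off; i < off + len; i++) {
      c = (c >>> 8) ^ TABLE[(c ^ buf[i]) & 0xFF];
    }
    crc = c;
  }

  public int getValue() {
    return crc ^ 0xFFFFFFFF;  // final inversion per the CRC-32 spec
  }

  // Sanity check against the JDK's native CRC32.
  public static void main(String[] args) {
    byte[] data = "123456789".getBytes();
    PureJavaCrc32Sketch p = new PureJavaCrc32Sketch();
    p.update(data, 0, data.length);
    CRC32 ref = new CRC32();
    ref.update(data, 0, data.length);
    System.out.println(p.getValue() == (int) ref.getValue());  // prints true
  }
}
{code}

The point of the int accumulator is that a naive port using java.util.zip.CRC32-style long state has to mask back down to 32 bits on every step; keeping everything in an int (with >>> for the logical shift) lets the JIT emit plain 32-bit operations throughout.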