[
https://issues.apache.org/jira/browse/HBASE-14738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14985658#comment-14985658
]
Andrew Purtell commented on HBASE-14738:
----------------------------------------
Looks like a build env issue on that test result:
{noformat}
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test
(secondPartTestsExecution) on project hbase-server: ExecutionException:
java.lang.RuntimeException: The forked VM terminated without properly saying
goodbye. VM crash or System.exit called?
{noformat}
Running the suite locally now.
> Backport HBASE-11927 (Use Native Hadoop Library for HFile checksum) to 0.98
> ---------------------------------------------------------------------------
>
> Key: HBASE-14738
> URL: https://issues.apache.org/jira/browse/HBASE-14738
> Project: HBase
> Issue Type: Task
> Reporter: Andrew Purtell
> Assignee: Andrew Purtell
> Fix For: 0.98.16
>
> Attachments: HBASE-14738-0.98.patch
>
>
> Profiling 0.98.15 I see 20-30% of CPU time spent in Hadoop's PureJavaCrc32.
> Not surprising given previous results described on HBASE-11927. Backport.
> There are two issues with the backport:
> # The patch on 11927 changes the default CRC type from CRC32 to CRC32C.
> Although the change is backwards compatible (files with either CRC type are
> handled correctly and transparently), we should probably leave the default
> alone in 0.98 and advise users to make a site configuration change to use
> CRC32C if desired, for potential hardware acceleration.
> # Need a shim to handle differences in Hadoop's DataChecksum type across
> the Hadoop versions 0.98 supports.
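
For point 1, the advised site configuration change would presumably be an
hbase-site.xml fragment along these lines (property name taken from the HFile
checksum configuration; verify against the hbase-default.xml shipped with the
release before relying on it):
{noformat}
<!-- Sketch only: opt in to CRC32C for HFile checksums on 0.98,
     leaving the shipped default (CRC32) unchanged. -->
<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>CRC32C</value>
</property>
{noformat}
With this set, newly written HFiles would carry CRC32C checksums while
existing CRC32-checksummed files continue to be read transparently, per the
backwards-compatibility note above.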
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)