[
https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284649#comment-14284649
]
Colin Patrick McCabe commented on HADOOP-11466:
-----------------------------------------------
Can you add a message logging the exception in the "} catch (Throwable t) { // ensure we really catch *everything*" block?
+1 when that's resolved. Sorry for the delays in reviews.
> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture
> because it is slower there
> ----------------------------------------------------------------------------------------------------
>
> Key: HADOOP-11466
> URL: https://issues.apache.org/jira/browse/HADOOP-11466
> Project: Hadoop Common
> Issue Type: Improvement
> Components: io, performance, util
> Environment: Linux X86 and Solaris SPARC
> Reporter: Suman Somasundar
> Assignee: Suman Somasundar
> Priority: Minor
> Labels: patch
> Attachments: HADOOP-11466.002.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility to compare two
> byte arrays at coarser 8-byte granularity instead of at the byte-level. The
> discussion at HADOOP-7761 says this fast byte comparison is somewhat faster
> for longer arrays and somewhat slower for smaller arrays (AVRO-939). In
> order to do 8-byte reads on addresses not aligned to 8-byte boundaries, the
> patch uses Unsafe.getLong. The problem is that this call is very
> expensive on SPARC: the Studio compiler detects the unaligned pointer
> read and emulates it in software. x86 supports unaligned reads in
> hardware, so there is no penalty for this call on x86.
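For illustration, here is a minimal sketch of comparing byte arrays at 8-byte granularity. This is not Hadoop's actual FastByteComparisons code; the class and method names are hypothetical, and it uses ByteBuffer rather than Unsafe to sidestep the alignment issue described above. Reading big-endian longs preserves lexicographic byte order, provided the longs are compared as unsigned values:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical sketch of 8-byte-granularity comparison (not Hadoop's code).
public class EightByteCompare {
    static int compare(byte[] a, byte[] b) {
        int minLen = Math.min(a.length, b.length);
        // Compare whole 8-byte words first.
        int strideLimit = minLen - (minLen % 8);
        ByteBuffer ba = ByteBuffer.wrap(a).order(ByteOrder.BIG_ENDIAN);
        ByteBuffer bb = ByteBuffer.wrap(b).order(ByteOrder.BIG_ENDIAN);
        int i = 0;
        for (; i < strideLimit; i += 8) {
            long la = ba.getLong(i);
            long lb = bb.getLong(i);
            if (la != lb) {
                // Unsigned comparison: bytes are unsigned lexicographically.
                return Long.compareUnsigned(la, lb) < 0 ? -1 : 1;
            }
        }
        // Byte-wise comparison for the remaining tail.
        for (; i < minLen; i++) {
            int diff = (a[i] & 0xff) - (b[i] & 0xff);
            if (diff != 0) return diff;
        }
        // Equal prefix: the shorter array sorts first.
        return a.length - b.length;
    }
}
```

Note that on SPARC the `getLong` reads here start at offsets that are multiples of 8 within the array, so this sketch avoids the unaligned-access penalty only when the backing array itself is 8-byte aligned; the trade-off the issue describes still applies to arbitrary addresses.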
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)