[ https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14293577#comment-14293577 ]

Hudson commented on HADOOP-11466:
---------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2037 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2037/])
HADOOP-11466: move to 2.6.1 (cmccabe: rev 
21d5599067adf14d589732a586c3b10aeb0936e9)
* hadoop-common-project/hadoop-common/CHANGES.txt


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture 
> because it is slower there
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-11466
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11466
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: io, performance, util
>         Environment: Linux X86 and Solaris SPARC
>            Reporter: Suman Somasundar
>            Assignee: Suman Somasundar
>            Priority: Minor
>              Labels: patch
>             Fix For: 2.6.1
>
>         Attachments: HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility that compares two 
> byte arrays at a coarser 8-byte granularity instead of byte by byte. The 
> discussion at HADOOP-7761 notes that this fast byte comparison is somewhat 
> faster for longer arrays and somewhat slower for shorter ones (AVRO-939). To 
> perform 8-byte reads at addresses not aligned to 8-byte boundaries, the 
> patch uses Unsafe.getLong. The problem is that this call is extremely 
> expensive on SPARC: the Studio compiler detects the unaligned pointer read 
> and handles it in software. x86 supports unaligned reads in hardware, so 
> the call carries no penalty there. 
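The word-at-a-time comparison described above can be sketched as follows. This is an illustrative reconstruction, not the Hadoop code: the actual FastByteComparisons patch uses Unsafe.getLong for the wide (possibly unaligned) reads, whereas this sketch uses ByteBuffer so it runs portably on any JVM and architecture. The key trick is the same: reading 8 bytes as a big-endian long preserves lexicographic byte order under unsigned long comparison.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WordCompare {
    // Lexicographically compare two byte arrays 8 bytes at a time.
    // Hypothetical sketch of the FastByteComparisons idea; the real patch
    // performs the 8-byte reads with Unsafe.getLong instead of ByteBuffer.
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        ByteBuffer ba = ByteBuffer.wrap(a).order(ByteOrder.BIG_ENDIAN);
        ByteBuffer bb = ByteBuffer.wrap(b).order(ByteOrder.BIG_ENDIAN);
        int i = 0;
        // Coarse pass: one comparison per 8 bytes instead of per byte.
        for (; i + 8 <= len; i += 8) {
            long la = ba.getLong(i);
            long lb = bb.getLong(i);
            if (la != lb) {
                // Big-endian packing makes unsigned long order equal to
                // lexicographic order of the underlying bytes.
                return Long.compareUnsigned(la, lb);
            }
        }
        // Tail pass: remaining 0-7 bytes, compared as unsigned bytes.
        for (; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        // Equal prefix: the shorter array sorts first.
        return a.length - b.length;
    }
}
```

On x86 the wide reads are cheap even when unaligned, which is what makes this profitable for long arrays; on SPARC the unaligned Unsafe.getLong reads are trapped and emulated in software, erasing the benefit, hence this issue's fix of avoiding UNSAFE_COMPARER there.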



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
