[jira] [Commented] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2015-11-22 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021434#comment-15021434
 ] 

Trevor Robinson commented on HADOOP-9320:
-----------------------------------------

I no longer have the ability to test this patch, as I have no access to an ARM 
Linux environment. I don't really work on Hadoop anymore either, for that 
matter, so I'm not sure of the status of this issue.


> Hadoop native build failure on ARM hard-float
> -
>
> Key: HADOOP-9320
> URL: https://issues.apache.org/jira/browse/HADOOP-9320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.3-alpha
> Environment: $ uname -a
> Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
> armv7l armv7l GNU/Linux
> $ java -version
> java version "1.8.0-ea"
> Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
> Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
>Reporter: Trevor Robinson
>Assignee: Trevor Robinson
>  Labels: BB2015-05-TBR, build-failure
> Attachments: HADOOP-9320.patch
>
>
> ARM JVM float ABI detection is failing in JNIFlags.cmake because 
> JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
> to assume a soft-float JVM. This causes the build to fail with hard-float 
> OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
> ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
> Java 7 will support hard-float as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-12-09 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13843661#comment-13843661
 ] 

Trevor Robinson commented on HADOOP-9320:
-----------------------------------------

Could someone please commit this patch? The build is broken for ARM hard-float 
systems, which are now the default. (Oracle 7u40 supports armhf.) This fix 
trivially reorders two chunks of JNIFlags.cmake so that JAVA_JVM_LIBRARY is 
defined before it is used, and it has no effect on other platforms. Thanks.

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version "1.8.0-ea"
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: build-failure
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays

2013-09-03 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13757097#comment-13757097
 ] 

Trevor Robinson commented on HADOOP-9601:
-----------------------------------------

While unaligned loads should be avoided whenever possible for portability and 
performance, ARMv6 and later (which includes low-end ARM11 implementations such 
as Raspberry Pi) do [support unaligned word and half-word accesses in most 
cases|http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0301h/Cdfejcbh.html]
 by default. In other cases, the [Linux kernel will trap and fix the 
access|http://lxr.free-electrons.com/source/arch/arm/mm/alignment.c?a=arm] by 
default (with the obvious performance penalty). The specifics are complicated 
([no 
atomicity|http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0301h/Cdffhdje.html],
 [no kernel support for unaligned 
floats|http://jsolano.net/2012/09/06/arm-unaligned-data-access-and-floating-point-in-linux/],
 etc.), but the point is that this fix is likely to benefit ARM even if it must 
initially do unaligned 32-bit loads. Linux also provides unaligned access 
support on other architectures, such as PowerPC, though the overhead may be 
higher.
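
The portable fallback alluded to above can be sketched in plain Java (illustrative only, not Hadoop's code): composing a 32-bit word byte-by-byte is safe at any offset, while a native implementation on ARMv6+ could simply issue the unaligned load.

```java
// Illustrative sketch: reading a 32-bit little-endian word from a byte array
// at an arbitrary (possibly unaligned) offset. Assembling the value from
// individual bytes never performs an unaligned memory access, so it is the
// portable fallback; platforms with hardware unaligned-load support can do
// the same thing in one instruction.
public class UnalignedRead {
    // Little-endian 32-bit read that is safe at any offset.
    static int readIntLE(byte[] b, int off) {
        return (b[off] & 0xFF)
             | (b[off + 1] & 0xFF) << 8
             | (b[off + 2] & 0xFF) << 16
             | (b[off + 3] & 0xFF) << 24;
    }

    public static void main(String[] args) {
        byte[] data = {0x00, 0x78, 0x56, 0x34, 0x12}; // word starts at odd offset 1
        System.out.println(Integer.toHexString(readIntLE(data, 1))); // prints 12345678
    }
}
```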


 Support native CRC on byte arrays
 -

 Key: HADOOP-9601
 URL: https://issues.apache.org/jira/browse/HADOOP-9601
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, util
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Gopal V
  Labels: performance
 Attachments: HADOOP-9601-bench.patch, 
 HADOOP-9601-rebase+benchmark.patch, HADOOP-9601-trunk-rebase-2.patch, 
 HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, 
 HADOOP-9601-WIP-02.patch


 When we first implemented the Native CRC code, we only did so for direct byte 
 buffers, because these correspond directly to native heap memory and thus 
 make it easy to access via JNI. We'd generally assumed that accessing byte[] 
 arrays from JNI was not efficient enough, but now that I know more about JNI 
 I don't think that's true -- we just need to make sure that the critical 
 sections where we lock the buffers are short.
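
For context, the pure-Java path this native code competes with already accepts heap byte arrays directly; a minimal sketch (not Hadoop code) is below. The native version must pin the array via JNI (e.g. GetPrimitiveArrayCritical) for the duration of the checksum loop, which is why keeping that critical section short matters.

```java
import java.util.zip.CRC32;

// Sketch of the byte[] case at issue: the JDK's CRC32 operates on a heap
// byte array with no direct ByteBuffer involved. A JNI implementation would
// need to lock/pin the same array while the native checksum loop runs.
public class ArrayCrc {
    static long crcOf(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length); // works on the heap array directly
        return crc.getValue();
    }

    public static void main(String[] args) {
        System.out.println(Long.toHexString(crcOf("hello".getBytes())));
    }
}
```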

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-21 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-9320:


Labels: build-failure  (was: )

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version "1.8.0-ea"
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: build-failure
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.



[jira] [Created] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-20 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-9320:
---

 Summary: Hadoop native build failure on ARM hard-float
 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
armv7l armv7l GNU/Linux
$ java -version
java version "1.8.0-ea"
Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)

Reporter: Trevor Robinson
Assignee: Trevor Robinson


ARM JVM float ABI detection is failing in JNIFlags.cmake because 
JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
to assume a soft-float JVM. This causes the build to fail with hard-float 
OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
Java 7 will support hard-float as well.



[jira] [Updated] (HADOOP-9320) Hadoop native build failure on ARM hard-float

2013-02-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-9320:


Status: Patch Available  (was: Open)

Note that I tested the attached patch with both JDK7 soft-float and JDK8 
(preview) hard-float on ARM and with JDK7 on x86-64.

 Hadoop native build failure on ARM hard-float
 -

 Key: HADOOP-9320
 URL: https://issues.apache.org/jira/browse/HADOOP-9320
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.3-alpha
 Environment: $ uname -a
 Linux 3.5.0-1000-highbank #154-Ubuntu SMP Thu Jan 10 09:13:40 UTC 2013 armv7l 
 armv7l armv7l GNU/Linux
 $ java -version
 java version "1.8.0-ea"
 Java(TM) SE Runtime Environment (build 1.8.0-ea-b36e)
 Java HotSpot(TM) Client VM (build 25.0-b04, mixed mode)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-9320.patch


 ARM JVM float ABI detection is failing in JNIFlags.cmake because 
 JAVA_JVM_LIBRARY is not set at that point. The failure currently causes CMake 
 to assume a soft-float JVM. This causes the build to fail with hard-float 
 OpenJDK (but don't use that) and [Oracle Java 8 Preview for 
 ARM|http://jdk8.java.net/fxarmpreview/]. Hopefully the April update of Oracle 
 Java 7 will support hard-float as well.



[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-10-26 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13485110#comment-13485110
 ] 

Trevor Robinson commented on HADOOP-8713:
-----------------------------------------

Thomas, would you mind reviewing and committing this? It's a tiny patch, and 
has been updated based on Vlad's comment.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.



[jira] [Commented] (HADOOP-8769) Tests failures on the ARM hosts

2012-10-02 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467977#comment-13467977
 ] 

Trevor Robinson commented on HADOOP-8769:
-----------------------------------------

This job has been failing due to a configuration issue for the last week or so, 
e.g. https://builds.apache.org/job/Hadoop-trunk-ARM/39/console:

bq. [ERROR] Could not create local repository at /x1/hudson/.m2/repository - 
[Help 1]

Do you know what the problem is? And is there some way I can help fix these 
issues? For instance, I'd like to be able to initiate a build as JDK7 unit test 
fixes are committed.

 Tests failures on the ARM hosts 
 

 Key: HADOOP-8769
 URL: https://issues.apache.org/jira/browse/HADOOP-8769
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins

 I created a [jenkins job|https://builds.apache.org/job/Hadoop-trunk-ARM] that 
 runs on the ARM machines. The local build is now working and running tests 
 (thanks Gavin!), however there are 40 test failures, looks like most are due 
 to host configuration issues. 



[jira] [Created] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-04 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8764:
---

 Summary: CMake: HADOOP-8737 broke ARM build
 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_06, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_06/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson


ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
which reports values like armv7l for the ARMv7 architecture. However, the 
OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.
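
One way to see the mismatch (an illustrative aside, not part of the patch): the JVM reports its own architecture name via the {{os.arch}} system property, which matches the jre/lib/<arch> directory name ("arm" here), while {{uname -p}} reports the kernel's view ("armv7l").

```java
// Illustrative probe: os.arch is the JVM's notion of the architecture,
// which is what the jre/lib/<arch> directory is named after. On an ARMv7
// JRE this prints "arm" even though `uname -p` says "armv7l".
public class ArchProbe {
    public static void main(String[] args) {
        System.out.println(System.getProperty("os.arch"));
    }
}
```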



[jira] [Updated] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-04 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8764:


Attachment: HADOOP-8764.patch

 CMake: HADOOP-8737 broke ARM build
 --

 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_06, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_06/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8764.patch


 ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
 which reports values like armv7l for the ARMv7 architecture. However, the 
 OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.



[jira] [Updated] (HADOOP-8764) CMake: HADOOP-8737 broke ARM build

2012-09-04 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8764:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

Added a processor architecture decode case for ARM, similar to the x86 one:

{code}
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "^arm")
    SET(_java_libarch "arm")
{code}


 CMake: HADOOP-8737 broke ARM build
 --

 Key: HADOOP-8764
 URL: https://issues.apache.org/jira/browse/HADOOP-8764
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_06, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_06/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-1000-highbank, arch: arm, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8764.patch


 ARM build is broken again: CMAKE_SYSTEM_PROCESSOR comes from {{uname -p}}, 
 which reports values like armv7l for the ARMv7 architecture. However, the 
 OpenJDK and Oracle ARM JREs both use jre/lib/arm for the JVM directory.



[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438875#comment-13438875
 ] 

Trevor Robinson commented on HADOOP-8713:
-----------------------------------------

Sure, I guess it's better to start with a known condition.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-21 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13438936#comment-13438936
 ] 

Trevor Robinson commented on HADOOP-8713:
-----------------------------------------

And the TestZKFailoverController failure is HADOOP-8591.

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch, HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.





[jira] [Created] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8713:
---

 Summary: TestRPCCompatibility fails intermittently with JDK7
 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson


TestRPCCompatibility can fail intermittently with errors like the following 
when tests are not run in declaration order:

{noformat}
testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
expected:<3> but was:<-3>
{noformat}

Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
testVersion2ClientVersion2Server to tearDown fixes the issue.
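
The pattern behind the fix can be sketched without JUnit (class and field names below are illustrative, not Hadoop's): when test methods share static state, resetting it in a teardown step after every test keeps the suite order-independent.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a static cache shared across test methods breaks
// under JDK7's undefined method ordering unless a tearDown-style reset
// runs after every test, instead of one test doing the reset ad hoc.
public class OrderIndependentTests {
    static Map<String, Integer> protocolCache = new HashMap<>();

    static void tearDown() {      // with JUnit this would be an @After method
        protocolCache.clear();    // reset shared state after *every* test
    }

    static int lookup(String proto) {
        // A fresh (uncached) signature resolves to version 3 in this sketch.
        return protocolCache.computeIfAbsent(proto, p -> 3);
    }

    public static void main(String[] args) {
        protocolCache.put("TestProtocol", -3);      // one test poisons the cache...
        tearDown();                                 // ...but teardown clears it,
        System.out.println(lookup("TestProtocol")); // so the next test sees 3
    }
}
```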





[jira] [Updated] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8713:


Attachment: HADOOP-8713.patch

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.





[jira] [Updated] (HADOOP-8713) TestRPCCompatibility fails intermittently with JDK7

2012-08-20 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8713:


Status: Patch Available  (was: Open)

 TestRPCCompatibility fails intermittently with JDK7
 ---

 Key: HADOOP-8713
 URL: https://issues.apache.org/jira/browse/HADOOP-8713
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8713.patch


 TestRPCCompatibility can fail intermittently with errors like the following 
 when tests are not run in declaration order:
 {noformat}
 testVersion2ClientVersion1Server(org.apache.hadoop.ipc.TestRPCCompatibility): 
 expected:<3> but was:<-3>
 {noformat}
 Moving the reset of the ProtocolSignature cache from ad-hoc usage in 
 testVersion2ClientVersion2Server to tearDown fixes the issue.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-17 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Attachment: HADOOP-8695-2.patch

Alright, let's fix it harder. :-) New patch uses {{@Before/After}} to create a 
separate {{FileSystem}} instance for each test.

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3, 3.0.0, 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8695-2.patch, HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-17 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Status: Patch Available  (was: Open)

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3, 3.0.0, 2.2.0-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HADOOP-8695-2.patch, HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:<file:/tmp> 
 but 
 was:</data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1>
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-15 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435521#comment-13435521
 ] 

Trevor Robinson commented on HADOOP-8390:
-----------------------------------------

Thanks. Could you commit this and/or review the [other JDK7 
fixes|https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&jqlQuery=project+in+%28HADOOP%2C+HDFS%29+AND+summary+~+jdk7+AND+resolution+%3D+Unresolved]?

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-14 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434379#comment-13434379
 ] 

Trevor Robinson commented on HADOOP-8390:
-----------------------------------------

That's a larger change because @BeforeClass only works if the class doesn't 
extend TestCase: 
http://stackoverflow.com/questions/733037/why-isnt-my-beforeclass-method-running

Still, it's probably better to upgrade to modern, annotation-based tests, so 
I'll attach a patch.

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-14 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Attachment: HADOOP-8390-BeforeClass.patch

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Commented] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-14 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434403#comment-13434403
 ] 

Trevor Robinson commented on HADOOP-8693:
-

Test failures are HADOOP-8699

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.





[jira] [Commented] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-14 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434402#comment-13434402
 ] 

Trevor Robinson commented on HADOOP-8695:
-

Test failures are HADOOP-8699

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp 
 but 
 was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Commented] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-08-14 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434425#comment-13434425
 ] 

Trevor Robinson commented on HADOOP-8699:
-

To aid in finding this issue, the failing tests include:

* org.apache.hadoop.fs.TestS3_LocalFileContextURI
* org.apache.hadoop.fs.s3native.TestInMemoryNativeS3FileSystemContract
* org.apache.hadoop.fs.TestLocal_S3FileContextURI
* org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract

 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of the test/resources.
 Things fail/pass depending on the order testcases are run (which seems 
 dependent on the platform/jvm you are using).





[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-14 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13434443#comment-13434443
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Test failures are HADOOP-8699

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13432995#comment-13432995
 ] 

Trevor Robinson commented on HADOOP-8659:
-

What's the difference between 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/ and 
https://builds.apache.org/job/PreCommit-HDFS-Build/? The former is passing but 
the latter is failing. Does the former not build native libraries? Also 
https://builds.apache.org/job/PreCommit-HDFS-Build/2983/ included this change 
but appears to have built successfully. I'm baffled right now, but it's also 
past 3am for me.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433384#comment-13433384
 ] 

Trevor Robinson commented on HADOOP-8659:
-

Thanks for fixing this, Colin.

bq. we do this by setting CMAKE_SYSTEM_PROCESSOR. However, you must do this 
before find_package(JNI REQUIRED)

So that's why CMAKE_SYSTEM_PROCESSOR was being set... This subtlety screams for 
a comment in the code. :-)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659-fix-001.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Attachment: HADOOP-8390.patch

This is simply a test order-dependency bug. {{testSetupResolver()}} is declared 
as a {{@Test}}, but just performs static initialization required by most of the 
other tests ({{NetUtilsTestResolver.install()}}). The attached patch changes 
this test method to a static initializer block.

Perhaps the reason this breaks with JDK7 is that it doesn't seem to preserve 
the declaration of class members for reflection.
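The change described above can be sketched in plain Java. The class and method names below are illustrative stand-ins, not the real Hadoop test classes: a static initializer runs exactly once when the class loads, before any test method executes, so it is immune to method ordering.

```java
// Minimal sketch of replacing an order-dependent setup @Test with a static
// initializer block. ResolverDemo and install() are illustrative stand-ins
// for the real test class and NetUtilsTestResolver.install().
public class ResolverDemo {
    static boolean resolverInstalled;

    // Runs once at class-load time, before any test method, regardless of
    // the order in which test methods are discovered or run.
    static {
        install();
    }

    static void install() {
        resolverInstalled = true;
    }

    // Any "test" method can now assume the resolver is present.
    static boolean testShortAuthority() {
        return resolverInstalled;
    }

    public static void main(String[] args) {
        System.out.println(testShortAuthority()); // prints "true"
    }
}
```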

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:myfs://host.a.b:123 but was:myfs://host:123
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:myfs://host.a.b:123 but was:myfs://host.a:123
 Passes on same machine with JDK 1.6.0_32.





[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433650#comment-13433650
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Err, JDK7 doesn't seem to preserve the declaration *order* of class members for 
reflection.

The reflection methods warn about order being undefined, but JDK6 seemed to 
preserve it. {{testSetupResolver()}} was declared first, so JUnit with JDK6 ran 
it first.

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Created] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8692:
---

 Summary: TestLocalDirAllocator fails intermittently with JDK7
 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): Checking 
for build/test/temp/RELATIVE1 in 
build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
  test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
 in 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
 - FAILED!
  test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
 in 
/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
 - FAILED!

The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
after itself, so if it runs before test0 (due to undefined test ordering on 
JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
finally { rmBufferDirs(); }{code}
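The fix described above is the standard try/finally cleanup pattern. In this self-contained sketch, a list stands in for the temp-dir state, and {{rmBufferDirs()}} mirrors only the name of the real helper:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the try/finally cleanup fix. The list is an illustrative
// stand-in for the buffer-dir state; rmBufferDirs() mirrors the real
// helper's name but nothing else.
public class CleanupDemo {
    static List<String> bufferDirs = new ArrayList<>();

    static void rmBufferDirs() {
        bufferDirs.clear();
    }

    static void testRemoveContext() {
        bufferDirs.add("build/test/temp/RELATIVE0");
        try {
            // ... test assertions that may throw ...
        } finally {
            rmBufferDirs(); // always runs, so later tests start clean
        }
    }

    public static void main(String[] args) {
        testRemoveContext();
        System.out.println(bufferDirs.isEmpty()); // prints "true"
    }
}
```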





[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Attachment: HADOOP-8692.patch

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}





[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}





[jira] [Commented] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13433709#comment-13433709
 ] 

Trevor Robinson commented on HADOOP-8390:
-

Confirmation of the JDK7 issue: 
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7023180

{quote}
Starting in build 129 of JDK 7, the order of methods returned by 
getDeclaredMethods changed and can vary from run to run.  This has been 
observed to cause issues for applications relying on the 
specified-to-be-unspecified ordering of methods returned by getDeclaredMethods.
The previous implementation of getDeclaredMethods did not have a firm 
ordering guarantee and the specification does not require one.  Merely 
returning a consistent order throughout the run of a VM would not be sufficient 
to address programs expecting a (mostly) sorted order.
Imposing a predictable ordering is not being considered at this time; closing 
as not a bug.
{quote}
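The behavior described in the quoted bug report is easy to observe directly. This stdlib-only sketch prints a class's declared methods, whose order is unspecified and may vary between runs on JDK 7+:

```java
import java.lang.reflect.Method;

// Demonstrates that Class.getDeclaredMethods() makes no ordering guarantee.
// On JDK 7+ the returned order can differ from declaration order and can
// vary between runs, which is why order-dependent tests break.
public class MethodOrderDemo {
    void first() {}
    void second() {}
    void third() {}

    public static void main(String[] args) {
        for (Method m : MethodOrderDemo.class.getDeclaredMethods()) {
            // Do not rely on this order; sort explicitly if order matters.
            System.out.println(m.getName());
        }
    }
}
```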

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.





[jira] [Created] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8693:
---

 Summary: TestSecurityUtil fails intermittently with JDK7
 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are adding static resolution for localhost.





[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Description: 
Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.

  was:
Failed tests:   
testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123
  testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
expected:[127.0.0.1]:123 but was:[localhost]:123

Test methods run in an arbitrary order with JDK7. In this case, these tests 
fail because tests like {{testSocketAddrWithName}} (which run afterward with 
JDK6) are adding static resolution for localhost.


 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.





[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Attachment: HADOOP-8693.patch

Call {{SecurityUtil.setTokenServiceUseIp(true)}} at the beginning of 
{{testBuildDTServiceName}} and {{testBuildTokenServiceSockAddr}}, since they 
are expecting an IP address.
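The shape of that fix can be sketched with a stand-in for SecurityUtil's static flag. Everything below is illustrative (the class, flag, and method bodies are hypothetical, not Hadoop's actual implementation); only the pattern of pinning shared static state at the start of the test matches the patch.

```java
// Illustrative stand-in for SecurityUtil's process-wide setting.
public class TokenServiceExample {
    // Static state shared across all tests in the JVM.
    static boolean useIp = true;

    static void setTokenServiceUseIp(boolean flag) { useIp = flag; }

    static String buildTokenService(String ip, String host, int port) {
        return (useIp ? ip : host) + ":" + port;
    }

    // The fixed test pins the shared flag first, so it passes no matter
    // which earlier test flipped it.
    static String testBuildDTServiceName() {
        setTokenServiceUseIp(true); // the one-line fix from the patch
        return buildTokenService("127.0.0.1", "localhost", 123);
    }
}
```

Because the test no longer assumes the flag's prior value, it yields the same result under JDK6's declaration order and JDK7's arbitrary order.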

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.





[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.





[jira] [Created] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8695:
---

 Summary: TestPathData fails intermittently with JDK7
 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp but 
was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variables {{dirString}} and 
{{testDir}} with {{file:///tmp}}. With JDK7, test methods run in an undefined 
order, and the other tests will fail if run after this one.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Description: 
Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp but 
was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variable {{testDir}} with 
{{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
other tests will fail if run after this one.

  was:
Failed tests:   
testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
exist
  testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp but 
was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1

{{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
runs last with JDK6) overwrites the static variables {{dirString}} and 
{{testDir}} with {{file:///tmp}}. With JDK7, test methods run in an undefined 
order, and the other tests will fail if run after this one.


 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp 
 but 
 was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Component/s: test

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson

 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp 
 but 
 was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Updated] (HADOOP-8693) TestSecurityUtil fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8693:


Component/s: test

 TestSecurityUtil fails intermittently with JDK7
 ---

 Key: HADOOP-8693
 URL: https://issues.apache.org/jira/browse/HADOOP-8693
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8693.patch


 Failed tests:   
 testBuildDTServiceName(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
   testBuildTokenServiceSockAddr(org.apache.hadoop.security.TestSecurityUtil): 
 expected:[127.0.0.1]:123 but was:[localhost]:123
 Test methods run in an arbitrary order with JDK7. In this case, these tests 
 fail because tests like {{testSocketAddrWithName}} (which run afterward with 
 JDK6) are calling {{SecurityUtil.setTokenServiceUseIp(false)}}.





[jira] [Updated] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8692:


Component/s: test

 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}
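The try/finally pattern quoted above can be sketched as follows. The class and helper names mirror the real test (makeBufferDirs/rmBufferDirs) but the bodies are hypothetical stand-ins, not the Hadoop test code.

```java
import java.io.File;

// Illustrative sketch of cleanup via try/finally so no stale state
// leaks into whichever test JDK7 happens to run next.
public class CleanupExample {
    static File bufferDir;

    static void makeBufferDirs() {
        bufferDir = new File(System.getProperty("java.io.tmpdir"),
                             "RELATIVE0-" + System.nanoTime());
        bufferDir.mkdirs();
    }

    static void rmBufferDirs() {
        bufferDir.delete();
    }

    // The fix: cleanup runs even if the test body throws.
    static boolean testRemoveContext() {
        makeBufferDirs();
        try {
            return bufferDir.isDirectory(); // stand-in for the test body
        } finally {
            rmBufferDirs();                 // always executed
        }
    }
}
```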





[jira] [Updated] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8390:


Component/s: test

 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:myfs://host.a.b:123 but was:myfs://host:123
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:myfs://host.a.b:123 but was:myfs://host.a:123
 Passes on same machine with JDK 1.6.0_32.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Attachment: HADOOP-8695.patch

Removed static variable {{dirString}} and changed 
{{testWithStringAndConfForBuggyPath}} to not modify {{testDir}}.
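The essence of that change can be sketched like this. All names and values below are illustrative stand-ins for the TestPathData fixtures, not the actual Hadoop test code; the point is that the buggy-path input stays local instead of overwriting shared static state.

```java
// Illustrative sketch: keep per-test inputs local, not static.
public class PathDataExample {
    // Stand-in for the shared testDir fixture (hypothetical value).
    static String testDir = "/data/testPD";

    // Before the fix this method reassigned testDir to "file:///tmp";
    // now the value stays local to the test.
    static String testWithStringAndConfForBuggyPath() {
        String buggyDir = "file:///tmp";
        return buggyDir.substring("file://".length());
    }

    static boolean testToFile() {
        // Unaffected regardless of when the buggy-path test runs.
        return testDir.equals("/data/testPD");
    }
}
```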

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp 
 but 
 was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Updated] (HADOOP-8695) TestPathData fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8695:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestPathData fails intermittently with JDK7
 ---

 Key: HADOOP-8695
 URL: https://issues.apache.org/jira/browse/HADOOP-8695
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8695.patch


 Failed tests:   
 testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking 
 exist
   testToFile(org.apache.hadoop.fs.shell.TestPathData): expected:file:/tmp 
 but 
 was:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/target/test/data/testPD/d1
 {{testWithStringAndConfForBuggyPath}} (which is declared last and therefore 
 runs last with JDK6) overwrites the static variable {{testDir}} with 
 {{file:///tmp}}. With JDK7, test methods run in an undefined order, and the 
 other tests will fail if run after this one.





[jira] [Created] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8697:
---

 Summary: TestWritableName fails intermittently with JDK7
 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
Maven home: /usr/share/maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
fail with:

{noformat}
testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
class: mystring
{noformat}






[jira] [Updated] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8697:


Attachment: HADOOP-8697.patch

Remove dependency of {{testAddName}} on {{testSetName}} running first by 
explicitly calling {{WritableName.setName}}.
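That fix can be sketched with a stand-in for WritableName's static registry. The class and map below are hypothetical, not the Hadoop implementation; they only illustrate registering the required name inside the test instead of depending on another test having done it.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for WritableName's static name -> class registry.
public class WritableNameExample {
    static final Map<String, String> NAMES = new HashMap<>();

    static void setName(String className, String name) { NAMES.put(name, className); }
    static void addName(String className, String name) { NAMES.put(name, className); }

    static String getClassFor(String name) {
        String cls = NAMES.get(name);
        if (cls == null) {
            throw new IllegalStateException("WritableName can't load class: " + name);
        }
        return cls;
    }

    // Fixed test: register "mystring" itself rather than relying on
    // testSetName having run first.
    static String testAddName() {
        setName("java.lang.String", "mystring"); // explicit setup (the fix)
        addName("java.lang.String", "mystring2");
        return getClassFor("mystring");
    }
}
```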

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}





[jira] [Updated] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-08-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8697:


Assignee: Trevor Robinson
  Status: Patch Available  (was: Open)

 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432054#comment-13432054
 ] 

Trevor Robinson commented on HADOOP-8659:
-

It's not as easy as just CHECK_SYMBOL_EXISTS/CHECK_LIBRARY_EXISTS, since the 
soft-float libraries are identical to the hard-float ones, but are installed in 
different directories. However, I can do a test compilation against an 
arbitrary libc symbol with the soft-float flag:

{code}
include(CMakePushCheckState)
cmake_push_check_state()
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -mfloat-abi=softfp")
include(CheckSymbolExists)
check_symbol_exists(exit "stdlib.h" SOFTFP_AVAILABLE)
cmake_pop_check_state()
{code}

Unfortunately, there is currently no good way to determine the JVM's float ABI. 
It's not reported at all by the Oracle EJRE or OpenJDK. The current behavior of 
linking against the JVM library with the wrong ABI doesn't report an error. 
What I can do is restrict this code path to Linux (since this issue is 
Linux-specific for now), where readelf is part of binutils (like ld), so it 
should always be available. But I'll also check for it and issue a warning if 
it's not found. For example:

{code}
find_program(READELF readelf)
if (READELF MATCHES "NOTFOUND")
  message(WARNING "readelf not found; JVM float ABI detection disabled")
endif ()
{code}


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Attached updated patch based on Colin's comments.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: (was: HADOOP-8659.patch)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Update patch to remove unnecessary dependency on JAVA_JVM_LIBRARY from 
hadooppipes, which caused build failure in Jenkins:

{noformat}CMake Error: The following variables are used in this project, but 
they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake 
files:
JAVA_JVM_LIBRARY (ADVANCED)
linked by target hadooppipes in directory 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-tools/hadoop-pipes/src
{noformat}

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: (was: HADOOP-8659.patch)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Created] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8659:
---

 Summary: Native libraries must build with soft-float ABI for 
Oracle JVM
 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson


There was recently an ABI (application binary interface) change in most Linux 
distributions for modern ARM processors (ARMv7). Historically, hardware 
floating-point (FP) support was optional/vendor-specific for ARM processors, so 
for software compatibility, the default ABI required that processors with FP 
units copy FP arguments into integer registers (or memory) when calling a 
shared library function. Now that hardware floating-point has been standardized 
for some time, Linux distributions such as Ubuntu 12.04 have changed the 
default ABI to leave FP arguments in FP registers, since this can significantly 
improve performance for FP libraries.

Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports the 
new ABI, presumably since this involves some non-trivial changes to components 
like JNI. While the soft-float JVM can run on systems with multi-arch support 
(currently Debian/Ubuntu) using compatibility libraries, this configuration 
requires that any third-party JNI libraries also be compiled using the 
soft-float ABI. Since hard-float systems default to compiling for hard-float, 
an extra argument to GCC (and installation of a compatibility library) is 
required to build soft-float Hadoop native libraries that work with the Oracle 
JVM.

Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
libraries to use it as well. Therefore the fix for this issue requires 
detecting the float ABI of the current JVM.
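One plausible way to perform that detection (the attached patch may do it differently) is to inspect the ARM build attributes of the JVM's `libjvm.so`. This sketch assumes Linux with GNU binutils, and the library path shown is an assumption that varies by JDK layout:

```shell
# Sketch (assumptions: Linux, binutils readelf, libjvm.so path):
# an ARM hard-float shared library carries the build attribute
# Tag_ABI_VFP_args: VFP registers; soft-float builds lack it.
LIBJVM="${JAVA_HOME:-/usr/lib/jvm/default-java}/jre/lib/arm/server/libjvm.so"
if [ -f "$LIBJVM" ]; then
    if readelf -A "$LIBJVM" | grep -q 'Tag_ABI_VFP_args'; then
        echo "hard-float JVM"
    else
        echo "soft-float JVM"
    fi
else
    echo "libjvm.so not found at $LIBJVM"
fi
```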





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

The attached patch factors out platform-specific build configuration for 
various native libraries (e.g. HADOOP-8538) into a single included file and 
adds support for building soft-float libraries on hard-float ARM systems when 
using a soft-float JVM.
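For reference, the extra GCC argument involved looks roughly like this (the toolchain name and source file are assumptions for illustration; the patch presumably wires the flag through CMake rather than invoking GCC directly):

```shell
# Illustration only: building a soft-float shared library on a
# hard-float ARM system needs -mfloat-abi=softfp (plus soft-float
# compatibility libraries). Toolchain and source file are assumed;
# the guard just skips the compile when either is absent.
if command -v arm-linux-gnueabi-gcc >/dev/null 2>&1 && [ -f native.c ]; then
    arm-linux-gnueabi-gcc -mfloat-abi=softfp -fPIC -shared -o libnative.so native.c
else
    echo "soft-float ARM toolchain not available; flag shown for reference"
fi
```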

 Native libraries must build with soft-float ABI for Oracle JVM
 --

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM
 --

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
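The ABI detection described above can be sketched as a small shell probe: hard-float (armhf) ARM ELF objects carry the Tag_ABI_VFP_args build attribute, while soft-float ones do not, so inspecting libjvm.so with `readelf -A` distinguishes the two. This is an illustrative sketch, not the attached patch; `classify_abi` and the library path are assumed names.

```shell
# Hypothetical helper: classify an ARM JVM as hard- or soft-float from
# the output of `readelf -A libjvm.so`. Working on the captured text
# keeps the decision a pure string test.
classify_abi() {
  # $1: output of `readelf -A <libjvm.so>`
  if printf '%s' "$1" | grep -q 'Tag_ABI_VFP_args'; then
    echo hard
  else
    echo soft
  fi
}

# Typical use (the library path varies by JVM layout):
#   attrs=$(readelf -A "$JAVA_HOME/lib/arm/client/libjvm.so")
#   classify_abi "$attrs"
```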

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Summary: Native libraries must build with soft-float ABI for Oracle JVM on 
ARM  (was: Native libraries must build with soft-float ABI for Oracle JVM)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13430842#comment-13430842
 ] 

Trevor Robinson commented on HADOOP-8659:
-

I don't think it's different for non-float args. The problem is all of the 
transitive dependencies, such as using a different libc. When trying to load a 
JNI native library with the wrong float ABI, the JVM usually crashes silently 
with exit code 1. For instance, the build currently dies on hard-float ARM with 
the Oracle JVM when running 
hadoop-hdfs-project/hadoop-hdfs/target/native/test_libhdfs_threaded.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Updated patch based on testing with hard-float OpenJDK. Also verified unchanged 
behavior on x86-64.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch







[jira] [Commented] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found

2012-07-05 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13407465#comment-13407465
 ] 

Trevor Robinson commented on HADOOP-8370:
-

Is there anything I can do to help resolve this issue?

Also, any idea why it works for everyone else with scope 'provided'? Am I using 
a newer Maven version?

 Native build failure: javah: class file for 
 org.apache.hadoop.classification.InterfaceAudience not found
 

 Key: HADOOP-8370
 URL: https://issues.apache.org/jira/browse/HADOOP-8370
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.23.1
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven-3.0.4
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8370.patch


 [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common ---
 [INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common 
  /usr/lib/jvm/jdk1.7.0_02/bin/javah -d 
 /build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah 
 -classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping 
 org.apache.hadoop.io.nativeio.NativeIO 
 org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor 
 org.apache.hadoop.io.compress.lz4.Lz4Compressor 
 org.apache.hadoop.io.compress.lz4.Lz4Decompressor 
 org.apache.hadoop.util.NativeCrc32
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class 
 file for org.apache.hadoop.classification.InterfaceAudience not found
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
 Error: cannot access org.apache.hadoop.classification.InterfaceStability
   class file for org.apache.hadoop.classification.InterfaceStability not found
 The fix for me was to change the scope of hadoop-annotations from
 provided to compile in pom.xml:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-annotations</artifactId>
  <scope>compile</scope>
</dependency>
 For some reason, it was the only dependency with scope provided.
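The failure above is javah resolving the JNI classes but not the hadoop-annotations classes they reference, because a `provided`-scope jar never reaches the generated classpath. A minimal sketch of that check; `has_annotations` and the classpath strings are made-up examples, not the build's actual classpath.

```shell
# Hypothetical helper: report whether a javah -classpath value includes
# the hadoop-annotations jar at all.
has_annotations() {
  case "$1" in
    *hadoop-annotations*) echo present ;;
    *)                    echo missing ;;
  esac
}

# With scope 'provided', the jar is absent from the classpath:
has_annotations "target/classes:lib/hadoop-common-0.23.1.jar"      # -> missing
# With scope 'compile', it is included:
has_annotations "target/classes:lib/hadoop-annotations-0.23.1.jar" # -> present
```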





[jira] [Commented] (HADOOP-8538) CMake builds fail on ARM

2012-06-28 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13403136#comment-13403136
 ] 

Trevor Robinson commented on HADOOP-8538:
-

The 2 test failures appear to be unrelated known issues:

HADOOP-8110: junit.framework.AssertionFailedError: -expunge failed expected:<0> 
but was:<1>
https://builds.apache.org/job/PreCommit-HADOOP-Build/1146/testReport/org.apache.hadoop.fs.viewfs/TestViewFsTrash/testTrash/

HDFS-2881: java.util.concurrent.TimeoutException: Timed out waiting for corrupt 
replicas. Waiting for 2, but only found 0
https://builds.apache.org/job/PreCommit-HADOOP-Build/1146/testReport/org.apache.hadoop.hdfs/TestDatanodeBlockScanner/testBlockCorruptionRecoveryPolicy2/


 CMake builds fail on ARM
 

 Key: HADOOP-8538
 URL: https://issues.apache.org/jira/browse/HADOOP-8538
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: hadoop-cmake.patch


 CMake native builds fail with this error:
 cc1: error: unrecognized command line option '-m32'
 -m32 is only defined by GCC for x86, PowerPC, and SPARC.
 The following files specify -m32 when the JVM data model is 32-bit:
 hadoop-common-project/hadoop-common/src/CMakeLists.txt
 hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
 hadoop-tools/hadoop-pipes/src/CMakeLists.txt
 hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
 This is a partial regression of HDFS-1920.





[jira] [Created] (HADOOP-8538) CMake builds fail on ARM

2012-06-27 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8538:
---

 Summary: CMake builds fail on ARM
 Key: HADOOP-8538
 URL: https://issues.apache.org/jira/browse/HADOOP-8538
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
Reporter: Trevor Robinson


CMake native builds fail with this error:

cc1: error: unrecognized command line option '-m32'

-m32 is only defined by GCC for x86, PowerPC, and SPARC.

The following files specify -m32 when the JVM data model is 32-bit:

hadoop-common-project/hadoop-common/src/CMakeLists.txt
hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
hadoop-tools/hadoop-pipes/src/CMakeLists.txt
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt

This is a partial regression of HDFS-1920.
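Since -m32 is only defined by GCC for x86, PowerPC, and SPARC, a portable alternative to keying the flag on the JVM data model alone is to test-compile with it and omit it when the compiler objects. A sketch under that assumption; `supports_m32` and the temp-file handling are illustrative, not the attached patch.

```shell
# Probe whether the C compiler accepts -m32 by compiling a trivial
# program; on ARM gcc the probe fails and the flag is simply omitted.
supports_m32() {
  dir=$(mktemp -d)
  printf 'int main(void){return 0;}\n' > "$dir/t.c"
  "${CC:-cc}" -m32 "$dir/t.c" -o "$dir/t" >/dev/null 2>&1
  status=$?
  rm -rf "$dir"
  return $status
}

if supports_m32; then
  CFLAGS="$CFLAGS -m32"
  echo "adding -m32"
else
  echo "compiler rejects -m32; leaving flags unchanged"
fi
```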





[jira] [Updated] (HADOOP-8538) CMake builds fail on ARM

2012-06-27 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8538:


Attachment: hadoop-cmake.patch

Patch CMake files to add -m32 compile/link flag only when using GCC on 64-bit 
platforms. Verified that native libraries build correctly on amd64 with 32-bit 
and 64-bit JVM and on ARM.

 CMake builds fail on ARM
 

 Key: HADOOP-8538
 URL: https://issues.apache.org/jira/browse/HADOOP-8538
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
Reporter: Trevor Robinson
 Attachments: hadoop-cmake.patch







[jira] [Updated] (HADOOP-8538) CMake builds fail on ARM

2012-06-27 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8538:


Status: Patch Available  (was: Open)

 CMake builds fail on ARM
 

 Key: HADOOP-8538
 URL: https://issues.apache.org/jira/browse/HADOOP-8538
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
Reporter: Trevor Robinson
 Attachments: hadoop-cmake.patch







[jira] [Commented] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found

2012-06-26 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13401705#comment-13401705
 ] 

Trevor Robinson commented on HADOOP-8370:
-

I'm not sure where to check for that. Does the classpath just contain all of 
the jars in the appropriate directory under share/hadoop?

With this patch, jdiff-*.jar shows up in the same place as the official CDH4 
build, only under share/hadoop/mapreduce/lib:

$ find hadoop-dist/target/hadoop-2.0.0-cdh4.0.0 -name '*jdiff*.jar'
hadoop-dist/target/hadoop-2.0.0-cdh4.0.0/share/hadoop/mapreduce/lib/jdiff-1.0.9.jar


 Native build failure: javah: class file for 
 org.apache.hadoop.classification.InterfaceAudience not found
 

 Key: HADOOP-8370
 URL: https://issues.apache.org/jira/browse/HADOOP-8370
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.23.1
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven-3.0.4
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8370.patch







[jira] [Commented] (HADOOP-8329) Hadoop-Common build fails with IBM Java 7 on branch-1.0

2012-05-17 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13278393#comment-13278393
 ] 

Trevor Robinson commented on HADOOP-8329:
-

I get the same error with Oracle Java 7 (update 4).

 Hadoop-Common build fails with IBM Java 7 on branch-1.0
 ---

 Key: HADOOP-8329
 URL: https://issues.apache.org/jira/browse/HADOOP-8329
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.2, 1.0.3
Reporter: Kumar Ravi

 I am seeing the following message running IBM Java 7 running branch-1.0 code.
 compile:
 [echo] contrib: gridmix
 [javac] Compiling 31 source files to 
 /home/hadoop/branch-1.0_0427/build/contrib/gridmix/classes
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:396:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] private <T> String getEnumValues(Enum<? extends T>[] e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] 
 /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:399:
  error: type argument ? extends T is not within bounds of type-variable E
 [javac] for (Enum<? extends T> v : e) {
 [javac] ^
 [javac] where T,E are type-variables:
 [javac] T extends Object declared in method <T>getEnumValues(Enum<? extends T>[])
 [javac] E extends Enum<E> declared in class Enum
 [javac] Note: Some input files use unchecked or unsafe operations.
 [javac] Note: Recompile with -Xlint:unchecked for details.
 [javac] 2 errors
 BUILD FAILED
 /home/hadoop/branch-1.0_0427/build.xml:703: The following error occurred 
 while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build.xml:30: The following error 
 occurred while executing this line:
 /home/hadoop/branch-1.0_0427/src/contrib/build-contrib.xml:185: Compile 
 failed; see the compiler error output for details.





[jira] [Updated] (HADOOP-8329) Hadoop-Common build fails with Java 7 on branch-1.0

2012-05-17 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8329:


Summary: Hadoop-Common build fails with Java 7 on branch-1.0  (was: 
Hadoop-Common build fails with IBM Java 7 on branch-1.0)

 Hadoop-Common build fails with Java 7 on branch-1.0
 ---

 Key: HADOOP-8329
 URL: https://issues.apache.org/jira/browse/HADOOP-8329
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.2, 1.0.3
Reporter: Kumar Ravi






[jira] [Created] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-05-10 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8390:
---

 Summary: TestFileSystemCanonicalization fails with JDK7
 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
Maven home: /usr/local/apache-maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Ubuntu 12.04 LTS
Reporter: Trevor Robinson


Failed tests:
 testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
expected:<myfs://host.a.b:123> but was:<myfs://host:123>
 testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>

Passes on same machine with JDK 1.6.0_32.
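Both failures concern expanding a short or partial authority against a default domain before URIs are compared. A hypothetical sketch of that canonicalization rule, not the actual Hadoop NetUtils code; `canonicalHost` and the domain "a.b" are assumptions for illustration:

```java
import java.util.Arrays;

public class Canonicalize {
    // Append only the domain labels the host does not already end with,
    // so "host" -> "host.a.b", "host.a" -> "host.a.b", and "host.a.b"
    // is returned unchanged.
    static String canonicalHost(String host, String domain) {
        String[] labels = domain.split("\\.");
        for (int k = labels.length; k >= 0; k--) {
            String overlap = String.join(".", Arrays.copyOfRange(labels, 0, k));
            if (k == 0 || host.equals(overlap) || host.endsWith("." + overlap)) {
                String rest = String.join(".", Arrays.copyOfRange(labels, k, labels.length));
                return rest.isEmpty() ? host : host + "." + rest;
            }
        }
        return host; // unreachable: k == 0 always matches
    }

    public static void main(String[] args) {
        System.out.println(canonicalHost("host", "a.b"));   // prints host.a.b
        System.out.println(canonicalHost("host.a", "a.b")); // prints host.a.b
    }
}
```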






[jira] [Commented] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed

2012-05-08 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13270622#comment-13270622
 ] 

Trevor Robinson commented on HADOOP-7868:
-

I was surprised that supporting three different tools was necessary, but I 
wasn't bold enough to assume it was safe to remove any. ;-)

As a bit of context for someone thinking about committing this patch (please 
do!): together with HADOOP-8370 and HDFS-3383, it enables building on Ubuntu 
12.04 ARM Server.

 Hadoop native fails to compile when default linker option is -Wl,--as-needed
 

 Key: HADOOP-7868
 URL: https://issues.apache.org/jira/browse/HADOOP-7868
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.205.0, 1.0.0, 0.23.0
 Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable
Reporter: James Page
 Attachments: HADOOP-7868-portable.patch, HADOOP-7868.patch


 Recent releases of Ubuntu and Debian have switched to using --as-needed as 
 default when linking binaries.
 As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names 
 during execution of configure resulting in a build failure.
 Explicitly using -Wl,--no-as-needed in this macro when required resolves 
 this issue.
 See http://wiki.debian.org/ToolChain/DSOLinking for a few more details





[jira] [Created] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found

2012-05-07 Thread Trevor Robinson (JIRA)
Trevor Robinson created HADOOP-8370:
---

 Summary: Native build failure: javah: class file for 
org.apache.hadoop.classification.InterfaceAudience not found
 Key: HADOOP-8370
 URL: https://issues.apache.org/jira/browse/HADOOP-8370
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.23.1
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
Maven home: /usr/local/apache-maven-3.0.4
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Reporter: Trevor Robinson


[INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common ---
[INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common 
&& /usr/lib/jvm/jdk1.7.0_02/bin/javah -d 
/build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah 
-classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor 
org.apache.hadoop.security.JniBasedUnixGroupsMapping 
org.apache.hadoop.io.nativeio.NativeIO 
org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping 
org.apache.hadoop.io.compress.snappy.SnappyCompressor 
org.apache.hadoop.io.compress.snappy.SnappyDecompressor 
org.apache.hadoop.io.compress.lz4.Lz4Compressor 
org.apache.hadoop.io.compress.lz4.Lz4Decompressor 
org.apache.hadoop.util.NativeCrc32
Cannot find annotation method 'value()' in type 
'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class file 
for org.apache.hadoop.classification.InterfaceAudience not found
Cannot find annotation method 'value()' in type 
'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
Error: cannot access org.apache.hadoop.classification.InterfaceStability
  class file for org.apache.hadoop.classification.InterfaceStability not found

The fix for me was to change the scope of hadoop-annotations from
"provided" to "compile" in pom.xml:

   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-annotations</artifactId>
     <scope>compile</scope>
   </dependency>

For some reason, it was the only dependency with scope "provided".





[jira] [Updated] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found

2012-05-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8370:


Status: Patch Available  (was: Open)

 Native build failure: javah: class file for 
 org.apache.hadoop.classification.InterfaceAudience not found
 

 Key: HADOOP-8370
 URL: https://issues.apache.org/jira/browse/HADOOP-8370
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.23.1
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven-3.0.4
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8370.patch


 [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common ---
 [INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common 
 && /usr/lib/jvm/jdk1.7.0_02/bin/javah -d 
 /build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah 
 -classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping 
 org.apache.hadoop.io.nativeio.NativeIO 
 org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor 
 org.apache.hadoop.io.compress.lz4.Lz4Compressor 
 org.apache.hadoop.io.compress.lz4.Lz4Decompressor 
 org.apache.hadoop.util.NativeCrc32
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class 
 file for org.apache.hadoop.classification.InterfaceAudience not found
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
 Error: cannot access org.apache.hadoop.classification.InterfaceStability
   class file for org.apache.hadoop.classification.InterfaceStability not found
 The fix for me was to change the scope of hadoop-annotations from
 "provided" to "compile" in pom.xml:
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <scope>compile</scope>
    </dependency>
 For some reason, it was the only dependency with scope "provided".





[jira] [Updated] (HADOOP-8370) Native build failure: javah: class file for org.apache.hadoop.classification.InterfaceAudience not found

2012-05-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8370:


Attachment: HADOOP-8370.patch

 Native build failure: javah: class file for 
 org.apache.hadoop.classification.InterfaceAudience not found
 

 Key: HADOOP-8370
 URL: https://issues.apache.org/jira/browse/HADOOP-8370
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.23.1
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven-3.0.4
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
 Attachments: HADOOP-8370.patch


 [INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common ---
 [INFO] /bin/sh -c cd /build/hadoop-common/hadoop-common-project/hadoop-common 
 && /usr/lib/jvm/jdk1.7.0_02/bin/javah -d 
 /build/hadoop-common/hadoop-common-project/hadoop-common/target/native/javah 
 -classpath ... org.apache.hadoop.io.compress.zlib.ZlibDecompressor 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping 
 org.apache.hadoop.io.nativeio.NativeIO 
 org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping 
 org.apache.hadoop.io.compress.snappy.SnappyCompressor 
 org.apache.hadoop.io.compress.snappy.SnappyDecompressor 
 org.apache.hadoop.io.compress.lz4.Lz4Compressor 
 org.apache.hadoop.io.compress.lz4.Lz4Decompressor 
 org.apache.hadoop.util.NativeCrc32
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate': class 
 file for org.apache.hadoop.classification.InterfaceAudience not found
 Cannot find annotation method 'value()' in type 
 'org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate'
 Error: cannot access org.apache.hadoop.classification.InterfaceStability
   class file for org.apache.hadoop.classification.InterfaceStability not found
 The fix for me was to change the scope of hadoop-annotations from
 "provided" to "compile" in pom.xml:
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-annotations</artifactId>
      <scope>compile</scope>
    </dependency>
 For some reason, it was the only dependency with scope "provided".





[jira] [Updated] (HADOOP-7868) Hadoop native fails to compile when default linker option is -Wl,--as-needed

2012-05-07 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7868:


Attachment: HADOOP-7868-portable.patch

This patch fixes the issue using the approach suggested by Daryn: the 
configure test code it outputs actually calls into the library being detected 
(for both zlib and snappy), so --as-needed cannot discard it.

 Hadoop native fails to compile when default linker option is -Wl,--as-needed
 

 Key: HADOOP-7868
 URL: https://issues.apache.org/jira/browse/HADOOP-7868
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.20.205.0, 1.0.0, 0.23.0
 Environment: Ubuntu Precise, Ubuntu Oneiric, Debian Unstable
Reporter: James Page
 Attachments: HADOOP-7868-portable.patch, HADOOP-7868.patch


 Recent releases of Ubuntu and Debian have switched to using --as-needed as 
 default when linking binaries.
 As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names 
 during execution of configure resulting in a build failure.
 Explicitly using -Wl,--no-as-needed in this macro when required resolves 
 this issue.
 See http://wiki.debian.org/ToolChain/DSOLinking for a few more details





[jira] [Created] (HADOOP-7290) Unit test failure in TestUserGroupInformation.testGetServerSideGroups

2011-05-13 Thread Trevor Robinson (JIRA)
Unit test failure in TestUserGroupInformation.testGetServerSideGroups
-

 Key: HADOOP-7290
 URL: https://issues.apache.org/jira/browse/HADOOP-7290
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
 Environment: Linux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 
UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
Reporter: Trevor Robinson
Priority: Minor


Testsuite: org.apache.hadoop.security.TestUserGroupInformation
Tests run: 14, Failures: 1, Errors: 0, Time elapsed: 0.278 sec
- Standard Output ---
trobinson:users guest git
-  ---

Testcase: testGetServerSideGroups took 0.051 sec
   FAILED
expected:<g[ues]t> but was:<g[i]t>
junit.framework.AssertionFailedError: expected:<g[ues]t> but was:<g[i]t>
   at 
org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:94)

It seems like the test is assuming that the groups returned by 
UserGroupInformation.getGroupNames() are in the same order as those returned by 
executing `id -Gn`. getGroupNames() only documents that the primary group is 
first, and `man id` doesn't document any ordering, so it seems like the test 
needs to be reworked to remove that assumption.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7290) Unit test failure in TestUserGroupInformation.testGetServerSideGroups

2011-05-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7290:


Status: Patch Available  (was: Open)

 Unit test failure in TestUserGroupInformation.testGetServerSideGroups
 -

 Key: HADOOP-7290
 URL: https://issues.apache.org/jira/browse/HADOOP-7290
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
 Environment: Linux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 
 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
Reporter: Trevor Robinson
Priority: Minor
  Labels: test
 Attachments: TestUserGroupInformation-id-order.patch


 Testsuite: org.apache.hadoop.security.TestUserGroupInformation
 Tests run: 14, Failures: 1, Errors: 0, Time elapsed: 0.278 sec
 - Standard Output ---
 trobinson:users guest git
 -  ---
 Testcase: testGetServerSideGroups took 0.051 sec
FAILED
 expected:<g[ues]t> but was:<g[i]t>
 junit.framework.AssertionFailedError: expected:<g[ues]t> but was:<g[i]t>
at 
 org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:94)
 It seems like the test is assuming that the groups returned by 
 UserGroupInformation.getGroupNames() are in the same order as those returned 
 by executing `id -Gn`. getGroupNames() only documents that the primary group 
 is first, and `man id` doesn't document any ordering, so it seems like the 
 test needs to be reworked to remove that assumption.



[jira] [Updated] (HADOOP-7290) Unit test failure in TestUserGroupInformation.testGetServerSideGroups

2011-05-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7290:


Attachment: TestUserGroupInformation-id-order.patch

The patch puts the `id` output in a LinkedHashSet instead of an ArrayList, 
then tests it against getGroupNames() for size and containment of all values, 
rather than comparing corresponding array elements.
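The order-insensitive comparison the patch describes can be sketched as follows; `sameGroups` is an illustrative name, not the actual test code:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

public class GroupCheck {
    // Collect the expected groups into a LinkedHashSet and check size plus
    // containment, instead of comparing element-by-element against an
    // ordered array whose ordering `id -Gn` never guaranteed.
    static boolean sameGroups(String[] actual, Collection<String> expected) {
        Set<String> exp = new LinkedHashSet<String>(expected);
        return exp.size() == actual.length
                && exp.containsAll(Arrays.asList(actual));
    }

    public static void main(String[] args) {
        // getGroupNames() and `id -Gn` may order the groups differently.
        String[] fromApi = {"users", "git", "guest"};
        System.out.println(sameGroups(fromApi,
                Arrays.asList("users", "guest", "git"))); // prints true
    }
}
```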

 Unit test failure in TestUserGroupInformation.testGetServerSideGroups
 -

 Key: HADOOP-7290
 URL: https://issues.apache.org/jira/browse/HADOOP-7290
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
 Environment: Linux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 
 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
Reporter: Trevor Robinson
Priority: Minor
  Labels: test
 Attachments: TestUserGroupInformation-id-order.patch


 Testsuite: org.apache.hadoop.security.TestUserGroupInformation
 Tests run: 14, Failures: 1, Errors: 0, Time elapsed: 0.278 sec
 - Standard Output ---
 trobinson:users guest git
 -  ---
 Testcase: testGetServerSideGroups took 0.051 sec
FAILED
 expected:<g[ues]t> but was:<g[i]t>
 junit.framework.AssertionFailedError: expected:<g[ues]t> but was:<g[i]t>
at 
 org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:94)
 It seems like the test is assuming that the groups returned by 
 UserGroupInformation.getGroupNames() are in the same order as those returned 
 by executing `id -Gn`. getGroupNames() only documents that the primary group 
 is first, and `man id` doesn't document any ordering, so it seems like the 
 test needs to be reworked to remove that assumption.



[jira] [Commented] (HADOOP-7290) Unit test failure in TestUserGroupInformation.testGetServerSideGroups

2011-05-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13033257#comment-13033257
 ] 

Trevor Robinson commented on HADOOP-7290:
-

Contrib tests are unchanged. Was the trunk/PreCommit configuration broken by 
revision 1102848?

 [exec] 
==
 [exec] 
==
 [exec] Running contrib tests.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] /bin/kill -9 12446 
 [exec] kill: No such process
 [exec] /homes/hudson/tools/ant/latest/bin/ant 
-Dversion=1102861_HADOOP-7290_PATCH-12479160 
-Declipse.home=/homes/hudson/tools/eclipse/latest 
-Dpython.home=/homes/hudson/tools/python/latest -DHadoopPatchProcess= 
-Dtest.junit.output.format=xml -Dtest.output=no test-contrib
 [exec] Buildfile: build.xml
 [exec] 
 [exec] BUILD FAILED
 [exec] Target test-contrib does not exist in the project 
Hadoop-Common. 
 [exec] 
 [exec] Total time: 0 seconds

 Unit test failure in TestUserGroupInformation.testGetServerSideGroups
 -

 Key: HADOOP-7290
 URL: https://issues.apache.org/jira/browse/HADOOP-7290
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
 Environment: Linux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 
 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
Reporter: Trevor Robinson
Priority: Minor
  Labels: test
 Attachments: TestUserGroupInformation-id-order.patch


 Testsuite: org.apache.hadoop.security.TestUserGroupInformation
 Tests run: 14, Failures: 1, Errors: 0, Time elapsed: 0.278 sec
 - Standard Output ---
 trobinson:users guest git
 -  ---
 Testcase: testGetServerSideGroups took 0.051 sec
FAILED
 expected:<g[ues]t> but was:<g[i]t>
 junit.framework.AssertionFailedError: expected:<g[ues]t> but was:<g[i]t>
at 
 org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:94)
 It seems like the test is assuming that the groups returned by 
 UserGroupInformation.getGroupNames() are in the same order as those returned 
 by executing `id -Gn`. getGroupNames() only documents that the primary group 
 is first, and `man id` doesn't document any ordering, so it seems like the 
 test needs to be reworked to remove that assumption.



[jira] [Commented] (HADOOP-7137) Remove hod contrib

2011-05-13 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13033259#comment-13033259
 ] 

Trevor Robinson commented on HADOOP-7137:
-

This commit removed test-contrib which is run by Hudson in 
PreCommit-HADOOP-Build:

 [exec] 
==
 [exec] 
==
 [exec] Running contrib tests.
 [exec] 
==
 [exec] 
==
 [exec] 
 [exec] 
 [exec] /bin/kill -9 12446 
 [exec] kill: No such process
 [exec] /homes/hudson/tools/ant/latest/bin/ant 
-Dversion=1102861_HADOOP-7290_PATCH-12479160 
-Declipse.home=/homes/hudson/tools/eclipse/latest 
-Dpython.home=/homes/hudson/tools/python/latest -DHadoopPatchProcess= 
-Dtest.junit.output.format=xml -Dtest.output=no test-contrib
 [exec] Buildfile: build.xml
 [exec] 
 [exec] BUILD FAILED
 [exec] Target test-contrib does not exist in the project 
Hadoop-Common. 
 [exec] 
 [exec] Total time: 0 seconds

 Remove hod contrib
 --

 Key: HADOOP-7137
 URL: https://issues.apache.org/jira/browse/HADOOP-7137
 Project: Hadoop Common
  Issue Type: Task
Reporter: Nigel Daley
Assignee: Nigel Daley
 Fix For: 0.22.0

 Attachments: HADOOP-7137.patch, HADOOP-7137.patch


 As per vote on general@ 
 (http://mail-archives.apache.org/mod_mbox/hadoop-general/201102.mbox/%3cac35a7ef-1d68-4055-8d47-eda2fcf8c...@mac.com%3E)
  I will 
 svn remove common/trunk/src/contrib/hod
 using this Jira.



[jira] [Updated] (HADOOP-7290) Unit test failure in TestUserGroupInformation.testGetServerSideGroups

2011-05-13 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7290:


Component/s: (was: test)
 security

 Unit test failure in TestUserGroupInformation.testGetServerSideGroups
 -

 Key: HADOOP-7290
 URL: https://issues.apache.org/jira/browse/HADOOP-7290
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.0
 Environment: Linux 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 
 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
Reporter: Trevor Robinson
Priority: Minor
  Labels: test
 Attachments: TestUserGroupInformation-id-order.patch


 Testsuite: org.apache.hadoop.security.TestUserGroupInformation
 Tests run: 14, Failures: 1, Errors: 0, Time elapsed: 0.278 sec
 - Standard Output ---
 trobinson:users guest git
 -  ---
 Testcase: testGetServerSideGroups took 0.051 sec
FAILED
 expected:<g[ues]t> but was:<g[i]t>
 junit.framework.AssertionFailedError: expected:<g[ues]t> but was:<g[i]t>
at 
 org.apache.hadoop.security.TestUserGroupInformation.testGetServerSideGroups(TestUserGroupInformation.java:94)
 It seems like the test is assuming that the groups returned by 
 UserGroupInformation.getGroupNames() are in the same order as those returned 
 by executing `id -Gn`. getGroupNames() only documents that the primary group 
 is first, and `man id` doesn't document any ordering, so it seems like the 
 test needs to be reworked to remove that assumption.



[jira] [Updated] (HADOOP-7276) Hadoop native builds fail on ARM due to -m32

2011-05-12 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7276:


Status: Patch Available  (was: Open)

 Hadoop native builds fail on ARM due to -m32
 

 Key: HADOOP-7276
 URL: https://issues.apache.org/jira/browse/HADOOP-7276
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.21.0
 Environment: $ gcc -v
 Using built-in specs.
 COLLECT_GCC=gcc
 COLLECT_LTO_WRAPPER=/usr/lib/arm-linux-gnueabi/gcc/arm-linux-gnueabi/4.5.2/lto-wrapper
 Target: arm-linux-gnueabi
 Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
 4.5.2-8ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs 
 --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr 
 --program-suffix=-4.5 --enable-shared --enable-multiarch 
 --with-multiarch-defaults=arm-linux-gnueabi --enable-linker-build-id 
 --with-system-zlib --libexecdir=/usr/lib/arm-linux-gnueabi 
 --without-included-gettext --enable-threads=posix 
 --with-gxx-include-dir=/usr/include/c++/4.5 
 --libdir=/usr/lib/arm-linux-gnueabi --enable-nls --with-sysroot=/ 
 --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes 
 --enable-plugin --enable-gold --enable-ld=default --with-plugin-ld=ld.gold 
 --enable-objc-gc --disable-sjlj-exceptions --with-arch=armv7-a 
 --with-float=softfp --with-fpu=vfpv3-d16 --with-mode=thumb --disable-werror 
 --enable-checking=release --build=arm-linux-gnueabi --host=arm-linux-gnueabi 
 --target=arm-linux-gnueabi
 Thread model: posix
 gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)
 $ uname -a
 Linux panda0 2.6.38-1002-linaro-omap #3-Ubuntu SMP Fri Apr 15 14:00:54 UTC 
 2011 armv7l armv7l armv7l GNU/Linux
Reporter: Trevor Robinson
 Attachments: hadoop-common-arm.patch


 The native build fails on machine targets where gcc does not support -m32. 
 This is any target other than x86, SPARC, RS/6000, or PowerPC, such as ARM.
 $ ant -Dcompile.native=true
 ...
  [exec] make  all-am
  [exec] make[1]: Entering directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
 -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
 '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
 -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c
 /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  -fPIC -DPIC -o .libs/ZlibCompressor.o
  [exec] make[1]: Leaving directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] cc1: error: unrecognized command line option -m32
  [exec] make[1]: *** [ZlibCompressor.lo] Error 1
  [exec] make: *** [all] Error 2



[jira] [Commented] (HADOOP-7276) Hadoop native builds fail on ARM due to -m32

2011-05-12 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13032649#comment-13032649
 ] 

Trevor Robinson commented on HADOOP-7276:
-

No tests included because this change just fixes a build failure. Manually 
verified that x86-64 builds unchanged (-m64 is properly specified) and that ARM 
now builds (-m32 is not specified).

The Findbugs issue seems to be a configuration problem unrelated to this 
change. From the console output:

 [exec] findbugs:
 [exec] [mkdir] Created dir: 
/grid/0/hudson/hudson-slave/workspace/PreCommit-HADOOP-Build/trunk/build/test/findbugs
 [exec]  [findbugs] Executing findbugs from ant task
 [exec]  [findbugs] Running FindBugs...
 [exec]  [findbugs] The following classes needed for analysis were missing:
 [exec]  [findbugs]   com.sun.javadoc.Doclet
 [exec]  [findbugs]   com.sun.javadoc.DocErrorReporter
 [exec]  [findbugs]   com.sun.javadoc.AnnotationTypeDoc
 [exec]  [findbugs]   com.sun.javadoc.RootDoc
 [exec]  [findbugs]   com.sun.javadoc.MethodDoc
 [exec]  [findbugs]   com.sun.javadoc.Doc
 [exec]  [findbugs]   com.sun.javadoc.PackageDoc
 [exec]  [findbugs]   com.sun.javadoc.LanguageVersion
 [exec]  [findbugs]   com.sun.javadoc.AnnotationDesc
 [exec]  [findbugs]   com.sun.javadoc.ConstructorDoc
 [exec]  [findbugs]   com.sun.javadoc.FieldDoc
 [exec]  [findbugs]   com.sun.javadoc.ProgramElementDoc
 [exec]  [findbugs]   com.sun.javadoc.ClassDoc
 [exec]  [findbugs]   com.sun.tools.doclets.standard.Standard
 [exec]  [findbugs] Warnings generated: 1
 [exec]  [findbugs] Missing classes: 15
 [exec]  [findbugs] Calculating exit code...
 [exec]  [findbugs] Setting 'missing class' flag (2)
 [exec]  [findbugs] Setting 'bugs found' flag (1)
 [exec]  [findbugs] Exit code set to: 3
 [exec]  [findbugs] Classes needed for analysis were missing
 [exec]  [findbugs] Output saved to 
/grid/0/hudson/hudson-slave/workspace/PreCommit-HADOOP-Build/trunk/build/test/findbugs/hadoop-findbugs-report.xml
 [exec]  [findbugs] Java Result: 3

Also, the precommit build queue 
(https://builds.apache.org/hudson/job/PreCommit-HADOOP-Build/) seems to be 
hanging multiple recent jobs at "Recording test results"; they are still 
running after an hour.

Would a committer please review the change and let me know if I need to 
resubmit it?

 Hadoop native builds fail on ARM due to -m32
 

 Key: HADOOP-7276
 URL: https://issues.apache.org/jira/browse/HADOOP-7276
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.21.0
 Environment: $ gcc -v
 Using built-in specs.
 COLLECT_GCC=gcc
 COLLECT_LTO_WRAPPER=/usr/lib/arm-linux-gnueabi/gcc/arm-linux-gnueabi/4.5.2/lto-wrapper
 Target: arm-linux-gnueabi
 Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
 4.5.2-8ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs 
 --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr 
 --program-suffix=-4.5 --enable-shared --enable-multiarch 
 --with-multiarch-defaults=arm-linux-gnueabi --enable-linker-build-id 
 --with-system-zlib --libexecdir=/usr/lib/arm-linux-gnueabi 
 --without-included-gettext --enable-threads=posix 
 --with-gxx-include-dir=/usr/include/c++/4.5 
 --libdir=/usr/lib/arm-linux-gnueabi --enable-nls --with-sysroot=/ 
 --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes 
 --enable-plugin --enable-gold --enable-ld=default --with-plugin-ld=ld.gold 
 --enable-objc-gc --disable-sjlj-exceptions --with-arch=armv7-a 
 --with-float=softfp --with-fpu=vfpv3-d16 --with-mode=thumb --disable-werror 
 --enable-checking=release --build=arm-linux-gnueabi --host=arm-linux-gnueabi 
 --target=arm-linux-gnueabi
 Thread model: posix
 gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)
 $ uname -a
 Linux panda0 2.6.38-1002-linaro-omap #3-Ubuntu SMP Fri Apr 15 14:00:54 UTC 
 2011 armv7l armv7l armv7l GNU/Linux
Reporter: Trevor Robinson
 Attachments: hadoop-common-arm.patch


 The native build fails on machine targets where gcc does not support -m32. 
 This is any target other than x86, SPARC, RS/6000, or PowerPC, such as ARM.
 $ ant -Dcompile.native=true
 ...
  [exec] make  all-am
  [exec] make[1]: Entering directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
 -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
 '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
 -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c
 /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  -fPIC -DPIC -o .libs/ZlibCompressor.o
  [exec] make[1]: Leaving directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] cc1: error: unrecognized command line option -m32
  [exec] make[1]: *** [ZlibCompressor.lo] Error 1
  [exec] make: *** [all] Error 2

[jira] [Created] (HADOOP-7276) Hadoop native builds fail on ARM due to -m32

2011-05-11 Thread Trevor Robinson (JIRA)
Hadoop native builds fail on ARM due to -m32


 Key: HADOOP-7276
 URL: https://issues.apache.org/jira/browse/HADOOP-7276
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.21.0
 Environment: $ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/arm-linux-gnueabi/gcc/arm-linux-gnueabi/4.5.2/lto-wrapper
Target: arm-linux-gnueabi
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
4.5.2-8ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs 
--enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr 
--program-suffix=-4.5 --enable-shared --enable-multiarch 
--with-multiarch-defaults=arm-linux-gnueabi --enable-linker-build-id 
--with-system-zlib --libexecdir=/usr/lib/arm-linux-gnueabi 
--without-included-gettext --enable-threads=posix 
--with-gxx-include-dir=/usr/include/c++/4.5 --libdir=/usr/lib/arm-linux-gnueabi 
--enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug 
--enable-libstdcxx-time=yes --enable-plugin --enable-gold --enable-ld=default 
--with-plugin-ld=ld.gold --enable-objc-gc --disable-sjlj-exceptions 
--with-arch=armv7-a --with-float=softfp --with-fpu=vfpv3-d16 --with-mode=thumb 
--disable-werror --enable-checking=release --build=arm-linux-gnueabi 
--host=arm-linux-gnueabi --target=arm-linux-gnueabi
Thread model: posix
gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)
$ uname -a
Linux panda0 2.6.38-1002-linaro-omap #3-Ubuntu SMP Fri Apr 15 14:00:54 UTC 2011 
armv7l armv7l armv7l GNU/Linux

Reporter: Trevor Robinson


The native build fails on machine targets where gcc does not support -m32. This 
is any target other than x86, SPARC, RS/6000, or PowerPC, such as ARM.

$ ant -Dcompile.native=true
...
 [exec] make  all-am
 [exec] make[1]: Entering directory
`/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
 [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
-DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
-I/usr/lib/jvm/java-6-openjdk/include
-I/usr/lib/jvm/java-6-openjdk/include/linux
-I/home/trobinson/dev/hadoop-common/src/native/src
-Isrc/org/apache/hadoop/io/compress/zlib
-Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
-g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
.deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
'/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
 [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
-I/home/trobinson/dev/hadoop-common/src/native
-I/usr/lib/jvm/java-6-openjdk/include
-I/usr/lib/jvm/java-6-openjdk/include/linux
-I/home/trobinson/dev/hadoop-common/src/native/src
-Isrc/org/apache/hadoop/io/compress/zlib
-Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
-g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
.deps/ZlibCompressor.Tpo -c
/home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
 -fPIC -DPIC -o .libs/ZlibCompressor.o
 [exec] make[1]: Leaving directory
`/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
 [exec] cc1: error: unrecognized command line option -m32
 [exec] make[1]: *** [ZlibCompressor.lo] Error 1
 [exec] make: *** [all] Error 2
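
The failure above can be reproduced outside the Hadoop build. The sketch below
is a hypothetical standalone probe (not part of any Hadoop script) that asks
the host C compiler whether it accepts -m32; on the ARM toolchain in this
report, cc1 rejects the flag, which is exactly what a configure-time
conditional would need to detect:

```shell
# Hypothetical probe: does the host C compiler accept -m32?
# On ARM gcc (as in this report), the compile below fails because
# cc1 does not recognize the -m32 option.
CC=${CC:-gcc}
if ! command -v "$CC" >/dev/null 2>&1; then
  result="$CC not found"
elif echo 'int main(void){return 0;}' | "$CC" -m32 -x c -o /dev/null - 2>/dev/null; then
  result="$CC accepts -m32"
else
  result="$CC rejects -m32"
fi
echo "$result"
```

On an x86 host with multilib support this prints "gcc accepts -m32"; on ARM
(or on x86 without 32-bit libraries installed) it reports rejection instead.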


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7276) Hadoop native builds fail on ARM due to -m32

2011-05-11 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-7276:


Attachment: hadoop-common-arm.patch

This patch introduces an AM_CONDITIONAL called SPECIFY_DATA_MODEL that is 
disabled when host_cpu starts with "arm" (and is easily extensible to other 
CPUs). The -m$(JVM_DATA_MODEL) flag is added to AM_CFLAGS and AM_LDFLAGS only 
when SPECIFY_DATA_MODEL is true.
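
A sketch of what such a conditional could look like (hypothetical fragment for
illustration only; the actual change is in hadoop-common-arm.patch, and apart
from SPECIFY_DATA_MODEL and JVM_DATA_MODEL, which the comment above names, the
exact variable names and file layout here are assumptions):

```m4
dnl configure.ac sketch: decide whether gcc data-model flags apply.
dnl -m32/-m64 are only meaningful on x86, SPARC, RS/6000, and PowerPC,
dnl so disable them when building for ARM.
case "$host_cpu" in
  arm*) specify_data_model=false ;;
  *)    specify_data_model=true  ;;
esac
AM_CONDITIONAL([SPECIFY_DATA_MODEL], [test x$specify_data_model = xtrue])

# Makefile.am sketch: add the flag only when the conditional holds.
# if SPECIFY_DATA_MODEL
# AM_CFLAGS  += -m$(JVM_DATA_MODEL)
# AM_LDFLAGS += -m$(JVM_DATA_MODEL)
# endif
```

The case statement keys on host_cpu so that cross-compiles are handled by the
configured host triplet rather than by probing the build machine.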

 Hadoop native builds fail on ARM due to -m32
 

 Key: HADOOP-7276
 URL: https://issues.apache.org/jira/browse/HADOOP-7276
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 0.21.0
 Environment: $ gcc -v
 Using built-in specs.
 COLLECT_GCC=gcc
 COLLECT_LTO_WRAPPER=/usr/lib/arm-linux-gnueabi/gcc/arm-linux-gnueabi/4.5.2/lto-wrapper
 Target: arm-linux-gnueabi
 Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 
 4.5.2-8ubuntu4' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs 
 --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr 
 --program-suffix=-4.5 --enable-shared --enable-multiarch 
 --with-multiarch-defaults=arm-linux-gnueabi --enable-linker-build-id 
 --with-system-zlib --libexecdir=/usr/lib/arm-linux-gnueabi 
 --without-included-gettext --enable-threads=posix 
 --with-gxx-include-dir=/usr/include/c++/4.5 
 --libdir=/usr/lib/arm-linux-gnueabi --enable-nls --with-sysroot=/ 
 --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes 
 --enable-plugin --enable-gold --enable-ld=default --with-plugin-ld=ld.gold 
 --enable-objc-gc --disable-sjlj-exceptions --with-arch=armv7-a 
 --with-float=softfp --with-fpu=vfpv3-d16 --with-mode=thumb --disable-werror 
 --enable-checking=release --build=arm-linux-gnueabi --host=arm-linux-gnueabi 
 --target=arm-linux-gnueabi
 Thread model: posix
 gcc version 4.5.2 (Ubuntu/Linaro 4.5.2-8ubuntu4)
 $ uname -a
 Linux panda0 2.6.38-1002-linaro-omap #3-Ubuntu SMP Fri Apr 15 14:00:54 UTC 
 2011 armv7l armv7l armv7l GNU/Linux
Reporter: Trevor Robinson
 Attachments: hadoop-common-arm.patch


 The native build fails on machine targets where gcc does not support -m32. 
 This is any target other than x86, SPARC, RS/6000, or PowerPC, such as ARM.
 $ ant -Dcompile.native=true
 ...
  [exec] make  all-am
  [exec] make[1]: Entering directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
 -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
 '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
 -I/home/trobinson/dev/hadoop-common/src/native
 -I/usr/lib/jvm/java-6-openjdk/include
 -I/usr/lib/jvm/java-6-openjdk/include/linux
 -I/home/trobinson/dev/hadoop-common/src/native/src
 -Isrc/org/apache/hadoop/io/compress/zlib
 -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
 -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
 .deps/ZlibCompressor.Tpo -c
 /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
  -fPIC -DPIC -o .libs/ZlibCompressor.o
  [exec] make[1]: Leaving directory
 `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
  [exec] cc1: error: unrecognized command line option -m32
  [exec] make[1]: *** [ZlibCompressor.lo] Error 1
  [exec] make: *** [all] Error 2
