[jira] [Updated] (HADOOP-8630) rename isSingleSwitch() methods in new topo base class to isFlatTopology()

2012-08-09 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-8630:
---

Attachment: HADOOP-8630.2.patch

The attached patch is based on trunk and was generated with the git 
format-patch command. I have also sent a pull request to your GitHub 
repository.

 rename isSingleSwitch() methods in new topo base class to isFlatTopology()
 --

 Key: HADOOP-8630
 URL: https://issues.apache.org/jira/browse/HADOOP-8630
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Steve Loughran
Priority: Trivial
 Attachments: HADOOP-8630.2.patch, HADOOP-8630.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The new topology logic that is not yet turned on in HDFS uses the method 
 {{isSingleSwitch()}} for implementations to declare whether or not they are 
 single-switch. 
 The use of "switch" is an implementation detail; the big VM-based patch shows 
 that the real distinction is flat vs. hierarchical, with Hadoop assuming that 
 subtrees in the hierarchy have better bandwidth (good) but correlated 
 failures (bad). 
 Renaming the method now - before it's finalized and in use - is the time to 
 do it. 
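 A minimal sketch of the proposed rename, assuming a deprecation shim in the 
 topology base class; the class name and javadoc are illustrative, not the 
 actual patch:
 {code}
 public abstract class AbstractTopology {

   /** @deprecated kept as a shim; use {@link #isFlatTopology()} instead. */
   @Deprecated
   public final boolean isSingleSwitch() {
     return isFlatTopology();
   }

   /**
    * Declare whether this topology is flat (a single level) or hierarchical.
    * Hadoop assumes subtrees in a hierarchy share bandwidth (good) and
    * failure domains (bad).
    */
   public abstract boolean isFlatTopology();
 }
 {code}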

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8603) Test failures with Container .. is running beyond virtual memory limits

2012-08-09 Thread Ilya Katsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Katsov updated HADOOP-8603:


Status: Open  (was: Patch Available)

Moved to https://issues.apache.org/jira/browse/MAPREDUCE-4535

 Test failures with Container .. is running beyond virtual memory limits
 -

 Key: HADOOP-8603
 URL: https://issues.apache.org/jira/browse/HADOOP-8603
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3
 Environment: CentOS 6.2
Reporter: Ilya Katsov
  Labels: test
 Attachments: HADOOP-8603-branch-0.23.patch, 
 HADOOP-8603-branch-0.23_002.patch


 Tests 
 org.apache.hadoop.tools.TestHadoopArchives.{testRelativePath,testPathWithSpaces}
  fail with the following message:
 {code}
 Container [pid=7785,containerID=container_1342495768864_0001_01_01] is 
 running beyond virtual memory limits. Current usage: 143.6mb of 1.5gb 
 physical memory used; 3.4gb of 3.1gb virtual memory used. Killing container.
 Dump of the process-tree for container_1342495768864_0001_01_01 :
   |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
 SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
   |- 7797 7785 7785 7785 (java) 573 38 3517018112 36421 
 /usr/java/jdk1.6.0_33/jre/bin/java 
 -Dlog4j.configuration=container-log4j.properties 
 -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01
  -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
 -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
   |- 7785 7101 7785 7785 (bash) 1 1 108605440 332 /bin/bash -c 
 /usr/java/jdk1.6.0_33/jre/bin/java 
 -Dlog4j.configuration=container-log4j.properties 
 -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01
  -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
 -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
 1>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stdout
  
 2>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stderr

 {code}
 Is this related to https://issues.apache.org/jira/browse/MAPREDUCE-3933? The 
 problem is not reliably reproducible, but setting MALLOC_ARENA_MAX appears to 
 resolve it.
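 A hedged sketch of that workaround, assuming the glibc-arena setting from 
 MAPREDUCE-3933; the value 4 is the commonly used cap there, not something 
 mandated by this issue:
 {code}
 # e.g. in hadoop-env.sh, before launching the test JVMs
 export MALLOC_ARENA_MAX=4
 {code}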

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8603) Test failures with Container .. is running beyond virtual memory limits

2012-08-09 Thread Ilya Katsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Katsov resolved HADOOP-8603.
-

Resolution: Duplicate

Moved to https://issues.apache.org/jira/browse/MAPREDUCE-4535

 Test failures with Container .. is running beyond virtual memory limits
 -

 Key: HADOOP-8603
 URL: https://issues.apache.org/jira/browse/HADOOP-8603
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3
 Environment: CentOS 6.2
Reporter: Ilya Katsov
  Labels: test
 Attachments: HADOOP-8603-branch-0.23.patch, 
 HADOOP-8603-branch-0.23_002.patch


 Tests 
 org.apache.hadoop.tools.TestHadoopArchives.{testRelativePath,testPathWithSpaces}
  fail with the following message:
 {code}
 Container [pid=7785,containerID=container_1342495768864_0001_01_01] is 
 running beyond virtual memory limits. Current usage: 143.6mb of 1.5gb 
 physical memory used; 3.4gb of 3.1gb virtual memory used. Killing container.
 Dump of the process-tree for container_1342495768864_0001_01_01 :
   |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
 SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
   |- 7797 7785 7785 7785 (java) 573 38 3517018112 36421 
 /usr/java/jdk1.6.0_33/jre/bin/java 
 -Dlog4j.configuration=container-log4j.properties 
 -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01
  -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
 -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
   |- 7785 7101 7785 7785 (bash) 1 1 108605440 332 /bin/bash -c 
 /usr/java/jdk1.6.0_33/jre/bin/java 
 -Dlog4j.configuration=container-log4j.properties 
 -Dyarn.app.mapreduce.container.log.dir=/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01
  -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
 -Xmx1024m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
 1>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stdout
  
 2>/var/lib/jenkins/workspace/Hadoop_gd-branch0.23_integration/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapred.MiniMRCluster/org.apache.hadoop.mapred.MiniMRCluster-logDir-nm-0_3/application_1342495768864_0001/container_1342495768864_0001_01_01/stderr

 {code}
 Is this related to https://issues.apache.org/jira/browse/MAPREDUCE-3933? The 
 problem is not reliably reproducible, but setting MALLOC_ARENA_MAX appears to 
 resolve it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431665#comment-13431665
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539957/HADOOP-8659.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
  org.apache.hadoop.hdfs.TestFileConcurrentReader
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.server.namenode.TestFsck

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1271//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1271//console

This message is automatically generated.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431779#comment-13431779
 ] 

Hudson commented on HADOOP-8660:


Integrated in Hadoop-Hdfs-trunk #1130 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1130/])
HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu) (Revision 
1370812)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370812
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8660.patch


 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at 
 org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431785#comment-13431785
 ] 

Aaron T. Myers commented on HADOOP-8581:


Patch looks pretty good to me, Tucu. Just a few small comments:

# Per our coding conventions, I don't think that HttpConfig#SSL_ENABLED should 
be all caps.
# In the HttpServer constructor, move the .setHost and .setPort to after the 
if/else:
{code}
if (...) {
...
  sslListener.setHost(bindAddress);
  sslListener.setPort(port);
  listener = sslListener;
} else {
  listener = createBaseListener(conf);
  listener.setHost(bindAddress);
  listener.setPort(port);
}
{code}
# In the core-default.xml description, take out the word "it" and change 
"webuis" to "web UIs":
{code}
+Whether to use SSL for the HTTP endpoints. If set to true, it the
+NameNode, DataNode, ResourceManager, NodeManager, HistoryServer and
+MapReduceAppMaster webuis will be served over HTTPS instead HTTP.
{code}
# Rather than go through the headache of writing out a core-default.xml 
containing the appropriate SSL config, how about just adding a 
setSslEnabledForTesting static function to HttpConfig?
# Considering that every place you call HttpConfig#getScheme you immediately 
append "://", maybe just append that in HttpConfig#getScheme? Or perhaps add a 
HttpConfig#getPrefix which returns HttpConfig#getScheme() + "://"? (See the 
sketch after this list.)
# I think you inadvertently incorrectly changed the indentation in 
HostUtil#getTaskLogUrl to be 4 spaces instead of 2.
# There are some inadvertent and unnecessary whitespace changes in 
RMAppAttemptImpl.
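A hedged sketch of what items 4 and 5 might look like; beyond 
setSslEnabledForTesting and getScheme, the field and method names are 
illustrative assumptions, not the patch's actual HttpConfig:
{code}
import com.google.common.annotations.VisibleForTesting;

public class HttpConfig {
  // Normally initialized from Configuration; hardcoded here for brevity.
  private static boolean sslEnabled = false;

  // Item 4: lets tests toggle SSL without writing out a core-default.xml.
  @VisibleForTesting
  static void setSslEnabledForTesting(boolean enabled) {
    sslEnabled = enabled;
  }

  public static String getScheme() {
    return sslEnabled ? "https" : "http";
  }

  // Item 5: every caller appends "://", so provide the full prefix directly.
  public static String getPrefix() {
    return getScheme() + "://";
  }
}
{code}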

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431813#comment-13431813
 ] 

Hudson commented on HADOOP-8660:


Integrated in Hadoop-Mapreduce-trunk #1162 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1162/])
HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu) (Revision 
1370812)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370812
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8660.patch


 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at 
 org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-09 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Attachment: trunk-HADOOP-8649.patch
branch1-HADOOP-8649.patch

Uploading updated patches for branch1 and trunk.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 branch1-HADOOP-8649.patch, trunk-HADOOP-8649.patch, trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter. 
 By using a composite filter instead, we limit the parsing to a single pass.
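 A minimal sketch of the composite-filter idea; the wrapper class below is 
 illustrative, not the attached patch:
 {code}
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;

 // Wraps the checksum filter and the caller's filter so that a single
 // listing pass applies both.
 public class CompositePathFilter implements PathFilter {
   private final PathFilter first;
   private final PathFilter second;

   public CompositePathFilter(PathFilter first, PathFilter second) {
     this.first = first;
     this.second = second;
   }

   @Override
   public boolean accept(Path path) {
     return first.accept(path) && second.accept(path);
   }
 }
 {code}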

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:


Attachment: HADOOP-8661.txt

This patch will split the stack trace from the message in the RemoteException.  
It will also parse the stack trace and insert it into the generated exception.
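A hedged sketch of that idea (a stand-in for the attached patch, with 
hypothetical names): keep only the first line as the message and rebuild the 
remaining "at ..." lines into a synthetic stack trace:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RemoteStackTraceSplitter {
  // Matches frames like "  at pkg.Class.method(File.java:123)"
  private static final Pattern FRAME =
      Pattern.compile("\\s*at\\s+([^(]+)\\(([^:)]+):(\\d+)\\)");

  // Returns the first line as the bare message and installs the remaining
  // frame lines as the stack trace of the exception being generated.
  public static String split(String fullMessage, Exception target) {
    String[] lines = fullMessage.split("\n");
    List<StackTraceElement> frames = new ArrayList<StackTraceElement>();
    for (String line : lines) {
      Matcher m = FRAME.matcher(line);
      if (!m.matches()) {
        continue;
      }
      String classAndMethod = m.group(1).trim();
      int dot = classAndMethod.lastIndexOf('.');
      if (dot < 0) {
        continue;  // malformed frame; skip rather than fail
      }
      frames.add(new StackTraceElement(
          classAndMethod.substring(0, dot),    // declaring class
          classAndMethod.substring(dot + 1),   // method name
          m.group(2),                          // file name
          Integer.parseInt(m.group(3))));      // line number
    }
    target.setStackTrace(frames.toArray(new StackTraceElement[0]));
    return lines[0];
  }
}
{code}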

If this is a critical issue, then I would like to see OOZIE-946 go into OOZIE 
3.2, not just 3.3; but in either case I think this patch should be OK to go in, 
even for 0.23.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:


Status: Patch Available  (was: Open)

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0-alpha, 0.23.3, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431918#comment-13431918
 ] 

Robert Joseph Evans commented on HADOOP-8661:
-

The patch did not apply because it was based on 0.23, and HDFS-3504 
apparently touched this file too. I will rebase it on trunk.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:


Attachment: HADOOP-8661.txt

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt, HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431933#comment-13431933
 ] 

Colin Patrick McCabe commented on HADOOP-8659:
--

This looks good overall.

{code}
+else ()
+    # On hard-float systems, soft-float compatibility dev packages are required,
+    # e.g. libc6-dev-armel on Ubuntu 12.04.
+    message("Soft-float JVM detected; ensure that soft-float dev packages are installed")
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfloat-abi=softfp")
+endif ()
{code}

Would it be possible to use {{CHECK_SYMBOL_EXISTS}} or {{CHECK_LIBRARY_EXISTS}} 
to ensure that the soft-float dev packages are installed?  I'm not too familiar 
with soft-float libraries on ARM, so I'm just guessing here.

{code}
+execute_process(
+    COMMAND readelf -A ${JAVA_JVM_LIBRARY}
+    OUTPUT_VARIABLE JVM_ELF_ARCH
+    ERROR_QUIET)
+if (JVM_ELF_ARCH MATCHES "Tag_ABI_VFP_args: VFP registers")
+    message("Hard-float JVM detected")
+    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfloat-abi=hard")
{code}

I wish there were a way to use CHECK_C_SOURCE_COMPILES or something here to 
determine if the JVM library was soft-float or hard-float.  I don't know if 
everyone has {{readelf}} installed by default, and it's preferable to reduce 
the number of dependencies we have.  However, if you can't find an easy way to 
do this, then feel free to ignore this comment.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8632) Configuration leaking class-loaders

2012-08-09 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431938#comment-13431938
 ] 

Robert Joseph Evans commented on HADOOP-8632:
-

Costin,

I understand your issue more fully now, and I am fine if you want to add in 
WeakReferences to the ClassLoaders.  If you have a patch for this leak, I would 
be happy to review it.

 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau

 The newly introduced CACHE_CLASSES leaks class loaders, causing the associated 
 classes to never be reclaimed.
 One solution is to remove the cache itself, since each class loader 
 implementation already caches the classes it loads; preventing an exception 
 from being raised is just a micro-optimization that, as one can tell, causes 
 bugs instead of improving anything.
 In fact, I would argue that in a highly concurrent environment the WeakHashMap 
 synchronization/lookup probably costs more than creating the exception itself.
 Another is to prevent the leak from occurring by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class holds a 
 strong reference to its class loader (the key), meaning neither gets GC'ed.
 And since CACHE_CLASSES is static, even if the originating Configuration 
 instance gets GC'ed, its class loader won't.
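 A minimal sketch of the WeakReference option, assuming a cache shaped like 
 Configuration's CACHE_CLASSES; the helper class and method names are 
 hypothetical:
 {code}
 import java.lang.ref.WeakReference;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.WeakHashMap;

 public class WeakClassCache {
   // Keyed weakly by class loader; each value holds its Class only through a
   // WeakReference, so the value no longer pins its loader (the key) and both
   // can be garbage-collected together.
   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
       CACHE_CLASSES =
           new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

   public static synchronized Class<?> getClassByName(ClassLoader loader,
       String name) throws ClassNotFoundException {
     Map<String, WeakReference<Class<?>>> perLoader = CACHE_CLASSES.get(loader);
     if (perLoader == null) {
       perLoader = new HashMap<String, WeakReference<Class<?>>>();
       CACHE_CLASSES.put(loader, perLoader);
     }
     WeakReference<Class<?>> ref = perLoader.get(name);
     Class<?> clazz = (ref == null) ? null : ref.get();
     if (clazz == null) {
       clazz = Class.forName(name, true, loader);
       perLoader.put(name, new WeakReference<Class<?>>(clazz));
     }
     return clazz;
   }
 }
 {code}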

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431940#comment-13431940
 ] 

Steve Loughran commented on HADOOP-8619:


I don't see this breaking anything, and it lets people choose how they 
serialise their data; that's their choice.

+1


 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: writable-comparator.txt


 Because of the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, it is 
 required that all superclasses have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8663) UnresolvedAddressException while connect causes NPE

2012-08-09 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8663:


Affects Version/s: (was: 2.2.0-alpha)
   (was: 3.0.0)
   (was: 1.0.3)
   0.20.205.0

This problem seems to be fixed in 1.0 and later - hence removing those versions 
from the Affects Version field.

The fix was to catch Throwable in setupIOstreams() instead of just 
IOException.

I might close this JIRA in a little while, since it's already fixed in the 
later relevant releases.
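A hedged sketch of that failure mode and fix; this is a simplified stand-in 
for Client.java, not the actual Hadoop code:
{code}
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.channels.Channels;
import java.nio.channels.SocketChannel;

class Connection {
  private OutputStream out;   // left null if connect fails, NPE on next use

  void setupIOstreams(InetSocketAddress server) {
    try {
      SocketChannel channel = SocketChannel.open();
      // UnresolvedAddressException is a RuntimeException, so catching only
      // IOException here would miss it and leave 'out' unset.
      channel.connect(server);
      out = Channels.newOutputStream(channel);
    } catch (Throwable t) {   // the fix: Throwable rather than IOException
      markClosed(t);
    }
  }

  private void markClosed(Throwable t) {
    // close the connection and report the error to waiting callers
  }
}
{code}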

 UnresolvedAddressException while connect causes NPE
 ---

 Key: HADOOP-8663
 URL: https://issues.apache.org/jira/browse/HADOOP-8663
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: John George
Assignee: John George

 If connect() fails due to an UnresolvedAddressException in setupConnection() 
 in Client.java, 'out' is never set, which causes an NPE when the next 
 connection comes through. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8663) UnresolvedAddressException while connect causes NPE

2012-08-09 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George resolved HADOOP-8663.
-

  Resolution: Not A Problem
Release Note: Marking it as 'Not a Problem', since it's already fixed in 1.0 
and later.

 UnresolvedAddressException while connect causes NPE
 ---

 Key: HADOOP-8663
 URL: https://issues.apache.org/jira/browse/HADOOP-8663
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: John George
Assignee: John George

 If connect() fails due to an UnresolvedAddressException in setupConnection() 
 in Client.java, 'out' is never set, which causes an NPE when the next 
 connection comes through. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431960#comment-13431960
 ] 

Hadoop QA commented on HADOOP-8661:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540050/HADOOP-8661.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1274//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1274//console

This message is automatically generated.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt, HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-09 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431963#comment-13431963
 ] 

Steve Loughran commented on HADOOP-8619:


Before committing this, I'm going to add one more bit of homework:

add a test that serializes then deserializes one of the standard writables.

Why? Your goal is to ser/deser things - the ctor is just a means to that end. 
Add a test of the desired behaviour and you can be confident that it doesn't 
break in future.
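A hedged sketch of such a test; SerializableComparator is a hypothetical 
Serializable subclass, the case the no-arg constructor enables (Java 
deserialization requires the first non-serializable superclass, here 
WritableComparator, to have a no-arg constructor):
{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparator;

public class TestSerializableComparator {

  static class SerializableComparator extends WritableComparator
      implements Serializable {
    SerializableComparator() {
      super(Text.class);
    }
  }

  public static void main(String[] args) throws Exception {
    // Serialize, then deserialize; without the no-arg constructor on
    // WritableComparator, readObject() fails with an InvalidClassException.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    ObjectOutputStream out = new ObjectOutputStream(bytes);
    out.writeObject(new SerializableComparator());
    out.flush();
    Object roundTripped = new ObjectInputStream(
        new ByteArrayInputStream(bytes.toByteArray())).readObject();
    System.out.println("deserialized: " + roundTripped.getClass().getName());
  }
}
{code}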

 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: writable-comparator.txt


 Because of the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, it is 
 required that all superclasses have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431992#comment-13431992
 ] 

Hadoop QA commented on HADOOP-8649:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12540032/trunk-HADOOP-8649.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.hdfs.TestFileConcurrentReader
  
org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1272//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1272//console

This message is automatically generated.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 branch1-HADOOP-8649.patch, trunk-HADOOP-8649.patch, trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter. 
 By using a composite filter instead, we limit the parsing to a single pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432021#comment-13432021
 ] 

Karthik Kambatla commented on HADOOP-8649:
--

I don't think the patch has anything to do with the two failing tests; they 
fail on the latest trunk as well. A quick code inspection shows no 
intersection between the patch and the failing tests.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 branch1-HADOOP-8649.patch, trunk-HADOOP-8649.patch, trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter. 
 By using a composite filter instead, we limit the parsing to a single pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-09 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-8619:
--

Attachment: 8619-0.patch

 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: 8619-0.patch, writable-comparator.txt


 Because of the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, it is 
 required that all superclasses have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432054#comment-13432054
 ] 

Trevor Robinson commented on HADOOP-8659:
-

It's not as easy as just CHECK_SYMBOL_EXISTS/CHECK_LIBRARY_EXISTS, since the 
soft-float libraries are identical to the hard-float ones, but are installed in 
different directories. However, I can do a test compilation against an 
arbitrary libc symbol with the soft-float flag:

{code}
include(CMakePushCheckState)
cmake_push_check_state()
set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -mfloat-abi=softfp")
include(CheckSymbolExists)
check_symbol_exists(exit "stdlib.h" SOFTFP_AVAILABLE)
cmake_pop_check_state()
{code}

Unfortunately, there is currently no good way to determine the JVM's float ABI. 
It's not reported at all by the Oracle EJRE or OpenJDK, and linking against the 
JVM library with the wrong ABI doesn't report an error either. What I can do is 
restrict this code path to Linux (since this issue is Linux-specific for now), 
where readelf is part of binutils (like ld), so it should always be available. 
But I'll also check for it and issue a warning if it's not found. For example:

{code}
find_program(READELF readelf)
if (READELF MATCHES "NOTFOUND")
    message(WARNING "readelf not found; JVM float ABI detection disabled")
endif ()
{code}


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Attached an updated patch based on Colin's comments.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-09 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432066#comment-13432066
 ] 

Virag Kothari commented on HADOOP-8661:
---

@Bobby, just realized while fixing OOZIE-946 that the column storing the 
error message is a Blob and not a varchar, so the issue is not as severe as I 
thought it was. Sorry for missing this earlier.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical
 Attachments: HADOOP-8661.txt, HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: (was: HADOOP-8659.patch)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor

2012-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432072#comment-13432072
 ] 

Hadoop QA commented on HADOOP-8619:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540087/8619-0.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1275//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1275//console

This message is automatically generated.

 WritableComparator must implement no-arg constructor
 

 Key: HADOOP-8619
 URL: https://issues.apache.org/jira/browse/HADOOP-8619
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0
Reporter: Radim Kolar
 Fix For: 0.23.0, 2.0.0-alpha, 3.0.0

 Attachments: 8619-0.patch, writable-comparator.txt


 For the reasons listed here: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE
 comparators should be serializable. To make deserialization work, the first 
 non-serializable superclass must have a no-arg constructor: 
 http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR
 Simply add a no-arg constructor to WritableComparator.
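
As a hedged sketch of why this matters (the subclass below is hypothetical): Java deserialization instantiates the first non-serializable superclass, here WritableComparator, through its no-arg constructor, so without one a Serializable subclass cannot be deserialized at all.

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.WritableComparator;

public class SerializableComparatorSketch {
  // Hypothetical Serializable comparator. WritableComparator itself is not
  // Serializable, so deserialization must invoke its no-arg constructor.
  static class MyComparator extends WritableComparator implements Serializable {
    private static final long serialVersionUID = 1L;
    public MyComparator() {
      super(IntWritable.class);
    }
  }

  public static void main(String[] args) throws Exception {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    new ObjectOutputStream(bos).writeObject(new MyComparator());
    // If WritableComparator has no no-arg constructor, this readObject()
    // fails with java.io.InvalidClassException: no valid constructor.
    Object copy = new ObjectInputStream(
        new ByteArrayInputStream(bos.toByteArray())).readObject();
    System.out.println(copy.getClass().getName());
  }
}
{code}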

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432078#comment-13432078
 ] 

Colin Patrick McCabe commented on HADOOP-8659:
--

bq. Unfortunately, there is currently no good way to determine the JVM's float 
ABI...

Yeah, I was afraid of that.  That abort-at-runtime behavior is really nasty. 
I'm glad your change prevents people from being exposed to that.

+1.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8661) RemoteException's Stack Trace would be better returned by getStackTrace

2012-08-09 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:


 Description: It looks like all exceptions produced by RemoteException 
include the full stack trace of the original exception in the message.  This is 
different from 1.0 behavior to aid in debugging, but it would be nice to 
actually parse the stack trace and return it through getStackTrace instead of 
through getMessage.  (was: It looks like all exceptions produced by 
RemoteException include the full stack trace of the original exception in the 
message.  This is causing issues for oozie because they store the message in 
their database and it is getting very large.  This appears to be a regression 
from 1.0 behavior.)
Priority: Major  (was: Critical)
Target Version/s: 2.2.0-alpha  (was: 0.23.3)
  Issue Type: Improvement  (was: Bug)
 Summary: RemoteException's Stack Trace would be better returned by 
getStackTrace  (was: Stack Trace in Exception.getMessage causing oozie DB to 
have issues)

OOZIE-946 was just closed as invalid, because the data is stored in a blob, 
not a varchar.  I am changing this over to be an improvement, dropping the 
severity, and renaming the JIRA.  Sorry for causing issues for others by 
filing this as a bug before totally understanding the issue. I am also 
updating the target to 2.2 as 0.23 is closed to new work.

 RemoteException's Stack Trace would be better returned by getStackTrace
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8661.txt, HADOOP-8661.txt


 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is different from 
 1.0 behavior to aid in debugging, but it would be nice to actually parse the 
 stack trace and return it through getStackTrace instead of through getMessage.
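
To make the proposed improvement concrete, here is a minimal, hedged sketch (not Hadoop's implementation; the helper and its regex are assumptions) of parsing "at Class.method(File:line)" frames out of a message string and surfacing them through getStackTrace():

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RemoteTraceSketch {
  // Matches frame lines like "\tat org.example.Foo.bar(Foo.java:42)"
  private static final Pattern FRAME = Pattern.compile(
      "\\s*at\\s+([\\w.$]+)\\.([\\w$<>]+)\\(([^:()]+):?(\\d*)\\)");

  static StackTraceElement[] parseFrames(String message) {
    List<StackTraceElement> frames = new ArrayList<StackTraceElement>();
    for (String line : message.split("\n")) {
      Matcher m = FRAME.matcher(line);
      if (m.matches()) {
        int lineNo = m.group(4).isEmpty() ? -1 : Integer.parseInt(m.group(4));
        frames.add(new StackTraceElement(
            m.group(1), m.group(2), m.group(3), lineNo));
      }
    }
    return frames.toArray(new StackTraceElement[frames.size()]);
  }

  public static void main(String[] args) {
    String remote = "java.io.IOException: boom\n"
        + "\tat org.example.Server.handle(Server.java:123)\n"
        + "\tat org.example.Rpc.call(Rpc.java:45)";
    // Keep only the first line as the message; attach the rest as frames.
    Exception e = new Exception(remote.substring(0, remote.indexOf('\n')));
    e.setStackTrace(parseFrames(remote));
    for (StackTraceElement el : e.getStackTrace()) {
      System.out.println(el);
    }
  }
}
{code}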

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Attachment: HADOOP-8581.patch

@atm, thx for the review. New patch addresses all your comments except for 
the generation of the core-site.xml. The MR AM needs that info to come from 
core-site.xml, not from job.xml, and since the MR AM is started in a separate 
VM, the testcase bootstrap of the minicluster cannot set it.

Built and installed a pseudo cluster configured for SSL, and verified that 
the pages work over SSL for all services.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432101#comment-13432101
 ] 

Aaron T. Myers commented on HADOOP-8581:


bq. new patch addresses all your comments except for the generation of the 
core-site.xml. The MR AM needs that info coming from the core-site.xml, not 
from the job.xml. And the MR AM is started in a separate VM, thus cannot set it 
from the testcase bootstrap of the minicluster.

Got it. Makes sense. Maybe add a comment in the test to that effect?

The latest patch looks good to me. +1 pending Jenkins.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Attachment: HADOOP-8581.patch

adding the following comment to the testcase regarding #4 above: 

{code}

//we do this trick because the MR AppMaster is started in another VM and
//the HttpServer configuration is not loaded from the job.xml but from the
//site.xml files in the classpath
{code}

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-09 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432130#comment-13432130
 ] 

Sanjay Radia commented on HADOOP-7967:
--

+1 - I have some changes in the delta patch I am attaching.
* Some minor improvements to javadoc and added comments to some tests (in my 
delta patch).
* Given that Hadoop 1 did not have audience annotations, we can't make 
getDelegationToken protected. I have fixed that in my delta patch. Note the 
changes you made in tests to call addDelegationTokens instead of 
getDelegationToken are correct and should remain.
* File a Jira to make getDelegationTokens protected - let's see if the 
community feels this can be done at some stage.
* File a Jira to make the corresponding changes to 
FileContext/AbstractFileSystem as we discussed.

Thanks for adding more tests and for refactoring some of the test internals.


 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch, 
 hadoop7967-javadoc.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it cannot possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
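
As a hedged illustration of that last point (the driver below is an assumption; addDelegationTokens() is the FileSystem-level API this work adds, per the review comments above), the caller stays ignorant and simply asks each filesystem for whatever tokens it still needs:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenCollectionSketch {
  // Hypothetical TokenCache-style driver: it never reasons about which
  // tokens a filesystem needs. Each FileSystem decides for itself, and for
  // any filesystem it wraps (e.g. har over viewfs), skipping tokens that
  // are already present in the credentials.
  static void collectTokens(Path[] inputs, String renewer,
      Credentials creds, Configuration conf) throws IOException {
    for (Path p : inputs) {
      FileSystem fs = p.getFileSystem(conf);
      fs.addDelegationTokens(renewer, creds);
    }
  }
}
{code}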

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-09 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-7967:
-

Target Version/s: 2.0.0-alpha, 0.23.3, 3.0.0  (was: 0.23.3, 2.0.0-alpha, 
3.0.0)
  Status: Open  (was: Patch Available)

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch, 
 hadoop7967-javadoc.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it cannot possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-09 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-7967:
-

Attachment: hadoop7967-deltas.patch

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch, 
 hadoop7967-deltas.patch, hadoop7967-javadoc.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it cannot possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432202#comment-13432202
 ] 

Todd Lipcon commented on HADOOP-8659:
-

+1, looks good to me too. Thanks, Trevor.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8659:


   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks Trevor!

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432243#comment-13432243
 ] 

Hadoop QA commented on HADOOP-8581:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12540104/HADOOP-8581.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.hdfs.TestDatanodeBlockScanner
  org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1277//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1277//console

This message is automatically generated.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432252#comment-13432252
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Hdfs-trunk-Commit #2634 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2634/])
HADOOP-8659. Native libraries must build with soft-float ABI for Oracle JVM 
on ARM. Contributed by Trevor Robinson. (Revision 1371507)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1371507
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432260#comment-13432260
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Common-trunk-Commit #2569 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2569/])
HADOOP-8659. Native libraries must build with soft-float ABI for Oracle JVM 
on ARM. Contributed by Trevor Robinson. (Revision 1371507)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1371507
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432270#comment-13432270
 ] 

Alejandro Abdelnur commented on HADOOP-8581:


Test failures seem unrelated.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-09 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432281#comment-13432281
 ] 

Alejandro Abdelnur commented on HADOOP-8581:


Committed to trunk. Looking into branch-2, as it seems a JIRA touching 
HttpServer didn't make it there yet and the merge does not apply cleanly.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432293#comment-13432293
 ] 

Hudson commented on HADOOP-8659:


Integrated in Hadoop-Mapreduce-trunk-Commit #2590 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2590/])
HADOOP-8659. Native libraries must build with soft-float ABI for Oracle JVM 
on ARM. Contributed by Trevor Robinson. (Revision 1371507)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1371507
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/CMakeLists.txt
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt


 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 3.0.0, 2.2.0-alpha

 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch, 
 HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8686) hadoop-common: fix warnings in native code

2012-08-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8686:
-

Attachment: HADOOP-8686.002.patch

* SnappyCompressor: fix a case where we were passing in a pointer to a
4-byte value rather than a pointer to a (usually) 8-byte value.

* fix LZ4_compress prototype.

* don't put junk on the line after an #endif -- it triggers a compiler 
warning.

* NativeIO: #define _GNU_SOURCE so that sync_file_range is not an
implicitly declared function on Linux.

* Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs:
don't leak memory on error.

* NativeIO: throw_ioe: we were using strerror_r, but assuming that it
returned an int.  However, it returns a char* in GNU glibc.
(Yes, there was an #ifdef that was supposed to prevent this, but it was
busted.)  Instead, just use sys_errlist directly if we have an errno
that is in range; otherwise come up with our own message.  This should
work on all platforms.

* move a few declarations to the tops of functions.  This was done so
that error handling using goto didn't cause those variables to be used
with undefined values.  (This would happen if you used goto from a
position before a C99-style variable declaration.)


 hadoop-common: fix warnings in native code
 --

 Key: HADOOP-8686
 URL: https://issues.apache.org/jira/browse/HADOOP-8686
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.2.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8686.002.patch


 Fix warnings about const-correctness, improper conversion between pointers of 
 different sizes, implicit declaration of sync_file_range, variables being 
 used with uninitialized values, and so forth in the hadoop-common native code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-09 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HADOOP-8684:
--

Description: 
Classes implementing WritableComparable in Hadoop call the method 
WritableComparator.define() in their static initializers. This means the 
classes call define() during their class loading, while holding the lock on 
their class objects. And WritableComparator.define() locks the 
WritableComparator class object.

On the other hand, WritableComparator.get() also locks the WritableComparator 
class object, and the method may create instances of the targeted comparable 
class, which can involve loading that class. This means the method might try 
to lock the targeted comparable class object while holding the lock on the 
WritableComparator class object.

The two paths lock these objects in reversed order, so you might fall into 
deadlock.

  was:
Classes implementing WriableComparable in Hadoop call the method 
WritableComparator.define() in their static initializers.
This means, the classes call the method define() while thier class loading, 
under locking their class objects.
And, the method WritableComparator.define() locks the WritableComaprator class 
object.

On the other hand, WritableComparator.get() also locks the WritableComparator 
class object, and the method may create instances of the targeted comparable 
class, involving loading the targeted comparable class if any. This means, the 
method might try to lock the targeted comparable class object under locking the 
WritableComparator class object.

There are reversed orders of locking objects, and you might fall in deadlock.


 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Hiroshi Ikeda
Priority: Minor

 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the lock on 
 their class objects. And WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while holding the lock on the 
 WritableComparator class object.
 The two paths lock these objects in reversed order, so you might fall into 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-08-09 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HADOOP-8684:
--

Attachment: WritableComparatorDeadLockTestApp.java

Sample application for the deadlock.
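
For readers without the attachment, a minimal hedged sketch of the same shape of deadlock (this is not the attached test app; the class below is made up for illustration):

{code}
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.WritableComparator;

public class DeadlockSketch {
  public static void main(String[] args) {
    // Thread A: constructing an IntWritable triggers its class
    // initialization, which holds the IntWritable class-init lock and calls
    // WritableComparator.define(), locking WritableComparator.class.
    Thread a = new Thread(new Runnable() {
      public void run() {
        new IntWritable();
      }
    });
    // Thread B: WritableComparator.get() locks WritableComparator.class
    // first, then may instantiate the key class, which requires the
    // IntWritable class-init lock -- the reverse order.
    Thread b = new Thread(new Runnable() {
      public void run() {
        WritableComparator.get(IntWritable.class);
      }
    });
    a.start();
    b.start(); // with unlucky timing, each thread blocks on the other's lock
  }
}
{code}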

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Hiroshi Ikeda
Priority: Minor
 Attachments: WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the lock on 
 their class objects. And WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while holding the lock on the 
 WritableComparator class object.
 The two paths lock these objects in reversed order, so you might fall into 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8685) Deadlock between WritableComparator and WritableComparable

2012-08-09 Thread Hiroshi Ikeda (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432377#comment-13432377
 ] 

Hiroshi Ikeda commented on HADOOP-8685:
---

Sorry, duplicated with HADOOP-8684

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8685
 URL: https://issues.apache.org/jira/browse/HADOOP-8685
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Hiroshi Ikeda
Priority: Minor

 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the lock on 
 their class objects. And WritableComparator.define() locks the 
 WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and the method may create instances of the targeted comparable 
 class, which can involve loading that class. This means the method might try 
 to lock the targeted comparable class object while holding the lock on the 
 WritableComparator class object.
 The two paths lock these objects in reversed order, so you might fall into 
 deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7967) Need generalized multi-token filesystem support

2012-08-09 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13432482#comment-13432482
 ] 

Sanjay Radia commented on HADOOP-7967:
--

Clarification:
bq. Given that Hadoop 1 did not have audience annotations we can't make 
getDelegationToken protected
What I meant was that in Hadoop 1, getDelegationToken was public without any 
annotations; hence we would be breaking compatibility if we made it protected.

 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch, 
 hadoop7967-deltas.patch, hadoop7967-javadoc.patch


 Multi-token filesystem support and its interactions with the MR 
 {{TokenCache}} are problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it cannot possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira