[jira] [Updated] (HADOOP-8653) FTPFileSystem rename broken

2012-08-08 Thread Karel Kolman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karel Kolman updated HADOOP-8653:
-

Attachment: HDFS-8653-1.patch

Patch making the argument to changeWorkingDirectory() in rename() correct, i.e. 
without the ftp:// scheme prefix.

 FTPFileSystem rename broken
 ---

 Key: HADOOP-8653
 URL: https://issues.apache.org/jira/browse/HADOOP-8653
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.2, 2.0.0-alpha
Reporter: Karel Kolman
 Attachments: HDFS-8653-1.patch


 The FTPFileSystem.rename(FTPClient client, Path src, Path dst) method is 
 broken.
 The changeWorkingDirectory command underneath is being passed a string with 
 the full ftp:// URI prefix (which the FTP server obviously does not understand)
 {noformat}
 INFO [2012-08-06 12:59:39] (DefaultSession.java:297) - Received command: [CWD 
 ftp://localhost:61246/tmp/myfile]
  WARN [2012-08-06 12:59:39] (AbstractFakeCommandHandler.java:213) - Error 
 handling command: Command[CWD:[ftp://localhost:61246/tmp/myfile]]; 
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
 org.mockftpserver.fake.filesystem.FileSystemException: 
 /ftp://localhost:61246/tmp/myfile
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.verifyFileSystemCondition(AbstractFakeCommandHandler.java:264)
   at 
 org.mockftpserver.fake.command.CwdCommandHandler.handle(CwdCommandHandler.java:44)
   at 
 org.mockftpserver.fake.command.AbstractFakeCommandHandler.handleCommand(AbstractFakeCommandHandler.java:76)
   at 
 org.mockftpserver.core.session.DefaultSession.readAndProcessCommand(DefaultSession.java:421)
   at 
 org.mockftpserver.core.session.DefaultSession.run(DefaultSession.java:384)
   at java.lang.Thread.run(Thread.java:680)
 {noformat}
 The solution would be this:
 {noformat}
 --- a/FTPFileSystem.java
 +++ b/FTPFileSystem.java
 @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
        throw new IOException("Destination path " + dst
            + " already exist, cannot rename!");
      }
 -    String parentSrc = absoluteSrc.getParent().toUri().toString();
 -    String parentDst = absoluteDst.getParent().toUri().toString();
 +    URI parentSrc = absoluteSrc.getParent().toUri();
 +    URI parentDst = absoluteDst.getParent().toUri();
      String from = src.getName();
      String to = dst.getName();
 -    if (!parentSrc.equals(parentDst)) {
 +    if (!parentSrc.toString().equals(parentDst.toString())) {
        throw new IOException("Cannot rename parent(source): " + parentSrc
            + ", parent(destination): " + parentDst);
      }
 -    client.changeWorkingDirectory(parentSrc);
 +    client.changeWorkingDirectory(parentSrc.getPath().toString());
      boolean renamed = client.rename(from, to);
      return renamed;
    }
 {noformat}
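The fix hinges on the difference between URI.toString(), which keeps the scheme and authority, and URI.getPath(), which yields only the server-side path that the FTP CWD command expects. A minimal standalone sketch (host and port are hypothetical, mirroring the log output above):

```java
import java.net.URI;

public class CwdArgumentDemo {
    public static void main(String[] args) {
        URI parent = URI.create("ftp://localhost:61246/tmp");

        // What the broken code sent as the CWD argument:
        System.out.println(parent.toString()); // ftp://localhost:61246/tmp

        // What the patched code sends instead:
        System.out.println(parent.getPath()); // /tmp
    }
}
```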

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8653) FTPFileSystem rename broken

2012-08-08 Thread Karel Kolman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430976#comment-13430976
 ] 

Karel Kolman commented on HADOOP-8653:
--

Well, testing on a live FTP server, the CWD command is still being sent the full 
ftp:// URI (and of course failing on the FTP server side), so this is no mock 
problem; FTP server commands have no clue about URI schemes.

attaching the patch against trunk





[jira] [Updated] (HADOOP-8653) FTPFileSystem rename broken

2012-08-08 Thread Karel Kolman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karel Kolman updated HADOOP-8653:
-

   Fix Version/s: 2.0.1-alpha
Target Version/s: 2.0.1-alpha
  Status: Patch Available  (was: Open)

patch for FTPFileSystem rename method fix





[jira] [Commented] (HADOOP-8653) FTPFileSystem rename broken

2012-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13430999#comment-13430999
 ] 

Hadoop QA commented on HADOOP-8653:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539796/HDFS-8653-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1264//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1264//console

This message is automatically generated.





[jira] [Commented] (HADOOP-8658) Add support for configuring the encryption algorithm used for Hadoop RPC

2012-08-08 Thread amol kamble (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431105#comment-13431105
 ] 

amol kamble commented on HADOOP-8658:
-

Aaron, can you explain what it actually means? I did not get the meaning 
of "HDFS block data on the wire".

 Add support for configuring the encryption algorithm used for Hadoop RPC
 

 Key: HADOOP-8658
 URL: https://issues.apache.org/jira/browse/HADOOP-8658
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Joey Echeverria

 HDFS-3637 recently introduced the ability to encrypt actual HDFS block data 
 on the wire, including the ability to choose which encryption algorithm is 
 used. It would be great if Hadoop RPC similarly had support for choosing the 
 encryption algorithm.





[jira] [Commented] (HADOOP-8658) Add support for configuring the encryption algorithm used for Hadoop RPC

2012-08-08 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431110#comment-13431110
 ] 

Aaron T. Myers commented on HADOOP-8658:


Hi Amol, if you haven't already, you should read through the dialogue on 
HDFS-3637.

What I mean by "on the wire" is as opposed to "at rest", i.e. the data is 
encrypted while it is being transmitted over the network between clients and 
DNs, but not encrypted when the data is stored on disk.

By "block data", I mean as opposed to the RPC data. When security is enabled, 
the RPC traffic between clients and the NN, or between DNs and the NN, can 
already be encrypted. But, until HDFS-3637 was implemented, there was no 
support for encrypting the actual file data being read or written by clients 
and DNs.

The purpose of this JIRA is to add support for configuring the encryption 
algorithm used for encrypting RPC traffic.
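For reference, a sketch of the configuration surface being discussed; the property names below are my assumption from the Hadoop 2 era of these JIRAs and should be verified against the deployed release:

```xml
<!-- hdfs-site.xml: wire encryption of block data (added by HDFS-3637),
     which already allows choosing the cipher -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
<property>
  <name>dfs.encrypt.data.transfer.algorithm</name>
  <value>3des</value> <!-- or rc4 -->
</property>

<!-- core-site.xml: SASL "privacy" turns on RPC encryption, but offers no
     algorithm knob; that gap is what this JIRA proposes to close -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
```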





[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Status: Open  (was: Patch Available)

Canceling patch as I have to rebase it due to the YARN move.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.





[jira] [Updated] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8660:
---

   Resolution: Fixed
Fix Version/s: 2.2.0-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

@eli, I think it was a case of 'tucu' error: I ran the tests from the IDE, and it 
looks like I didn't force a recompile, so it picked up the previously compiled class. 

Committed to trunk and branch-2.

 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.2.0-alpha

 Attachments: HADOOP-8660.patch


 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec   ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at 
 org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)





[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Attachment: HADOOP-8581.patch

Patch rebased to trunk (after the YARN move).





[jira] [Updated] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8581:
---

Status: Patch Available  (was: Open)





[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431200#comment-13431200
 ] 

Hudson commented on HADOOP-8660:


Integrated in Hadoop-Hdfs-trunk-Commit #2628 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2628/])
HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu) (Revision 
1370812)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370812
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt






[jira] [Created] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-8661:
---

 Summary: Stack Trace in Exception.getMessage causing oozie DB to 
have issues
 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0-alpha, 0.23.3, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans


It looks like all exceptions produced by RemoteException include the full stack 
trace of the original exception in the message.  This is causing issues for 
oozie because they store the message in their database and it is getting very 
large.  This appears to be a regression from 1.0 behavior.





[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431219#comment-13431219
 ] 

Suresh Srinivas commented on HADOOP-8661:
-

Should we be fixing this in Hadoop, i.e. the content of the exception stack trace? 
Also, it is not clear why Oozie stores exceptions.





[jira] [Commented] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431224#comment-13431224
 ] 

Hudson commented on HADOOP-8660:


Integrated in Hadoop-Mapreduce-trunk-Commit #2582 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2582/])
HADOOP-8660. TestPseudoAuthenticator failing with NPE. (tucu) (Revision 
1370812)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1370812
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt






Contributing to the Hadoop project

2012-08-08 Thread Clay McDonald
Hello all, I would like to know how I can assist with the Hadoop project. It 
doesn't matter in what capacity; I just want to help out with whatever is 
needed.

Thanks,

Clay McDonald

 


[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Attachment: trunk-HADOOP-8649.patch
branch1-HADOOP-8649.patch

Uploading patches for trunk and branch-1 addressing Daryn's comments.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form of listStatus(Path, customFilter) results in parsing the list 
 twice to apply each of the filters - custom and checksum filter.
 By using a composite filter instead, we limit the parsing to once.
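The composite-filter idea above can be sketched as follows. PathFilter here is a simplified, hypothetical stand-in for org.apache.hadoop.fs.PathFilter (the real interface accepts a Path, not a String); the point is that AND-combining the checksum filter with the user's filter lets the listing be scanned once:

```java
// Simplified stand-in for org.apache.hadoop.fs.PathFilter
interface PathFilter {
    boolean accept(String path);
}

public class CompositeFilterDemo {
    // Checksum filter: hide the ".crc" sidecar files ChecksumFileSystem creates
    public static final PathFilter CHECKSUM_FILTER = p -> !p.endsWith(".crc");

    // Compose two filters into one, so a single listing pass applies both
    public static PathFilter and(PathFilter a, PathFilter b) {
        return p -> a.accept(p) && b.accept(p);
    }

    public static void main(String[] args) {
        PathFilter userFilter = p -> p.startsWith("/data/");
        PathFilter combined = and(CHECKSUM_FILTER, userFilter);

        System.out.println(combined.accept("/data/part-00000"));      // true
        System.out.println(combined.accept("/data/.part-00000.crc")); // false
        System.out.println(combined.accept("/tmp/part-00000"));       // false
    }
}
```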





[jira] [Updated] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8649:
-

Affects Version/s: 1.0.3
   2.0.0-alpha
   Status: Patch Available  (was: Open)

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha, 1.0.3
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter. 
 By using a composite filter instead, we limit the parsing to a single pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431262#comment-13431262
 ] 

Robert Joseph Evans commented on HADOOP-8661:
-

The issue is not with the contents of the stack trace.  The stack trace is 
fine.  The issue is that the message of the exception includes the stack trace.

For example:

{code}
try {
  Path p = new Path(file);
  FileSystem fs = p.getFileSystem(conf);
  fs.delete(p, true);
} catch (IOException e) {
  System.err.println("MESSAGE: " + e.getMessage());
}
{code}

If this is run on 1.0.2 and it gets a permission denied error you only get 
something like
{noformat}
MESSAGE: Permission denied: user=notme, access=EXECUTE, 
inode=/user/me/test:me:supergroup:d-
{noformat}

But on trunk, 2.0, and 0.23 you get
{noformat}
MESSAGE: Permission denied: user=notme, access=EXECUTE, 
inode=/user/me/test:me:supergroup:d-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:3572)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1931)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1896)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:539)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:394)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1528)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1524)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1522)
{noformat}

Oozie tries to save the message in its database, but when the full stack trace 
is included it gets very large and can cause issues.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431266#comment-13431266
 ] 

Virag Kothari commented on HADOOP-8661:
---

Oozie only stores the value returned by Exception.getMessage(), so that users 
can debug their workflows if the hadoop job fails.
If getMessage() returns the entire stack trace, the database cannot handle 
it.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8661:


Priority: Critical  (was: Major)

Bumping this to critical because, after talking with some oozie guys, the 
message length can overrun the varchar limit of the column and cause some bad 
errors for oozie.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431270#comment-13431270
 ] 

Jason Lowe commented on HADOOP-8661:


Curious, why isn't Oozie checking the length of what will be written to the 
database and taking appropriate action (e.g. truncating) if it's too long?
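
A client-side guard along those lines might look like this (an illustrative sketch; the method name and any particular column width are hypothetical, not Oozie's actual schema):

```java
public class MessageTruncator {
    /**
     * Trim a message to fit a bounded VARCHAR column before persisting it.
     * Keeps the head of the message, since any trailing stack-trace text is
     * the least useful part to store. Assumes maxLen >= 3.
     */
    static String truncateForColumn(String message, int maxLen) {
        if (message == null || message.length() <= maxLen) {
            return message;
        }
        return message.substring(0, maxLen - 3) + "...";
    }
}
```

The caller applies it once, right before the database write, so the guard holds no matter how long an upstream project makes its exception messages.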

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431276#comment-13431276
 ] 

Suresh Srinivas commented on HADOOP-8661:
-

On 2.0, the stack trace may be a little different, since we have switched to 
protobuf RPC engine. This change was done in HADOOP-6686. So any solution needs 
to consider some of the discussions from that jira.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431281#comment-13431281
 ] 

Suresh Srinivas commented on HADOOP-8661:
-

Echoing what Jason said, it is a good idea to handle this in Oozie. Making an 
assumption about the length of an exception message, and expecting that it will 
not change in an upstream project, is not a good idea. 

That said, we could change the message back to a shorter length, given that the 
thrown exception has its cause set with initCause(), from where the stack trace 
can be derived.
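
In plain Java terms, the point is that a wrapper built with initCause() keeps the outer message short while the full trace stays reachable (a hedged sketch, not the actual RemoteException code; `wrap` is a hypothetical helper):

```java
public class CauseSketch {
    /** Wrap a server-side failure: short message out front, trace on the cause. */
    static Exception wrap(Exception serverSide) {
        Exception remote = new Exception(serverSide.getMessage());
        remote.initCause(serverSide);   // the full stack trace rides along here
        return remote;
    }
}
```

A caller that wants the trace reads `e.getCause().getStackTrace()`; a caller that only persists `e.getMessage()` gets the short form.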

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431312#comment-13431312
 ] 

Virag Kothari commented on HADOOP-8661:
---

I agree that Oozie should not make any assumption about the length of the error 
message. Created JIRA OOZIE-946.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8652) Change website to reflect new u...@hadoop.apache.org mailing list

2012-08-08 Thread Doug Cutting (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431318#comment-13431318
 ] 

Doug Cutting commented on HADOOP-8652:
--

'svn up' works without complaint for me.  Is it still failing for you?

The protections look right to me: the group is 'hadoop' and all directories are 
group writable.

 Change website to reflect new u...@hadoop.apache.org mailing list
 -

 Key: HADOOP-8652
 URL: https://issues.apache.org/jira/browse/HADOOP-8652
 Project: Hadoop Common
  Issue Type: Task
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Attachments: HADOOP-8652.patch


 Change website to reflect new u...@hadoop.apache.org mailing list since we've 
 merged the user lists per discussion on general@: http://s.apache.org/hv

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
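
For reference, the float ABI of an existing JVM binary can be probed by hand with GNU binutils; a rough, environment-specific sketch (the libjvm.so path is illustrative, not a fixed location):

```shell
# Inspect the ARM ELF build attributes of the JVM's shared library.
# Hard-float objects carry "Tag_ABI_VFP_args: VFP registers"; soft-float
# objects omit that tag.
readelf -A /usr/lib/jvm/oracle-java7/jre/lib/arm/server/libjvm.so \
  | grep Tag_ABI_VFP_args || echo "no VFP-args tag (soft-float ABI)"

# When the JVM is soft-float on a hard-float distro, the native build needs
# the matching compiler flag, e.g.:
#   gcc -mfloat-abi=softfp ...
```

An automated fix would run an equivalent check from the build scripts and pick the compiler flags to match the detected ABI.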

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8662) remove separate pages for Common, HDFS & MR projects

2012-08-08 Thread Doug Cutting (JIRA)
Doug Cutting created HADOOP-8662:


 Summary: remove separate pages for Common, HDFS & MR projects
 Key: HADOOP-8662
 URL: https://issues.apache.org/jira/browse/HADOOP-8662
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: site
Reporter: Doug Cutting
Assignee: Doug Cutting
Priority: Minor
 Fix For: site


The tabs on the top of http://hadoop.apache.org/ link to separate sites for 
Common, HDFS and MapReduce modules.  These sites are identical except for the 
mailing lists.  I propose we move the mailing list information to the TLP 
mailing list page and remove these sub-project websites.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

Update patch to remove an unnecessary dependency on JAVA_JVM_LIBRARY from 
hadooppipes, which caused a build failure in Jenkins:

{noformat}CMake Error: The following variables are used in this project, but 
they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake 
files:
JAVA_JVM_LIBRARY (ADVANCED)
linked by target hadooppipes in directory 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-tools/hadoop-pipes/src
{noformat}

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431334#comment-13431334
 ] 

Robert Joseph Evans commented on HADOOP-8661:
-

Glad to see OOZIE-946.  Virag, is it OK for us to drop the severity on 
HADOOP-8661 given OOZIE-946?

I did read through HADOOP-6686 and the reason for including the entire stack 
trace is to improve debugging, which is obviously something we want.  Most 
exceptions do not have a complete stack trace in their message though.  That is 
what getStackTrace() is for.

I have spent a little time to write some code that can parse the stack trace 
and insert it back into the generated exception. I think this is the cleaner 
way to get the debugging, and it keeps the generated message almost identical 
to the original message.  I wanted to know what others thought about the 
suggestion. I am fine with dropping it if OOZIE-946 is sufficient. I need some 
time to clean up the code a bit and add some more tests to be sure everything 
works OK even in error cases, but I don't want to spend much time on it if this 
is going to be contentious.
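
That parsing idea could be sketched roughly as follows (an illustrative sketch, not the actual patch under discussion; it assumes frames arrive in the usual `at pkg.Class.method(File.java:NNN)` form, and drops non-frame text that appears after the frames):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TraceReparser {
    // Matches lines like "\tat org.example.Foo.bar(Foo.java:42)".
    private static final Pattern FRAME =
        Pattern.compile("\\s*at\\s+([\\w.$]+)\\.([\\w$<>]+)\\(([^:)]+):(\\d+)\\)");

    /** Split a remote message into the short message plus re-attached frames. */
    static Exception rebuild(String remoteMessage) {
        StringBuilder msg = new StringBuilder();
        List<StackTraceElement> frames = new ArrayList<>();
        for (String line : remoteMessage.split("\n")) {
            Matcher m = FRAME.matcher(line);
            if (m.matches()) {
                frames.add(new StackTraceElement(
                    m.group(1), m.group(2), m.group(3),
                    Integer.parseInt(m.group(4))));
            } else if (frames.isEmpty()) {
                // Everything before the first frame is the real message.
                if (msg.length() > 0) msg.append('\n');
                msg.append(line);
            }
        }
        Exception e = new Exception(msg.toString());
        if (!frames.isEmpty()) {
            e.setStackTrace(frames.toArray(new StackTraceElement[0]));
        }
        return e;
    }
}
```

The rebuilt exception then reports the short message while getStackTrace() still yields the server-side frames.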


 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431337#comment-13431337
 ] 

Hadoop QA commented on HADOOP-8649:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12539893/trunk-HADOOP-8649.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.fs.TestLocalFSFileContextMainOperations
  org.apache.hadoop.fs.TestFileContextDeleteOnExit
  org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
  org.apache.hadoop.hdfs.web.TestWebHDFS
  
org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1266//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1266//console

This message is automatically generated.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form, listStatus(Path, customFilter), results in parsing the list 
 twice to apply each of the filters - the custom filter and the checksum filter. 
 By using a composite filter instead, we limit the parsing to a single pass.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431344#comment-13431344
 ] 

Suresh Srinivas commented on HADOOP-8661:
-

bq. I have spent a little time to write some code that can parse the stack 
trace and insert it back into the generated exception
Not sure what you mean here. You could still shorten the message in this jira. 
The cause of the exception is already in the thrown exception, from where the 
stack trace can be obtained.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431351#comment-13431351
 ] 

Robert Joseph Evans commented on HADOOP-8661:
-

@Suresh,

Yes, I completely agree about the name of the JIRA. Once Virag agrees that 
OOZIE-946 is the real fix for the issue, I will probably close this JIRA as a 
dupe of OOZIE-946, and then file a separate one for parsing the stack trace as 
new work.  But I would like to leave this open just in case there is something 
that will prevent oozie from fixing the issue quickly, so that we can unblock 
them.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431357#comment-13431357
 ] 

Virag Kothari commented on HADOOP-8661:
---

OOZIE-946 will go in the next release of Oozie (Oozie 3.3). But as this is a 
regression from 1.0, this JIRA needs to be fixed for Oozie 3.2 to work with 23. 

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431358#comment-13431358
 ] 

Todd Lipcon commented on HADOOP-7754:
-

Hey Ahmed. Can you upload just the trunk version alone so that test-patch can 
run? When you upload two at once, it gets confused and seems to be trying to 
test the branch-1 version instead of the trunk one.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431367#comment-13431367
 ] 

Virag Kothari commented on HADOOP-8661:
---

Also, in hadoop 1.0, the error message was
{code}
message[org.apache.hadoop.security.AccessControlException: Permission denied:
user=strat_ci, access=ALL, inode=output-mr:strat_ci:hdfs:r--r--r--]
{code}

But in 23, the message doesn't have the Exception class name 
(org.apache.hadoop.security.AccessControlException).

Also, I think getMessage() should just have the message with which the 
exception is constructed, while getStackTrace() should have the entire stack 
trace. IMO, users running their Hadoop jobs through Oozie shouldn't be seeing 
the entire stack trace.

OOZIE-946 will only ensure that a large value doesn't blow up the column in the 
database, but getMessage() should be fixed in Hadoop.
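The split described here, a short constructor message returned by getMessage() with the full server-side trace kept separately, can be sketched as follows. This is an illustration only: the class name, fields, and accessors below are hypothetical and are not Hadoop's actual RemoteException.

```java
// Hypothetical sketch: keep the human-readable message separate from the
// serialized server-side stack trace, so getMessage() stays short for
// clients (such as Oozie) that persist it in a database column.
class CompactRemoteException extends Exception {
    private final String className;  // original exception class on the server
    private final String fullTrace;  // full server-side trace, kept out of the message

    CompactRemoteException(String className, String message, String fullTrace) {
        super(message);              // getMessage() returns only the short message
        this.className = className;
        this.fullTrace = fullTrace;
    }

    public String getClassName() { return className; }

    /** The complete server-side trace, available on demand, not in the message. */
    public String getFullTrace() { return fullTrace; }
}
```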




 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.





[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_trunk_rev2.patch

Thanks Todd, Here is the trunk version alone.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev2.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431371#comment-13431371
 ] 

Alejandro Abdelnur commented on HADOOP-7754:


I don't see HasFileDescriptor.java in the patch. The method getFD() should be 
getFileDescriptor(); other than that it looks good. Javadocs are missing.
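The interface shape being reviewed might look like the sketch below. The name HasFileDescriptor and the method getFileDescriptor() follow the review comments, but the committed patch may differ; the FileInputStream-backed stream here is only a standalone stand-in for RawLocalFileSystem's stream.

```java
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch: a stream that can surface its underlying OS file descriptor
// implements this interface, letting FileSystem-agnostic code (e.g. the
// shuffle) reach the descriptor for calls like fadvise.
interface HasFileDescriptor {
    FileDescriptor getFileDescriptor() throws IOException;
}

// Stand-in for a local-filesystem input stream exposing its descriptor.
class LocalFsInputStream extends FileInputStream implements HasFileDescriptor {
    LocalFsInputStream(String path) throws IOException {
        super(path);
    }

    @Override
    public FileDescriptor getFileDescriptor() throws IOException {
        return getFD(); // delegate to FileInputStream's descriptor accessor
    }
}
```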

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev2.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431372#comment-13431372
 ] 

Alejandro Abdelnur commented on HADOOP-7754:


My bad on the missing HasFileDescriptor.java and javadocs. I was doing an 'svn 
diff' and it didn't show up because I had not done 'svn add' on it.

So the only remaining thing is the getFD() renaming.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_trunk.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev2.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-8649) ChecksumFileSystem should have an overriding implementation of listStatus(Path, PathFilter) for improved performance

2012-08-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431389#comment-13431389
 ] 

Karthik Kambatla commented on HADOOP-8649:
--

Found the javadoc warning. The test failures noticed seem to be due to one of 
the patch's tests creating a file and not deleting it. I am running all the 
tests locally to make sure these issues are fixed, and will upload an updated 
patch soon.

 ChecksumFileSystem should have an overriding implementation of 
 listStatus(Path, PathFilter) for improved performance
 

 Key: HADOOP-8649
 URL: https://issues.apache.org/jira/browse/HADOOP-8649
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8649_branch1.patch, HADOOP-8649_branch1.patch, 
 HADOOP-8649_branch1.patch_v2, HADOOP-8649_branch1.patch_v3, 
 TestChecksumFileSystemOnDFS.java, branch1-HADOOP-8649.patch, 
 trunk-HADOOP-8649.patch


 Currently, ChecksumFileSystem implements only listStatus(Path). 
 The other form of listStatus(Path, customFilter) results in parsing the list 
 twice to apply each of the filters - custom and checksum filter.
 By using a composite filter instead, we limit the parsing to once.
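The composite-filter idea above can be sketched with plain java.io types as a standalone stand-in for Hadoop's PathFilter (this is an illustration, not the actual patch): a single AND-combining filter lets one listing pass apply both the checksum filter and the user's custom filter.

```java
import java.io.File;
import java.io.FileFilter;

// Sketch: combine two filters so a directory listing is traversed once,
// rather than once per filter.
class AndFilter implements FileFilter {
    private final FileFilter first;
    private final FileFilter second;

    AndFilter(FileFilter first, FileFilter second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public boolean accept(File f) {
        // Both filters must accept; evaluated in a single pass over the listing.
        return first.accept(f) && second.accept(f);
    }
}
```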





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Ahmed Radwan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431413#comment-13431413
 ] 

Ahmed Radwan commented on HADOOP-7754:
--

Thanks Tucu for the review! I think the getFD() naming was meant to be 
consistent with methods like FileInputStream#getFD(). But I also see your point 
that the full name is clearer and more descriptive, and avoids confusion. I 
have updated the patch per your comments. Thanks again!

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_branch-1_rev3.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_branch-1_rev3.patch

Here is the branch-1 update.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_branch-1_rev3.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Updated] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-7754:
-

Attachment: HADOOP-7754_trunk_rev3.patch

Here is the trunk update.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_branch-1_rev3.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431442#comment-13431442
 ] 

Hadoop QA commented on HADOOP-8659:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539913/HADOOP-8659.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1269//console

This message is automatically generated.

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.





[jira] [Updated] (HADOOP-8662) remove separate pages for Common, HDFS & MR projects

2012-08-08 Thread Doug Cutting (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Cutting updated HADOOP-8662:
-

Attachment: HADOOP-8662.patch

Here's a patch that implements this.

It adds all of the mailing lists to a single mailing list page.  It adds a 
releases page instead of using common/releases.html.  It adds a single issue 
tracker page that lists all issue trackers.

The getting started stuff is no longer replicated on each site, nor is a lot 
of other boilerplate.  I also took the opportunity to update the copyright 
year, trademark symbol, and a few other minor things.

I didn't (yet) replace the term 'subproject' with 'module'.  We also need to 
migrate the names of subproject committers to the 'who' page.

Is this worth it?

 remove separate pages for Common, HDFS & MR projects
 

 Key: HADOOP-8662
 URL: https://issues.apache.org/jira/browse/HADOOP-8662
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: site
Reporter: Doug Cutting
Assignee: Doug Cutting
Priority: Minor
 Fix For: site

 Attachments: HADOOP-8662.patch


 The tabs on the top of http://hadoop.apache.org/ link to separate sites for 
 Common, HDFS and MapReduce modules.  These sites are identical except for the 
 mailing lists.  I propose we move the mailing list information to the TLP 
 mailing list page and remove these sub-project websites.





[jira] [Created] (HADOOP-8663) UnresolvedAddressException while connect causes NPE

2012-08-08 Thread John George (JIRA)
John George created HADOOP-8663:
---

 Summary: UnresolvedAddressException while connect causes NPE
 Key: HADOOP-8663
 URL: https://issues.apache.org/jira/browse/HADOOP-8663
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 3.0.0, 2.2.0-alpha
Reporter: John George
Assignee: John George


If connect() fails due to an UnresolvedAddressException in setupConnection() in 
Client.java, 'out' is never set, which causes an NPE when the next connection 
comes through. 





[jira] [Commented] (HADOOP-8663) UnresolvedAddressException while connect causes NPE

2012-08-08 Thread John George (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431457#comment-13431457
 ] 

John George commented on HADOOP-8663:
-

A connect fails due to some kind of network hiccup and throws an 
UnresolvedAddressException as follows:

2012-07-30 16:31:07,223 WARN org.apache.hadoop.ipc.Client: Address change 
detected. Old: hostname/ipaddress:port New: hostname
2012-07-30 16:31:08,225 INFO org.apache.hadoop.ipc.Client: Retrying connect to 
server: hostname:port. Already tried 0 time(s).
2012-07-30 16:31:08,226 INFO org.apache.hadoop.mapred.TaskTracker: Received 
KillTaskAction for task: attempt_201205090815_3706185_m_00_1
2012-07-30 16:31:08,226 INFO org.apache.hadoop.mapred.TaskTracker: About to 
purge task: attempt_201205090815_3706185_m_00_1
2012-07-30 16:31:08,226 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201205090815_3706185_m_00_1:
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:30)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:487)
at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:656)
at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202)
at org.apache.hadoop.ipc.Client.call(Client.java:1046)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.getFileInfo(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy8.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:757)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:497)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:4229)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1150)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1091)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2360)
at java.lang.Thread.run(Thread.java:619)


Clients using the same object will now get an NPE since 'out' is not 
initialized.

java.lang.NullPointerException
at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:766)
at org.apache.hadoop.ipc.Client.call(Client.java:1047)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy8.getFileInfo(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy8.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:757)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:497)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJobTokenFile(TaskTracker.java:4229)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1150)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1091)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2360)

This seems to be an issue in trunk as well, but I need to look closer to 
confirm. 
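The failure pattern described here can be sketched as follows. The class and field names are illustrative and are not the actual Client.java internals: if connect() throws before the output stream field is assigned, later callers dereference null unless the half-initialized state is guarded.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal sketch of a connection whose setup can fail part-way.
class ConnectionSketch {
    private OutputStream out; // remains null if setup fails before assignment

    void setup(boolean connectFails) throws IOException {
        if (connectFails) {
            // An UnresolvedAddressException would propagate here,
            // before 'out' is ever set.
            throw new IOException("unresolved address");
        }
        out = new ByteArrayOutputStream();
    }

    void send(byte[] data) throws IOException {
        // Guarding against a half-initialized connection avoids the NPE;
        // a real fix would also mark the connection closed so callers retry.
        if (out == null) {
            throw new IOException("connection was never established");
        }
        out.write(data);
    }
}
```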

 UnresolvedAddressException while connect causes NPE
 ---

 Key: HADOOP-8663
 URL: https://issues.apache.org/jira/browse/HADOOP-8663
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 3.0.0, 2.2.0-alpha
Reporter: John George
Assignee: John George

 If connect() fails due to UnresolvedAddressException  in setupConnection() in 
 Client.java, that causes 'out' to be NOT set and thus cause NPE when the next 
 connection comes through. 


[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431461#comment-13431461
 ] 

Steve Loughran commented on HADOOP-8661:


Is the problem at the RPC layer, or is it the way that stack traces get passed 
around?

For Java RMI I ended up writing a wrapper class that we could be confident 
would be at the far end - this extracted and propagated the stack traces:
http://smartfrog.svn.sourceforge.net/viewvc/smartfrog/trunk/core/smartfrog/src/org/smartfrog/sfcore/common/SmartFrogExtractedException.java?revision=8882&view=markup

We did something similar in Axis, converting the stack trace to something in an 
axis-namespaced element (then stripping that by default in responses unless the 
server's debug flag is set): 
http://svn.apache.org/viewvc/axis/axis2/java/core/trunk/modules/kernel/src/org/apache/axis2/AxisFault.java?view=markup


 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.





[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431460#comment-13431460
 ] 

Suresh Srinivas commented on HADOOP-8661:
-

bq. But in 23, the message doesn't have the Exception class name 
(org.apache.hadoop.security.AccessControlException).
We removed this redundant information from the message in HADOOP-6686. You can 
get back to the previous message format on the Oozie side with:
{{exception.getClass().getName() + ": " + exception.getMessage()}}
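As a standalone sketch of that reconstruction on the Oozie side (assuming only standard Java, no Hadoop types):

```java
// Sketch: prepend the exception's class name to recover the pre-HADOOP-6686
// message format from an exception whose getMessage() no longer includes it.
class LegacyMessage {
    static String of(Exception exception) {
        return exception.getClass().getName() + ": " + exception.getMessage();
    }
}
```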

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.





[jira] [Commented] (HADOOP-7754) Expose file descriptors from Hadoop-wrapped local FileSystems

2012-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13431469#comment-13431469
 ] 

Hadoop QA commented on HADOOP-7754:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12539932/HADOOP-7754_trunk_rev3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1268//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1268//console

This message is automatically generated.

 Expose file descriptors from Hadoop-wrapped local FileSystems
 -

 Key: HADOOP-7754
 URL: https://issues.apache.org/jira/browse/HADOOP-7754
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: HADOOP-7754_branch-1_rev2.patch, 
 HADOOP-7754_branch-1_rev3.patch, HADOOP-7754_trunk.patch, 
 HADOOP-7754_trunk_rev2.patch, HADOOP-7754_trunk_rev2.patch, 
 HADOOP-7754_trunk_rev3.patch, hadoop-7754-0.23.0-hasfd.txt, hasfd.txt


 In HADOOP-7714, we determined that using fadvise inside of the MapReduce 
 shuffle can yield very good performance improvements. But many parts of the 
 shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and 
 RawLocalFileSystems. This JIRA is to figure out how to allow 
 RawLocalFileSystem to expose its FileDescriptor object without unnecessarily 
 polluting the public APIs.





[jira] [Commented] (HADOOP-8661) Stack Trace in Exception.getMessage causing oozie DB to have issues

2012-08-08 Thread Virag Kothari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431478#comment-13431478
 ] 

Virag Kothari commented on HADOOP-8661:
---

bq. We removed this redundant information from message in HADOOP-6686. 

OK, it makes sense not to have the class name in the exception message.

 Stack Trace in Exception.getMessage causing oozie DB to have issues
 ---

 Key: HADOOP-8661
 URL: https://issues.apache.org/jira/browse/HADOOP-8661
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
Priority: Critical

 It looks like all exceptions produced by RemoteException include the full 
 stack trace of the original exception in the message.  This is causing issues 
 for oozie because they store the message in their database and it is getting 
 very large.  This appears to be a regression from 1.0 behavior.





[jira] [Created] (HADOOP-8664) hadoop streaming job needs the full path to commands even when they are in the path

2012-08-08 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8664:
--

 Summary: hadoop streaming job needs the full path to commands even 
when they are in the path
 Key: HADOOP-8664
 URL: https://issues.apache.org/jira/browse/HADOOP-8664
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha


Running a hadoop streaming job as
bin/hadoop jar path_to_streaming_jar -input path_on_hdfs -mapper cat -output 
path_on_hdfs -reducer cat
fails with an error saying the program cat was not found, even though cat is 
in the path and works from the cmd prompt.
If the full path to cmd.exe is given, the exception is not seen.
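One workaround sketch (hypothetical, not from the report): resolve the mapper/reducer command to an absolute path up front so the streaming task does not depend on PATH lookup. The placeholders path_to_streaming_jar and path_on_hdfs are kept from the report above.

```shell
# Resolve "cat" to an absolute path before handing it to the streaming job,
# so the task does not rely on PATH resolution on the task node.
MAPPER="$(command -v cat)"   # resolves to an absolute path, e.g. /bin/cat

# Print the command that would be run (bin/hadoop is not invoked here;
# this only illustrates passing the resolved path).
echo bin/hadoop jar path_to_streaming_jar \
  -input path_on_hdfs -mapper "$MAPPER" \
  -output path_on_hdfs -reducer "$MAPPER"
```

This mirrors the reporter's observation that giving the full path to the command avoids the exception.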





[jira] [Commented] (HADOOP-8581) add support for HTTPS to the web UIs

2012-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13431508#comment-13431508
 ] 

Hadoop QA commented on HADOOP-8581:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12539884/HADOOP-8581.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy:

  org.apache.hadoop.hdfs.TestDatanodeBlockScanner
  
org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1267//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1267//console

This message is automatically generated.

 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.1.0-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS; in several places 'http://' is 
 hardcoded.





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: (was: HADOOP-8659.patch)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch


 There was recently an ABI (application binary interface) change in most Linux 
 distributions for modern ARM processors (ARMv7). Historically, hardware 
 floating-point (FP) support was optional/vendor-specific for ARM processors, 
 so for software compatibility, the default ABI required that processors with 
 FP units copy FP arguments into integer registers (or memory) when calling a 
 shared library function. Now that hardware floating-point has been 
 standardized for some time, Linux distributions such as Ubuntu 12.04 have 
 changed the default ABI to leave FP arguments in FP registers, since this can 
 significantly improve performance for FP libraries.
 Unfortunately, Oracle has not yet released a JVM (as of 7u4) that supports 
 the new ABI, presumably since this involves some non-trivial changes to 
 components like JNI. While the soft-float JVM can run on systems with 
 multi-arch support (currently Debian/Ubuntu) using compatibility libraries, 
 this configuration requires that any third-party JNI libraries also be 
 compiled using the soft-float ABI. Since hard-float systems default to 
 compiling for hard-float, an extra argument to GCC (and installation of a 
 compatibility library) is required to build soft-float Hadoop native 
 libraries that work with the Oracle JVM.
 Note that OpenJDK on hard-float systems does use the new ABI, and expects JNI 
 libraries to use it as well. Therefore the fix for this issue requires 
 detecting the float ABI of the current JVM.
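One way the required ABI detection could be sketched (hypothetical, not the attached patch): on ARM ELF binaries, a hard-float build records the Tag_ABI_VFP_args attribute, which binutils' readelf can report. The libjvm.so path in the comment is illustrative.

```shell
# Report "hard" or "soft" depending on whether the given shared library
# was built for the ARM hard-float ABI (Tag_ABI_VFP_args present).
detect_float_abi() {
  # $1: path to the JVM's libjvm.so
  if readelf -A "$1" 2>/dev/null | grep -q 'Tag_ABI_VFP_args'; then
    echo hard
  else
    echo soft
  fi
}

# Example (path is an assumption; adjust for the JVM layout in use):
# detect_float_abi "$JAVA_HOME/jre/lib/arm/server/libjvm.so"
```

A build could then pass -mfloat-abi=hard or a soft-float-compatible flag to GCC based on the result.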





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Attachment: HADOOP-8659.patch

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 (Description identical to the first HADOOP-8659 message above.)





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Open  (was: Patch Available)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 (Description identical to the first HADOOP-8659 message above.)





[jira] [Updated] (HADOOP-8659) Native libraries must build with soft-float ABI for Oracle JVM on ARM

2012-08-08 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HADOOP-8659:


Status: Patch Available  (was: Open)

 Native libraries must build with soft-float ABI for Oracle JVM on ARM
 -

 Key: HADOOP-8659
 URL: https://issues.apache.org/jira/browse/HADOOP-8659
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: armhf Linux with Oracle JVM
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Attachments: HADOOP-8659.patch, HADOOP-8659.patch, HADOOP-8659.patch


 (Description identical to the first HADOOP-8659 message above.)
