[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470041#comment-13470041
 ] 

Hadoop QA commented on HADOOP-8804:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547877/HADOOP-8804-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1561//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1561//console

This message is automatically generated.

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (i.e. the default config), the 
 NN, JT (and probably RM, etc.) Web UIs are a little goofy, e.g. "0 Hadoop 
 Map/Reduce Administration" and "NameNode '0.0.0.0:18021' (active)". Let's 
 improve them.
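
For illustration, a minimal sketch of the kind of substitution being discussed 
(a hypothetical helper; the class and method names are assumptions, not the 
attached patch):

{code}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class UiAddressUtil {
  /**
   * Hypothetical helper: if the bound address is the wildcard (0.0.0.0),
   * substitute the local machine's canonical hostname so the Web UI title
   * shows something meaningful.
   */
  public static String displayName(InetSocketAddress addr)
      throws UnknownHostException {
    String host = (addr.getAddress() != null
        && addr.getAddress().isAnyLocalAddress())
        ? InetAddress.getLocalHost().getCanonicalHostName()
        : addr.getHostName();
    return host + ":" + addr.getPort();
  }
}
{code}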

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths

2012-10-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470076#comment-13470076
 ] 

Brahma Reddy Battula commented on HADOOP-8437:
--

Hi Harsh,

Do you have any suggestions on this apart from permissions? Or would the 
following be okay?
{code}
 for (int i = 0; i < 256; i++) {
+  invalidPath.append("A");
+}
{code}

I tried with special chars which are not supported by the OS.
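
For context, a hedged sketch of the sort of test being discussed (the 
allocator setup, path length, and assertions are assumptions, not the actual 
patch):

{code}
// Hypothetical test fragment: a path component longer than 255 characters
// is invalid on most local filesystems, so the allocator should fail.
StringBuilder invalidPath = new StringBuilder("/tmp/");
for (int i = 0; i < 256; i++) {
  invalidPath.append("A");
}
try {
  dirAllocator.getLocalPathForWrite(invalidPath.toString(), conf);
  fail("Expected an exception for an invalid path");
} catch (IOException expected) {
  // pre-regression behavior: invalid paths are rejected
}
{code}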

 getLocalPathForWrite is not throwing any exception for invalid paths
 

 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-8437_1.patch, HADOOP-8437.patch


 call dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
 Here it will not throw any exception, but earlier versions used to throw one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8842) local file system behavior of mv into an empty directory is inconsistent with HDFS

2012-10-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470083#comment-13470083
 ] 

Suresh Srinivas commented on HADOOP-8842:
-

For branch 20.x: since some of the applications could depend on the behavior 
they currently get from their file system implementation, we decided to retain 
the old behavior rather than break compatibility. 

 local file system behavior of mv into an empty directory is inconsistent with 
 HDFS
 --

 Key: HADOOP-8842
 URL: https://issues.apache.org/jira/browse/HADOOP-8842
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Julien Le Dem

 Moving into an empty directory replaces the directory instead.
 See the output of the attached script (repro.sh) to reproduce; a sketch of 
 the expected semantics follows the output below.
 {noformat}
 rm -rf local_fs_bug
 mkdir local_fs_bug
 hdfs -rmr local_fs_bug
 hdfs -mkdir local_fs_bug
 echo  HDFS: normal behavior
 touch part-
 hdfs -mkdir local_fs_bug/a
 hdfs -copyFromLocal part- local_fs_bug/a
 hdfs -mkdir local_fs_bug/b
 hdfs -mkdir local_fs_bug/b/c
 echo content of a: 1 part
 hdfs -ls local_fs_bug/a
 echo content of b/c: empty
 hdfs -ls local_fs_bug/b/c
 echo mv a b/c
 hdfs -mv local_fs_bug/a local_fs_bug/b/c
 echo resulting content of b/c
 hdfs -ls local_fs_bug/b/c
 echo a is moved inside of c
 echo
 echo  local fs: bug
 mkdir -p local_fs_bug/a
 touch local_fs_bug/a/part-
 mkdir -p local_fs_bug/b/c
 echo content of a: 1 part
 hdfs -fs local -ls local_fs_bug/a
 echo content of b/c: empty
 hdfs -fs local -ls local_fs_bug/b/c
 echo mv a b/c
 hdfs -fs local -mv local_fs_bug/a local_fs_bug/b/c
 echo resulting content of b/c
 hdfs -fs local -ls local_fs_bug/b/c
 echo bug: a replaces c
 echo
 echo  but it works if the destination is not empty
 mkdir local_fs_bug/a2
 touch local_fs_bug/a2/part-
 mkdir -p local_fs_bug/b2/c2
 touch local_fs_bug/b2/c2/dummy
 echo content of a2: 1 part
 hdfs -fs local -ls local_fs_bug/a2
 echo content of b2/c2: 1 dummy file
 hdfs -fs local -ls local_fs_bug/b2/c2
 echo mv a2 b2/c2
 hdfs -fs local -mv local_fs_bug/a2 local_fs_bug/b2/c2
 echo resulting content of b/c
 hdfs -fs local -ls local_fs_bug/b2/c2
 echo a2 is moved inside of c2
 {noformat}
 Output:
 {noformat}
  HDFS: normal behavior
 content of a: 1 part
 Found 1 items
 -rw-r--r--   3 julien g  0 2012-09-25 17:16 
 /user/julien/local_fs_bug/a/part-
 content of b/c: empty
 mv a b/c
 resulting content of b/c
 Found 1 items
 drwxr-xr-x   - julien g  0 2012-09-25 17:16 
 /user/julien/local_fs_bug/b/c/a
 a is moved inside of c
  local fs: bug
 content of a: 1 part
 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 Found 1 items
 -rw-r--r--   1 julien g  0 2012-09-25 17:16 
 /home/julien/local_fs_bug/a/part-
 content of b/c: empty
 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 mv a b/c
 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 resulting content of b/c
 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 Found 1 items
 -rw-r--r--   1 julien g  0 2012-09-25 17:16 
 /home/julien/local_fs_bug/b/c/part-
 bug: a replaces c
  but it works if the destination is not empty
 content of a2: 1 part
 12/09/25 17:16:36 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 Found 1 items
 -rw-r--r--   1 julien g  0 2012-09-25 17:16 
 /home/julien/local_fs_bug/a2/part-
 content of b2/c2: 1 dummy file
 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 Found 1 items
 -rw-r--r--   1 julien g  0 2012-09-25 17:16 
 /home/julien/local_fs_bug/b2/c2/dummy
 mv a2 b2/c2
 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 resulting content of b/c
 12/09/25 17:16:38 WARN fs.FileSystem: "local" is a deprecated filesystem 
 name. Use "file:///" instead.
 Found 2 items
 drwxr-xr-x   - julien g   4096 2012-09-25 17:16 
 /home/julien/local_fs_bug/b2/c2/a2
 -rw-r--r--   1 julien g  0 2012-09-25 17:16 
 /home/julien/local_fs_bug/b2/c2/dummy
 a2 is moved inside of c2
 {noformat}
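
For reference, a hedged sketch of the HDFS-consistent semantics the report 
expects (plain java.io, not Hadoop's actual RawLocalFileSystem code):

{code}
import java.io.File;

public class RenameSketch {
  /**
   * Move src *into* dst when dst is an existing directory, even an empty
   * one, mirroring the HDFS behavior shown above, instead of letting the
   * rename replace the empty destination directory.
   */
  public static boolean moveInto(File src, File dst) {
    if (dst.isDirectory()) {
      dst = new File(dst, src.getName());
    }
    return src.renameTo(dst);
  }
}
{code}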

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470286#comment-13470286
 ] 

Hudson commented on HADOOP-8881:


Integrated in Hadoop-Hdfs-trunk #1186 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1186/])
HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be 
debug not info. (tucu) (Revision 1394165)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394165
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java


 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal about the initialization of the keystores; switching 
 it to debug will hide this log message by default.
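
A hedged sketch of the nature of the change (the message text and the 
keystoreLocation variable are assumptions; the commit referenced above is 
authoritative):

{code}
// Hypothetical fragment: demote the keystore initialization message from
// INFO to DEBUG so default client runs stay quiet.
if (LOG.isDebugEnabled()) {
  LOG.debug("Loaded keystore: " + keystoreLocation);
}
{code}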

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-05 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470301#comment-13470301
 ] 

Daryn Sharp commented on HADOOP-8878:
-

Looks good, but I'm curious when it would be legitimate to pass null or an 
empty string for the host?

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper-case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs, it was determined that we tried to get the 
 ticket for a kerberos principal with an upper-case hostname, and that host 
 did not exist in kerberos. We should convert the hostnames to lower case. 
 Take a look at HADOOP-7988, where the same fix was applied to a different 
 class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8881) FileBasedKeyStoresFactory initialization logging should be debug not info

2012-10-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470341#comment-13470341
 ] 

Hudson commented on HADOOP-8881:


Integrated in Hadoop-Mapreduce-trunk #1217 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1217/])
HADOOP-8881. FileBasedKeyStoresFactory initialization logging should be 
debug not info. (tucu) (Revision 1394165)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394165
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java


 FileBasedKeyStoresFactory initialization logging should be debug not info
 -

 Key: HADOOP-8881
 URL: https://issues.apache.org/jira/browse/HADOOP-8881
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8881.patch


 When hadoop.ssl.enabled is set to true, hadoop client invocations print a log 
 message on the terminal about the initialization of the keystores; switching 
 it to debug will hide this log message by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: (was: HADOOP-8849-vs-trunk.patch)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor

 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }
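
Taken together, a minimal sketch of the two suggestions (the helper names are 
hypothetical; the attached patch is authoritative):

{code}
import java.io.File;

public class FullyDeleteSketch {
  // Suggestion 1 (hypothetical form): make the directory readable,
  // writable, and traversable before attempting to delete its contents.
  static void grantPermissions(File dir) {
    dir.setReadable(true);
    dir.setWritable(true);
    dir.setExecutable(true);
  }

  // Suggestion 2: judge success by exists(), not by delete()'s return
  // value, so a concurrent deletion still counts as success.
  static boolean deleteImpl(File f) {
    if (!f.exists()) {
      return true;
    }
    f.delete();
    return !f.exists();
  }
}
{code}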

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky reassigned HADOOP-8849:
--

Assignee: Ivan A. Veselovsky

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor

 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: HADOOP-8849-vs-trunk-2.patch

Version #2 of the patch, in which the findbugs warning is worked around.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-2.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Status: Open  (was: Patch Available)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-2.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Status: Patch Available  (was: Open)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-2.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470368#comment-13470368
 ] 

Hadoop QA commented on HADOOP-8849:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12547980/HADOOP-8849-vs-trunk-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1562//console

This message is automatically generated.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-2.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)
Anthony Rojas created HADOOP-8884:
-

 Summary: DEBUG should be WARN for DEBUG util.NativeCodeLoader: 
Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError
 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas


Recommending that the following debug message be promoted to a warning 
instead:

12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
with error: java.lang.UnsatisfiedLinkError: 
/usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
`GLIBC_2.6' not found (required by 
/usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)
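
A hedged sketch of the proposed change (the static-initializer structure is an 
assumption about NativeCodeLoader; the attached patch is authoritative):

{code}
// Hypothetical fragment of NativeCodeLoader's native-library loading:
try {
  System.loadLibrary("hadoop");
  nativeCodeLoaded = true;
} catch (Throwable t) {
  // was: LOG.debug("Failed to load native-hadoop with error: " + t);
  LOG.warn("Failed to load native-hadoop with error: " + t);
}
{code}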



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--

Attachment: HADOOP-8884.patch

- First pass: changed NativeCodeLoader.java to log a warning instead of a 
debug message when the attempt to load native-hadoop fails.

- Local unit tests failed; this is my first patch and I may have missed 
something, so any comments/feedback are appreciated.

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470389#comment-13470389
 ] 

Arpit Gupta commented on HADOOP-8878:
-

@Daryn

I just replicated what we do in SecurityUtil.replacePattern for the same
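
For illustration, a hedged sketch of the normalization being described 
(SecurityUtil.replacePattern is the real method referenced above; the helper 
below is an approximation, not the patch):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;

public class HostNorm {
  // Hypothetical helper: fall back to the local host for a null, empty,
  // or wildcard host, then lower-case it to match the form under which
  // the Kerberos principal is registered.
  public static String normalizeHost(String host) throws UnknownHostException {
    if (host == null || host.isEmpty() || "0.0.0.0".equals(host)) {
      host = InetAddress.getLocalHost().getCanonicalHostName();
    }
    return host.toLowerCase(Locale.US);
  }
}
{code}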

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper-case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs, it was determined that we tried to get the 
 ticket for a kerberos principal with an upper-case hostname, and that host 
 did not exist in kerberos. We should convert the hostnames to lower case. 
 Take a look at HADOOP-7988, where the same fix was applied to a different 
 class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-8884:
--

Assignee: Anthony Rojas

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8884:
---

Target Version/s: 2.0.3-alpha
  Status: Patch Available  (was: Open)

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470417#comment-13470417
 ] 

Suresh Srinivas commented on HADOOP-8884:
-

Anthony, thanks for the patch. Changing these logs from debug to warning makes 
sense. 

I suggest combining both of those logs into a single log, along the lines of:
{noformat}
LOG.warn("Continuing after failing to load native-hadoop - java.library.path=" +
    System.getProperty("java.library.path") + " with error: " + t);
{noformat}


 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: (was: HADOOP-8849-vs-trunk-2.patch)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor

 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: HADOOP-8849-vs-trunk-3.patch

Fixed another findbugs warning that appeared in patch #2.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-3.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Status: Patch Available  (was: Open)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-3.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470429#comment-13470429
 ] 

Hadoop QA commented on HADOOP-8884:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12547984/HADOOP-8884.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1563//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1563//console

This message is automatically generated.

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: (was: HADOOP-8849-vs-trunk-3.patch)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor

 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: HADOOP-8849-vs-trunk-4.patch

Corrected the formatting of the code (2 spaces, no tabs).

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-4.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from executing properly because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it's recommended to grant the read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment will return false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470477#comment-13470477
 ] 

Aaron T. Myers commented on HADOOP-8884:


The test failure is spurious and unrelated, so don't worry about that.

I agree with Suresh's suggestion of making the log one line, except that I 
recommend you pass the Throwable as the second argument to LOG.warn, so that 
the full stack trace is printed, i.e.:

{code}
LOG.warn("Continuing after failing to load native-hadoop - java.library.path=" +
    System.getProperty("java.library.path") + " with error:", t);
{code}

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending that the following debug message be promoted to a warning 
 instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470480#comment-13470480
 ] 

Hadoop QA commented on HADOOP-8849:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12548002/HADOOP-8849-vs-trunk-4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1565//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1565//console

This message is automatically generated.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk-4.patch


 2 improvements are suggested for implementation of methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears when an HDFS-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories that are not readable and/or executable are left on disk. This 
 prevents the next tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many 
 subsequent tests fail. 
 So, it's recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of the #delete() method invocation. E.g. in the following 
 code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f was deleted by another thread or process between calls 1 and 
 2, this fragment will return false, even though the file f no longer exists 
 when the method returns.
 So, better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8885) Need to add fs shim to use QFS

2012-10-05 Thread thilee (JIRA)
thilee created HADOOP-8885:
--

 Summary: Need to add fs shim to use QFS
 Key: HADOOP-8885
 URL: https://issues.apache.org/jira/browse/HADOOP-8885
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.1-alpha, 2.0.0-alpha, 0.23.3, 1.0.3, 1.0.2
Reporter: thilee


Quantcast has released QFS 1.0 (http://quantcast.github.com/qfs), a C++ 
distributed filesystem based on the Kosmos File System (KFS). QFS comes with 
various feature, performance, and stability improvements over KFS.

A hadoop 'fs' shim needs to be added to support QFS through 'qfs://' URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8886) Remove KFS support

2012-10-05 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8886:
---

 Summary: Remove KFS support
 Key: HADOOP-8886
 URL: https://issues.apache.org/jira/browse/HADOOP-8886
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins


KFS is no longer maintained (it has been replaced by QFS, which HADOOP-8885 is 
adding); let's remove it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8804) Improve Web UIs when the wildcard address is used

2012-10-05 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470505#comment-13470505
 ] 

Eli Collins commented on HADOOP-8804:
-

+1

 Improve Web UIs when the wildcard address is used
 -

 Key: HADOOP-8804
 URL: https://issues.apache.org/jira/browse/HADOOP-8804
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Senthil V Kumar
Priority: Minor
  Labels: newbie
 Attachments: DisplayOptions.jpg, HADOOP-8804-1.0.patch, 
 HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, HADOOP-8804-1.1.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch, 
 HADOOP-8804-trunk.patch, HADOOP-8804-trunk.patch


 When IPC addresses are bound to the wildcard (i.e. the default config) the NN, 
 JT (and probably RM etc) Web UIs are a little goofy. E.g. "0 Hadoop Map/Reduce 
 Administration" and "NameNode '0.0.0.0:18021' (active)". Let's improve them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8427) Convert Forrest docs to APT

2012-10-05 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson reassigned HADOOP-8427:
-

Assignee: Andy Isaacson

 Convert Forrest docs to APT
 ---

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie

 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8887:


 Summary: Use a Maven plugin to build the native code using CMake
 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


Currently, we build the native code using ant-build invocations.  Although this 
works, it has some limitations:

* compiler warning messages are hidden, which can cause people to check in code 
with warnings unintentionally
* there is no framework for running native unit tests; instead, we use ad-hoc 
constructs involving shell scripts
* the antrun code is very platform specific
* there is no way to run a specific native unit test
* it's more or less impossible for scripts like test-patch.sh to separate a 
native test failing from the build itself failing (no files are created) or to 
enumerate which native tests failed.

Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8887:
-

Attachment: HADOOP-8887.001.patch

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8887:
-

Status: Patch Available  (was: Open)

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470538#comment-13470538
 ] 

Colin Patrick McCabe commented on HADOOP-8887:
--

A word of explanation about the {{test-container-executor.c}} change: it seems 
that when launched directly from Maven rather than from a shell, {{SIGQUIT}} 
starts off blocked, causing the test to fail.  The change manually unblocks 
this signal, which is always a good idea before you start using a signal.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Attachment: HADOOP-8878.patch

Patch for trunk.

Instead of using SecurityUtil.getLocalHostname, add a similar method in 
KerberosUtil, since hadoop-auth depending on hadoop-common would create a 
circular dependency.
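
A minimal sketch of what such a KerberosUtil method could look like (the 
method name and the locale choice here are assumptions, not the attached 
patch):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Locale;

public class KerberosUtilSketch {
  /**
   * Resolve the local hostname and lower-case it so the Kerberos
   * principal matches the (lower-case) host entry in the KDC.
   */
  static String getLocalHostName() throws UnknownHostException {
    return InetAddress.getLocalHost().getCanonicalHostName()
        .toLowerCase(Locale.US);
  }
}
{code}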

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 Upon looking at the Kerberos logs it was determined that we tried to get the 
 ticket for a Kerberos principal with an upper case hostname, and that host 
 did not exist in Kerberos. We should convert the hostnames to lower case. 
 Take a look at HADOOP-7988, where the same fix was applied to a different 
 class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8878:


Status: Patch Available  (was: Open)

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 Upon looking at the Kerberos logs it was determined that we tried to get the 
 ticket for a Kerberos principal with an upper case hostname, and that host 
 did not exist in Kerberos. We should convert the hostnames to lower case. 
 Take a look at HADOOP-7988, where the same fix was applied to a different 
 class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470619#comment-13470619
 ] 

Arpit Gupta commented on HADOOP-8882:
-

@Steve

Thanks for the heads up. I followed the same approach as HADOOP-7988 to 
convert the service principal hostnames to lower case with locale.

Maybe we should open up a different jira to make them all handle locale. Let 
me know what you think.

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
 public static void fetchServiceTicket(URL remoteHost) throws IOException {
   if (!UserGroupInformation.isSecurityEnabled())
     return;

   String serviceName = "host/" + remoteHost.getHost();
 {code}
 The hostname should be converted to lower case. Saw this in branch-1; will 
 look at trunk and update the bug accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-05 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470621#comment-13470621
 ] 

Arpit Gupta commented on HADOOP-8882:
-

I meant without locale info in the above comment.

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch


 {code}
 public static void fetchServiceTicket(URL remoteHost) throws IOException {
   if (!UserGroupInformation.isSecurityEnabled())
     return;

   String serviceName = "host/" + remoteHost.getHost();
 {code}
 The hostname should be converted to lower case. Saw this in branch-1; will 
 look at trunk and update the bug accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470646#comment-13470646
 ] 

Hadoop QA commented on HADOOP-8887:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548020/HADOOP-8887.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 2057 javac 
compiler warnings (more than the trunk's current 2053 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 8 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 15 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
dev-support/cmake-maven-ng-plugin hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1566//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1566//artifact/trunk/patchprocess/newPatchFindbugsWarningscmake-maven-ng-plugin.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1566//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1566//console

This message is automatically generated.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470660#comment-13470660
 ] 

Hadoop QA commented on HADOOP-8878:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548033/HADOOP-8878.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1567//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1567//console

This message is automatically generated.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.patch, HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued:
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 The above command failed because delegation token retrieval failed.
 Upon looking at the Kerberos logs it was determined that we tried to get the 
 ticket for a Kerberos principal with an upper case hostname, and that host 
 did not exist in Kerberos. We should convert the hostnames to lower case. 
 Take a look at HADOOP-7988, where the same fix was applied to a different 
 class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli

2012-10-05 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-:
---

 Summary: add the ability to suppress the deprecated warnings when 
using hadoop cli
 Key: HADOOP-
 URL: https://issues.apache.org/jira/browse/HADOOP-
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta


Something similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1.

Maybe we can introduce

HADOOP_DEPRECATED_WARN_SUPPRESS

which, if set to yes, will suppress the various warnings that are thrown.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any expection for invalid paths

2012-10-05 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470755#comment-13470755
 ] 

Harsh J commented on HADOOP-8437:
-

Would permissions not work?

I had tried the special char approach on your patch earlier, but Linux supports 
almost everything we could use as a reasonable test. If permission tweaks are 
not going to work, I'm fine with the 256-length test, with a comment added 
noting that whether it fails depends on the FS in use, and that it may pass on 
some filesystems.
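
For reference, a sketch of the test being discussed (the variable name and the 
255-byte per-component limit on ext3/ext4 are assumptions; other filesystems 
may accept longer names, so the test may pass there):

{code}
// Build a single path component longer than most local filesystems
// allow (255 bytes per component on ext3/ext4).
StringBuilder invalidPath = new StringBuilder();
for (int i = 0; i < 256; i++) {
  invalidPath.append("A");
}
// getLocalPathForWrite(invalidPath.toString(), conf) is then expected
// to throw, though this depends on the FS in use.
{code}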

 getLocalPathForWrite is not throwing any expection for invalid paths
 

 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-8437_1.patch, HADOOP-8437.patch


 call dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
 Here it will not throw any exception, but in earlier versions it used to throw.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8887:
-

Attachment: HADOOP-8887.002.patch

Get rid of some warnings (none of them were actual bugs).

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8889:
---

 Summary: Upgrade to Surefire 2.12.3
 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8889.txt

Surefire 2.12.3 has a couple improvements which are helpful for us. In 
particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8889:


Attachment: hadoop-8889.txt

 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8889:


Status: Patch Available  (was: Open)

 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8885) Need to add fs shim to use QFS

2012-10-05 Thread thilee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

thilee updated HADOOP-8885:
---

Attachment: HADOOP-8885-branch-1.patch

Attached is the patch for *branch-1*. 
In addition to applying the patch, you would need to do two things:

(1) To compile: you need to copy the qfs access JAR (qfs-version.jar) to the 
branch-1 _lib_ directory.
(2) To run: you need to have the directory of libqfs*.so in your 
{{LD_LIBRARY_PATH}}.

The tarball containing the JAR file and the shared libraries can be downloaded 
here (please download the one that matches your platform, untar it, and look 
inside the _lib_ directory): 
[https://github.com/quantcast/qfs/wiki/Binary-Distributions]

The JAR file should eventually be integrated into Apache Hadoop, under the 
_lib_ directory, as qfs-1.0.0.jar. 

 Need to add fs shim to use QFS
 --

 Key: HADOOP-8885
 URL: https://issues.apache.org/jira/browse/HADOOP-8885
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.0.2, 1.0.3, 0.23.3, 2.0.0-alpha, 2.0.1-alpha
Reporter: thilee
 Attachments: HADOOP-8885-branch-1.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Quantcast has released QFS 1.0 (http://quantcast.github.com/qfs), a C++ 
 distributed filesystem based on the Kosmos File System (KFS). QFS comes with 
 various feature, performance, and stability improvements over KFS.
 A hadoop 'fs' shim needs to be added to support QFS through 'qfs://' URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8888) add the ability to suppress the deprecated warnings when using hadoop cli

2012-10-05 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-:


Description: 
some this similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1

May be we can introduce

HADOOP_DEPRECATED_WARN_SUPPRESS

which if set to yes will suppress the various warnings that are thrown.

  was:
some this similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1

May we can introduce

HADOOP_DEPRECATED_WARN_SUPPRESS

which if set to yes will suppress the various warnings that are thrown.


 add the ability to suppress the deprecated warnings when using hadoop cli
 -

 Key: HADOOP-
 URL: https://issues.apache.org/jira/browse/HADOOP-
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta

 some this similar to what HADOOP_HOME_WARN_SUPPRESS is used for in branch-1
 May be we can introduce
 HADOOP_DEPRECATED_WARN_SUPPRESS
 which if set to yes will suppress the various warnings that are thrown.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470805#comment-13470805
 ] 

Hadoop QA commented on HADOOP-8889:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548070/hadoop-8889.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1568//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1568//console

This message is automatically generated.

 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) WebHdfsFileSystem and HftpFileSystem don't need delegation tokens

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Description: 
Parent JIRA to track the work of removing delegation tokens from these 
filesystems. 

This JIRA has evolved from the initial issue of these filesystems not stopping 
the DelegationTokenRenewer thread they were creating.

After further investigation, Daryn pointed out: "If you can get a token, you 
don't need a token!" Hence, these filesystems shouldn't use delegation tokens.

Evolution of the JIRA is listed below:
Update 2:
DelegationTokenRenewer is not required. The filesystems that are using it 
already have Krb tickets and do not need tokens. Remove DelegationTokenRenewer 
and all the related logic from WebHdfs and Hftp filesystems.

Update1:
DelegationTokenRenewer should be a Singleton - the instance and renewer threads 
should be created/started lazily. The filesystems using the renewer shouldn't 
need to explicitly start/stop the renewer, and should only register/de-register 
for token renewal.

Initial issue:
HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
thread when they are closed. 

  was:
Update 2:
DelegationTokenRenewer is not required. The filesystems that are using it 
already have Krb tickets and do not need tokens. Remove DelegationTokenRenewer 
and all the related logic from WebHdfs and Hftp filesystems.

Update1:
DelegationTokenRenewer should be a Singleton - the instance and renewer threads 
should be created/started lazily. The filesystems using the renewer shouldn't 
need to explicitly start/stop the renewer, and should only register/de-register 
for token renewal.

Original issue:
HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
thread when they are closed. 

 Issue Type: Improvement  (was: Bug)
Summary: WebHdfsFileSystem and HftpFileSystem don't need delegation 
tokens  (was: Remove DelegationTokenRenewer)

 WebHdfsFileSystem and HftpFileSystem don't need delegation tokens
 -

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch, 
 hadoop-8852-v1.patch


 Parent JIRA to track the work of removing delegation tokens from these 
 filesystems. 
 This JIRA has evolved from the initial issue of these filesystems not 
 stopping the DelegationTokenRenewer thread they were creating.
 After further investigation, Daryn pointed out: "If you can get a token, you 
 don't need a token!" Hence, these filesystems shouldn't use delegation tokens.
 Evolution of the JIRA is listed below:
 Update 2:
 DelegationTokenRenewer is not required. The filesystems that are using it 
 already have Krb tickets and do not need tokens. Remove 
 DelegationTokenRenewer and all the related logic from WebHdfs and Hftp 
 filesystems.
 Update1:
 DelegationTokenRenewer should be a Singleton - the instance and renewer threads 
 should be created/started lazily. The filesystems using the renewer shouldn't 
 need to explicitly start/stop the renewer, and should only register/de-register 
 for token renewal.
 Initial issue:
 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 
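
A minimal sketch of the lazy-singleton shape described in Update 1 (the class 
name and body are illustrative, not the actual DelegationTokenRenewer API):

{code}
public class LazyRenewer extends Thread {
  private static LazyRenewer instance;

  /** Create and start the renewer thread only on first use. */
  public static synchronized LazyRenewer getInstance() {
    if (instance == null) {
      instance = new LazyRenewer();
      instance.setDaemon(true);  // never keeps the JVM alive
      instance.start();
    }
    return instance;
  }

  @Override
  public void run() {
    // the token renewal loop would live here
  }
}
{code}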

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13470809#comment-13470809
 ] 

Aaron T. Myers commented on HADOOP-8889:


+1

 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8890) Remove unused TokenRenewer implementation from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-8890:


 Summary: Remove unused TokenRenewer implementation from 
WebHdfsFileSystem and HftpFileSystem
 Key: HADOOP-8890
 URL: https://issues.apache.org/jira/browse/HADOOP-8890
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla


WebHdfsFileSystem and HftpFileSystem implement TokenRenewer without using it 
anywhere.

As we are in the process of migrating them to not use tokens, this code should 
be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-8891:


 Summary: Remove DelegationTokenRenewer and its logic from 
WebHdfsFileSystem and HftpFileSystem
 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Karthik Kambatla




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8892) WebHdfsFileSystem shouldn't use delegation tokens

2012-10-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-8892:


 Summary: WebHdfsFileSystem shouldn't use delegation tokens
 Key: HADOOP-8892
 URL: https://issues.apache.org/jira/browse/HADOOP-8892
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8890) Remove unused TokenRenewer implementation from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8890:


Assignee: Karthik Kambatla

 Remove unused TokenRenewer implementation from WebHdfsFileSystem and 
 HftpFileSystem
 ---

 Key: HADOOP-8890
 URL: https://issues.apache.org/jira/browse/HADOOP-8890
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla

 WebHdfsFileSystem and HftpFileSystem implement TokenRenewer without using it 
 anywhere.
 As we are in the process of migrating them to not use tokens, this code 
 should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8891:


Assignee: Karthik Kambatla

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8891:
-

Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-8852)

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8891:
-

  Description: Moved the HDFS part of HADOOP-8852 to HDFS-4009 along 
with other sub-tasks. Created this to track the removal of 
DelegationTokenRenewer alone.
Affects Version/s: 2.0.1-alpha

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla

 Moved the HDFS part of HADOOP-8852 to HDFS-4009 along with other sub-tasks. 
 Created this to track the removal of DelegationTokenRenewer alone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8885) Need to add fs shim to use QFS

2012-10-05 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-8885:
--

Assignee: thilee

 Need to add fs shim to use QFS
 --

 Key: HADOOP-8885
 URL: https://issues.apache.org/jira/browse/HADOOP-8885
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.0.2, 1.0.3, 0.23.3, 2.0.0-alpha, 2.0.1-alpha
Reporter: thilee
Assignee: thilee
 Attachments: HADOOP-8885-branch-1.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 Quantcast has released QFS 1.0 (http://quantcast.github.com/qfs), a C++ 
 distributed filesystem based on Kosmos File System(KFS). QFS comes with 
 various feature, performance, and stability improvements over KFS.
 A hadoop 'fs' shim needs be added to support QFS through 'qfs://' URIs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8889:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and trunk, thanks for the review.

 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8894) GenericTestUtils.waitFor should dump thread stacks on timeout

2012-10-05 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8894:
---

 Summary: GenericTestUtils.waitFor should dump thread stacks on 
timeout
 Key: HADOOP-8894
 URL: https://issues.apache.org/jira/browse/HADOOP-8894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Many tests use this utility to wait for a condition to become true. In the 
event that it times out, we should dump all the thread stack traces, in case 
the timeout was due to a deadlock. This should make it easier to debug 
scenarios like HDFS-4001.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8894) GenericTestUtils.waitFor should dump thread stacks on timeout

2012-10-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8894:


Attachment: hadoop-8894.txt

Attached patch borrows the code from TimedOutTestListener so that the timeout 
exception contains all the thread stacks as well as potential deadlock info.

No new tests included because this is itself test code.
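
A minimal sketch of the thread-stack dump itself (the class and method names 
are illustrative; the actual patch reuses TimedOutTestListener):

{code}
import java.util.Map;

public class StackDumpSketch {
  /** Render every live thread's stack as a timeout diagnostic. */
  public static String dumpAllThreadStacks() {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<Thread, StackTraceElement[]> e
        : Thread.getAllStackTraces().entrySet()) {
      sb.append(e.getKey().getName()).append(":\n");
      for (StackTraceElement frame : e.getValue()) {
        sb.append("  at ").append(frame).append('\n');
      }
    }
    return sb.toString();
  }
}
{code}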

 GenericTestUtils.waitFor should dump thread stacks on timeout
 -

 Key: HADOOP-8894
 URL: https://issues.apache.org/jira/browse/HADOOP-8894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8894.txt


 Many tests use this utility to wait for a condition to become true. In the 
 event that it times out, we should dump all the thread stack traces, in case 
 the timeout was due to a deadlock. This should make it easier to debug 
 scenarios like HDFS-4001.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8894) GenericTestUtils.waitFor should dump thread stacks on timeout

2012-10-05 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8894:


Status: Patch Available  (was: Open)

Submitting patch to Jenkins. I did a manual test by changing a waitFor() call 
to only wait 1ms, and verified that the exception had the expected info.

 GenericTestUtils.waitFor should dump thread stacks on timeout
 -

 Key: HADOOP-8894
 URL: https://issues.apache.org/jira/browse/HADOOP-8894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8894.txt


 Many tests use this utility to wait for a condition to become true. In the 
 event that it times out, we should dump all the thread stack traces, in case 
 the timeout was due to a deadlock. This should make it easier to debug 
 scenarios like HDFS-4001.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6616) Improve documentation for rack awareness

2012-10-05 Thread Adam Faris (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Faris updated HADOOP-6616:
---

Attachment: hadoop-6616.patch

Here's a documentation update for cluster_setup.xml.  Inside the update one 
will find several topology script examples, a link to the NetworkTopology.java 
file in Apache's subversion tree, and an expanded explanation of how rack 
awareness works. 
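
As a complement to the script examples, rack awareness can also be supplied as 
a Java DNSToSwitchMapping implementation. A minimal sketch (the class name and 
mapping rule are made up for illustration, assuming the branch-1 interface, 
which declares only resolve()):

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.net.DNSToSwitchMapping;

public class StaticRackMapping implements DNSToSwitchMapping {
  /** Map each host to a rack path; unknown hosts go to the default rack. */
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<String>(names.size());
    for (String name : names) {
      racks.add(name.startsWith("rack1-") ? "/rack1" : "/default-rack");
    }
    return racks;
  }
}
{code}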

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
  Labels: newbie
 Attachments: hadoop-6616.patch


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8591) TestZKFailoverController tests time out

2012-10-05 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HADOOP-8591.


Resolution: Invalid
  Assignee: Aaron T. Myers

I looked into this today and realized that it was a problem with a particular 
Jenkins slave. Whenever a pre-commit test or nightly build was run on hadoop1, 
it would fail. Whenever it was run anywhere else, it would pass. When I logged 
in to hadoop1, I noticed that there were a bunch of pre-commit processes and 
even a nightly build that had been running for weeks or months. After killing 
these zombie processes, TestZKFailoverController now passes reliably on hadoop1.

 TestZKFailoverController tests time out
 ---

 Key: HADOOP-8591
 URL: https://issues.apache.org/jira/browse/HADOOP-8591
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover, ha, test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Aaron T. Myers
  Labels: test-fail

 Looks like the TestZKFailoverController timeout needs to be bumped.
 {noformat}
 java.lang.Exception: test timed out after 30000 milliseconds
   at java.lang.Object.wait(Native Method)
   at 
 org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:460)
   at 
 org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:648)
   at 
 org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:58)
   at 
 org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:593)
   at 
 org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:590)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1334)
   at 
 org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:590)
   at 
 org.apache.hadoop.ha.TestZKFailoverController.testOneOfEverything(TestZKFailoverController.java:575)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8887) Use a Maven plugin to build the native code using CMake

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470853#comment-13470853
 ] 

Hadoop QA commented on HADOOP-8887:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548065/HADOOP-8887.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 2057 javac 
compiler warnings (more than the trunk's current 2053 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 8 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 10 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
dev-support/cmake-maven-ng-plugin hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1569//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1569//artifact/trunk/patchprocess/newPatchFindbugsWarningscmake-maven-ng-plugin.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1569//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1569//console

This message is automatically generated.

 Use a Maven plugin to build the native code using CMake
 ---

 Key: HADOOP-8887
 URL: https://issues.apache.org/jira/browse/HADOOP-8887
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8887.001.patch, HADOOP-8887.002.patch


 Currently, we build the native code using ant-build invocations.  Although 
 this works, it has some limitations:
 * compiler warning messages are hidden, which can cause people to check in 
 code with warnings unintentionally
 * there is no framework for running native unit tests; instead, we use ad-hoc 
 constructs involving shell scripts
 * the antrun code is very platform specific
 * there is no way to run a specific native unit test
 * it's more or less impossible for scripts like test-patch.sh to separate a 
 native test failing from the build itself failing (no files are created) or 
 to enumerate which native tests failed.
 Using a native Maven plugin would overcome these limitations.
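
 As a rough illustration of the approach, a module's pom might bind such a 
 plugin like this (coordinates, goal, and parameter names are hypothetical, 
 not the actual cmake-maven-ng-plugin configuration):

{code}
<!-- Hypothetical wiring, for illustration only. -->
<plugin>
  <groupId>org.apache.hadoop.cmake</groupId>
  <artifactId>cmake-maven-ng-plugin</artifactId>
  <executions>
    <execution>
      <id>cmake-compile</id>
      <phase>compile</phase>
      <goals><goal>compile</goal></goals>
      <configuration>
        <source>${basedir}/src/main/native</source>
        <output>${project.build.directory}/native</output>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}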

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470855#comment-13470855
 ] 

Hudson commented on HADOOP-8889:


Integrated in Hadoop-Hdfs-trunk-Commit #2881 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2881/])
HADOOP-8889. Upgrade to Surefire 2.12.3. Contributed by Todd Lipcon. 
(Revision 1394881)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394881
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.
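
 Concretely, the commit presumably amounts to bumping the Surefire version 
 managed in hadoop-project/pom.xml, along these lines:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.12.3</version>
</plugin>
{code}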

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470856#comment-13470856
 ] 

Hudson commented on HADOOP-8889:


Integrated in Hadoop-Common-trunk-Commit #2819 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2819/])
HADOOP-8889. Upgrade to Surefire 2.12.3. Contributed by Todd Lipcon. 
(Revision 1394881)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394881
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8889) Upgrade to Surefire 2.12.3

2012-10-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470860#comment-13470860
 ] 

Hudson commented on HADOOP-8889:


Integrated in Hadoop-Mapreduce-trunk-Commit #2843 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2843/])
HADOOP-8889. Upgrade to Surefire 2.12.3. Contributed by Todd Lipcon. 
(Revision 1394881)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394881
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml


 Upgrade to Surefire 2.12.3
 --

 Key: HADOOP-8889
 URL: https://issues.apache.org/jira/browse/HADOOP-8889
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: hadoop-8889.txt


 Surefire 2.12.3 has a couple improvements which are helpful for us. In 
 particular, it fixes http://jira.codehaus.org/browse/SUREFIRE-817 which has 
 been aggravating in the past.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2012-10-05 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470862#comment-13470862
 ] 

Joep Rottinghuis commented on HADOOP-6616:
--

Nice, Adam. Nit in the second-to-last paragraph:
{noformat}
If neither <code>topology.script.file.name</code> or 
<code>topology.script.file.name</code> is 
not set, the rack id '/default-rack' is returned for any passed IP address. 
{noformat}
"Neither ... not" is a double negative.
You mention the property topology.script.file.name twice. Did you mean the 
following?
{noformat}
If neither <code>topology.script.file.name</code> nor 
<code>topology.node.switch.mapping.impl</code> is set, the rack id 
'/default-rack' is returned for any passed IP address. 
{noformat}



 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
  Labels: newbie
 Attachments: hadoop-6616.patch


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470865#comment-13470865
 ] 

Anthony Rojas commented on HADOOP-8884:
---

Thanks for the feedback. I agree with both recommendations; I'll consolidate 
the feedback and re-submit an updated patch.
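
For context, the change under discussion is essentially a one-line log-level 
bump. A paraphrased sketch (the class name and surrounding structure are 
illustrative, not the actual NativeCodeLoader source):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/** Paraphrased sketch of the proposed change, not the actual patch. */
public class NativeLoadSketch {
  private static final Log LOG = LogFactory.getLog(NativeLoadSketch.class);
  private static boolean nativeCodeLoaded = false;

  static {
    try {
      System.loadLibrary("hadoop");
      nativeCodeLoaded = true;
    } catch (Throwable t) {
      // Previously LOG.debug(...): promote to warn so a missing or
      // incompatible native library is visible without debug logging.
      LOG.warn("Failed to load native-hadoop with error: " + t);
    }
  }
}
{code}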


 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending to change the following debug message and promote it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--

Status: Open  (was: Patch Available)

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch


 Recommending to change the following debug message and promote it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8894) GenericTestUtils.waitFor should dump thread stacks on timeout

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470866#comment-13470866
 ] 

Hadoop QA commented on HADOOP-8894:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548083/hadoop-8894.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1570//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1570//console

This message is automatically generated.

 GenericTestUtils.waitFor should dump thread stacks on timeout
 -

 Key: HADOOP-8894
 URL: https://issues.apache.org/jira/browse/HADOOP-8894
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8894.txt


 Many tests use this utility to wait for a condition to become true. In the 
 event that it times out, we should dump all the thread stack traces, in case 
 the timeout was due to a deadlock. This should make it easier to debug 
 scenarios like HDFS-4001.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--

Attachment: HADOOP-8884-v2.patch

Uploading version 2 of the patch, consolidating feedback from Suresh and ATM.

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch, HADOOP-8884-v2.patch


 Recommending to change the following debug message and promote it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Anthony Rojas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Rojas updated HADOOP-8884:
--

Status: Patch Available  (was: Open)

Version 2 of the patch, consolidating feedback from Suresh and ATM.

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch, HADOOP-8884-v2.patch


 Recommending to change the following debug message and promote it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2012-10-05 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470873#comment-13470873
 ] 

Joep Rottinghuis commented on HADOOP-6616:
--

Perhaps you can add a few words about <code>topology.script.number.args</code>, 
which IIRC defaults to 100 and drives the number of host IPs passed to the 
script in one go, to allow the script to do some internal caching. When set to 
1, a process is spawned to invoke the script for each host.

You describe how rack awareness works. Would it be useful to add a sentence 
or two about why it is used (i.e., block placement uses rack awareness for 
fault tolerance)?
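
For readers following along, the two properties under discussion would be set 
in core-site.xml along these lines (the script path and value shown are only 
examples):

{code}
<property>
  <name>topology.script.file.name</name>
  <value>/etc/hadoop/topology.sh</value>
</property>
<property>
  <name>topology.script.number.args</name>
  <!-- Maximum number of host IPs handed to the script per invocation. -->
  <value>100</value>
</property>
{code}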

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
  Labels: newbie
 Attachments: hadoop-6616.patch


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8427) Convert Forrest docs to APT

2012-10-05 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HADOOP-8427:
--

Attachment: hadoop8427.txt

Convert commands_manual.html and file_system_shell.html to APT. Remove a bunch 
of out-of-date documentation that no longer correctly describes Hadoop 2.0.

 Convert Forrest docs to APT
 ---

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Attachments: hadoop8427.txt


 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8891:
-

Status: Patch Available  (was: Open)

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8891.patch


 Moved the HDFS part of HADOOP-8852 to HDFS-4009 along with other sub-tasks. 
 Created this to track the removal of DelegationTokenRenewer alone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8891:
-

Attachment: HADOOP-8891.patch

Uploading a patch that removes DelegationTokenRenewer and its use in the 
WebHdfs, Hftp, and HttpFS filesystems.

Ran all *WebHdfs*, *Hftp*, and *HttpFS* tests - the patch doesn't introduce any 
new test failures.

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8891.patch


 Moved the HDFS part of HADOOP-8852 to HDFS-4009 along with other sub-tasks. 
 Created this to track the removal of DelegationTokenRenewer alone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8895) TokenRenewer should be an interface, it is currently a fully abstract class

2012-10-05 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-8895:


 Summary: TokenRenewer should be an interface, it is currently a 
fully abstract class
 Key: HADOOP-8895
 URL: https://issues.apache.org/jira/browse/HADOOP-8895
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor


TokenRenewer is a fully abstract class. Making it an interface would allow 
classes that already extend another class to also implement it.
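
A minimal sketch of the proposed shape (the method set is paraphrased from the 
existing abstract class; treat the signatures as illustrative):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

/** Illustrative sketch: the abstract methods carried over into an interface. */
public interface TokenRenewer {
  boolean handleKind(Text kind);
  boolean isManaged(Token<?> token) throws IOException;
  long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException;
  void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException;
}
{code}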

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8884) DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470907#comment-13470907
 ] 

Hadoop QA commented on HADOOP-8884:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548090/HADOOP-8884-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1571//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1571//console

This message is automatically generated.

 DEBUG should be WARN for DEBUG util.NativeCodeLoader: Failed to load 
 native-hadoop with error: java.lang.UnsatisfiedLinkError
 -

 Key: HADOOP-8884
 URL: https://issues.apache.org/jira/browse/HADOOP-8884
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.0.1-alpha
Reporter: Anthony Rojas
Assignee: Anthony Rojas
 Attachments: HADOOP-8884.patch, HADOOP-8884-v2.patch


 Recommending to change the following debug message and promote it to a 
 warning instead:
 12/07/02 18:41:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop 
 with error: java.lang.UnsatisfiedLinkError: 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version 
 `GLIBC_2.6' not found (required by 
 /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and HftpFileSystem

2012-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13470925#comment-13470925
 ] 

Hadoop QA commented on HADOOP-8891:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548097/HADOOP-8891.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1572//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1572//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1572//console

This message is automatically generated.

 Remove DelegationTokenRenewer and its logic from WebHdfsFileSystem and 
 HftpFileSystem
 -

 Key: HADOOP-8891
 URL: https://issues.apache.org/jira/browse/HADOOP-8891
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: HADOOP-8891.patch


 Moved the HDFS part of HADOOP-8852 to HDFS-4009 along with other sub-tasks. 
 Created this to track the removal of DelegationTokenRenewer alone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira