[jira] [Created] (HADOOP-8848) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-8848:
-

 Summary: hadoop-mapreduce-client-core fails compilation in Eclipse 
due to missing Avro-generated classes
 Key: HADOOP-8848
 URL: https://issues.apache.org/jira/browse/HADOOP-8848
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
Eclipse's classpath.  This causes compilation errors for anything that depends 
on those classes.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8848) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8848:
--

Attachment: HADOOP-8848.patch

 hadoop-mapreduce-client-core fails compilation in Eclipse due to missing 
 Avro-generated classes
 ---

 Key: HADOOP-8848
 URL: https://issues.apache.org/jira/browse/HADOOP-8848
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8848.patch


 After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
 the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
 Eclipse's classpath.  This causes compilation errors for anything that 
 depends on those classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8848) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463581#comment-13463581
 ] 

Chris Nauroth commented on HADOOP-8848:
---

The attached patch fixes the problem in the module's pom.xml by adding the 
Avro-generated source folder using build-helper-maven-plugin.  Other modules 
have used a similar strategy.  (See 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml.)
  With this patch in place, it's possible to do a fresh import of all of 
hadoop-common trunk into Eclipse and see it compile successfully immediately.

Note that this problem did not harm the typical Maven build.  It was just a 
problem for fresh project imports into Eclipse.
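
For reference, the relevant stanza is roughly the following minimal sketch; the 
generated-sources path here is an assumption, and the module's actual pom.xml is 
authoritative:
{code}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>add-avro-source</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>add-source</goal>
      </goals>
      <configuration>
        <sources>
          <!-- assumed Avro output directory; see the module's pom.xml for the real path -->
          <source>${project.build.directory}/generated-sources/avro</source>
        </sources>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}
Declaring the generated folder as a source root in the POM itself is what lets 
m2e put it on the Eclipse classpath at import time, without manual tweaking.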


 hadoop-mapreduce-client-core fails compilation in Eclipse due to missing 
 Avro-generated classes
 ---

 Key: HADOOP-8848
 URL: https://issues.apache.org/jira/browse/HADOOP-8848
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8848.patch


 After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
 the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
 Eclipse's classpath.  This causes compilation errors for anything that 
 depends on those classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8848) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8848:
--

Status: Patch Available  (was: Open)

 hadoop-mapreduce-client-core fails compilation in Eclipse due to missing 
 Avro-generated classes
 ---

 Key: HADOOP-8848
 URL: https://issues.apache.org/jira/browse/HADOOP-8848
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8848.patch


 After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
 the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
 Eclipse's classpath.  This causes compilation errors for anything that 
 depends on those classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8841:
--

Status: Patch Available  (was: Open)

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add description about the flags in the document for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8848) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463588#comment-13463588
 ] 

Hadoop QA commented on HADOOP-8848:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546648/HADOOP-8848.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1525//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1525//console

This message is automatically generated.

 hadoop-mapreduce-client-core fails compilation in Eclipse due to missing 
 Avro-generated classes
 ---

 Key: HADOOP-8848
 URL: https://issues.apache.org/jira/browse/HADOOP-8848
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8848.patch


 After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
 the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
 Eclipse's classpath.  This causes compilation errors for anything that 
 depends on those classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463592#comment-13463592
 ] 

Hemanth Yamijala commented on HADOOP-8841:
--

I think this is a duplicate of HADOOP-8808. I am a bit confused about what our 
positioning of FsShell documentation is, as I believe we are no longer 
publishing the Forrest-style docs?

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add description about the flags in the document for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8845) When looking for parent paths info, globStatus must filter out non-directory elements to avoid an AccessControlException

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463603#comment-13463603
 ] 

Hadoop QA commented on HADOOP-8845:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546641/HADOOP-8845.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1524//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1524//console

This message is automatically generated.

 When looking for parent paths info, globStatus must filter out non-directory 
 elements to avoid an AccessControlException
 

 Key: HADOOP-8845
 URL: https://issues.apache.org/jira/browse/HADOOP-8845
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
  Labels: glob
 Attachments: HADOOP-8845.patch, HADOOP-8845.patch, HADOOP-8845.patch


 A brief description from my colleague Stephen Fritz who helped discover it:
 {code}
 [root@node1 ~]# su - hdfs
 -bash-4.1$ echo My Test String > testfile -- just a text file, for testing below
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir -- create a directory
 -bash-4.1$ hadoop dfs -mkdir /tmp/testdir/1 -- create a subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/1/testfile -- put the test 
 file in the subdirectory
 -bash-4.1$ hadoop dfs -put testfile /tmp/testdir/testfile -- put the test 
 file in the directory
 -bash-4.1$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 All files are where we expect them...OK, let's try reading
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- success!
 -bash-4.1$ hadoop dfs -cat /tmp/testdir/*/testfile
 My Test String -- success!  
 Note that we used an '*' in the cat command, and it correctly found the 
 subdirectory '/tmp/testdir/1' and ignored the regular file 
 '/tmp/testdir/testfile'
 -bash-4.1$ exit
 logout
 [root@node1 ~]# su - testuser -- let's try it as a different user:
 [testuser@node1 ~]$ hadoop dfs -lsr /tmp/testdir
 drwxr-xr-x   - hdfs hadoop  0 2012-09-25 06:52 /tmp/testdir/1
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/1/testfile
 -rw-r--r--   3 hdfs hadoop 15 2012-09-25 06:52 /tmp/testdir/testfile
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/testfile
 My Test String -- good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/1/testfile
 My Test String -- so far so good
 [testuser@node1 ~]$ hadoop dfs -cat /tmp/testdir/*/testfile
 cat: org.apache.hadoop.security.AccessControlException: Permission denied: 
 user=testuser, access=EXECUTE, 
 inode=/tmp/testdir/testfile:hdfs:hadoop:-rw-r--r--
 {code}
 Essentially, we hit an ACE with access=EXECUTE on the file /tmp/testdir/testfile 
 because we tried to access /tmp/testdir/testfile/testfile as a path. This 
 shouldn't happen, as testfile is a file and not a parent path to be 
 looked up.
 {code}
 2012-09-25 07:24:27,406 INFO org.apache.hadoop.ipc.Server: IPC Server
 handler 2 on 8020, call getFileInfo(/tmp/testdir/testfile/testfile)
 {code}
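 For illustration, the guard being proposed amounts to something like the 
 following sketch (names are illustrative, not the actual globStatus internals):
 {code}
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.List;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // When a matched path will serve as the parent of the next glob component,
 // keep only directories, so we never ask the NameNode for children of a file.
 public class GlobParentFilter {
   static List<FileStatus> directoryParents(FileSystem fs, Path glob)
       throws IOException {
     List<FileStatus> parents = new ArrayList<FileStatus>();
     FileStatus[] matches = fs.globStatus(glob);  // null when nothing matches
     if (matches != null) {
       for (FileStatus stat : matches) {
         if (stat.isDirectory()) { // keep /tmp/testdir/1, drop /tmp/testdir/testfile
           parents.add(stat);
         }
       }
     }
     return parents;
   }
 }
 {code}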
 Surprisingly, the superuser avoids hitting the error, as a result of 
 

[jira] [Commented] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463607#comment-13463607
 ] 

Hadoop QA commented on HADOOP-8841:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546650/HADOOP-8841.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1526//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1526//console

This message is automatically generated.

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add description about the flags in the document for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463638#comment-13463638
 ] 

Steve Loughran commented on HADOOP-8847:


Bikas, you know that the Java untar doesn't set FS permissions? Even if that's 
considered unimportant, the big worry I have is over long filenames.

The Ant tar/untar logic doesn't do perms either, but does handle gnu & posix 
extensions:
[ http://svn.apache.org/viewvc/ant/core/trunk/src/main/org/apache/tools/tar/ ]
You can pick this up via Apache Compress: [ http://commons.apache.org/compress/ ] 
- I'm not sure that version is up to date w/ Posix patches.

You need tests to verify that:
# filenames > 140 chars can be untarred (tar --format=gnu)
# LFNs in old gnu format are handled (tar --format=oldgnu)
# long filenames in a tar created w/ posix (tar --format=posix)

These files could all be created on a Linux box and added to svn, so that the 
tests on Windows will be consistent.

Without tests showing that long filenames are handled, switching to a pure Java 
API will not be backwards compatible and runs a risk of things breaking. Sun's 
implementation cannot handle such files.
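
For concreteness, a pure-Java untar over Commons Compress would look roughly 
like the sketch below (it assumes the commons-compress dependency, and note it 
restores no permission bits, which is exactly the compatibility gap raised 
above):
{code}
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class JavaUntar {
  // Extract a .tgz archive; long-filename handling depends entirely on how
  // well TarArchiveInputStream copes with gnu/oldgnu/posix entries.
  public static void unTar(File tarGz, File destDir) throws IOException {
    TarArchiveInputStream in = new TarArchiveInputStream(
        new GZIPInputStream(new BufferedInputStream(new FileInputStream(tarGz))));
    try {
      TarArchiveEntry entry;
      while ((entry = in.getNextTarEntry()) != null) {
        File target = new File(destDir, entry.getName());
        if (entry.isDirectory()) {
          target.mkdirs();
        } else {
          target.getParentFile().mkdirs();
          OutputStream out = new FileOutputStream(target);
          try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
              out.write(buf, 0, n);
            }
          } finally {
            out.close();
          }
        }
      }
    } finally {
      in.close();
    }
  }
}
{code}
The three tar files from the list above would then be the fixtures fed to this 
code in the unit tests.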


 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463639#comment-13463639
 ] 

Steve Loughran commented on HADOOP-8847:


Correction - you aren't using any Sun Java APIs; I misread that. Commons 
Compress 1.1+ claims to handle Posix [ 
https://issues.apache.org/jira/browse/COMPRESS-110 ]; Ant has handled LFNs for a 
long time. 

All that is needed are tests to verify that LFNs are handled.

Ignoring file permissions may be backwards incompatible, but I'm not sure if 
anyone was actually using it. It's probably best to tag it as such just to 
avoid surprises.

-steve

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns tar utility to do the work. Tar may not be 
 present on all platforms by default eg. Windows. So changing this to use JAVA 
 API's would help make it more cross-platform. FileUtil.unZip() uses the same 
 approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-09-26 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-8849:
--

 Summary: FileUtil#fullyDelete should grant the target directories 
+rwx permissions before trying to delete them
 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor


Two improvements are suggested for the implementation of the methods 
org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
 
1) We should grant +rwx permissions to the target directories before trying to 
delete them.
The mentioned methods fail to delete directories that don't have read or execute 
permissions.
The actual problem appears if an hdfs-related test times out (with a short 
timeout like tens of seconds) and the forked test process is killed: some 
directories are left on disk that are not readable and/or executable. This 
prevents the next tests from executing properly, because these directories 
cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
So, it is recommended to grant read, write, and execute permissions to the 
directories whose content is to be deleted.

2) We shouldn't rely upon the File#delete() return value; use File#exists() 
instead. 
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but this 
is not reliable, because File#delete() returns true only if the file was deleted 
as a result of that particular #delete() invocation. E.g., in the following code
{code}
if (f.exists()) { // 1
  return f.delete(); // 2
}
{code}
if the file f is deleted by another thread or process between calls 1 and 2, 
this fragment returns false even though f no longer exists when the method 
returns.
So it is better to write:
{code}
if (f.exists()) {
  f.delete();
  return !f.exists();
}
{code}
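
Putting the two suggestions together, a sketch of the intended behavior (names 
are illustrative, not the actual patch; symlink handling is ignored here):
{code}
import java.io.File;

public class FullyDeleteSketch {
  static boolean deleteImpl(File f) {
    if (f == null || !f.exists()) {
      return true;                 // already gone, possibly deleted concurrently
    }
    f.setReadable(true);           // 1) grant rwx so children can be listed
    f.setWritable(true);           //    and removed even after a killed test
    f.setExecutable(true);         //    left the directory unreadable
    File[] children = f.listFiles();
    if (children != null) {        // null for plain files
      for (File child : children) {
        deleteImpl(child);
      }
    }
    f.delete();
    return !f.exists();            // 2) don't trust delete()'s return value
  }
}
{code}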

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463762#comment-13463762
 ] 

Hudson commented on HADOOP-8822:


Integrated in Hadoop-Hdfs-0.23-Build #386 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/386/])
svn merge -c 1390133 FIXES: HADOOP-8822. relnotes.py was deleted post 
mavenization (bobby) (Revision 1390142)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390142
Files : 
* /hadoop/common/branches/branch-0.23/dev-support/relnotes.py
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt


 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463780#comment-13463780
 ] 

Hudson commented on HADOOP-8794:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-8794. Rename YARN_HOME to HADOOP_YARN_HOME. Contributed by Vinod K 
V. (Revision 1390221)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh


 Modifiy bin/hadoop to point to HADOOP_YARN_HOME
 ---

 Key: HADOOP-8794
 URL: https://issues.apache.org/jira/browse/HADOOP-8794
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, 2.0.1-alpha
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8794-20120912.txt, HADOOP-8794-20120923.txt


 YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. bin/hadoop script also needs to 
 do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463777#comment-13463777
 ] 

Hudson commented on HADOOP-8822:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-8822. relnotes.py was deleted post mavenization (bobby) (Revision 
1390133)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390133
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-3957) Fix javac warnings in DistCp and the corresponding tests

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463782#comment-13463782
 ] 

Hudson commented on HADOOP-3957:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-3957. Change MutableQuantiles to use a shared thread for rolling 
over metrics. Contributed by Andrew Wang. (Revision 1390210)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390210
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java


 Fix javac warnings in DistCp and the corresponding tests
 

 Key: HADOOP-3957
 URL: https://issues.apache.org/jira/browse/HADOOP-3957
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.19.0

 Attachments: 3957_20080814.patch


 There are a few javac warning in DistCp and TestCopyFiles.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8791) rm Only deletes non empty directory and files.

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463781#comment-13463781
 ] 

Hudson commented on HADOOP-8791:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-8791. Fix rm command documentation to indicate it deletes files and 
not directories. Contributed by Jing Zhao. (Revision 1390109)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390109
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/file_system_shell.xml


 rm Only deletes non empty directory and files.
 

 Key: HADOOP-8791
 URL: https://issues.apache.org/jira/browse/HADOOP-8791
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3, 3.0.0
Reporter: Bertrand Dechoux
Assignee: Jing Zhao
  Labels: documentation
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8791-branch-1.001.patch, 
 HADOOP-8791-branch-1.patch, HADOOP-8791-branch-1.patch, 
 HADOOP-8791-trunk.001.patch, HADOOP-8791-trunk.patch, HADOOP-8791-trunk.patch


 The documentation (1.0.3) is describing the opposite of what rm does.
 It should be: "Only delete files and empty directories."
 With regard to files, the size of the file should not matter, should it?
 Or I am totally misunderstanding the semantics of this command and I am not 
 the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8840) Fix the test-patch colorizer to cover all sorts of +1 lines.

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463773#comment-13463773
 ] 

Hudson commented on HADOOP-8840:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-8840. Fix the test-patch colorizer to cover all sorts of +1 lines. 
(Harsh J via bobby) (Revision 1390129)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390129
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Fix the test-patch colorizer to cover all sorts of +1 lines.
 

 Key: HADOOP-8840
 URL: https://issues.apache.org/jira/browse/HADOOP-8840
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 3.0.0

 Attachments: HADOOP-8840.patch


 As noticed by Jason on HADOOP-8838, I missed some of the entries needed to be 
 colorized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8838) Colorize the test-patch output sent to JIRA

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463778#comment-13463778
 ] 

Hudson commented on HADOOP-8838:


Integrated in Hadoop-Hdfs-trunk #1177 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1177/])
HADOOP-8838. Colorize the test-patch output sent to JIRA (Harsh J via 
bobby) (Revision 1389875)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1389875
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Colorize the test-patch output sent to JIRA
 ---

 Key: HADOOP-8838
 URL: https://issues.apache.org/jira/browse/HADOOP-8838
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8838.patch


 It would be helpful to mark the -1s in red and +1s in green. Helps avoid 
 missing stuff like findbugs warnings, etc., we've been bitten by. Also helps 
 run through the results faster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: HADOOP-8849-vs-trunk.patch

Attaching the suggested patch.
The test TestFileUtil is modified accordingly.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying to 
 delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears if an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories are left on disk that are not readable and/or executable. This 
 prevents the next tests from executing properly, because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it is recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) We shouldn't rely upon the File#delete() return value; use File#exists() 
 instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable, because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 {code}
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 {code}
 if the file f is deleted by another thread or process between calls 1 and 2, 
 this fragment returns false even though f no longer exists when the method 
 returns.
 So it is better to write:
 {code}
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8840) Fix the test-patch colorizer to cover all sorts of +1 lines.

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463821#comment-13463821
 ] 

Hudson commented on HADOOP-8840:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-8840. Fix the test-patch colorizer to cover all sorts of +1 lines. 
(Harsh J via bobby) (Revision 1390129)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390129
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Fix the test-patch colorizer to cover all sorts of +1 lines.
 

 Key: HADOOP-8840
 URL: https://issues.apache.org/jira/browse/HADOOP-8840
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Harsh J
Assignee: Harsh J
 Fix For: 3.0.0

 Attachments: HADOOP-8840.patch


 As noticed by Jason on HADOOP-8838, I missed some of the entries needed to be 
 colorized.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8822) relnotes.py was deleted post mavenization

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463826#comment-13463826
 ] 

Hudson commented on HADOOP-8822:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-8822. relnotes.py was deleted post mavenization (bobby) (Revision 
1390133)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390133
Files : 
* /hadoop/common/trunk/dev-support/relnotes.py
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 relnotes.py was deleted post mavenization
 -

 Key: HADOOP-8822
 URL: https://issues.apache.org/jira/browse/HADOOP-8822
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt, 
 HADOOP-8822.txt, HADOOP-8822.txt, HADOOP-8822.txt


 relnotes.py was removed post mavenization.  It needs to be added back in so 
 we can generate release notes, and it should be updated to deal with YARN and 
 the separate release notes files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8838) Colorize the test-patch output sent to JIRA

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463827#comment-13463827
 ] 

Hudson commented on HADOOP-8838:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-8838. Colorize the test-patch output sent to JIRA (Harsh J via 
bobby) (Revision 1389875)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1389875
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Colorize the test-patch output sent to JIRA
 ---

 Key: HADOOP-8838
 URL: https://issues.apache.org/jira/browse/HADOOP-8838
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8838.patch


 It would be helpful to mark the -1s in red and +1s in green. Helps avoid 
 missing stuff like findbugs warnings, etc., we've been bitten by. Also helps 
 run through the results faster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463829#comment-13463829
 ] 

Hudson commented on HADOOP-8794:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-8794. Rename YARN_HOME to HADOOP_YARN_HOME. Contributed by Vinod K 
V. (Revision 1390221)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390221
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh


 Modifiy bin/hadoop to point to HADOOP_YARN_HOME
 ---

 Key: HADOOP-8794
 URL: https://issues.apache.org/jira/browse/HADOOP-8794
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, 2.0.1-alpha
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8794-20120912.txt, HADOOP-8794-20120923.txt


 YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. bin/hadoop script also needs to 
 do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8791) rm Only deletes non empty directory and files.

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463830#comment-13463830
 ] 

Hudson commented on HADOOP-8791:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-8791. Fix rm command documentation to indicate it deletes files and 
not directories. Contributed by Jing Zhao. (Revision 1390109)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390109
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/file_system_shell.xml


 rm Only deletes non empty directory and files.
 

 Key: HADOOP-8791
 URL: https://issues.apache.org/jira/browse/HADOOP-8791
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3, 3.0.0
Reporter: Bertrand Dechoux
Assignee: Jing Zhao
  Labels: documentation
 Fix For: 1.2.0, 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8791-branch-1.001.patch, 
 HADOOP-8791-branch-1.patch, HADOOP-8791-branch-1.patch, 
 HADOOP-8791-trunk.001.patch, HADOOP-8791-trunk.patch, HADOOP-8791-trunk.patch


 The documentation (1.0.3) is describing the opposite of what rm does.
 It should be: "Only delete files and empty directories."
 With regard to files, the size of the file should not matter, should it?
 Or I am totally misunderstanding the semantics of this command and I am not 
 the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-3957) Fix javac warnings in DistCp and the corresponding tests

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463831#comment-13463831
 ] 

Hudson commented on HADOOP-3957:


Integrated in Hadoop-Mapreduce-trunk #1208 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1208/])
HADOOP-3957. Change MutableQuantiles to use a shared thread for rolling 
over metrics. Contributed by Andrew Wang. (Revision 1390210)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390210
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java


 Fix javac warnings in DistCp and the corresponding tests
 

 Key: HADOOP-3957
 URL: https://issues.apache.org/jira/browse/HADOOP-3957
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.19.0

 Attachments: 3957_20080814.patch


 There are a few javac warning in DistCp and TestCopyFiles.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Description: 
Two improvements are suggested for the implementation of the methods 
org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
 
1) We should grant +rwx permissions to the target directories before trying to 
delete them.
The mentioned methods fail to delete directories that don't have read or 
execute permissions.
The actual problem appears if an hdfs-related test times out (with a short 
timeout like tens of seconds) and the forked test process is killed: some 
directories are left on disk that are not readable and/or executable. This 
prevents the next tests from executing properly, because these directories 
cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
So, it is recommended to grant read, write, and execute permissions to the 
directories whose content is to be deleted.

2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
return value; use File#exists() instead. 
FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but this 
is not reliable, because File#delete() returns true only if the file was deleted 
as a result of that particular #delete() invocation. E.g., in the following code
if (f.exists()) { // 1
  return f.delete(); // 2
}
if the file f is deleted by another thread or process between calls 1 and 2, 
this fragment returns false even though f no longer exists when the method 
returns.
So it is better to write:
if (f.exists()) {
  f.delete();
  return !f.exists();
}

  was:
2 improvements are suggested for implementation of methods 
org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
 
1) We should grant +rwx permissions the target directories before trying to 
delete them.
The mentioned methods fail to dlete directories that don't have read or execute 
permissions.
Actual problem appears if an hdfs-related test is timed out (with a short 
timeout like tesns of seconds), and the forked test process is killed, some 
directories are left on disk that are not readable and/or executable. This 
prevents next tests from being executed properly because these directories 
cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
So, its recommended to grant the read, write, and execute permissions the 
directories whose content is to be deleted.

2) We shouldn't rely upon File#delete() return value, use File#exists() 
instead. 
FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
this is not reliable because File#delete() returns true only if the file was 
deleted as a result of the #delete() method invocation. E.g. in the following 
code
if (f.exists()) { // 1
  return f.delete(); // 2
}
if the file f was deleted by another thread or process between calls 1 and 
2, this fragment will return false, while the file f does not exist upon 
the method return.
So, better to write
if (f.exists()) {
  f.delete();
  return !f.exists();
}


 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying to 
 delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears if an hdfs-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories are left on disk that are not readable and/or executable. This 
 prevents the next tests from executing properly, because these directories 
 cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
 So, it is recommended to grant read, write, and execute permissions to the 
 directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable, because File#delete() returns true only if the file was 
 deleted as a result of that particular #delete() invocation. E.g., in the 
 following code
 if (f.exists()) { // 

[jira] [Created] (HADOOP-8850) Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs

2012-09-26 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-8850:
--

 Summary: Method 
org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs
 Key: HADOOP-8850
 URL: https://issues.apache.org/jira/browse/HADOOP-8850
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor


It is recommended to add null checks.
A suggested patch is attached.
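
The shape of the change is roughly the following (the field names are 
assumptions, not the actual TestHftpFileSystem members):
{code}
import java.io.IOException;
import org.junit.After;

public class TearDownSketch {
  private org.apache.hadoop.fs.FileSystem hftpFs;
  private org.apache.hadoop.hdfs.MiniDFSCluster cluster;

  @After
  public void tearDown() throws IOException {
    if (hftpFs != null) {
      hftpFs.close();     // may still be null if setup failed early
    }
    if (cluster != null) {
      cluster.shutdown(); // same guard for the mini-cluster
    }
  }
}
{code}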

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8850) Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8850:
---

Attachment: HADOOP-8850-vs-trunk.patch

 Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws 
 NPEs
 -

 Key: HADOOP-8850
 URL: https://issues.apache.org/jira/browse/HADOOP-8850
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8850-vs-trunk.patch


 It is recommended to add null checks.
 A suggested patch is attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8843) Old trash directories are never deleted on upgrade from 1.x

2012-09-26 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13463866#comment-13463866
 ] 

Jason Lowe commented on HADOOP-8843:


Thanks, Todd.  Pushing this in.

 Old trash directories are never deleted on upgrade from 1.x
 ---

 Key: HADOOP-8843
 URL: https://issues.apache.org/jira/browse/HADOOP-8843
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Robert Joseph Evans
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-8843.patch, HADOOP-8843.patch


 The older format of the trash checkpoint for 1.x is yyMMddHHmm; the new format 
 is yyMMddHHmmss(-\d+)?. So if you upgrade from an old cluster to a new one, 
 none of the entries in .Trash will ever be deleted, because they are currently 
 always ignored on deletion.
 We should support deleting the older format as well.
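 For reference, accepting both formats when parsing checkpoint names could look 
 roughly like this (a simplification, not the actual patch; the optional -\d+ 
 suffix of the new format is ignored here):
 {code}
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.Date;

 public class CheckpointNameSketch {
   private static final SimpleDateFormat NEW_FORMAT =
       new SimpleDateFormat("yyMMddHHmmss");
   private static final SimpleDateFormat OLD_FORMAT =
       new SimpleDateFormat("yyMMddHHmm"); // directories created by 1.x

   static Date parseCheckpoint(String name) throws ParseException {
     try {
       return NEW_FORMAT.parse(name);
     } catch (ParseException e) {
       return OLD_FORMAT.parse(name);      // fall back to the old format
     }
   }
 }
 {code}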

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-09-26 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-8851:
--

 Summary: Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the 
forked tests
 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor


This can help reveal the cause of the issue in the event of an OOME in tests.
A suggested patch is attached.
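
For the Maven-built modules this boils down to adding the option to the 
surefire argLine, roughly as follows (the heap-dump path is an assumption; the 
attached patch may differ):
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${project.build.directory}</argLine>
  </configuration>
</plugin>
{code}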

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8851:
---

Attachment: HADOOP-8851-vs-trunk.patch

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help reveal the cause of the issue in the event of an OOME in tests.
 A suggested patch is attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Tom White (JIRA)
Tom White created HADOOP-8852:
-

 Summary: DelegationTokenRenewer thread is not stopped when its 
filesystem is closed
 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White


HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
thread when they are closed. 
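
Illustration only (the real DelegationTokenRenewer API may differ): the idea is 
that close() should make the background thread stop, instead of leaving it 
running after the last client is gone.
{code}
public class ClosableRenewer {
  private final Thread renewerThread = new Thread(new Runnable() {
    public void run() {
      while (!Thread.currentThread().isInterrupted()) {
        // ... renew delegation tokens periodically ...
        try {
          Thread.sleep(60000L);
        } catch (InterruptedException e) {
          return;                  // close() asked us to stop
        }
      }
    }
  });

  public ClosableRenewer() {
    renewerThread.setDaemon(true);
    renewerThread.start();
  }

  public void close() throws InterruptedException {
    renewerThread.interrupt();     // ask the renewer to stop
    renewerThread.join();          // and wait until it actually exits
  }
}
{code}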

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8853) BytesWritable setsize unchecked

2012-09-26 Thread Sven Meys (JIRA)
Sven Meys created HADOOP-8853:
-

 Summary: BytesWritable setsize unchecked
 Key: HADOOP-8853
 URL: https://issues.apache.org/jira/browse/HADOOP-8853
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.1-alpha
Reporter: Sven Meys


When setting an array of length 1183230720 (in my case), the method throws a 
negative array size exception.

The cause is the following method:
{code}
public void setSize(int size) {
  if (size > getCapacity()) {
    setCapacity(size * 3 / 2);
  }
  this.size = size;
}
{code}
size * 3 is evaluated first, which means that for any value greater than 
715.827.882 (682,6 MB) the intermediate product overflows and becomes negative, 
so this method is unsafe.

It would be nice to have this hidden feature documented or have a failsafe in 
place.
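
A possible failsafe (a sketch, not a committed fix) is to do the growth 
arithmetic in long and clamp the result:
{code}
public void setSize(int size) {
  if (size > getCapacity()) {
    // 3/2 growth in long arithmetic, so size * 3 cannot wrap negative
    long grown = (long) size * 3L / 2L;
    setCapacity((int) Math.min(grown, Integer.MAX_VALUE));
  }
  this.size = size;
}
{code}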


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8853) BytesWritable setsize unchecked

2012-09-26 Thread Sven Meys (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sven Meys updated HADOOP-8853:
--

Description: 
When setting an array of length 1183230720 (in my case), the method will throw 
a negative array index exception.

The cause is the following method:

public void setSize(int size) {
  if (size > getCapacity()) {
    setCapacity(size * 3 / 2);
  }
  this.size = size;
}

size * 3 is evaluated first, which means that for any value greater than 
715,827,882 (682.6 MB) the result will overflow and become negative. Thus this 
method is unsafe.

It would be nice to have this hidden feature documented or have a failsafe in 
place.


  was:
When setting an array of length 1183230720 (in my case), the method will throw 
a negative array index exception.

The cause is the following method:

public void setSize(int size) {
  if (size > getCapacity()) {
    setCapacity(size * 3 / 2);
  }
  this.size = size;
}

size * 3 is evaluated first, which means that for any value greater than 
715,827,882 (682.6 MB) this method is unsafe.

It would be nice to have this hidden feature documented or have a failsafe in 
place.



 BytesWritable setsize unchecked
 ---

 Key: HADOOP-8853
 URL: https://issues.apache.org/jira/browse/HADOOP-8853
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.1-alpha
Reporter: Sven Meys
   Original Estimate: 1h
  Remaining Estimate: 1h

 When setting an array of length 1183230720 (in my case), the method will 
 throw a negative array index exception.
 The cause is the following method:
 public void setSize(int size) {
   if (size > getCapacity()) {
     setCapacity(size * 3 / 2);
   }
   this.size = size;
 }
 size * 3 is evaluated first, which means that for any value greater than 
 715,827,882 (682.6 MB) the result will overflow and become negative. Thus 
 this method is unsafe.
 It would be nice to have this hidden feature documented or have a failsafe in 
 place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463916#comment-13463916
 ] 

Hemanth Yamijala commented on HADOOP-8776:
--

bq. And default it to true on Linux, and false on MacOS? You can use uname to 
find the operating system name in the script. What do you think?

Given that you are fixing the native build, I assume we would like to enable 
the native compile by default on MacOS too after that. So I feel like leaving 
it enabled by default, as in the last patch.

We can remove the hint; I agree that it seems out of place. If a developer 
gets a failure due to the native build, they could look at the test-patch 
script to see if there's an option to turn it off. We could also document it 
on the Wiki, etc.

This feels like an option that keeps things going for now and needs no further 
changes once things get better with the native build. Thoughts?

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in the Hadoop source runs a native compile with the 
 patch. On platforms like Mac, there are issues with the native compile that 
 make it difficult to use test-patch. This JIRA is to try to provide an option 
 to make the native compilation optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-8854) Document backward incompatible changes between hadoop-1.x and 2.x

2012-09-26 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta moved HDFS-3978 to HADOOP-8854:
---

Affects Version/s: (was: 2.0.2-alpha)
   (was: 3.0.0)
   2.0.2-alpha
   3.0.0
  Key: HADOOP-8854  (was: HDFS-3978)
  Project: Hadoop Common  (was: Hadoop HDFS)

 Document backward incompatible changes between hadoop-1.x and 2.x
 -

 Key: HADOOP-8854
 URL: https://issues.apache.org/jira/browse/HADOOP-8854
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Arpit Gupta

 We should create a new site document that explicitly lists the known 
 incompatible changes between hadoop 1.x and 2.x.
 I believe this will make it easier for users to determine all the changes one 
 needs to make when moving from 1.x to 2.x.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463950#comment-13463950
 ] 

Hadoop QA commented on HADOOP-8851:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12546707/HADOOP-8851-vs-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1527//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1527//console

This message is automatically generated.

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help reveal the cause of an issue in the event of an OOME in tests.
 Suggested patch attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8776:
-

Description:    (was: The test-patch script in the Hadoop source runs a native 
compile with the patch. On platforms like Mac, there are issues with the native 
compile that make it difficult to use test-patch. This JIRA is to try to 
provide an option to make the native compilation optional.)

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch


  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8776:
-

Description: The test-patch script in the Hadoop source runs a native compile 
with the patch. On platforms like Mac, there are issues with the native compile 
that make it difficult to use test-patch. This JIRA is to try to provide an 
option to make the native compilation optional.   (was:  )

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in the Hadoop source runs a native compile with the 
 patch. On platforms like Mac, there are issues with the native compile that 
 make it difficult to use test-patch. This JIRA is to try to provide an option 
 to make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463954#comment-13463954
 ] 

Colin Patrick McCabe commented on HADOOP-8776:
--

bq. Given that you are fixing the native build, I assume we would like to 
enable the native compile by default on MacOS too after that. So I feel like 
leaving it enabled by default, as in the last patch.

I would certainly like to review patches or suggest approaches to fix the 
native Mac build.  However, HADOOP-8744 is currently not assigned to me.  It's 
not likely to be me personally who fixes it because I don't have a Mac :)

Since MacOS is a POSIX platform, we generally try to support it by making our 
native code POSIX-compliant (or at least able to work under POSIX when advanced 
features are not detected).

bq. This feels like an option that keeps things going for now and needs no 
further changes once things get better with the native build. Thoughts?

Sounds reasonable to me.  My main objection to the previous patch was the hint 
text, as I said.

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in the Hadoop source runs a native compile with the 
 patch. On platforms like Mac, there are issues with the native compile that 
 make it difficult to use test-patch. This JIRA is to try to provide an option 
 to make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463958#comment-13463958
 ] 

Robert Joseph Evans commented on HADOOP-8847:
-

Steve, what do you mean by losing file permissions? Does that mean that all 
files will be created with the default umask permissions? That could cause 
major problems for people who ship executables inside a tgz. I thought at one 
point we were post-processing the contents of a .zip to give everything execute 
permissions. If we are doing that for the tgzs as well as the zips, it may not 
be that big of a deal, but I would still be nervous about a change like that.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows, so changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463957#comment-13463957
 ] 

Bikas Saha commented on HADOOP-8847:


I will try to add a long file name to the tar file and check that.
From what I have seen, callers who need specific permissions have to set them 
after unTar because tar does not do it for them. I see that the code in 
distributed shell explicitly sets permissions after the untar operation.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows, so changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8843) Old trash directories are never deleted on upgrade from 1.x

2012-09-26 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8843:
---

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   0.23.4
   3.0.0
   Status: Resolved  (was: Patch Available)

I committed this to trunk, branch-2, and branch-0.23.

 Old trash directories are never deleted on upgrade from 1.x
 ---

 Key: HADOOP-8843
 URL: https://issues.apache.org/jira/browse/HADOOP-8843
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Robert Joseph Evans
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8843.patch, HADOOP-8843.patch


 The older format of the trash checkpoint for 1.x is yyMMddHHmm; the new 
 format is yyMMddHHmmss(-\d+)?. So if you upgrade from an old cluster to a new 
 one, all of the entries in .trash in the old format will never be deleted, 
 because they are always ignored on deletion.
 We should support deleting the older format as well.
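
For illustration, the two checkpoint-name formats expressed as patterns (a 
self-contained sketch, not the attached patch):

{noformat}
import java.util.regex.Pattern;

public class TrashCheckpointSketch {
  // Old 1.x format: yyMMddHHmm (10 digits). New format: yyMMddHHmmss with an
  // optional -N suffix. A deletion pass must accept both.
  static final Pattern OLD = Pattern.compile("^\\d{10}$");
  static final Pattern NEW = Pattern.compile("^\\d{12}(-\\d+)?$");

  static boolean isCheckpointName(String name) {
    return OLD.matcher(name).matches() || NEW.matcher(name).matches();
  }

  public static void main(String[] args) {
    System.out.println(isCheckpointName("1209261230"));      // old -> true
    System.out.println(isCheckpointName("120926123045-1"));  // new -> true
  }
}
{noformat}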

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463978#comment-13463978
 ] 

Steve Loughran commented on HADOOP-8847:


@Robert, I know Ant's {{untar}} task doesn't handle perms; a look at the 
Apache Compress library shows that {{TarEntry.getMode()}} does return them, so 
the untarring logic may be able to convert them into FS state operations using 
{{File.setExecutable()}} and {{File.setWritable()}}. This needs to be done in 
the untarring process itself.

Tar perms are only going to matter in Unix-land, so perhaps the strategy here 
is to make the Java untar the default on Windows, but not the default on other 
platforms.
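
For illustration, a rough sketch of that direction using Apache Commons 
Compress (the permission mapping below is an assumption for the sketch, not 
the attached patch):

{noformat}
import java.io.*;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

public class UntarSketch {
  // Extracts entries and maps the tar mode bits onto the coarse
  // owner/all permissions that java.io.File exposes.
  public static void unTar(InputStream in, File destDir) throws IOException {
    TarArchiveInputStream tis = new TarArchiveInputStream(in);
    TarArchiveEntry entry;
    while ((entry = tis.getNextTarEntry()) != null) {
      File target = new File(destDir, entry.getName());
      if (entry.isDirectory()) {
        target.mkdirs();
      } else {
        target.getParentFile().mkdirs();
        OutputStream out = new FileOutputStream(target);
        byte[] buf = new byte[8192];
        for (int n; (n = tis.read(buf)) != -1; ) {
          out.write(buf, 0, n);
        }
        out.close();
      }
      int mode = entry.getMode();  // e.g. 0755
      target.setReadable((mode & 0400) != 0, (mode & 0004) == 0);
      target.setWritable((mode & 0200) != 0, (mode & 0002) == 0);
      target.setExecutable((mode & 0100) != 0, (mode & 0001) == 0);
    }
    tis.close();
  }
}
{noformat}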

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows, so changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8850) Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8850:
---

Status: Patch Available  (was: Open)

 Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws 
 NPEs
 -

 Key: HADOOP-8850
 URL: https://issues.apache.org/jira/browse/HADOOP-8850
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8850-vs-trunk.patch


 It is recommended to add null checks.
 The suggested patch is attached.
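
For illustration, a null-safe tearDown shape (the field names and types are 
assumptions about the test, not the attached patch):

{noformat}
public class TearDownSketch {
  // Stand-ins for the test fixtures; in TestHftpFileSystem these would be
  // the filesystem under test and the mini cluster.
  private java.io.Closeable fs;
  private java.io.Closeable cluster;

  public void tearDown() throws Exception {
    // Guard each resource: setUp may have failed before assigning it,
    // which is what produces the NPEs.
    if (fs != null) {
      fs.close();
    }
    if (cluster != null) {
      cluster.close();
    }
  }
}
{noformat}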

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Status: Patch Available  (was: Open)

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears if an HDFS-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories are left on disk that are not readable and/or executable. This 
 prevents the next tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many subsequent 
 tests fail. So, it's recommended to grant the read, write, and execute 
 permissions to the directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that #delete() invocation. E.g. in the following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 
 2, this fragment returns false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }
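
For illustration, a minimal sketch combining both suggestions (an illustration 
only, not the attached patch):

{noformat}
import java.io.File;

public class FullyDeleteSketch {
  public static boolean fullyDelete(File f) {
    if (f.isDirectory()) {
      // Suggestion 1: force rwx so unreadable/unexecutable leftovers from a
      // killed test process can be listed and removed.
      f.setReadable(true);
      f.setWritable(true);
      f.setExecutable(true);
      File[] children = f.listFiles();
      if (children != null) {
        for (File child : children) {
          fullyDelete(child);
        }
      }
    }
    f.delete();
    // Suggestion 2: judge success by exists(), so a concurrent delete by
    // another thread or process still counts as success.
    return !f.exists();
  }
}
{noformat}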

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8851) Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests

2012-09-26 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464053#comment-13464053
 ] 

Ivan A. Veselovsky commented on HADOOP-8851:


This is a change to the pom.xml Maven config; it should not require new or 
modified tests.

 Use -XX:+HeapDumpOnOutOfMemoryError JVM option in the forked tests
 --

 Key: HADOOP-8851
 URL: https://issues.apache.org/jira/browse/HADOOP-8851
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8851-vs-trunk.patch


 This can help reveal the cause of an issue in the event of an OOME in tests.
 Suggested patch attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464058#comment-13464058
 ] 

Daryn Sharp commented on HADOOP-8841:
-

Can we generate these docs by massaging the output of FsShell -help or -usage?

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add a description of these flags to the documentation for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8855:
---

 Summary: SSL-based image transfer does not work when Kerberos is 
disabled
 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


In SecurityUtil.openSecureHttpConnection, we first check 
{{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
Kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
should check {{HttpConfig.isSecure()}}.

Credit to Wing Yew Poon for discovering this bug.
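
The gist of the guard change, as a self-contained sketch (the stub booleans 
stand in for UserGroupInformation.isSecurityEnabled() and 
HttpConfig.isSecure(); a later comment in this thread notes the secure path is 
needed when either flag is set):

{noformat}
public class SecureOpenCheck {
  // Old (buggy) guard: kerberosEnabled only, so SSL-only clusters fell
  // through to the plain HTTP path.
  static boolean useSecureOpen(boolean kerberosEnabled, boolean sslEnabled) {
    return kerberosEnabled || sslEnabled;
  }

  public static void main(String[] args) {
    System.out.println(useSecureOpen(false, true));  // SSL-only -> true now
  }
}
{noformat}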

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8855:


Attachment: hadoop-8855.txt

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8855:


Status: Patch Available  (was: Open)

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8856) SecurityUtil#openSecureHttpConnection should use an authenticated URL even if Kerberos is not enabled

2012-09-26 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8856:
---

 Summary: SecurityUtil#openSecureHttpConnection should use an 
authenticated URL even if Kerberos is not enabled
 Key: HADOOP-8856
 URL: https://issues.apache.org/jira/browse/HADOOP-8856
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Eli Collins
Assignee: Eli Collins


HADOOP-8581 updated openSecureHttpConnection to use an SSL factory; however, we 
only use it if Kerberos security is enabled, so we'll fail to use it if SSL is 
enabled but Kerberos/SPNEGO are not. This manifests itself as the 2NN failing 
to checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464070#comment-13464070
 ] 

Eli Collins commented on HADOOP-8855:
-

+1 lgtm (I just filed HADOOP-8856 for the same, will close that).

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8856) SecurityUtil#openSecureHttpConnection should use an authenticated URL even if Kerberos is not enabled

2012-09-26 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HADOOP-8856.
-

Resolution: Duplicate

Dupe of HADOOP-8855

 SecurityUtil#openSecureHttpConnection should use an authenticated URL even if 
 Kerberos is not enabled
 -

 Key: HADOOP-8856
 URL: https://issues.apache.org/jira/browse/HADOOP-8856
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Eli Collins
Assignee: Eli Collins

 HADOOP-8581 updated openSecureHttpConnection to use an SSL factory; however, 
 we only use it if Kerberos security is enabled, so we'll fail to use it if 
 SSL is enabled but Kerberos/SPNEGO are not. This manifests itself as the 2NN 
 failing to checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464075#comment-13464075
 ] 

Eli Collins commented on HADOOP-8855:
-

Sorry, I forgot to mention: we should update the javadoc for 
openSecureHttpConnection to remove the mention of SPNEGO, since this method is 
independent of Kerberos and SPNEGO.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8684) Deadlock between WritableComparator and WritableComparable

2012-09-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8684:


Fix Version/s: 2.0.3-alpha
   0.23.4

I pulled this into branch-2, and branch-0.23

 Deadlock between WritableComparator and WritableComparable
 --

 Key: HADOOP-8684
 URL: https://issues.apache.org/jira/browse/HADOOP-8684
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3, 3.0.0
Reporter: Hiroshi Ikeda
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: Hadoop-8684.patch, Hadoop-8684.patch, Hadoop-8684.patch, 
 Hadoop-8684.patch, Hadoop-8684.patch, WritableComparatorDeadLockTestApp.java


 Classes implementing WritableComparable in Hadoop call the method 
 WritableComparator.define() in their static initializers. This means the 
 classes call define() during their class loading, while holding the locks on 
 their own class objects, and the method WritableComparator.define() then 
 locks the WritableComparator class object.
 On the other hand, WritableComparator.get() also locks the WritableComparator 
 class object, and it may create instances of the targeted comparable class, 
 which can involve loading that class. This means the method might try to lock 
 the targeted comparable class object while already holding the lock on the 
 WritableComparator class object.
 The two lock acquisition orders are reversed, so a deadlock can occur.
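
For illustration, a minimal, self-contained sketch of the reversed lock order 
(Registry and Key are stand-ins, not Hadoop classes; run it a few times to 
observe the hang):

{noformat}
public class LockOrderSketch {
  static class Registry {
    // Locks Registry.class (static synchronized).
    static synchronized void define(Class<?> c) { }

    // Also locks Registry.class, and may trigger c's static initialization.
    static synchronized Object get(Class<?> c) throws Exception {
      return c.getDeclaredConstructor().newInstance();
    }
  }

  static class Key {
    // Runs while holding Key's class-init lock, then wants Registry.class.
    static { Registry.define(Key.class); }
  }

  public static void main(String[] args) {
    new Thread(new Runnable() {
      public void run() {
        try { Registry.get(Key.class); } catch (Exception ignored) { }
      }
    }).start();
    new Key();  // races Key's initialization against Registry.get()
  }
}
{noformat}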

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464077#comment-13464077
 ] 

Andy Isaacson commented on HADOOP-8855:
---

LGTM.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8698) Do not call unnecessary setConf(null) in Configured constructor

2012-09-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8698:


Fix Version/s: (was: 0.23.3)
   0.24.0

 Do not call unnecessary setConf(null) in Configured constructor
 ---

 Key: HADOOP-8698
 URL: https://issues.apache.org/jira/browse/HADOOP-8698
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 3.0.0
Reporter: Radim Kolar
Priority: Minor
 Fix For: 0.24.0, 3.0.0

 Attachments: setconf-null.txt


 The no-arg constructor of org.apache.hadoop.conf.Configured calls 
 setConf(null). This is unnecessary, and it increases the complexity of 
 setConf() code because you have to check for a non-null object reference 
 before using it. Under normal conditions setConf() is never called with a 
 null reference, so the null check is unnecessary.  
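
For illustration, the shape of the issue with a stand-in Conf type (a sketch; 
the real class is org.apache.hadoop.conf.Configured):

{noformat}
public class ConfiguredSketch {
  static class Conf { }

  private Conf conf;

  // Reported problem: the original no-arg constructor called setConf(null),
  // forcing every setConf() override to tolerate null. Leaving conf unset
  // here makes that null check unnecessary.
  public ConfiguredSketch() { }

  public ConfiguredSketch(Conf conf) { setConf(conf); }

  public void setConf(Conf conf) { this.conf = conf; }
  public Conf getConf() { return conf; }
}
{noformat}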

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7996) change location of the native libraries to lib instead of lib/native

2012-09-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-7996:


Fix Version/s: (was: 0.23.3)
   0.24.0

 change location of the native libraries to lib instead of lib/native
 

 Key: HADOOP-7996
 URL: https://issues.apache.org/jira/browse/HADOOP-7996
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, conf, documentation, scripts
Reporter: Roman Shaposhnik
Assignee: Eric Yang
 Fix For: 0.24.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7941) NoClassDefFoundError while running distcp/archive

2012-09-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-7941:


Fix Version/s: (was: 0.23.3)
   0.24.0

 NoClassDefFoundError while running distcp/archive
 -

 Key: HADOOP-7941
 URL: https://issues.apache.org/jira/browse/HADOOP-7941
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.1
Reporter: Ramya Sunil
 Fix For: 0.24.0


 bin/hadoop distcp
 {noformat}
 Exception in thread main java.lang.NoClassDefFoundError: 
 org/apache/hadoop/tools/DistCp
 Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.tools.DistCp
 at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
 Could not find the main class: org.apache.hadoop.tools.DistCp.  Program will 
 exit.
 {noformat}
 Same is the case while running 'bin/hadoop archive'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7317) RPC.stopProxy doesn't actually close proxy

2012-09-26 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-7317:


Fix Version/s: (was: 0.23.3)
   0.24.0

 RPC.stopProxy doesn't actually close proxy
 --

 Key: HADOOP-7317
 URL: https://issues.apache.org/jira/browse/HADOOP-7317
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.22.0
Reporter: Todd Lipcon
 Fix For: 0.24.0


 As discovered while investigating HDFS-1965, the reference counting done in 
 WritableRpcEngine.ClientCache doesn't map one-to-one with open TCP 
 connections. This means it's easy to accidentally leave TCP connections open 
 longer than expected, so long as the client has any other connections open at 
 all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464078#comment-13464078
 ] 

Hadoop QA commented on HADOOP-8849:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12546696/HADOOP-8849-vs-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1528//console

This message is automatically generated.

 FileUtil#fullyDelete should grant the target directories +rwx permissions 
 before trying to delete them
 --

 Key: HADOOP-8849
 URL: https://issues.apache.org/jira/browse/HADOOP-8849
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8849-vs-trunk.patch


 Two improvements are suggested for the implementation of the methods 
 org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
 org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
  
 1) We should grant +rwx permissions to the target directories before trying 
 to delete them.
 The mentioned methods fail to delete directories that don't have read or 
 execute permissions.
 The actual problem appears if an HDFS-related test times out (with a short 
 timeout like tens of seconds) and the forked test process is killed: some 
 directories are left on disk that are not readable and/or executable. This 
 prevents the next tests from being executed properly because these 
 directories cannot be deleted with FileUtil#fullyDelete(), so many subsequent 
 tests fail. So, it's recommended to grant the read, write, and execute 
 permissions to the directories whose content is to be deleted.
 2) Generic reliability improvement: we shouldn't rely upon the File#delete() 
 return value; use File#exists() instead. 
 FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
 this is not reliable because File#delete() returns true only if the file was 
 deleted as a result of that #delete() invocation. E.g. in the following code
 if (f.exists()) { // 1
   return f.delete(); // 2
 }
 if the file f is deleted by another thread or process between calls 1 and 
 2, this fragment returns false even though the file f no longer exists when 
 the method returns.
 So, it is better to write
 if (f.exists()) {
   f.delete();
   return !f.exists();
 }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464082#comment-13464082
 ] 

Todd Lipcon commented on HADOOP-8855:
-

I'm trying to test this and it doesn't seem to entirely fix the issue... will 
report back with more when I know.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464087#comment-13464087
 ] 

Hadoop QA commented on HADOOP-8855:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546727/hadoop-8855.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1530//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1530//console

This message is automatically generated.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned HADOOP-8852:


Assignee: Karthik Kambatla

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Karthik Kambatla

 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464120#comment-13464120
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

{{DelegationTokenRenewer}} fields in both filesystems are static. Since 
stopping the thread corresponding to a static field in close() does not seem 
like good practice, we should make them non-static.
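
For illustration, a minimal sketch of that direction: an instance-scoped 
renewer thread that close() stops (all names here are illustrative, not the 
actual patch):

{noformat}
public class RenewingFileSystem implements java.io.Closeable {
  private final Thread renewer = new Thread(new Runnable() {
    public void run() {
      while (!Thread.currentThread().isInterrupted()) {
        try {
          Thread.sleep(60000L);  // renew delegation tokens periodically
        } catch (InterruptedException e) {
          return;                // close() interrupts us
        }
      }
    }
  });

  public RenewingFileSystem() {
    renewer.setDaemon(true);
    renewer.start();
  }

  @Override
  public void close() {
    renewer.interrupt();  // stop the per-instance renewer thread
  }
}
{noformat}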

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Karthik Kambatla

 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8850) Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464123#comment-13464123
 ] 

Hadoop QA commented on HADOOP-8850:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12546705/HADOOP-8850-vs-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1529//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1529//console

This message is automatically generated.

 Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws 
 NPEs
 -

 Key: HADOOP-8850
 URL: https://issues.apache.org/jira/browse/HADOOP-8850
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8850-vs-trunk.patch


 It is recommended to add null checks.
 The suggested patch is attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8854) Document backward incompatible changes between hadoop-1.x and 2.x

2012-09-26 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reassigned HADOOP-8854:
---

Assignee: Suresh Srinivas

 Document backward incompatible changes between hadoop-1.x and 2.x
 -

 Key: HADOOP-8854
 URL: https://issues.apache.org/jira/browse/HADOOP-8854
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Suresh Srinivas

 We should create a new site document that explicitly lists the known 
 incompatible changes between hadoop 1.x and 2.x.
 I believe this will make it easier for users to determine all the changes one 
 needs to make when moving from 1.x to 2.x.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8855:


Attachment: hadoop-8855.txt

This turned out to be more complicated:

- We actually need to use the secure URL open code when either SSL or krb5 is 
enabled (or both), since it's also used for SPNEGO.
- The SPNEGO client code had a bug where, at least on my test setup, the JDK 
itself was performing the SPNEGO negotiation. So, by the time it got back to 
our code, the negotiation was already complete and the Set-Cookie with the 
auth token was present, along with an HTTP 200 result. This caused a fallback 
to the PseudoAuthenticator, which had a separate bug: it wasn't setting the 
SSL configuration on its connection.

- I also found a separate bug: the dfsadmin -fetchImage code needs a doAs to 
work properly in this type of secure cluster.

With this patch in place I'm able to fetch the image on a krb5+ssl cluster. 
I'll swing back and double-check that it also works on a krb5 (no ssl) and an 
ssl (no krb5) cluster.

I'd also like someone who knows this code to comment on whether we need the 
SPNEGO code in KerberosAuthenticator at all. In my environment at least, it's 
not running, since the JDK itself supports SPNEGO auth.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8854) Document backward incompatible changes between hadoop-1.x and 2.x

2012-09-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464170#comment-13464170
 ] 

Suresh Srinivas commented on HADOOP-8854:
-

Here are other incompatible changes:
{noformat}
HDFS-3034. Remove the deprecated DFSOutputStream.sync() method.

HDFS-3755. Creating an already-open-for-write file with overwrite=true fails

HDFS-2676. Remove Avro RPC.

HDFS-3138. Move DatanodeInfo#ipcPort to DatanodeID.

HDFS-3164. Move DatanodeInfo#hostName to DatanodeID.

HDFS-2887. FSVolume, is a part of FSDatasetInterface implementation, should
not be referred outside FSDataset.  A new FSVolumeInterface is defined.
The BlockVolumeChoosingPolicy.chooseVolume(..) method signature is also
updated.

HDFS-1825. Remove thriftfs contrib.

HDFS-3446. HostsFileReader silently ignores bad includes/excludes

HDFS-3137. Bump LAST_UPGRADABLE_LAYOUT_VERSION to -16.
Upgrade allowed from only release 0.18 onwards

HDFS-2210. Remove hdfsproxy.

HDFS-2303. Unbundle jsvc.

HDFS-1526. Dfs client name for a map/reduce task should be unique
among threads.

HDFS-1536. Improve HDFS WebUI.

HDFS-1073. Redesign the NameNode's storage layout for image checkpoints
and edit logs to introduce transaction IDs and be more robust.
Please see HDFS-1073 section below for breakout of individual patches.

HDFS-538. Per the contract elucidated in HADOOP-6201, throw
FileNotFoundException from FileSystem::listStatus rather than returning
null.

HDFS-602. DistributedFileSystem mkdirs throws FileAlreadyExistsException
instead of FileNotFoundException.

HDFS-544. Add a rbw subdir to DataNode data directory.

HDFS-576. Block report includes under-construction replicas.

HDFS-636. SafeMode counts complete blocks only.

HDFS-644. Lease recovery, concurrency support.

HDFS-570. Get last block length from a data-node when opening a file
being written to.

HDFS-657. Remove unused legacy data-node protocol methods.

HDFS-658. Block recovery for primary data-node.

HDFS-660. Remove deprecated methods from InterDatanodeProtocol.

HDFS-512. Block.equals() and compareTo() compare blocks based
only on block Ids, ignoring generation stamps.

HDFS-873. Configuration specifies data-node storage directories as URIs.

HDFS-905. Use the new UserGroupInformation from HADOOP-6299. 

HDFS-984. Persistent delegation tokens.

HDFS-1016. HDFS side change for HADOOP-6569. This jira changes the
error message on the screen when cat a directory or a non-existent file.
{noformat}

We should add a page with these incompatibilities categorized as API-related, 
operations-related, etc.
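
To illustrate why such a page helps, the HDFS-538/HADOOP-6201 entry alone 
changes caller code: 1.x returned null from {{listStatus}} for a missing path, 
while 2.x throws. A minimal sketch of the 2.x-safe pattern:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListStatusSketch {
  // In 1.x, listStatus returned null for a missing path; per HADOOP-6201,
  // 2.x throws FileNotFoundException instead.
  static FileStatus[] listOrEmpty(FileSystem fs, Path dir) throws IOException {
    try {
      return fs.listStatus(dir);
    } catch (FileNotFoundException e) {
      return new FileStatus[0];
    }
  }
}
{code}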

 Document backward incompatible changes between hadoop-1.x and 2.x
 -

 Key: HADOOP-8854
 URL: https://issues.apache.org/jira/browse/HADOOP-8854
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Arpit Gupta
Assignee: Suresh Srinivas

 We should create a new site document to explicitly list the known 
 incompatible changes between hadoop 1.x and 2.x.
 I believe this will make it easier for users to determine all the changes one 
 needs to make when moving from 1.x to 2.x.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464174#comment-13464174
 ] 

Hadoop QA commented on HADOOP-8855:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546744/hadoop-8855.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1531//console

This message is automatically generated.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464180#comment-13464180
 ] 

Devaraj Das commented on HADOOP-8855:
-

Good find! [~tlipcon], quick question - this patch will work even on JDKs that 
have no inherent support for SPNEGO, right?

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8855:


Attachment: hadoop-8855.txt

Retrying the patch upload; the patch depended on HDFS-3972 for a utility 
method in SecurityUtil, and that was just checked in a minute ago.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464198#comment-13464198
 ] 

Todd Lipcon commented on HADOOP-8855:
-

Yes, I think so, but I don't have access to such a JDK. If you have one, do you 
have time to test?

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8857) hadoop.http.authentication.signature.secret.file should be created if the configured file does not exist

2012-09-26 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8857:
---

 Summary: hadoop.http.authentication.signature.secret.file should 
be created if the configured file does not exist
 Key: HADOOP-8857
 URL: https://issues.apache.org/jira/browse/HADOOP-8857
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor


AuthenticationFilterInitializer#initFilter fails if the configured 
{{hadoop.http.authentication.signature.secret.file}} does not exist, e.g.:

{noformat}
java.lang.RuntimeException: Could not read HTTP signature secret file: 
/var/lib/hadoop-hdfs/hadoop-http-auth-signature-secret
{noformat}

Creating /var/lib/hadoop-hdfs/hadoop-http-auth-signature-secret (populated with 
a string) fixes the issue. Per the auth docs, "If a secret is not provided a 
random secret is generated at start up time." That sounds like the file should 
be generated at startup with a random secret, which doesn't seem to be the 
case. The instructions in the docs should also be clearer in this regard.
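
For illustration, the behavior the docs describe could look roughly like this; 
a sketch only, with a hypothetical method name, not the actual 
AuthenticationFilterInitializer code:

{code}
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.math.BigInteger;
import java.security.SecureRandom;

public class SecretFileSketch {
  // If the configured secret file is missing, create it with a random
  // secret instead of failing at filter initialization.
  static void ensureSecretFile(String path) throws IOException {
    File f = new File(path);
    if (f.exists()) {
      return;
    }
    String secret = new BigInteger(130, new SecureRandom()).toString(32);
    FileWriter w = new FileWriter(f);
    try {
      w.write(secret);
    } finally {
      w.close();
    }
  }
}
{code}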

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8843) Old trash directories are never deleted on upgrade from 1.x

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464212#comment-13464212
 ] 

Hudson commented on HADOOP-8843:


Integrated in Hadoop-Common-trunk-Commit #2777 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2777/])
HADOOP-8843. Old trash directories are never deleted on upgrade from 1.x.  
Contributed by Jason Lowe (Revision 1390616)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390616
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java


 Old trash directories are never deleted on upgrade from 1.x
 ---

 Key: HADOOP-8843
 URL: https://issues.apache.org/jira/browse/HADOOP-8843
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Robert Joseph Evans
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8843.patch, HADOOP-8843.patch


 The older format of the trash checkpoint for 1.x is yyMMddHHmm; the new 
 format is yyMMddHHmmss(-\d+)?. If you upgrade from an old cluster to a new 
 one, all of the entries in .trash will never be deleted, because they are 
 currently always ignored on deletion.
 We should support deleting the older format as well.
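
To make the format difference concrete, here is a sketch of a parser that 
accepts both checkpoint name formats (illustrative only, not the actual 
TrashPolicyDefault code):

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class CheckpointNameSketch {
  private static final SimpleDateFormat OLD_FORMAT =
      new SimpleDateFormat("yyMMddHHmm");    // 1.x checkpoints
  private static final SimpleDateFormat NEW_FORMAT =
      new SimpleDateFormat("yyMMddHHmmss");  // 2.x checkpoints

  static Date parseCheckpoint(String name) throws ParseException {
    // Drop the optional "-<seq>" suffix the new format may carry.
    String base = name.split("-")[0];
    try {
      return NEW_FORMAT.parse(base);
    } catch (ParseException e) {
      return OLD_FORMAT.parse(base); // fall back to the 1.x format
    }
  }
}
{code}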

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464217#comment-13464217
 ] 

Andy Isaacson commented on HADOOP-8855:
---

I tested Todd's patch on a cluster with various permutations of krb5 and SSL. 
With the patched JAR, all of my tests passed.
- hadoop.security.authentication=kerberos hadoop.ssl.enabled=true: dfsadmin 
-fetchImage works.
- hadoop.security.authentication=simple hadoop.ssl.enabled=true: fetchImage 
works.
- hadoop.security.authentication=kerberos hadoop.ssl.enabled=false: fetchImage 
works.

I also duplicated Todd's observation that {{dfsadmin -fetchImage}} does not 
work on krb5 without the doAs.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8843) Old trash directories are never deleted on upgrade from 1.x

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464219#comment-13464219
 ] 

Hudson commented on HADOOP-8843:


Integrated in Hadoop-Hdfs-trunk-Commit #2840 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2840/])
HADOOP-8843. Old trash directories are never deleted on upgrade from 1.x.  
Contributed by Jason Lowe (Revision 1390616)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390616
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java


 Old trash directories are never deleted on upgrade from 1.x
 ---

 Key: HADOOP-8843
 URL: https://issues.apache.org/jira/browse/HADOOP-8843
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Robert Joseph Evans
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8843.patch, HADOOP-8843.patch


 The older format of the trash checkpoint for 1.x is yyMMddHHmm; the new 
 format is yyMMddHHmmss(-\d+)?. If you upgrade from an old cluster to a new 
 one, all of the entries in .trash will never be deleted, because they are 
 currently always ignored on deletion.
 We should support deleting the older format as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Status: Patch Available  (was: Open)

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Affects Version/s: 2.0.0-alpha

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464235#comment-13464235
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

By the way, in the little time I have worked on Hadoop so far, I noticed 
several cases like this where {{close()}} does not stop _all_ the threads it 
spawns. I feel we should add more structure to ensure we do not miss any of 
them. Thoughts?
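
As a sketch of one such structure, the filesystem can own its renewer thread 
and stop it in {{close()}}; all names below are hypothetical, not the actual 
HftpFileSystem code:

{code}
import java.io.Closeable;
import java.io.IOException;

public class RenewerOwningFs implements Closeable {
  // Hypothetical stand-in for the DelegationTokenRenewer daemon thread.
  private final Thread renewer = new Thread(new Runnable() {
    public void run() {
      try {
        while (true) {
          Thread.sleep(60000); // renew delegation tokens periodically
        }
      } catch (InterruptedException e) {
        // interrupted by close(): exit the loop
      }
    }
  });

  public RenewerOwningFs() {
    renewer.setDaemon(true);
    renewer.start();
  }

  public void close() throws IOException {
    // The point of HADOOP-8852: close() must stop every thread it spawned.
    renewer.interrupt();
    try {
      renewer.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}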

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8843) Old trash directories are never deleted on upgrade from 1.x

2012-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464244#comment-13464244
 ] 

Hudson commented on HADOOP-8843:


Integrated in Hadoop-Mapreduce-trunk-Commit #2799 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2799/])
HADOOP-8843. Old trash directories are never deleted on upgrade from 1.x.  
Contributed by Jason Lowe (Revision 1390616)

 Result = ABORTED
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390616
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestTrash.java


 Old trash directories are never deleted on upgrade from 1.x
 ---

 Key: HADOOP-8843
 URL: https://issues.apache.org/jira/browse/HADOOP-8843
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.2-alpha
Reporter: Robert Joseph Evans
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: HADOOP-8843.patch, HADOOP-8843.patch


 The older format of the trash checkpoint for 1.x is yyMMddHHmm; the new 
 format is yyMMddHHmmss(-\d+)?. If you upgrade from an old cluster to a new 
 one, all of the entries in .trash will never be deleted, because they are 
 currently always ignored on deletion.
 We should support deleting the older format as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464258#comment-13464258
 ] 

Eli Collins commented on HADOOP-8855:
-

+1, patch and testing look good - modulo two small things:

Nits:
- openSecureHttpConnection javadoc shouldn't mention only SPNEGO, since there 
are other authenticators
- I think you meant to remove this debug statement:
{code}
   private void sendToken(byte[] outToken) throws IOException, 
AuthenticationException {
+new Exception("sendToken").printStackTrace(System.out);
{code}

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8858) backport branch-trunk-win: HADOOP-8234 Enable user group mappings on Windows

2012-09-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-8858:
-

 Summary: backport branch-trunk-win: HADOOP-8234 Enable user group 
mappings on Windows
 Key: HADOOP-8858
 URL: https://issues.apache.org/jira/browse/HADOOP-8858
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth


Backport the code for HADOOP-8234 to enable user group mappings on Windows.  
This code had been committed to branch-1-win.  This issue tracks backporting to 
branch-trunk-win, in preparation for merging all the way to trunk.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8858) backport branch-trunk-win: HADOOP-8234 Enable user group mappings on Windows

2012-09-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8858:
--

Attachment: HADOOP-8858-branch-trunk-win.patch

The original patch was contributed by Bikas Saha via Sanjay on branch-1-win.  
To see that changeset:

{noformat}
git diff-tree -p 2d5142a --no-prefix

svn diff -c r1310617
{noformat}


 backport branch-trunk-win: HADOOP-8234 Enable user group mappings on Windows
 

 Key: HADOOP-8858
 URL: https://issues.apache.org/jira/browse/HADOOP-8858
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8858-branch-trunk-win.patch


 Backport the code for HADOOP-8234 to enable user group mappings on Windows.  
 This code had been committed to branch-1-win.  This issue tracks backporting 
 to branch-trunk-win, in preparation for merging all the way to trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8803) Make Hadoop running more secure public cloud environment

2012-09-26 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464309#comment-13464309
 ] 

Luke Lu commented on HADOOP-8803:
-

bq. The attacker just uses the compromised machine to dump the network packets 
it can observe and fetch information

In switched networks, which is how all reasonable clusters are configured, you 
only see the traffic to/from the compromised NIC. The attacker would already 
have access to all the blocks local to the compromised node. Your block token 
will not realistically improve security in this case.

bq. A uniformly configured cluster would weaken my proposal. But it is an 
implementation issue and it depends on how users deploy Hadoop.

There are only a handful of viable OSes that can run Hadoop effectively. 
Managing a single Hadoop cluster with different OSes (version doesn't matter 
that much) would be an admin's nightmare (unless of course they use vsphere :). 
It's not gonna happen in practice. OTOH, it might be useful to create different 
zones of DNs with a per-zone secret key. Per-host keys don't scale w.r.t. 
replicas.

bq. I feel that your goal is to make Hadoop fully secured, so no bad guys can 
get in. My goal is to reduce the damage if bad guys do get in.

No, my point is that your proposal is based on unrealistic assumptions. It 
greatly increases the complexity of the system and negatively impacts 
performance, while not tangibly improving security in practice. This is not a 
good trade-off. 

 Make Hadoop running more secure public cloud environment
 

 Key: HADOOP-8803
 URL: https://issues.apache.org/jira/browse/HADOOP-8803
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, ipc, security
Affects Versions: 0.20.204.0
Reporter: Xianqing Yu
  Labels: hadoop
   Original Estimate: 2m
  Remaining Estimate: 2m

 I am a Ph.D. student at North Carolina State University. I am modifying 
 Hadoop's code (covering most parts of Hadoop, e.g. JobTracker, 
 TaskTracker, NameNode, DataNode) to achieve better security.
  
 My major goal is to make Hadoop run more securely in the Cloud 
 environment, especially the public Cloud. In order to achieve that, I 
 redesign the current security mechanism to provide the following 
 properties:
 1. Bring byte-level access control to Hadoop HDFS. As of 0.20.204, HDFS 
 access control is based on user or block granularity, e.g. the HDFS Delegation 
 Token only checks whether the file can be accessed by a certain user, and the 
 Block Token only proves which block or blocks can be accessed. I make Hadoop 
 do byte-granularity access control: each access party, user or task process, 
 can only access the bytes it actually needs.
 2. I assume that in the public Cloud environment, only the Namenode, secondary 
 Namenode, and JobTracker can be trusted. A large number of Datanodes and 
 TaskTrackers may be compromised, because some of them may be running in less 
 secure environments. So I redesign the security mechanism to minimize the 
 damage an attacker can do.
  
 a. Redesign the Block Access Token to solve the widely-shared-key problem of 
 HDFS. In the original Block Access Token design, all of HDFS (Namenode and 
 Datanodes) share one master key to generate Block Access Tokens; if one 
 DataNode is compromised, the attacker can obtain the key and generate any 
 Block Access Token he or she wants.
  
 b. Redesign the HDFS Delegation Token to do fine-grained access control for 
 the TaskTracker and Map-Reduce Task processes on HDFS. 
  
 In Hadoop 0.20.204, all TaskTrackers can use their Kerberos credentials 
 to access any files for MapReduce on HDFS, so they have the same privilege as 
 the JobTracker to read or write tokens, copy job files, etc. However, if one 
 of them is compromised, every critical thing in the MapReduce directory (job 
 files, Delegation Tokens) is exposed to the attacker. I solve the problem by 
 making the JobTracker decide which TaskTracker can access which file in the 
 MapReduce directory on HDFS.
  
 For a Task process, once it gets an HDFS Delegation Token, it can access 
 everything belonging to this job or user on HDFS. With my design, it can only 
 access the bytes it needs from HDFS.
  
 There are some other security improvements, such as that a TaskTracker cannot 
 learn information like the blockID from the Block Token (because it is 
 encrypted in my design), and HDFS can optionally set up a secure channel to 
 send data.
  
 With those features, Hadoop can run much more securely in uncertain 
 environments such as the public Cloud. I have already started to test my 
 prototype. I want to know whether the community is interested in my work. Is 
 it valuable work to contribute to production Hadoop?

--
This message is automatically generated by JIRA.

[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464311#comment-13464311
 ] 

Hadoop QA commented on HADOOP-8852:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546759/hadoop-8852.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1533//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1533//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1533//console

This message is automatically generated.

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464319#comment-13464319
 ] 

Karthik Kambatla commented on HADOOP-8852:
--

When I ran test-patch.sh locally, it didn't complain of any findbugs warnings.

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464353#comment-13464353
 ] 

Hadoop QA commented on HADOOP-8855:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546754/hadoop-8855.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1534//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1534//console

This message is automatically generated.

 SSL-based image transfer does not work when Kerberos is disabled
 

 Key: HADOOP-8855
 URL: https://issues.apache.org/jira/browse/HADOOP-8855
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8855.txt, hadoop-8855.txt, hadoop-8855.txt


 In SecurityUtil.openSecureHttpConnection, we first check 
 {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
 kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
 should check {{HttpConfig.isSecure()}}.
 Credit to Wing Yew Poon for discovering this bug

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-8852:
-

Attachment: hadoop-8852.patch

Moved {{DelegationTokenRenewer#start()}} to the constructor.

Ran test-patch.sh locally - findbugs doesn't show any warnings.

{color:green}+1 overall{color}.  

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version ) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.


 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8803) Make Hadoop running more secure public cloud environment

2012-09-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464369#comment-13464369
 ] 

Andy Isaacson commented on HADOOP-8803:
---

bq. In switched networks, which all reasonable clusters are configured, you 
only see the traffic to/from the compromised NIC.
Switch MAC tables are not a security measure; it's pretty easy to fool most 
switches into sending traffic to the wrong port.  Managed switches can often be 
configured to avoid this failure or to alarm when MAC spoofing happens, but 
that's additional admin overhead.

So yeah, it's a real threat that a compromised machine can observe and MITM 
traffic to other hosts in the same broadcast domain.

I'm not sure the block token is the right solution to this problem, but it is a 
real problem.

 Make Hadoop running more secure public cloud environment
 

 Key: HADOOP-8803
 URL: https://issues.apache.org/jira/browse/HADOOP-8803
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, ipc, security
Affects Versions: 0.20.204.0
Reporter: Xianqing Yu
  Labels: hadoop
   Original Estimate: 2m
  Remaining Estimate: 2m

 I am a Ph.D. student at North Carolina State University. I am modifying 
 Hadoop's code (covering most parts of Hadoop, e.g. JobTracker, 
 TaskTracker, NameNode, DataNode) to achieve better security.
  
 My major goal is to make Hadoop run more securely in the Cloud 
 environment, especially the public Cloud. In order to achieve that, I 
 redesign the current security mechanism to provide the following 
 properties:
 1. Bring byte-level access control to Hadoop HDFS. As of 0.20.204, HDFS 
 access control is based on user or block granularity, e.g. the HDFS Delegation 
 Token only checks whether the file can be accessed by a certain user, and the 
 Block Token only proves which block or blocks can be accessed. I make Hadoop 
 do byte-granularity access control: each access party, user or task process, 
 can only access the bytes it actually needs.
 2. I assume that in the public Cloud environment, only the Namenode, secondary 
 Namenode, and JobTracker can be trusted. A large number of Datanodes and 
 TaskTrackers may be compromised, because some of them may be running in less 
 secure environments. So I redesign the security mechanism to minimize the 
 damage an attacker can do.
  
 a. Redesign the Block Access Token to solve the widely-shared-key problem of 
 HDFS. In the original Block Access Token design, all of HDFS (Namenode and 
 Datanodes) share one master key to generate Block Access Tokens; if one 
 DataNode is compromised, the attacker can obtain the key and generate any 
 Block Access Token he or she wants.
  
 b. Redesign the HDFS Delegation Token to do fine-grained access control for 
 the TaskTracker and Map-Reduce Task processes on HDFS. 
  
 In Hadoop 0.20.204, all TaskTrackers can use their Kerberos credentials 
 to access any files for MapReduce on HDFS, so they have the same privilege as 
 the JobTracker to read or write tokens, copy job files, etc. However, if one 
 of them is compromised, every critical thing in the MapReduce directory (job 
 files, Delegation Tokens) is exposed to the attacker. I solve the problem by 
 making the JobTracker decide which TaskTracker can access which file in the 
 MapReduce directory on HDFS.
  
 For a Task process, once it gets an HDFS Delegation Token, it can access 
 everything belonging to this job or user on HDFS. With my design, it can only 
 access the bytes it needs from HDFS.
  
 There are some other security improvements, such as that a TaskTracker cannot 
 learn information like the blockID from the Block Token (because it is 
 encrypted in my design), and HDFS can optionally set up a secure channel to 
 send data.
  
 With those features, Hadoop can run much more securely in uncertain 
 environments such as the public Cloud. I have already started to test my 
 prototype. I want to know whether the community is interested in my work. Is 
 it valuable work to contribute to production Hadoop?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2012-09-26 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464385#comment-13464385
 ] 

Andy Isaacson commented on HADOOP-8386:
---

If it's not an alias, cd hackery is almost always done using a shell function.  
Also, a /bin/cd command cannot work -- running it would fork a child process, 
change the directory of the child process, and then exit, having no impact on 
the parent shell process.
To figure out what your cd is doing, in bash use {{type cd}}.
{noformat}
# first, define a function foo
$ foo() { echo bar; }
# now, run it
$ foo
bar
$ type foo
foo is a function
foo () 
{ 
echo bar
}
$
{noformat}
In dash, {{type}} just says {{foo is a shell function}}.  I bet the original 
user is using bash though.

bq. Fixes the 'hadoop' script to work on Ubuntu distro and others where the 
'cd' command prints to stdout

My Ubuntu 12.04 install doesn't have any aliases or functions defined for cd; 
can you find out what package is installing the evil settings in 
/etc/bash_completion.d (most likely) and file an upstream bug?
{code}
ubuntu@ubu-cdh-0:~$ type cd
cd is a shell builtin
{code}

 hadoop script doesn't work if 'cd' prints to stdout (default behavior in 
 Ubuntu)
 

 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2
 Environment: Ubuntu
Reporter: Christopher Berner
 Attachments: hadoop.diff


 if the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
 command prints to stdout, the script will fail due to this line: 'bin=`cd 
 "$bin"; pwd`'
 Workaround: execute from the bin/ directory as './hadoop'
 Fix: change that line to 'bin=`cd "$bin" > /dev/null; pwd`'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2012-09-26 Thread Christopher Berner (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464391#comment-13464391
 ] 

Christopher Berner commented on HADOOP-8386:


Just checked my installation; I'm using bash and the output of {{type cd}} 
says {{cd is a shell builtin}}.

Will try and test this on a fresh install of 12.04 and see if I can reproduce 
it.

 hadoop script doesn't work if 'cd' prints to stdout (default behavior in 
 Ubuntu)
 

 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2
 Environment: Ubuntu
Reporter: Christopher Berner
 Attachments: hadoop.diff


 if the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
 command prints to stdout, the script will fail due to this line: 'bin=`cd 
 "$bin"; pwd`'
 Workaround: execute from the bin/ directory as './hadoop'
 Fix: change that line to 'bin=`cd "$bin" > /dev/null; pwd`'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Hemanth Yamijala (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464405#comment-13464405
 ] 

Hemanth Yamijala commented on HADOOP-8841:
--

Daryn, +1 to that idea. That way we'd never have them out of sync again.

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add description about the flags in the document for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8841) In trunk for command rm, the flags -[rR] and -f are not documented

2012-09-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464412#comment-13464412
 ] 

Jing Zhao commented on HADOOP-8841:
---

Daryn, I agree. But does that mean users have to run a hadoop instance to see 
the document?

 In trunk for command rm, the flags -[rR] and -f are not documented
 --

 Key: HADOOP-8841
 URL: https://issues.apache.org/jira/browse/HADOOP-8841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HADOOP-8841.001.patch


 We need to add description about the flags in the document for trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Hemanth Yamijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Yamijala updated HADOOP-8776:
-

Attachment: HADOOP-8776.patch

As per discussion, uploading a patch that leaves the native compile enabled by 
default, and removes the hint from the earlier patch.

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in the Hadoop source runs a native compile with the 
 patch. On platforms like Mac, there are issues with the native compile that 
 make it difficult to use test-patch. This JIRA is to provide an option to 
 make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8776) Provide an option in test-patch that can enable / disable compiling native code

2012-09-26 Thread Hemanth Yamijala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Yamijala updated HADOOP-8776:
-

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

 Provide an option in test-patch that can enable / disable compiling native 
 code
 ---

 Key: HADOOP-8776
 URL: https://issues.apache.org/jira/browse/HADOOP-8776
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Hemanth Yamijala
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: HADOOP-8776.patch, HADOOP-8776.patch, HADOOP-8776.patch


 The test-patch script in the Hadoop source runs a native compile with the 
 patch. On platforms like Mac, there are issues with the native compile that 
 make it difficult to use test-patch. This JIRA is to provide an option to 
 make the native compilation optional. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8852) DelegationTokenRenewer thread is not stopped when its filesystem is closed

2012-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464423#comment-13464423
 ] 

Hadoop QA commented on HADOOP-8852:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546790/hadoop-8852.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
  org.apache.hadoop.hdfs.TestPersistBlocks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1535//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1535//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1535//console

This message is automatically generated.

 DelegationTokenRenewer thread is not stopped when its filesystem is closed
 --

 Key: HADOOP-8852
 URL: https://issues.apache.org/jira/browse/HADOOP-8852
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Karthik Kambatla
 Attachments: hadoop-8852.patch, hadoop-8852.patch


 HftpFileSystem and WebHdfsFileSystem should stop the DelegationTokenRenewer 
 thread when they are closed. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8850) Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws NPEs

2012-09-26 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464428#comment-13464428
 ] 

Eli Collins commented on HADOOP-8850:
-

Looks good - thanks for contributing. Please update the patch to use the 
standard coding style (indent is 2 spaces rather than tabs).

 Method org.apache.hadoop.hdfs.TestHftpFileSystem.tearDown() sometimes throws 
 NPEs
 -

 Key: HADOOP-8850
 URL: https://issues.apache.org/jira/browse/HADOOP-8850
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Priority: Minor
 Attachments: HADOOP-8850-vs-trunk.patch


 Recommended to add null checks.
 The suggested patch is attached.
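
For reference, the usual null-guarded pattern looks roughly like this (the 
fields are hypothetical, not necessarily what the attached patch does):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;

public class TearDownSketch {
  private FileSystem fs;          // may be null if setUp() failed early
  private MiniDFSCluster cluster; // may be null if setUp() failed early

  @After
  public void tearDown() throws Exception {
    // Guard against setUp() having failed part-way: either field may be null.
    if (fs != null) {
      fs.close();
    }
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{code}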

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

