[jira] [Commented] (HADOOP-8330) TestSequenceFile.testCreateUsesFsArg() is broken

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265764#comment-13265764
 ] 

Hudson commented on HADOOP-8330:


Integrated in Hadoop-Hdfs-0.23-Build #244 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/244/])
svn merge -c 1332363 from trunk for HADOOP-8330. (Revision 1332367)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332367
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java


 TestSequenceFile.testCreateUsesFsArg() is broken
 

 Key: HADOOP-8330
 URL: https://issues.apache.org/jira/browse/HADOOP-8330
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.3

 Attachments: HADOOP-8330.patch


 It seems HADOOP-8305 broke TestSequenceFile.testCreateUsesFsArg(). Fix the 
 test if the test is broken, or fix the source if the source is at fault.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8335) Improve Configuration's address handling

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265765#comment-13265765
 ] 

Hudson commented on HADOOP-8335:


Integrated in Hadoop-Hdfs-0.23-Build #244 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/244/])
svn merge -c 1332427. FIXES: HADOOP-8335. Improve Configuration's address 
handling (Daryn Sharp via bobby) (Revision 1332430)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332430
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 Improve Configuration's address handling
 

 Key: HADOOP-8335
 URL: https://issues.apache.org/jira/browse/HADOOP-8335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8335.patch


 There's a {{Configuration#getSocketAddr}} but no symmetrical 
 {{setSocketAddr}}.  An {{updateSocketAddr}} would also be very handy for 
 yarn's updating of wildcard addresses in the config.
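
 As a rough illustration only, the sketch below shows what a symmetric setter and a 
 wildcard-rewriting updater could look like. The names setSocketAddr/updateSocketAddr come 
 from this description, but the signatures, helper logic, and the config key and host name 
 used in main() are assumptions for illustration, not the committed Hadoop API.

{code:java}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;

public class SocketAddrConfigSketch {

  // Symmetric counterpart to Configuration#getSocketAddr: store the address as "host:port".
  static void setSocketAddr(Configuration conf, String name, InetSocketAddress addr) {
    conf.set(name, addr.getHostName() + ":" + addr.getPort());
  }

  // Rewrite a wildcard (0.0.0.0) address with the host that was actually bound, keeping the
  // bound port; this is the kind of update yarn needs for wildcard addresses in the config.
  static InetSocketAddress updateSocketAddr(Configuration conf, String name,
                                            InetSocketAddress bound, String actualHost) {
    InetSocketAddress resolved = new InetSocketAddress(actualHost, bound.getPort());
    setSocketAddr(conf, name, resolved);
    return resolved;
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    InetSocketAddress fixed = updateSocketAddr(conf, "example.rpc.address",
        new InetSocketAddress("0.0.0.0", 8020), "node1.example.com");
    System.out.println(conf.get("example.rpc.address") + " -> " + fixed);
  }
}
{code}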

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8334) HttpServer sometimes returns incorrect port

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265766#comment-13265766
 ] 

Hudson commented on HADOOP-8334:


Integrated in Hadoop-Hdfs-0.23-Build #244 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/244/])
svn merge -c 1332336. FIXES: HADOOP-8334. HttpServer sometimes returns 
incorrect port (Daryn Sharp via bobby) (Revision 1332338)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332338
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


 HttpServer sometimes returns incorrect port
 ---

 Key: HADOOP-8334
 URL: https://issues.apache.org/jira/browse/HADOOP-8334
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8334.patch


 {{HttpServer}} is not always returning the correct listening port.
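
 The report is about the gap between the port a server was configured with and the port it 
 actually bound to; one common way this arises is when an ephemeral port (0) is requested. 
 The snippet below is a generic java.net sketch of that distinction, not Hadoop's HttpServer 
 code.

{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BoundPortSketch {
  public static void main(String[] args) throws Exception {
    int configuredPort = 0;  // 0 means "pick any free port"
    try (ServerSocket server = new ServerSocket()) {
      server.bind(new InetSocketAddress("localhost", configuredPort));
      // Reporting configuredPort back to callers would be the bug: it is still 0.
      int actualPort = server.getLocalPort();  // correct: ask the bound socket
      System.out.println("configured=" + configuredPort + " actual=" + actualPort);
    }
  }
}
{code}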

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8305) distcp over viewfs is broken

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265778#comment-13265778
 ] 

Hudson commented on HADOOP-8305:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305. 
 Contributed by John George (Revision 1332363)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java


 distcp over viewfs is broken
 

 Key: HADOOP-8305
 URL: https://issues.apache.org/jira/browse/HADOOP-8305
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8305.patch, HADOOP-8305.patch


 This is similar to MAPREDUCE-4133. distcp over viewfs is broken because 
 getDefaultReplication/BlockSize are being requested with no arguments.
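
 A hedged sketch of the pattern this description points at: over viewfs the no-argument 
 defaults cannot be resolved against a mount point, so callers should ask for defaults 
 relative to a concrete Path. The Path-taking overloads below exist on FileSystem; which 
 distcp call sites the attached patch changes is not shown here.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViewFsDefaultsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path target = new Path(args.length > 0 ? args[0] : "/tmp/example");  // illustrative path
    FileSystem fs = target.getFileSystem(conf);

    // Problematic over viewfs: the no-argument calls have no path to resolve against.
    // short rep = fs.getDefaultReplication();
    // long  bs  = fs.getDefaultBlockSize();

    // Path-qualified calls let viewfs delegate to the filesystem backing the mount point.
    short replication = fs.getDefaultReplication(target);
    long blockSize = fs.getDefaultBlockSize(target);
    System.out.println("replication=" + replication + " blockSize=" + blockSize);
  }
}
{code}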

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8308) Support cross-project Jenkins builds

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265780#comment-13265780
 ] 

Hudson commented on HADOOP-8308:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8308. Support cross-project Jenkins builds. (Revision 1332479)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332479
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Support cross-project Jenkins builds
 

 Key: HADOOP-8308
 URL: https://issues.apache.org/jira/browse/HADOOP-8308
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8308.patch


 This issue is to change test-patch to run only the tests for modules that 
 have changed and then run from the top-level. See discussion at 
 http://mail-archives.aurora.apache.org/mod_mbox/hadoop-common-dev/201204.mbox/%3ccaf-wd4tvkwypuuq9ibxv4uz8b2behxnpfkb5mq3d-pwvksh...@mail.gmail.com%3E.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8330) TestSequenceFile.testCreateUsesFsArg() is broken

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265784#comment-13265784
 ] 

Hudson commented on HADOOP-8330:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305. 
 Contributed by John George (Revision 1332363)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java


 TestSequenceFile.testCreateUsesFsArg() is broken
 

 Key: HADOOP-8330
 URL: https://issues.apache.org/jira/browse/HADOOP-8330
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.3

 Attachments: HADOOP-8330.patch


 It seems HADOOP-8305 broke TestSequenceFile.testCreateUsesFsArg(). Fix the 
 test if the test is broken, or fix the source if the source is at fault.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8335) Improve Configuration's address handling

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265786#comment-13265786
 ] 

Hudson commented on HADOOP-8335:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8335. Improve Configuration's address handling (Daryn Sharp via 
bobby) (Revision 1332427)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332427
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 Improve Configuration's address handling
 

 Key: HADOOP-8335
 URL: https://issues.apache.org/jira/browse/HADOOP-8335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8335.patch


 There's a {{Configuration#getSocketAddr}} but no symmetrical 
 {{setSocketAddr}}.  An {{updateSocketAddr}} would also be very handy for 
 yarn's updating of wildcard addresses in the config.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8312) testpatch.sh should provide a simpler way to see which warnings changed

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265783#comment-13265783
 ] 

Hudson commented on HADOOP-8312:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8312. testpatch.sh should provide a simpler way to see which 
warnings changed (bobby) (Revision 1332417)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332417
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 testpatch.sh should provide a simpler way to see which warnings changed
 ---

 Key: HADOOP-8312
 URL: https://issues.apache.org/jira/browse/HADOOP-8312
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8312.txt


 test-patch.sh reports that a specific number of warnings has changed but it 
 does not provide an easy way to see which ones have changed.  For at least 
 the javac warnings we should be able to provide a diff of the warnings in 
 addition to the total count, because we capture the full compile log both 
 before and after applying the patch.
 For javadoc warnings it would be nice to be able to provide a filtered list 
 of the warnings based off of the files that were modified in the patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265785#comment-13265785
 ] 

Hudson commented on HADOOP-8325:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8325. Add a ShutdownHookManager to be used by different components 
instead of the JVM shutdownhook (tucu) (Revision 1332345)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332345
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ShutdownHookManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileContextDeleteOnExit.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShutdownHookManager.java


 Add a ShutdownHookManager to be used by different components instead of the 
 JVM shutdownhook
 

 Key: HADOOP-8325
 URL: https://issues.apache.org/jira/browse/HADOOP-8325
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.0

 Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
 HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
 HADOOP-8325.patch


 FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
 MRAppMaster also uses a JVM shutdown hook; among other things, the 
 MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
 This creates a race condition: each JVM shutdown hook runs in a separate 
 thread, and when there are multiple JVM shutdown hooks there is no assurance of 
 their order of execution; they could even run in parallel.
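
 The sketch below is a compact illustration of the pattern this issue proposes, not the 
 committed ShutdownHookManager source: register exactly one JVM shutdown hook and run the 
 individual component hooks from it sequentially, in a deterministic, priority-based order.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class OrderedShutdownSketch {
  private static final class Entry {
    final Runnable hook;
    final int priority;
    Entry(Runnable hook, int priority) { this.hook = hook; this.priority = priority; }
  }

  private static final List<Entry> HOOKS = new ArrayList<>();

  static {
    // One JVM hook in total, so independent hook threads can no longer race each other.
    Runtime.getRuntime().addShutdownHook(new Thread(OrderedShutdownSketch::runAll));
  }

  public static synchronized void addShutdownHook(Runnable hook, int priority) {
    HOOKS.add(new Entry(hook, priority));
  }

  private static synchronized void runAll() {
    HOOKS.sort(Comparator.comparingInt((Entry e) -> e.priority).reversed());
    for (Entry e : HOOKS) {
      try {
        e.hook.run();  // higher priority runs first, one hook at a time
      } catch (Throwable t) {
        System.err.println("Shutdown hook failed: " + t);
      }
    }
  }

  public static void main(String[] args) {
    addShutdownHook(() -> System.out.println("write application state"), 10);
    addShutdownHook(() -> System.out.println("close cached filesystems"), 0);
  }
}
{code}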

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8334) HttpServer sometimes returns incorrect port

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265788#comment-13265788
 ] 

Hudson commented on HADOOP-8334:


Integrated in Hadoop-Hdfs-trunk #1031 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1031/])
HADOOP-8334. HttpServer sometimes returns incorrect port (Daryn Sharp via 
bobby) (Revision 1332336)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332336
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


 HttpServer sometimes returns incorrect port
 ---

 Key: HADOOP-8334
 URL: https://issues.apache.org/jira/browse/HADOOP-8334
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8334.patch


 {{HttpServer}} is not always returning the correct listening port.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8308) Support cross-project Jenkins builds

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265814#comment-13265814
 ] 

Hudson commented on HADOOP-8308:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8308. Support cross-project Jenkins builds. (Revision 1332479)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332479
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 Support cross-project Jenkins builds
 

 Key: HADOOP-8308
 URL: https://issues.apache.org/jira/browse/HADOOP-8308
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8308.patch


 This issue is to change test-patch to run only the tests for modules that 
 have changed and then run from the top-level. See discussion at 
 http://mail-archives.aurora.apache.org/mod_mbox/hadoop-common-dev/201204.mbox/%3ccaf-wd4tvkwypuuq9ibxv4uz8b2behxnpfkb5mq3d-pwvksh...@mail.gmail.com%3E.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8305) distcp over viewfs is broken

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265812#comment-13265812
 ] 

Hudson commented on HADOOP-8305:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305. 
 Contributed by John George (Revision 1332363)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java


 distcp over viewfs is broken
 

 Key: HADOOP-8305
 URL: https://issues.apache.org/jira/browse/HADOOP-8305
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8305.patch, HADOOP-8305.patch


 This is similar to MAPREDUCE-4133. distcp over viewfs is broken because 
 getDefaultReplication/BlockSize are being requested with no arguments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8335) Improve Configuration's address handling

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265820#comment-13265820
 ] 

Hudson commented on HADOOP-8335:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8335. Improve Configuration's address handling (Daryn Sharp via 
bobby) (Revision 1332427)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332427
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 Improve Configuration's address handling
 

 Key: HADOOP-8335
 URL: https://issues.apache.org/jira/browse/HADOOP-8335
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8335.patch


 There's a {{Configuration#getSocketAddr}} but no symmetrical 
 {{setSocketAddr}}.  An {{updateSocketAddr}} would also be very handy for 
 yarn's updating of wildcard addresses in the config.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8330) TestSequenceFile.testCreateUsesFsArg() is broken

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265818#comment-13265818
 ] 

Hudson commented on HADOOP-8330:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8330. Update TestSequenceFile.testCreateUsesFsArg() for HADOOP-8305. 
 Contributed by John George (Revision 1332363)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332363
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestSequenceFile.java


 TestSequenceFile.testCreateUsesFsArg() is broken
 

 Key: HADOOP-8330
 URL: https://issues.apache.org/jira/browse/HADOOP-8330
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: John George
Assignee: John George
Priority: Minor
 Fix For: 0.23.3

 Attachments: HADOOP-8330.patch


 It seems HADOOP-8305 broke TestSequenceFile.testCreateUsesFsArg(). Fix the 
 test if the test is broken, or fix the source if the source is at fault.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8334) HttpServer sometimes returns incorrect port

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265822#comment-13265822
 ] 

Hudson commented on HADOOP-8334:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8334. HttpServer sometimes returns incorrect port (Daryn Sharp via 
bobby) (Revision 1332336)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332336
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/HttpServerFunctionalTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


 HttpServer sometimes returns incorrect port
 ---

 Key: HADOOP-8334
 URL: https://issues.apache.org/jira/browse/HADOOP-8334
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3, 0.24.0, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.3, 2.0.0, 3.0.0

 Attachments: HADOOP-8334.patch


 {{HttpServer}} is not always returning the correct listening port.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8312) testpatch.sh should provide a simpler way to see which warnings changed

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265817#comment-13265817
 ] 

Hudson commented on HADOOP-8312:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8312. testpatch.sh should provide a simpler way to see which 
warnings changed (bobby) (Revision 1332417)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332417
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 testpatch.sh should provide a simpler way to see which warnings changed
 ---

 Key: HADOOP-8312
 URL: https://issues.apache.org/jira/browse/HADOOP-8312
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8312.txt


 test-patch.sh reports that a specific number of warnings has changed but it 
 does not provide an easy way to see which ones have changed.  For at least 
 the javac warnings we should be able to provide a diff of the warnings in 
 addition to the total count, because we capture the full compile log both 
 before and after applying the patch.
 For javadoc warnings it would be nice to be able to provide a filtered list 
 of the warnings based off of the files that were modified in the patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265819#comment-13265819
 ] 

Hudson commented on HADOOP-8325:


Integrated in Hadoop-Mapreduce-trunk #1066 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1066/])
HADOOP-8325. Add a ShutdownHookManager to be used by different components 
instead of the JVM shutdownhook (tucu) (Revision 1332345)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332345
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ShutdownHookManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileContextDeleteOnExit.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShutdownHookManager.java


 Add a ShutdownHookManager to be used by different components instead of the 
 JVM shutdownhook
 

 Key: HADOOP-8325
 URL: https://issues.apache.org/jira/browse/HADOOP-8325
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.0

 Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
 HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
 HADOOP-8325.patch


 FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
 MRAppMaster also uses a JVM shutdown hook; among other things, the 
 MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
 This creates a race condition: each JVM shutdown hook runs in a separate 
 thread, and when there are multiple JVM shutdown hooks there is no assurance of 
 their order of execution; they could even run in parallel.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-01 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Open  (was: Patch Available)

 FileContext does not support setWriteChecksum
 -

 Key: HADOOP-8319
 URL: https://issues.apache.org/jira/browse/HADOOP-8319
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8319.patch, HADOOP-8319.patch


 FileContext does not support setWriteChecksum, so users trying
 to use this functionality fail.
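
 To make the gap concrete, here is a hedged illustration: FileSystem already exposes 
 setWriteChecksum, while FileContext (before this patch) has no equivalent. The FileContext 
 call in the comment below is hypothetical; its exact signature is whatever the attached 
 patch adds.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;

public class WriteChecksumSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Works today: disable checksum files for subsequent writes through FileSystem.
    FileSystem fs = FileSystem.getLocal(conf);
    fs.setWriteChecksum(false);

    // No equivalent exists on FileContext before this patch, so users migrating
    // from FileSystem to FileContext hit the gap:
    FileContext fc = FileContext.getLocalFSFileContext();
    // fc.setWriteChecksum(false);  // hypothetical -- the method this JIRA would add
    System.out.println("FileContext in use: " + fc);
  }
}
{code}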

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-01 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

 FileContext does not support setWriteChecksum
 -

 Key: HADOOP-8319
 URL: https://issues.apache.org/jira/browse/HADOOP-8319
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch


 FileContext does not support setWriteChecksum, so users trying
 to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-01 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Attachment: HADOOP-8319.patch

Retrying the same patch to see whether HADOOP-8308 now lets it 
run.

 FileContext does not support setWriteChecksum
 -

 Key: HADOOP-8319
 URL: https://issues.apache.org/jira/browse/HADOOP-8319
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch


 FileContext does not support setWriteChecksum, so users trying
 to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8326) test-patch can leak processes in some cases

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8326:


Attachment: HADOOP-8326.txt

This patch no longer uses pgrep, just ps, grep and awk.  I have tested the code 
manually and verified that it works as expected.  I have not tested it with 
jenkins nor am I able to unless I get more permissions to edit the jenkins 
setup.

 test-patch can leak processes in some cases
 ---

 Key: HADOOP-8326
 URL: https://issues.apache.org/jira/browse/HADOOP-8326
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8326.txt, HADOOP-8326.txt


 test-patch.sh can leak processes in some cases.  These leaked processes can 
 cause subsequent tests to fail because they are holding resources, like ports 
 that the others may need to execute correctly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8326) test-patch can leak processes in some cases

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8326:


Status: Patch Available  (was: Open)

 test-patch can leak processes in some cases
 ---

 Key: HADOOP-8326
 URL: https://issues.apache.org/jira/browse/HADOOP-8326
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8326.txt, HADOOP-8326.txt


 test-patch.sh can leak processes in some cases.  These leaked processes can 
 cause subsequent tests to fail because they are holding resources, like ports 
 that the others may need to execute correctly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265850#comment-13265850
 ] 

Hadoop QA commented on HADOOP-8319:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525174/HADOOP-8319.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/910//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/910//console

This message is automatically generated.

 FileContext does not support setWriteChecksum
 -

 Key: HADOOP-8319
 URL: https://issues.apache.org/jira/browse/HADOOP-8319
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0, 3.0.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch


 FileContext does not support setWriteChecksum, so users trying
 to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8326) test-patch can leak processes in some cases

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265855#comment-13265855
 ] 

Hadoop QA commented on HADOOP-8326:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525175/HADOOP-8326.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 16 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/911//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/911//console

This message is automatically generated.

 test-patch can leak processes in some cases
 ---

 Key: HADOOP-8326
 URL: https://issues.apache.org/jira/browse/HADOOP-8326
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8326.txt, HADOOP-8326.txt


 test-patch.sh can leak processes in some cases.  These leaked processes can 
 cause subsequent tests to fail because they are holding resources, like ports 
 that the others may need to execute correctly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)
Owen O'Malley created HADOOP-8338:
-

 Summary: Can't renew or cancel HDFS delegation tokens over secure 
RPC
 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley


The fetchdt tool is failing for secure deployments when given --renew or 
--cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HADOOP-8338:
--

Target Version/s: 1.0.3

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley

 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HADOOP-8338:
--

Attachment: hadoop-8338.patch

The problem is that fetchdt doesn't include the hdfs-site.xml and therefore 
doesn't get the value of dfs.namenode.kerberos.principal.
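
A minimal sketch of the kind of fix described above (illustrative only; the actual fetchdt 
change is in the attached patch): make sure hdfs-site.xml is loaded into the Configuration 
used for the renew/cancel RPC so that dfs.namenode.kerberos.principal is visible.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class FetchdtConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();  // loads core-default.xml and core-site.xml
    conf.addResource("hdfs-site.xml");         // also pick up the NameNode Kerberos principal

    String principal = conf.get("dfs.namenode.kerberos.principal");
    if (principal == null) {
      System.err.println("Principal still unset -- a secure renew/cancel RPC would fail");
    } else {
      System.out.println("Will authenticate the NameNode as: " + principal);
    }
  }
}
{code}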

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: hadoop-8338.patch


 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-8339:
-

 Summary: jenkins complaining about 16 javadoc warnings 
 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves


See any of the mapreduce/hadoop jenkins reports recently and they all complain 
about 16 javadoc warnings.


-1 javadoc.  The javadoc tool appears to have generated 16 warning messages.

Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265955#comment-13265955
 ] 

Thomas Graves commented on HADOOP-8339:
---

It looks like the test-patch.properties file at the top-level /dev-support 
needs to be updated to take into account each sub-project.  6 of the warnings 
are from rumen, which never got built before.

Also, the sub-project test-patch.properties files need to be updated too, as common and 
hdfs allow more warnings than are actually generated.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves

 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-01 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265958#comment-13265958
 ] 

Aaron T. Myers commented on HADOOP-8279:


Patch looks pretty good to me, Todd. A few little comments:

# -forceFence doesn't seem to have any real use cases with auto-HA so it isn't 
implemented. - I don't follow the reasoning. Seems like it should be just as 
applicable to auto-HA as manual, no?
# If the attempt to transition to standby succeeds, then the ZKFC will delete 
the breadcrumb node in ZooKeeper - might want to specify which ZKFC will do 
the deletion.
# If the node is healthy and not active, it sends an RPC to the current 
active, asking it to yield from the election. - it actually sends an RPC to 
the ZKFC associated with the current active.
# if the current active does not respond to the graceful request, throws an 
exception indicating the reason for failure. - I recommend you make it 
explicit which graceful request this is referring to. In fact, if the active NN 
fails to respond to the graceful request to transition to standby, it will be 
fenced. It's the failure of the active ZKFC to respond to the cedeActive calls 
that results in a failure of gracefulFailover.
# I think you need interface annotations on ZKFCRpcServer, or perhaps it can be 
made package-private?
# In ZKFCProtocol#cedeActive you declare the parameter to be in millis, but in 
the ZKFCRpcServer#cedeActive implementation, you say the period is in seconds.
# I don't see much point in having both ZKFCRpcServer#stop and 
ZKFCRpcServer#join. Why not just call this.server.join in ZKFCRpcServer#stop?
# periodically check health state since, because entering an - doesn't quite 
parse.
# I think the log message about the timeout elapsing in 
ZKFailoverController#waitForActiveAttempt should probably be at least at WARN 
level instead of INFO.
# It's possible that it's in standby but just about to go into active, no? Is 
there some race here? - should this comment now be removed?
# I recommend you change the value of DFS_HA_ZKFC_PORT_DEFAULT to something 
other than 8021. I've seen a lot of JTs in the wild with their default port set 
to 8021.
# The design in the document posted to HDFS-2185 mentions introducing -to and 
-from parameters to the `haadmin -failover' command, but this implementation 
doesn't do that. That seems fine by me, but I'm curious why you chose to do it 
this way.

 Auto-HA: Allow manual failover to be invoked from zkfc.
 ---

 Key: HADOOP-8279
 URL: https://issues.apache.org/jira/browse/HADOOP-8279
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Mingjie Lai
Assignee: Todd Lipcon
 Fix For: Auto Failover (HDFS-3042)

 Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
 hadoop-8279.txt


 HADOOP-8247 introduces a configure flag to prevent potential status 
 inconsistency between zkfc and namenode, by making auto and manual failover 
 mutually exclusive.
 However, as described in 2.7.2 section of design doc at HDFS-2185, we should 
 allow manual and auto failover co-exist, by:
 - adding some rpc interfaces at zkfc
 - manual failover shall be triggered by haadmin, and handled by zkfc if auto 
 failover is enabled. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7348) Modify the option of FsShell getmerge from [addnl] to [-nl] for consistency

2012-05-01 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7348:


Description: 
The [addnl] option of FsShell getmerge should be either true or false, but 
it is very hard for users to understand, especially those who have never used this 
option before. 
So, the [addnl] option should be changed to [-nl] for consistency.


  was:
The [addnl] option of FsShell getmerge should be either true or false,but 
it is very hard to understand by users, especially  who`s never used this 
option before. 
So,the [addnl] option should be changed to [-nl] for more comprehensive.


Summary: Modify the option of FsShell getmerge from [addnl] to [-nl] 
for consistency  (was: Modify the option of FsShell getmerge from [addnl] to 
[-nl] for more comprehensive)

 Modify the option of FsShell getmerge from [addnl] to [-nl] for consistency
 ---

 Key: HADOOP-7348
 URL: https://issues.apache.org/jira/browse/HADOOP-7348
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.23.0
Reporter: XieXianshan
Assignee: XieXianshan
 Fix For: 0.23.1

 Attachments: HADOOP-7348-v0.3.patch, HADOOP-7348-v0.4.patch, 
 HADOOP-7348.patch, HADOOP-7348.patch, HADOOP-7348.patch, HADOOP-7348.patch_2


 The [addnl] option of FsShell getmerge should be either true or false, but 
 it is very hard for users to understand, especially those who have never used this 
 option before. 
 So, the [addnl] option should be changed to [-nl] for consistency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8317:


   Resolution: Fixed
Fix Version/s: 3.0.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks Radim,

+1 for the small change.  I put this into trunk and branch-2.

 Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
 --

 Key: HADOOP-8317
 URL: https://issues.apache.org/jira/browse/HADOOP-8317
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3, 2.0.0
 Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
Reporter: Radim Kolar
 Fix For: 2.0.0, 3.0.0

 Attachments: assembly-plugin-update.txt


 There is a bug in the hadoop-assembly plugin that makes builds fail on FreeBSD 
 because its chmod does not understand non-standard Linux parameters. Unless you 
 do mvn clean before every build, it fails with:
 [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
 [WARNING] The following patterns were never triggered in this artifact 
 exclusion filter:
 o  'org.apache.ant:*:jar'
 o  'jdiff:jdiff:jar'
 [INFO] Copying files to 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
 [WARNING] ---
 [WARNING] Standard error:
 [WARNING] ---
 [WARNING] 
 [WARNING] ---
 [WARNING] Standard output:
 [WARNING] ---
 [WARNING] chmod: 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
  Inappropriate file type or format
 [WARNING] ---
 mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
 projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
 sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans reassigned HADOOP-8339:
---

Assignee: Robert Joseph Evans

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans

 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-8339:
--

Attachment: HADOOP-8339.patch

I had a look at this earlier - here's the patch I came up with. Note that we 
can remove all but the top-level test-patch.properties. Also, we still need to 
record the allowed number of javadoc warnings (6) since the ones resulting from 
use of com.sun packages cannot be easily suppressed.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265978#comment-13265978
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Hdfs-trunk-Commit #2241 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2241/])
HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD 
(Radim Kolar via bobby) (Revision 1332775)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332775
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


 Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
 --

 Key: HADOOP-8317
 URL: https://issues.apache.org/jira/browse/HADOOP-8317
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3, 2.0.0
 Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
Reporter: Radim Kolar
 Fix For: 2.0.0, 3.0.0

 Attachments: assembly-plugin-update.txt


 There is a bug in the hadoop-assembly plugin which makes builds fail on FreeBSD 
 because its chmod does not understand nonstandard Linux parameters. Unless you 
 do mvn clean before every build it fails with:
 [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
 [WARNING] The following patterns were never triggered in this artifact 
 exclusion filter:
 o  'org.apache.ant:*:jar'
 o  'jdiff:jdiff:jar'
 [INFO] Copying files to 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
 [WARNING] ---
 [WARNING] Standard error:
 [WARNING] ---
 [WARNING] 
 [WARNING] ---
 [WARNING] Standard output:
 [WARNING] ---
 [WARNING] chmod: 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
  Inappropriate file type or format
 [WARNING] ---
 mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
 projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
 sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265980#comment-13265980
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Common-trunk-Commit #2167 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2167/])
HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD 
(Radim Kolar via bobby) (Revision 1332775)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332775
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


 Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
 --

 Key: HADOOP-8317
 URL: https://issues.apache.org/jira/browse/HADOOP-8317
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3, 2.0.0
 Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
Reporter: Radim Kolar
 Fix For: 2.0.0, 3.0.0

 Attachments: assembly-plugin-update.txt


 There is a bug in the hadoop-assembly plugin which makes builds fail on FreeBSD 
 because its chmod does not understand nonstandard Linux parameters. Unless you 
 do mvn clean before every build it fails with:
 [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
 [WARNING] The following patterns were never triggered in this artifact 
 exclusion filter:
 o  'org.apache.ant:*:jar'
 o  'jdiff:jdiff:jar'
 [INFO] Copying files to 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
 [WARNING] ---
 [WARNING] Standard error:
 [WARNING] ---
 [WARNING] 
 [WARNING] ---
 [WARNING] Standard output:
 [WARNING] ---
 [WARNING] chmod: 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
  Inappropriate file type or format
 [WARNING] ---
 mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
 projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
 sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8340:
---

 Summary: SNAPSHOT build versions should compare as less than their 
eventual final release
 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


We recently added a utility function to compare two version strings, based on 
splitting on '.'s and comparing each component. However, it considers a version 
like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, since 
SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8340:


Attachment: hadoop-8340.txt

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8340:


Status: Patch Available  (was: Open)

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13265996#comment-13265996
 ] 

Jonathan Eagles commented on HADOOP-8339:
-

Minor thought. Can the greater-than check for warnings be changed to 
not-equal-to? This will give us better visibility, for example, when javadoc 
warnings are reduced in code but not in the properties file.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266002#comment-13266002
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Mapreduce-trunk-Commit #2183 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2183/])
HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD 
(Radim Kolar via bobby) (Revision 1332775)

 Result = ABORTED
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332775
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


 Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
 --

 Key: HADOOP-8317
 URL: https://issues.apache.org/jira/browse/HADOOP-8317
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3, 2.0.0
 Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
Reporter: Radim Kolar
 Fix For: 2.0.0, 3.0.0

 Attachments: assembly-plugin-update.txt


 There is a bug in the hadoop-assembly plugin which makes builds fail on FreeBSD 
 because its chmod does not understand nonstandard Linux parameters. Unless you 
 do mvn clean before every build it fails with:
 [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
 [WARNING] The following patterns were never triggered in this artifact 
 exclusion filter:
 o  'org.apache.ant:*:jar'
 o  'jdiff:jdiff:jar'
 [INFO] Copying files to 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
 [WARNING] ---
 [WARNING] Standard error:
 [WARNING] ---
 [WARNING] 
 [WARNING] ---
 [WARNING] Standard output:
 [WARNING] ---
 [WARNING] chmod: 
 /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
  Inappropriate file type or format
 [WARNING] ---
 mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
 projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
 sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-01 Thread Dave Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266006#comment-13266006
 ] 

Dave Thompson commented on HADOOP-8327:
---

Logalyzer and TestCopyFiles are two utilities that depend on DistCp v1, and 
further use an incompatible constructor. I suggest renaming the DistCp (v1) 
class to DistCpV1 for now, which will prevent random distcp failures from the 
above problem and will not affect those utilities that still depend on it. 
Further, any external utilities that use this class will be flushed out, but the 
class will still be accessible (though now called DistCpV1).

DistCp (v2) will still remain (untouched) as DistCp.
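
To make the proposal concrete, here is a minimal sketch of what the renamed v1 entry point could look like. The package, superclass and entry-point shape are assumptions for illustration only, not details taken from the eventual patch:

{code}
package org.apache.hadoop.tools;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * Illustrative only: the legacy copier kept under a name that can no longer
 * collide with the v2 DistCp class when both jars sit on the same classpath.
 * Callers such as Logalyzer/TestCopyFiles would reference this name explicitly.
 */
@Deprecated
public class DistCpV1 extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // ... existing v1 copy logic, unchanged apart from the class name ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new DistCpV1(), args));
  }
}
{code}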

 distcpv2 and distcpv1 jars should not coexist
 -

 Key: HADOOP-8327
 URL: https://issues.apache.org/jira/browse/HADOOP-8327
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2
Reporter: Dave Thompson
Assignee: Dave Thompson

 Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
 (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
 hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
 directory.   This causes some nondeterministic problems, where v1 is launched 
 when v2 is intended, or even v2 is launched but may later fail on various 
 nodes because of a mismatch with v1.
 According to
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
 (Understanding class path wildcards)
 The order in which the JAR files in a directory are enumerated in the 
 expanded class path is not specified and may vary from platform to platform 
 and even from moment to moment on the same machine.
 Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
 of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266007#comment-13266007
 ] 

Hadoop QA commented on HADOOP-8340:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525199/hadoop-8340.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 16 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/912//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/912//console

This message is automatically generated.

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8339:


Attachment: HADOOP-8339.txt

This is really just a copy of Tom White's patch with the 6 allowable javadoc 
warnings set, and an -eq instead of a -gt.  All I really did was test this.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8339:


Status: Patch Available  (was: Open)

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8104) Inconsistent Jackson versions

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8104:


Fix Version/s: 0.23.3

I just pulled this into 0.23.3

 Inconsistent Jackson versions
 -

 Key: HADOOP-8104
 URL: https://issues.apache.org/jira/browse/HADOOP-8104
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Colin Patrick McCabe
Assignee: Alejandro Abdelnur
 Fix For: 0.23.3, 2.0.0

 Attachments: HADOOP-7470.patch, HADOOP-8104.patch, HADOOP-8104.patch, 
 dependency-tree-old.txt


 This is a maven build issue.
 Jersey 1.8 is pulling in version 1.7.1 of Jackson.  Meanwhile, we are 
 manually specifying that we want version 1.8 of Jackson in the POM files.  
 This causes a conflict where Jackson produces unexpected results when 
 serializing Map objects.
 How to reproduce: try this code:
 {quote}
 ObjectMapper mapper = new ObjectMapper();
 Map<String, Object> m = new HashMap<String, Object>();
 mapper.writeValue(new File("foo"), m);
 {quote}
 You will get an exception:
 {quote}
 Exception in thread "main" java.lang.NoSuchMethodError: 
 org.codehaus.jackson.type.JavaType.isMapLikeType()Z
 at 
 org.codehaus.jackson.map.ser.BasicSerializerFactory.buildContainerSerializer(BasicSerializerFactory.java:396)
 at 
 org.codehaus.jackson.map.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:267)
 {quote}
 Basically the inconsistent versions of various Jackson components are causing 
 this NoSuchMethod error.
 As far as I know, this only occurs when serializing maps-- that's why it 
 hasn't been found and fixed yet.
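
 One quick way to confirm which Jackson artifacts actually won on the classpath is to print where the two halves of Jackson 1.x were loaded from. This is only a diagnostic sketch, not part of the attached patch:

{code}
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.type.JavaType;

public class JacksonVersionCheck {
  public static void main(String[] args) {
    // Mixed locations (e.g. jackson-mapper-asl-1.8.x next to
    // jackson-core-asl-1.7.x) reproduce the NoSuchMethodError above.
    System.out.println(ObjectMapper.class.getProtectionDomain()
        .getCodeSource().getLocation());
    System.out.println(JavaType.class.getProtectionDomain()
        .getCodeSource().getLocation());
  }
}
{code}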

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266019#comment-13266019
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8340:


Looks good.  Minor comments:
- remove System.err.println
- Use com.google.common.collect.ComparisonChain, i.e.
{code}
return ComparisonChain.start()
.compare(version1Parts.length, version2Parts.length)
.compare(isSnapshot2, isSnapshot1)
.result();
{code}
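
For reference, a self-contained sketch of how a snapshot-aware comparison could be assembled around that chain; the helper and variable names mirror the snippet above but are assumptions, not the actual patch (which may differ, e.g. in how the -SNAPSHOT suffix is stripped):

{code}
import com.google.common.collect.ComparisonChain;

public class VersionComparisonSketch {
  /** Returns a negative, zero or positive value; "2.0.0-SNAPSHOT" compares as less than "2.0.0". */
  public static int compareVersions(String v1, String v2) {
    boolean isSnapshot1 = v1.endsWith("-SNAPSHOT");
    boolean isSnapshot2 = v2.endsWith("-SNAPSHOT");
    String[] version1Parts = stripSnapshot(v1).split("\\.");
    String[] version2Parts = stripSnapshot(v2).split("\\.");
    for (int i = 0; i < version1Parts.length && i < version2Parts.length; i++) {
      int c = Integer.parseInt(version1Parts[i]) - Integer.parseInt(version2Parts[i]);
      if (c != 0) {
        return c;
      }
    }
    // Same numeric prefix: a longer version is newer, and a snapshot of an
    // otherwise identical version sorts before its final release.
    return ComparisonChain.start()
        .compare(version1Parts.length, version2Parts.length)
        .compareFalseFirst(isSnapshot2, isSnapshot1)
        .result();
  }

  private static String stripSnapshot(String v) {
    return v.endsWith("-SNAPSHOT")
        ? v.substring(0, v.length() - "-SNAPSHOT".length()) : v;
  }
}
{code}

With this, compareVersions("2.0.0-SNAPSHOT", "2.0.0") comes out negative, which is the ordering this JIRA asks for.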


 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8339:


Attachment: HADOOP-8339.txt

Oops, I forgot to remove a shortcut I added for testing.  This patch should be 
good.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-01 Thread Robert Joseph Evans (JIRA)
Robert Joseph Evans created HADOOP-8341:
---

 Summary: Fix or filter findbugs issues in hadoop-tools
 Key: HADOOP-8341
 URL: https://issues.apache.org/jira/browse/HADOOP-8341
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans


Now that the precommit build can test hadoop-tools we need to fix or filter the 
many findbugs warnings that are popping up in there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8340:


Attachment: hadoop-8340.txt

Oops, sorry about the println. I used ComparisonChain like you suggested - good 
call, much easier to follow.

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266024#comment-13266024
 ] 

Todd Lipcon commented on HADOOP-8340:
-

Worth noting that when we commit this, we also have to commit a change to HDFS 
to make the new minimum version 2.0.0-SNAPSHOT instead of 2.0.0. Then when we 
cut a 2.0 release candidate, we need to change the minimum back to 2.0.0 to 
disallow snapshots from connecting to 2.0.0 clusters.

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266030#comment-13266030
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8340:


+1 patch looks good.

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266034#comment-13266034
 ] 

Hadoop QA commented on HADOOP-8339:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525204/HADOOP-8339.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 10 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-extras 
hadoop-tools/hadoop-rumen.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/913//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/913//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/913//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-extras.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/913//console

This message is automatically generated.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266037#comment-13266037
 ] 

Owen O'Malley commented on HADOOP-8338:
---

Nicholas,
  I tend to think that HdfsConfiguration is a mistake, but it doesn't even 
exist in 1.x. I guess the trunk version of the patch should use it.
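
Concretely, "using it" in the trunk version of the patch would amount to something like the following when building the Configuration handed to the token renew/cancel path. This is only a sketch under that assumption; the 1.x patch necessarily sticks with plain Configuration, since HdfsConfiguration does not exist there:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.security.token.Token;

public class RenewSketch {
  // HdfsConfiguration forces hdfs-default.xml / hdfs-site.xml to be loaded,
  // so HDFS-specific settings are visible to the renewer; a plain
  // Configuration would not pull them in automatically.
  public static long renew(Token<?> token) throws Exception {
    Configuration conf = new HdfsConfiguration();
    return token.renew(conf);
  }
}
{code}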

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: hadoop-8338.patch


 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Moved] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-01 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur moved HDFS-3337 to HADOOP-8342:
--

  Component/s: (was: hdfs client)
   fs
 Target Version/s:   (was: 2.0.0)
Affects Version/s: (was: 2.0.0)
   2.0.0
  Key: HADOOP-8342  (was: HDFS-3337)
  Project: Hadoop Common  (was: Hadoop HDFS)

 HDFS command fails with exception following merge of HADOOP-8325
 

 Key: HADOOP-8342
 URL: https://issues.apache.org/jira/browse/HADOOP-8342
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
 Environment: QE tests on version 2.0.1205010603
Reporter: Randy Clayton
Assignee: Alejandro Abdelnur
 Attachments: HDFS-3337.patch


 We are seeing most hdfs commands in our nightly acceptance tests fail with an 
 exception as shown below. This started within a few hours of the merge of 
 HADOOP-8325 on 4/30/2012
 hdfs --config conf/hadoop/ dfs -ls dirname
 ls: `dirname': No such file or directory
 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 
 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in 
 progress, cannot remove a shutdownHook
 java.lang.IllegalStateException: Shutdown in progress, cannot remove a 
 shutdownHook
   at 
 org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
   at 
 org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
   at 
 org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
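
 For context, the exception is thrown when a shutdown hook is being de-registered while the JVM is already shutting down. A guard of roughly the following shape avoids it; this is a sketch of the general idea only, not the actual fix for this issue:

{code}
import org.apache.hadoop.util.ShutdownHookManager;

public final class ShutdownGuardSketch {
  /** De-register a hook only if the JVM is not already shutting down. */
  public static void removeQuietly(Runnable hook) {
    ShutdownHookManager mgr = ShutdownHookManager.get();
    if (!mgr.isShutdownInProgress()) {
      mgr.removeShutdownHook(hook);
    }
  }
}
{code}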

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266055#comment-13266055
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-8338:


Oops, I missed that the patch is for 1.x.  Why is HdfsConfiguration a mistake?

+1 on the 1.x patch.

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: hadoop-8338.patch


 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-01 Thread Hari Mankude (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266060#comment-13266060
 ] 

Hari Mankude commented on HADOOP-8279:
--

Todd,

Is the feature going to switch the active NN to standby state, or will it result 
in the active NN getting fenced and hence going away?

thanks

 Auto-HA: Allow manual failover to be invoked from zkfc.
 ---

 Key: HADOOP-8279
 URL: https://issues.apache.org/jira/browse/HADOOP-8279
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Mingjie Lai
Assignee: Todd Lipcon
 Fix For: Auto Failover (HDFS-3042)

 Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
 hadoop-8279.txt


 HADOOP-8247 introduces a configuration flag to prevent potential status 
 inconsistency between the zkfc and the namenode, by making auto and manual failover 
 mutually exclusive.
 However, as described in section 2.7.2 of the design doc at HDFS-2185, we should 
 allow manual and auto failover to co-exist, by:
 - adding some rpc interfaces at the zkfc
 - manual failover shall be triggered by haadmin, and handled by the zkfc if auto 
 failover is enabled. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266065#comment-13266065
 ] 

Hadoop QA commented on HADOOP-8339:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525206/HADOOP-8339.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 10 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-extras 
hadoop-tools/hadoop-rumen.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/914//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/914//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/914//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-extras.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/914//console

This message is automatically generated.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-01 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266067#comment-13266067
 ] 

Todd Lipcon commented on HADOOP-8279:
-

Hari: yes, it gracefully transitions the NN to standby state, and doesn't cause 
fencing. Fencing only results if the previous active has crashed (i.e. not 
responding to the request). Please refer to the design doc referenced in the 
description of the JIRA.

 Auto-HA: Allow manual failover to be invoked from zkfc.
 ---

 Key: HADOOP-8279
 URL: https://issues.apache.org/jira/browse/HADOOP-8279
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Mingjie Lai
Assignee: Todd Lipcon
 Fix For: Auto Failover (HDFS-3042)

 Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
 hadoop-8279.txt


 HADOOP-8247 introduces a configuration flag to prevent potential status 
 inconsistency between the zkfc and the namenode, by making auto and manual failover 
 mutually exclusive.
 However, as described in section 2.7.2 of the design doc at HDFS-2185, we should 
 allow manual and auto failover to co-exist, by:
 - adding some rpc interfaces at the zkfc
 - manual failover shall be triggered by haadmin, and handled by the zkfc if auto 
 failover is enabled. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8339:


Attachment: HADOOP-8339.txt

I feel dumb; I had an -eq when it should have been a -ne.  Thanks for seeing that, 
Tom.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266075#comment-13266075
 ] 

Hadoop QA commented on HADOOP-8340:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525207/hadoop-8340.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 16 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/915//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/915//console

This message is automatically generated.

 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based on 
 splitting on '.'s and comparing each component. However, it considers a 
 version like 2.0.0-SNAPSHOT as being greater than 2.0.0. This isn't right, 
 since SNAPSHOT builds come before the final release.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266079#comment-13266079
 ] 

Tom White commented on HADOOP-8339:
---

Robert - your latest change looks good to me.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8341) Fix or filter findbugs issues in hadoop-tools

2012-05-01 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266084#comment-13266084
 ] 

Robert Joseph Evans commented on HADOOP-8341:
-

This is the breakdown of the existing findbugs issues.

 * Streaming - 7 (mostly returning internal data)
 * distcp - 2 (threading issues)
 * archives - 1 (casting Configuration to JobConf)
 * Rumen - 8 (returning internal data and serialization)
 * extras - 2 (some things should be marked final)
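
As an illustration of the largest category above, "returning internal data" (findbugs pattern EI_EXPOSE_REP) is typically either filtered in the module's findbugs exclude file or fixed with a defensive copy. A generic sketch, not code from any of the listed modules:

{code}
import java.util.Arrays;

public class CountersSketch {
  private long[] counters = new long[8];

  // Returning the field itself lets callers mutate internal state, which is
  // what findbugs flags as EI_EXPOSE_REP; handing out a copy resolves it.
  public long[] getCounters() {
    return Arrays.copyOf(counters, counters.length);
  }
}
{code}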


 Fix or filter findbugs issues in hadoop-tools
 -

 Key: HADOOP-8341
 URL: https://issues.apache.org/jira/browse/HADOOP-8341
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans

 Now that the precommit build can test hadoop-tools we need to fix or filter 
 the many findbugs warnings that are popping up in there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Philip Zeyliger (JIRA)
Philip Zeyliger created HADOOP-8343:
---

 Summary: Allow configuration of authorization for JmxJsonServlet 
and MetricsServlet
 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger


When using authorization for the daemons' web server, it would be useful to 
specifically control the authorization requirements for accessing /jmx and 
/metrics.  Currently, they require administrative access.  This JIRA proposes 
that whether they are available only to administrators or to all 
users be controlled by hadoop.instrumentation.requires.administrator (or 
similar).  The default would be that administrator access is required.
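
A minimal sketch of how a servlet such as JmxJsonServlet might consult such a switch; the key name and default simply mirror the proposal above and are not an existing Hadoop configuration option:

{code}
import org.apache.hadoop.conf.Configuration;

public class InstrumentationAccessSketch {
  /** True if /jmx and /metrics should stay restricted to administrators. */
  public static boolean requiresAdminAccess(Configuration conf) {
    return conf.getBoolean("hadoop.instrumentation.requires.administrator", true);
  }
}
{code}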

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-01 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8327:
--

Attachment: HADOOP-8327-branch-0.23.2.patch

The attached patch renames the v1 DistCp class to DistCpV1.   Further, the 
utilities that depend on it, Logalyzer and TestCopyFiles, are adjusted to 
use the renamed DistCpV1.

 distcpv2 and distcpv1 jars should not coexist
 -

 Key: HADOOP-8327
 URL: https://issues.apache.org/jira/browse/HADOOP-8327
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2
Reporter: Dave Thompson
Assignee: Dave Thompson
 Attachments: HADOOP-8327-branch-0.23.2.patch


 Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
 (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
 hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
 directory.   This causes some nondeterministic problems, where v1 is launched 
 when v2 is intended, or even v2 is launched but may later fail on various 
 nodes because of a mismatch with v1.
 According to
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
 (Understanding class path wildcards)
 The order in which the JAR files in a directory are enumerated in the 
 expanded class path is not specified and may vary from platform to platform 
 and even from moment to moment on the same machine.
 Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
 of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266107#comment-13266107
 ] 

Hadoop QA commented on HADOOP-8339:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525215/HADOOP-8339.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 10 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-extras 
hadoop-tools/hadoop-rumen.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/916//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/916//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-rumen.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/916//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-extras.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/916//console

This message is automatically generated.

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the mapreduce/hadoop jenkins reports recently and they all 
 complain about 16 javadoc warnings.
 -1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages.
 Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-01 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8327:
--

Tags: distcp  distcpv1 distcpv2 classpath
Target Version/s: 0.23.3
Release Note: Resolve sporadic distcp issue due to having two DistCp 
classes (v1 & v2) in the classpath.
  Status: Patch Available  (was: Open)

 distcpv2 and distcpv1 jars should not coexist
 -

 Key: HADOOP-8327
 URL: https://issues.apache.org/jira/browse/HADOOP-8327
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2
Reporter: Dave Thompson
Assignee: Dave Thompson
 Attachments: HADOOP-8327-branch-0.23.2.patch


 Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
 (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
 hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
 directory.  This causes some nondeterministic problems, where v1 is launched 
 when v2 is intended, or v2 is launched but may later fail on various 
 nodes because of a mismatch with v1.
 According to 
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
 (Understanding class path wildcards): 
 "The order in which the JAR files in a directory are enumerated in the 
 expanded class path is not specified and may vary from platform to platform 
 and even from moment to moment on the same machine."
 Suggest distcpv1 be deprecated at this point, possibly by discontinuing the 
 build of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) harden serialization logic against malformed or malicious input

2012-05-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266112#comment-13266112
 ] 

Eli Collins commented on HADOOP-8275:
-

+1 looks good.  Test failures are unrelated (HADOOP-8330 and HADOOP-8110).

 harden serialization logic against malformed or malicious input
 ---

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.
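 For illustration, the guard this change describes can be sketched as below. 
 This is a minimal sketch that assumes org.apache.hadoop.io.WritableUtils.readVLong 
 as the underlying decoder; the class and helper names here are illustrative, 
 not necessarily the exact code in the attached patches.
 {code}
 import java.io.DataInput;
 import java.io.IOException;
 import org.apache.hadoop.io.WritableUtils;

 public final class VIntRangeCheck {
   private VIntRangeCheck() {}

   // Reads a vint and rejects values outside [lower, upper], so a corrupted or
   // malicious length field cannot drive a huge allocation or an int overflow.
   public static int readVIntInRange(DataInput in, int lower, int upper)
       throws IOException {
     long n = WritableUtils.readVLong(in);  // decode the variable-length value
     if (n < lower || n > upper) {
       throw new IOException("Expected value in range [" + lower + ", "
           + upper + "], but got " + n);
     }
     return (int) n;  // safe cast: the int bounds constrain n to the int range
   }
 }
 {code}
 A caller reading a length-prefixed field (such as a DelegationKey's key bytes) 
 would bound the length by the maximum it is prepared to allocate before calling 
 readFully.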

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266113#comment-13266113
 ] 

Hudson commented on HADOOP-8172:


Integrated in Hadoop-Hdfs-trunk-Commit #2242 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2242/])
HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
list. (Anupam Seth via bobby) (Revision 1332821)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration no longer sets all keys in a deprecated key list.
 ---

 Key: HADOOP-8172
 URL: https://issues.apache.org/jira/browse/HADOOP-8172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Anupam Seth
Priority: Critical
 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch


 I did not look at the patch for HADOOP-8167 previously, but I did in response 
 to a recent test failure. The patch appears to have changed the following 
 code (I am just paraphrasing the code)
 {code}
 if (!deprecated(key)) {
   set(key, value);
 } else {
   for (String newKey : deprecatedKeyMap.get(key)) {
     set(newKey, value);
   }
 }
 {code}
 to be 
 {code}
 set(key, value);
 if (deprecatedKeyMap.containsKey(key)) {
   set(deprecatedKeyMap.get(key)[0], value);
 } else if (reverseKeyMap.containsKey(key)) {
   set(reverseKeyMap.get(key), value);
 }
 {code}
 If a key is deprecated and is mapped to more than one new key, only the 
 first one in the list will be set, whereas previously all of them would be 
 set.
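 A fix along the original lines would set every key that a deprecated key maps 
 to, not just the first entry. The sketch below reuses the paraphrased names 
 above (a Map<String, String[]> deprecatedKeyMap, a Map<String, String> 
 reverseKeyMap, and set) and is illustrative only, not the actual Configuration 
 code:
 {code}
 // Sketch: restore the old behavior of setting all mapped keys.
 public void setWithDeprecation(String key, String value) {
   set(key, value);
   String[] newKeys = deprecatedKeyMap.get(key);
   if (newKeys != null) {
     for (String newKey : newKeys) {   // every mapped key, not just newKeys[0]
       set(newKey, value);
     }
   } else if (reverseKeyMap.containsKey(key)) {
     set(reverseKeyMap.get(key), value);
   }
 }
 {code}
 Tests such as TestConfigurationDeprecation (touched by this commit) can then 
 exercise keys that map to more than one new key.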

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8275:


 Summary: Range check DelegationKey length   (was: harden serialization 
logic against malformed or malicious input)
Hadoop Flags: Reviewed

Forgot to mention, jira for other places where we need to use readVInt range 
checking?

 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266119#comment-13266119
 ] 

Hudson commented on HADOOP-8172:


Integrated in Hadoop-Common-trunk-Commit #2168 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2168/])
HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
list. (Anupam Seth via bobby) (Revision 1332821)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration no longer sets all keys in a deprecated key list.
 ---

 Key: HADOOP-8172
 URL: https://issues.apache.org/jira/browse/HADOOP-8172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Anupam Seth
Priority: Critical
 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch


 I did not look at the patch for HADOOP-8167 previously, but I did in response 
 to a recent test failure. The patch appears to have changed the following 
 code (I am just paraphrasing the code)
 {code}
 if (!deprecated(key)) {
   set(key, value);
 } else {
   for (String newKey : deprecatedKeyMap.get(key)) {
     set(newKey, value);
   }
 }
 {code}
 to be 
 {code}
 set(key, value);
 if (deprecatedKeyMap.containsKey(key)) {
   set(deprecatedKeyMap.get(key)[0], value);
 } else if (reverseKeyMap.containsKey(key)) {
   set(reverseKeyMap.get(key), value);
 }
 {code}
 If a key is deprecated and is mapped to more than one new key, only the 
 first one in the list will be set, whereas previously all of them would be 
 set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266122#comment-13266122
 ] 

Hadoop QA commented on HADOOP-8327:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12525221/HADOOP-8327-branch-0.23.2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/917//console

This message is automatically generated.

 distcpv2 and distcpv1 jars should not coexist
 -

 Key: HADOOP-8327
 URL: https://issues.apache.org/jira/browse/HADOOP-8327
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2
Reporter: Dave Thompson
Assignee: Dave Thompson
 Attachments: HADOOP-8327-branch-0.23.2.patch


 Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
 (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
 hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
 directory.  This causes some nondeterministic problems, where v1 is launched 
 when v2 is intended, or v2 is launched but may later fail on various 
 nodes because of a mismatch with v1.
 According to 
 http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
 (Understanding class path wildcards): 
 "The order in which the JAR files in a directory are enumerated in the 
 expanded class path is not specified and may vary from platform to platform 
 and even from moment to moment on the same machine."
 Suggest distcpv1 be deprecated at this point, possibly by discontinuing the 
 build of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8275:


  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Colin!

 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266139#comment-13266139
 ] 

Hudson commented on HADOOP-8275:


Integrated in Hadoop-Common-trunk-Commit #2169 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2169/])
HADOOP-8275. Range check DelegationKey length. Contributed by Colin Patrick 
McCabe (Revision 1332839)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332839
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationKey.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWritableUtils.java


 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266146#comment-13266146
 ] 

Hudson commented on HADOOP-8275:


Integrated in Hadoop-Hdfs-trunk-Commit #2243 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2243/])
HADOOP-8275. Range check DelegationKey length. Contributed by Colin Patrick 
McCabe (Revision 1332839)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332839
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationKey.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWritableUtils.java


 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266148#comment-13266148
 ] 

Thomas Graves commented on HADOOP-8339:
---

+1 lgtm. I see the findbugs warnings will be handled by HADOOP-8341.  Thanks 
Bobby and Tom!

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the recent mapreduce/hadoop jenkins reports; they all complain 
 about 16 javadoc warnings:
 "-1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages."
 Which really means there are 24, since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266157#comment-13266157
 ] 

Hudson commented on HADOOP-8172:


Integrated in Hadoop-Mapreduce-trunk-Commit #2184 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2184/])
HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
list. (Anupam Seth via bobby) (Revision 1332821)

 Result = ABORTED
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration no longer sets all keys in a deprecated key list.
 ---

 Key: HADOOP-8172
 URL: https://issues.apache.org/jira/browse/HADOOP-8172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Anupam Seth
Priority: Critical
 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch


 I did not look at the patch for HADOOP-8167 previously, but I did in response 
 to a recent test failure. The patch appears to have changed the following 
 code (I am just paraphrasing the code)
 {code}
 if (!deprecated(key)) {
   set(key, value);
 } else {
   for (String newKey : deprecatedKeyMap.get(key)) {
     set(newKey, value);
   }
 }
 {code}
 to be 
 {code}
 set(key, value);
 if (deprecatedKeyMap.containsKey(key)) {
   set(deprecatedKeyMap.get(key)[0], value);
 } else if (reverseKeyMap.containsKey(key)) {
   set(reverseKeyMap.get(key), value);
 }
 {code}
 If a key is deprecated and is mapped to more than one new key, only the 
 first one in the list will be set, whereas previously all of them would be 
 set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-01 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans resolved HADOOP-8172.
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   2.0.0

Thanks Anupam,

I put this into trunk and branch-2.  +1

 Configuration no longer sets all keys in a deprecated key list.
 ---

 Key: HADOOP-8172
 URL: https://issues.apache.org/jira/browse/HADOOP-8172
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Robert Joseph Evans
Assignee: Anupam Seth
Priority: Critical
 Fix For: 2.0.0, 3.0.0

 Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch


 I did not look at the patch for HADOOP-8167 previously, but I did in response 
 to a recent test failure. The patch appears to have changed the following 
 code (I am just paraphrasing the code)
 {code}
 if (!deprecated(key)) {
   set(key, value);
 } else {
   for (String newKey : deprecatedKeyMap.get(key)) {
     set(newKey, value);
   }
 }
 {code}
 to be 
 {code}
 set(key, value);
 if (deprecatedKeyMap.containsKey(key)) {
   set(deprecatedKeyMap.get(key)[0], value);
 } else if (reverseKeyMap.containsKey(key)) {
   set(reverseKeyMap.get(key), value);
 }
 {code}
 If a key is deprecated and is mapped to more than one new key, only the 
 first one in the list will be set, whereas previously all of them would be 
 set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8339:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the recent mapreduce/hadoop jenkins reports; they all complain 
 about 16 javadoc warnings:
 "-1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages."
 Which really means there are 24, since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266174#comment-13266174
 ] 

Hudson commented on HADOOP-8339:


Integrated in Hadoop-Hdfs-trunk-Commit #2244 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2244/])
HADOOP-8339. jenkins complaining about 16 javadoc warnings (Tom White and 
Robert Evans via tgraves) (Revision 1332853)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332853
Files : 
* /hadoop/common/trunk/dev-support/test-patch.properties
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/dev-support/test-patch.properties
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test-patch.properties
* /hadoop/common/trunk/hadoop-hdfs-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-mapreduce-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/mapred/tools/package-info.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/CurrentJHParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java


 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the recent mapreduce/hadoop jenkins reports; they all complain 
 about 16 javadoc warnings:
 "-1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages."
 Which really means there are 24, since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266181#comment-13266181
 ] 

Hudson commented on HADOOP-8339:


Integrated in Hadoop-Common-trunk-Commit #2170 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2170/])
HADOOP-8339. jenkins complaining about 16 javadoc warnings (Tom White and 
Robert Evans via tgraves) (Revision 1332853)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332853
Files : 
* /hadoop/common/trunk/dev-support/test-patch.properties
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/dev-support/test-patch.properties
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test-patch.properties
* /hadoop/common/trunk/hadoop-hdfs-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-mapreduce-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/mapred/tools/package-info.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/CurrentJHParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java


 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the recent mapreduce/hadoop jenkins reports; they all complain 
 about 16 javadoc warnings:
 "-1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages."
 Which really means there are 24, since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266209#comment-13266209
 ] 

Hudson commented on HADOOP-8275:


Integrated in Hadoop-Mapreduce-trunk-Commit #2185 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2185/])
HADOOP-8275. Range check DelegationKey length. Contributed by Colin Patrick 
McCabe (Revision 1332839)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332839
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationKey.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWritableUtils.java


 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: HADOOP-8332.patch.txt

This patch adds an option to specify container-executor.conf.dir as a path 
relative to the location of the container-executor executable itself.

 make default container-executor.conf.dir be a path relative to the 
 container-executor binary
 

 Key: HADOOP-8332
 URL: https://issues.apache.org/jira/browse/HADOOP-8332
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8332.patch.txt


 Currently, the container-executor binary has the absolute pathname of its 
 configuration file baked in. This prevents easy relocation of the 
 configuration files when dealing with multiple Hadoop installs on the same 
 node. It would be nice to at least allow for relative path resolution 
 starting from the location of the container-executor binary itself. Something 
 like:
 {noformat}
 ../etc/hadoop/
 {noformat}
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: (was: HADOOP-8332.patch.txt)

 make default container-executor.conf.dir be a path relative to the 
 container-executor binary
 

 Key: HADOOP-8332
 URL: https://issues.apache.org/jira/browse/HADOOP-8332
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8332.patch.txt


 Currently, the container-executor binary has the absolute pathname of its 
 configuration file baked in. This prevents easy relocation of the 
 configuration files when dealing with multiple Hadoop installs on the same 
 node. It would be nice to at least allow for relative path resolution 
 starting from the location of the container-executor binary itself. Something 
 like:
 {noformat}
 ../etc/hadoop/
 {noformat}
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: HADOOP-8332.patch.txt

 make default container-executor.conf.dir be a path relative to the 
 container-executor binary
 

 Key: HADOOP-8332
 URL: https://issues.apache.org/jira/browse/HADOOP-8332
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8332.patch.txt


 Currently, the container-executor binary has the absolute pathname of its 
 configuration file baked in. This prevents easy relocation of the 
 configuration files when dealing with multiple Hadoop installs on the same 
 node. It would be nice to at least allow for relative path resolution 
 starting from the location of the container-executor binary itself. Something 
 like:
 {noformat}
 ../etc/hadoop/
 {noformat}
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266219#comment-13266219
 ] 

Roman Shaposhnik commented on HADOOP-8332:
--

@Robert, I understand your concerns, and that's why:
  # I would like this change to be vetted by as large a community as possible
  # my current patch leaves the option of sticking with the absolute path for 
those who really need it

 make default container-executor.conf.dir be a path relative to the 
 container-executor binary
 

 Key: HADOOP-8332
 URL: https://issues.apache.org/jira/browse/HADOOP-8332
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HADOOP-8332.patch.txt


 Currently, the container-executor binary has the absolute pathname of its 
 configuration file baked in. This prevents easy relocation of the 
 configuration files when dealing with multiple Hadoop installs on the same 
 node. It would be nice to at least allow for relative path resolution 
 starting from the location of the container-executor binary itself. Something 
 like:
 {noformat}
 ../etc/hadoop/
 {noformat}
 Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266220#comment-13266220
 ] 

Hudson commented on HADOOP-8339:


Integrated in Hadoop-Mapreduce-trunk-Commit #2186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2186/])
HADOOP-8339. jenkins complaining about 16 javadoc warnings (Tom White and 
Robert Evans via tgraves) (Revision 1332853)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1332853
Files : 
* /hadoop/common/trunk/dev-support/test-patch.properties
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/dev-support/test-patch.properties
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test-patch.properties
* /hadoop/common/trunk/hadoop-hdfs-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-mapreduce-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/mapred/tools/package-info.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/CurrentJHParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java


 jenkins complaining about 16 javadoc warnings 
 --

 Key: HADOOP-8339
 URL: https://issues.apache.org/jira/browse/HADOOP-8339
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Thomas Graves
Assignee: Robert Joseph Evans
 Fix For: 3.0.0

 Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
 HADOOP-8339.txt


 See any of the recent mapreduce/hadoop jenkins reports; they all complain 
 about 16 javadoc warnings:
 "-1 javadoc.  The javadoc tool appears to have generated 16 warning 
 messages."
 Which really means there are 24, since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-8343:
--

Assignee: Alejandro Abdelnur

 Allow configuration of authorization for JmxJsonServlet and MetricsServlet
 --

 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger
Assignee: Alejandro Abdelnur

 When using authorization for the daemons' web server, it would be useful to 
 specifically control the authorization requirements for accessing /jmx and 
 /metrics.  Currently, they require administrative access.  This JIRA proposes 
 that whether they are available only to administrators or to all users be 
 controlled by hadoop.instrumentation.requires.administrator (or similar).  
 The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Attachment: HADOOP-8343.patch

Attached is a patch that adds a 
hadoop.security.anonymous.instrumentation.access configuration property, which 
is TRUE by default; when set to TRUE it enables anonymous access (without ACL 
enforcement).

This works because (as seems intended) in HttpServer, the JMX, METRICS & 
CONF servlets are added without requiring authentication.
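
As a rough illustration (not the attached patch), a servlet handling /jmx or 
/metrics could honor such a flag as sketched below; the InstrumentationAccess 
class and isAllowed helper are hypothetical names, and 
HttpServer.hasAdministratorAccess is assumed to perform the admin ACL check:
{code}
import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.HttpServer;

// Sketch only: gate the instrumentation servlets on the proposed property.
public final class InstrumentationAccess {
  private InstrumentationAccess() {}

  public static boolean isAllowed(ServletContext ctx, HttpServletRequest request,
      HttpServletResponse response, Configuration conf) throws IOException {
    boolean anonymousAllowed = conf.getBoolean(
        "hadoop.security.anonymous.instrumentation.access", true);
    if (anonymousAllowed) {
      return true;  // anonymous access: skip ACL enforcement for /jmx, /metrics, /conf
    }
    // Admin check; assumed to write the 403 response itself on failure.
    return HttpServer.hasAdministratorAccess(ctx, request, response);
  }
}
{code}
A servlet's doGet would then return early when isAllowed(...) is false.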

 Allow configuration of authorization for JmxJsonServlet and MetricsServlet
 --

 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8343.patch


 When using authorization for the daemons' web server, it would be useful to 
 specifically control the authorization requirements for accessing /jmx and 
 /metrics.  Currently, they require administrative access.  This JIRA proposes 
 that whether they are available only to administrators or to all users be 
 controlled by hadoop.instrumentation.requires.administrator (or similar).  
 The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Status: Patch Available  (was: Open)

 Allow configuration of authorization for JmxJsonServlet and MetricsServlet
 --

 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8343.patch


 When using authorization for the daemons' web server, it would be useful to 
 specifically control the authorization requirements for accessing /jmx and 
 /metrics.  Currently, they require administrative access.  This JIRA proposes 
 that whether they are available only to administrators or to all users be 
 controlled by hadoop.instrumentation.requires.administrator (or similar).  
 The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266289#comment-13266289
 ] 

Hadoop QA commented on HADOOP-8343:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525245/HADOOP-8343.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/918//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/918//console

This message is automatically generated.

 Allow configuration of authorization for JmxJsonServlet and MetricsServlet
 --

 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8343.patch


 When using authorization for the daemons' web server, it would be useful to 
 specifically control the authorization requirements for accessing /jmx and 
 /metrics.  Currently, they require administrative access.  This JIRA proposes 
 that whether they are available only to administrators or to all users be 
 controlled by hadoop.instrumentation.requires.administrator (or similar).  
 The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266290#comment-13266290
 ] 

Colin Patrick McCabe commented on HADOOP-8275:
--

bq. Forgot to mention, jira for other places where we need to use readVInt 
range checking?

I think HDFS-3134 takes care of all the other cases.  It also adds a fuzz 
tester for the edit log.

I filed HDFS-3346 for FSImage fuzz testing.

 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-01 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266314#comment-13266314
 ] 

Alejandro Abdelnur commented on HADOOP-8343:


the javadoc warnings seem unrelated to this patch

 Allow configuration of authorization for JmxJsonServlet and MetricsServlet
 --

 Key: HADOOP-8343
 URL: https://issues.apache.org/jira/browse/HADOOP-8343
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 2.0.0
Reporter: Philip Zeyliger
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-8343.patch


 When using authorization for the daemons' web server, it would be useful to 
 specifically control the authorization requirements for accessing /jmx and 
 /metrics.  Currently, they require administrative access.  This JIRA proposes 
 that whether they are available only to administrators or to all users be 
 controlled by hadoop.instrumentation.requires.administrator (or similar).  
 The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266330#comment-13266330
 ] 

Eli Collins commented on HADOOP-8275:
-

Beautiful

 Range check DelegationKey length 
 -

 Key: HADOOP-8275
 URL: https://issues.apache.org/jira/browse/HADOOP-8275
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.0

 Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
 HADOOP-8275.003.patch


 Harden serialization logic against malformed or malicious input.
 Add range checking to readVInt, to detect overflows, underflows, and 
 larger-than-expected values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266339#comment-13266339
 ] 

Eli Collins commented on HADOOP-8230:
-

Thanks for chiming in Suresh.

Wrt #1 see [this comment in 
HDFS-3120|https://issues.apache.org/jira/browse/HDFS-3120?focusedCommentId=13241903page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13241903]
 that outlines the proposal that Todd, Nicholas and I thought was best. Feel 
free to file a follow-on jira for an improvement.

Wrt #2, add a new option to disable durable sync? Personally I don't think we 
should; see HADOOP-8230.

 Enable sync by default and disable append
 -

 Key: HADOOP-8230
 URL: https://issues.apache.org/jira/browse/HADOOP-8230
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 1.1.0

 Attachments: hadoop-8230.txt


 Per HDFS-3120 for 1.x let's:
 - Always enable the sync path, which is currently only enabled if 
 dfs.support.append is set
 - Remove the dfs.support.append configuration option. We'll keep the code 
 paths though in case we ever fix append on branch-1, in which case we can add 
 the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8338) Can't renew or cancel HDFS delegation tokens over secure RPC

2012-05-01 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HADOOP-8338.
---

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.3
 Hadoop Flags: Reviewed

I committed this to branch-1.0 and branch-1. Trunk was already referencing 
HdfsConfiguration in DelegationTokenFetcher, so the problem won't happen.

 Can't renew or cancel HDFS delegation tokens over secure RPC
 

 Key: HADOOP-8338
 URL: https://issues.apache.org/jira/browse/HADOOP-8338
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 1.0.3, 1.1.0

 Attachments: hadoop-8338.patch


 The fetchdt tool is failing for secure deployments when given --renew or 
 --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be 
 renewed and canceled fine.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (HADOOP-8230) Enable sync by default and disable append

2012-05-01 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13266339#comment-13266339
 ] 

Eli Collins edited comment on HADOOP-8230 at 5/2/12 4:46 AM:
-

Thanks for chiming in Suresh.

Wrt #1 see [this comment in 
HDFS-3120|https://issues.apache.org/jira/browse/HDFS-3120?focusedCommentId=13241903page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13241903]
 that outlines the proposal that Todd, Nicholas and I thought was best. Feel 
free to file a follow-on jira for an improvement, happy to review. I'll update 
the description to match the proposal.

Wrt #2 personally I don't think we should allow people to disable durable sync 
as that can result in data loss for people running HBase. See HADOOP-8230 for 
more info. I'm open to having an option to disable durable sync if you think 
that use case is important.

Wrt #3, the rationale was two-fold: (1) there are tests that use append not to 
test append per se but for its side effects, and we'd lose sync test coverage 
by removing those tests; and (2) per the description, we're keeping the append 
code path in case someone wants to fix the data loss issues, in which case it 
makes sense to keep the test coverage as well.

  was (Author: eli2):
Thanks for chiming in Suresh.

Wrt #1 see [this comment in 
HDFS-3120|https://issues.apache.org/jira/browse/HDFS-3120?focusedCommentId=13241903page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13241903]
 that outlines the proposal that Todd, Nicholas and I thought was best. Feel 
free to file a follow-on jira for an improvement.

Wrt #2, add a new option to disable durable sync? Personally I don't think we 
should; see HADOOP-8230.
  
 Enable sync by default and disable append
 -

 Key: HADOOP-8230
 URL: https://issues.apache.org/jira/browse/HADOOP-8230
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 1.1.0

 Attachments: hadoop-8230.txt


 Per HDFS-3120 for 1.x let's:
 - Always enable the sync path, which is currently only enabled if 
 dfs.support.append is set
 - Remove the dfs.support.append configuration option. We'll keep the code 
 paths though in case we ever fix append on branch-1, in which case we can add 
 the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-01 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266357#comment-13266357
 ] 

Todd Lipcon commented on HADOOP-8279:
-

bq. -forceFence doesn't seem to have any real use cases with auto-HA so it 
isn't implemented. - I don't follow the reasoning. Seems like it should be 
just as applicable to auto-HA as manual, no?

I chatted with Eli about this, since he's the one who originally added the 
-forceFence option. The original motivation was to test the fencing script, 
but with this manual failover that's probably not the best way to test it. 
Better would be to do something like kill -STOP the active NN, which will both 
trigger a failover and trigger fencing. Another option might be to create a new 
command like -testFencer which would (after requiring confirmation) shoot 
down the active. But since it's a corner case, let's address it as a follow-up 
improvement.

bq. If the attempt to transition to standby succeeds, then the ZKFC will 
delete the breadcrumb node in ZooKeeper - might want to specify which ZKFC 
will do the deletion.

Changed to:
{code}
   * If the attempt to transition to standby succeeds, then the ZKFC receiving
   * this RPC will delete its own breadcrumb node in ZooKeeper. Thus, the
   * next node to become active will not run any fencing process. Otherwise,
   * the breadcrumb will be left, such that the next active will fence this
   * node.
{code}
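
(As an aside for readers, the breadcrumb mechanism boils down to a persistent 
znode. A rough sketch in raw ZooKeeper terms follows; the znode name and path are 
assumptions for illustration, not taken from the patch.)

{code}
// Rough sketch only, not ZKFailoverController code: deleting the persistent
// breadcrumb after a clean transition to standby tells the next elected ZKFC
// that the old active shut down gracefully and needs no fencing.
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class BreadcrumbSketch {
  static void clearOwnBreadcrumb(ZooKeeper zk, String parentZnode) throws Exception {
    try {
      zk.delete(parentZnode + "/ActiveBreadcrumb", -1);  // -1 = any version
    } catch (KeeperException.NoNodeException e) {
      // Already gone; nothing for the next active to fence.
    }
  }
}
{code}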

bq. If the node is healthy and not active, it sends an RPC to the current 
active, asking it to yield from the election. - it actually sends an RPC to 
the ZKFC associated with the current active.

I actually removed the details here in ZKFCProtocol.java, electing instead to 
refer the reader to the implementation. I think it's better for the 
ZKFCProtocol javadocs to explain the outward behavior, and explain the actual 
implementation in the design doc and the inline comments in 
ZKFailoverController. It now reads:

{code}
   * If the node is healthy and not active, it will try to initiate a graceful
   * failover to become active, returning only when it has successfully become
   * active. See {@link ZKFailoverController#gracefulFailoverToYou()} for the
   * implementation details.
{code}

bq. if the current active does not respond to the graceful request, throws an 
exception indicating the reason for failure. - I recommend you make it 
explicit which graceful request this is referring to. In fact, if the active NN 
fails to respond to the graceful request to transition to standby, it will be 
fenced. It's the failure of the active ZKFC to respond to the cedeActive calls 
that results in a failure of gracefulFailover.

Per above, I changed this to only reference what a caller needs to know, 
instead of the underlying implementation.
{code}
   * If the node fails to successfully coordinate the failover, throws an
   * exception indicating the reason for failure.
{code}

bq. I think you need interface annotations on ZKFCRpcServer, or perhaps it can 
be made package-private?
Good catch. It can't be package-private because DFSZKFailoverController is in 
an HDFS package. I annotated it LimitedPrivate to HDFS.
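
(For illustration, this is roughly what such an annotation looks like; the class 
body is elided and the placement is assumed rather than copied from the patch.)

{code}
// Sketch of the visibility annotation described above: only HDFS-side callers
// such as DFSZKFailoverController are expected to use the class directly.
import org.apache.hadoop.classification.InterfaceAudience;

@InterfaceAudience.LimitedPrivate("HDFS")
public class ZKFCRpcServerSketch {
  // RPC server wiring omitted in this sketch.
}
{code}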

bq. In ZKFCProtocol#cedeActive you declare the parameter to be in millis, but 
in the ZKFCRpcServer#cedeActive implementation, you say the period is in 
seconds.
Another good catch - I changed this late in the development of the patch and 
missed a spot. Fixed.
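
(A small illustration of the kind of fix, not the committed signature: keeping 
the parameter in one unit end to end and encoding that unit in the name avoids 
this class of mismatch. The interface and parameter names here are hypothetical.)

{code}
// Hypothetical sketch: the unit lives in the parameter name, so the interface
// and its implementations cannot silently disagree about millis vs. seconds.
public interface CedeActiveSketch {
  /** Ask the local ZKFC to quit the election for the given period, in milliseconds. */
  void cedeActive(int millisToCede) throws java.io.IOException;
}
{code}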

bq. I don't see much point in having both ZKFCRpcServer#stop and 
ZKFCRpcServer#join. Why not just call this.server.join in ZKFCRpcServer#stop?

Combined the two into a single {{stopAndJoin}} method.
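
(Roughly the following shape, as a sketch under the assumption that the wrapped 
object is a Hadoop IPC Server; this is not the committed code.)

{code}
// Sketch of the combined shutdown call: stop the RPC server, then block until
// its threads have exited, so callers get a single, unambiguous teardown method.
import org.apache.hadoop.ipc.Server;

public class StopAndJoinSketch {
  private final Server server;

  public StopAndJoinSketch(Server server) {
    this.server = server;
  }

  public void stopAndJoin() throws InterruptedException {
    server.stop();
    server.join();
  }
}
{code}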

bq. periodically check health state since, because entering an - doesn't 
quite parse.

Fixed.

bq. I think the log message about the timeout elapsing in 
ZKFailoverController#waitForActiveAttempt should probably be at least at WARN 
level instead of INFO.
Fixed.

bq. It's possible that it's in standby but just about to go into active, no? 
Is there some race here? - should this comment now be removed?

This comment is basically about the situation described in HADOOP-8217, so it's 
still relevant.

bq. I recommend you change the value of DFS_HA_ZKFC_PORT_DEFAULT to something 
other than 8021. I've seen a lot of JTs in the wild with their default port set 
to 8021.

Good point... I changed it to 8019.
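
(If 8019 ever clashes with something on a given host, it can be overridden in 
configuration. A small sketch follows, assuming the property name 
dfs.ha.zkfc.port behind the constant mentioned above.)

{code}
// Illustrative only: overriding the ZKFC RPC port. The property name is assumed
// from the DFS_HA_ZKFC_PORT_DEFAULT constant discussed above.
import org.apache.hadoop.conf.Configuration;

public class ZkfcPortSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("dfs.ha.zkfc.port", 18019);
    System.out.println("zkfc port = " + conf.getInt("dfs.ha.zkfc.port", 8019));
  }
}
{code}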

bq. The design in the document posted to HDFS-2185 mentions introducing -to 
and -from parameters to the `haadmin -failover' command, but this 
implementation doesn't do that. That seems fine by me, but I'm curious why you 
chose to do it this way.

I ended up not changing it, just to keep the syntax consistent with what we've 
already got and to avoid making this patch even longer. Let's discuss in a 
followup JIRA if we want to change the syntax for this command.



[jira] [Updated] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-01 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8279:


Attachment: hadoop-8279.txt

 Auto-HA: Allow manual failover to be invoked from zkfc.
 ---

 Key: HADOOP-8279
 URL: https://issues.apache.org/jira/browse/HADOOP-8279
 Project: Hadoop Common
  Issue Type: Improvement
  Components: auto-failover, ha
Affects Versions: Auto Failover (HDFS-3042)
Reporter: Mingjie Lai
Assignee: Todd Lipcon
 Fix For: Auto Failover (HDFS-3042)

 Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
 hadoop-8279.txt, hadoop-8279.txt


 HADOOP-8247 introduces a configuration flag to prevent potential status 
 inconsistency between zkfc and namenode, by making auto and manual failover 
 mutually exclusive.
 However, as described in section 2.7.2 of the design doc at HDFS-2185, we should 
 allow manual and auto failover to co-exist, by:
 - adding some RPC interfaces to zkfc
 - having manual failover triggered by haadmin, and handled by zkfc if auto 
 failover is enabled. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira