[jira] [Created] (HADOOP-9016) Provide unit tests for class org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream

2012-11-07 Thread Ivan A. Veselovsky (JIRA)
Ivan A. Veselovsky created HADOOP-9016:
--

 Summary: Provide unit tests for class 
org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream 
 Key: HADOOP-9016
 URL: https://issues.apache.org/jira/browse/HADOOP-9016
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
Priority: Minor


The unit-test coverage of the classes 
org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream and 
org.apache.hadoop.fs.HarFileSystem.HarFSDataInputStream.HarFsInputStream is 
zero.
It is suggested to provide unit tests covering these classes.
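
For illustration only (not part of the original report), a rough JUnit-style sketch of the kind of check such tests could run. The stream-contract helper below could be pointed at a file opened through HarFileSystem once a test .har archive is available; the archive setup itself is assumed and not shown, and the class and method names are made up for this sketch.

{code}
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Assert;

public class HarStreamContractCheck {

  // Verifies that sequential reads and positional reads agree with the
  // expected bytes and that positional reads do not move the stream position.
  static void verifyStreamContract(FSDataInputStream in, byte[] expected)
      throws IOException {
    byte[] buf = new byte[expected.length];
    in.seek(0);
    in.readFully(buf);
    Assert.assertArrayEquals(expected, buf);

    long posBefore = in.getPos();
    byte[] tail = new byte[expected.length / 2];
    in.readFully(expected.length - tail.length, tail);
    Assert.assertArrayEquals(
        Arrays.copyOfRange(expected, expected.length - tail.length,
            expected.length), tail);
    Assert.assertEquals(posBefore, in.getPos());
  }

  public static void main(String[] args) throws IOException {
    // Shown against an arbitrary path for runnability; a real test would open
    // a har:// path so that the stream under test is a HarFSDataInputStream.
    Configuration conf = new Configuration();
    Path p = new Path(args[0]);
    FileSystem fs = p.getFileSystem(conf);
    byte[] expected = new byte[(int) fs.getFileStatus(p).getLen()];
    FSDataInputStream in = fs.open(p);
    try {
      in.readFully(expected);
      verifyStreamContract(in, expected);
    } finally {
      in.close();
    }
  }
}
{code}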

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9004) Allow security unit tests to use external KDC

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492256#comment-13492256
 ] 

Hudson commented on HADOOP-9004:


Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406413)
HADOOP-9004. Reverting the commit r1406202 to address patch issue (Revision 
1406379)
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406202)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406413
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406379
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406202
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java


 Allow security unit tests to use external KDC
 -

 Key: HADOOP-9004
 URL: https://issues.apache.org/jira/browse/HADOOP-9004
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security, test
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Chu
 Fix For: 3.0.0

 Attachments: HADOOP-9004.patch, HADOOP-9004.patch, 
 HADOOP-9004.patch.007, HADOOP-9004.patch.008


 I want to add the option of allowing security-related unit tests to use an 
 external KDC.
 In HADOOP-8078, we add the ability to start and use an ApacheDS KDC for 
 security-related unit tests. It would be good to allow users to validate the 
 use of their own KDC, keytabs, and principals and to test different KDCs and 
 not rely on the ApacheDS KDC.
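
Purely as an illustration of the idea (this is not the attached patch, and the property names below are invented for the sketch): such a test can read the external KDC principal and keytab from system properties and skip itself when they are not supplied, so the suite still passes where only the built-in ApacheDS KDC is available.

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.junit.Assert;
import org.junit.Test;

public class TestLoginWithExternalKdcSketch {
  // Hypothetical property names, used only in this sketch.
  private static final String PRINCIPAL = System.getProperty("external.kdc.principal");
  private static final String KEYTAB = System.getProperty("external.kdc.keytab");

  @Test
  public void testKeytabLoginAgainstExternalKdc() throws Exception {
    // Skip (rather than fail) when no external KDC settings were provided.
    assumeTrue(PRINCIPAL != null && KEYTAB != null);

    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    UserGroupInformation ugi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI(PRINCIPAL, KEYTAB);
    Assert.assertTrue(ugi.hasKerberosCredentials());
  }
}
{code}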

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9012) IPC Client sends wrong connection context

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492257#comment-13492257
 ] 

Hudson commented on HADOOP-9012:


Integrated in Hadoop-Yarn-trunk #29 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/29/])
HADOOP-9012. IPC Client sends wrong connection context (daryn via bobby) 
(Revision 1406184)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406184
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 IPC Client sends wrong connection context
 -

 Key: HADOOP-9012
 URL: https://issues.apache.org/jira/browse/HADOOP-9012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9012.patch


 The IPC client will send the wrong connection context when asked to switch to 
 simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8427) Convert Forrest docs to APT

2012-11-07 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492311#comment-13492311
 ] 

Tom White commented on HADOOP-8427:
---

Andy, this looks good so far. I generated the site with {{mvn site; mvn 
site:stage -DstagingDirectory=/tmp/hadoop-site}} and the converted files looked 
OK. The navigation is not wired up yet though - my patch in HADOOP-8860 sets up 
the nav correctly, so it would help if that one were committed first. Does that 
patch look OK to you? 

Going through the other files in Common that have not been migrated, 
cluster_setup.xml and single_node_setup.xml are already in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt so can be left, 
but deployment_layout.xml, index.xml, native_libraries.xml, 
service_level_auth.xml, and Superusers.xml should all be migrated. They will 
need to be reviewed and updated, but this can be done separately as Nicholas 
suggested.


 Convert Forrest docs to APT
 ---

 Key: HADOOP-8427
 URL: https://issues.apache.org/jira/browse/HADOOP-8427
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Andy Isaacson
  Labels: newbie
 Attachments: hadoop8427-1.txt, hadoop8427-3.txt, hadoop8427.txt


 Some of the forrest docs content in src/docs/src/documentation/content/xdocs 
 has not yet been converted to APT and moved to src/site/apt. Let's convert 
 the forrest docs that haven't been converted yet to new APT content in 
 hadoop-common/src/site/apt (and link the new content into 
 hadoop-project/src/site/apt/index.apt.vm) and remove all forrest dependencies.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9004) Allow security unit tests to use external KDC

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492317#comment-13492317
 ] 

Hudson commented on HADOOP-9004:


Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406413)
HADOOP-9004. Reverting the commit r1406202 to address patch issue (Revision 
1406379)
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406202)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406413
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406379
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406202
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java


 Allow security unit tests to use external KDC
 -

 Key: HADOOP-9004
 URL: https://issues.apache.org/jira/browse/HADOOP-9004
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security, test
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Chu
 Fix For: 3.0.0

 Attachments: HADOOP-9004.patch, HADOOP-9004.patch, 
 HADOOP-9004.patch.007, HADOOP-9004.patch.008


 I want to add the option of allowing security-related unit tests to use an 
 external KDC.
 In HADOOP-8078, we add the ability to start and use an ApacheDS KDC for 
 security-related unit tests. It would be good to allow users to validate the 
 use of their own KDC, keytabs, and principals and to test different KDCs and 
 not rely on the ApacheDS KDC.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9012) IPC Client sends wrong connection context

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492318#comment-13492318
 ] 

Hudson commented on HADOOP-9012:


Integrated in Hadoop-Hdfs-trunk #1219 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1219/])
HADOOP-9012. IPC Client sends wrong connection context (daryn via bobby) 
(Revision 1406184)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406184
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 IPC Client sends wrong connection context
 -

 Key: HADOOP-9012
 URL: https://issues.apache.org/jira/browse/HADOOP-9012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9012.patch


 The IPC client will send the wrong connection context when asked to switch to 
 simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9012) IPC Client sends wrong connection context

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492360#comment-13492360
 ] 

Hudson commented on HADOOP-9012:


Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HADOOP-9012. IPC Client sends wrong connection context (daryn via bobby) 
(Revision 1406184)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406184
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 IPC Client sends wrong connection context
 -

 Key: HADOOP-9012
 URL: https://issues.apache.org/jira/browse/HADOOP-9012
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9012.patch


 The IPC client will send the wrong connection context when asked to switch to 
 simple auth.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9004) Allow security unit tests to use external KDC

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492359#comment-13492359
 ] 

Hudson commented on HADOOP-9004:


Integrated in Hadoop-Mapreduce-trunk #1249 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1249/])
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406413)
HADOOP-9004. Reverting the commit r1406202 to address patch issue (Revision 
1406379)
HADOOP-9004. Allow security unit tests to use external KDC. Contributed by 
Stephen Chu. (Revision 1406202)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406413
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406379
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406202
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/SecurityUtilTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGIWithExternalKdc.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStartSecureDataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecureNameNodeWithExternalKdc.java


 Allow security unit tests to use external KDC
 -

 Key: HADOOP-9004
 URL: https://issues.apache.org/jira/browse/HADOOP-9004
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security, test
Affects Versions: 2.0.0-alpha
Reporter: Stephen Chu
Assignee: Stephen Chu
 Fix For: 3.0.0

 Attachments: HADOOP-9004.patch, HADOOP-9004.patch, 
 HADOOP-9004.patch.007, HADOOP-9004.patch.008


 I want to add the option of allowing security-related unit tests to use an 
 external KDC.
 In HADOOP-8078, we add the ability to start and use an ApacheDS KDC for 
 security-related unit tests. It would be good to allow users to validate the 
 use of their own KDC, keytabs, and principals and to test different KDCs and 
 not rely on the ApacheDS KDC.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8963) CopyFromLocal doesn't always create user directory

2012-11-07 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492403#comment-13492403
 ] 

Daryn Sharp commented on HADOOP-8963:
-

+1

 CopyFromLocal doesn't always create user directory
 --

 Key: HADOOP-8963
 URL: https://issues.apache.org/jira/browse/HADOOP-8963
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Billie Rinaldi
Assignee: Arpit Gupta
Priority: Trivial
 Attachments: HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch, 
 HADOOP-8963.branch-1.patch, HADOOP-8963.branch-1.patch


 When you use the command hadoop fs -copyFromLocal filename . before the 
 /user/username directory has been created, the file is created with name 
 /user/username instead of a directory being created with file 
 /user/username/filename.  The command hadoop fs -copyFromLocal filename 
 filename works as expected, creating /user/username and 
 /user/username/filename, and hadoop fs -copyFromLocal filename . works as 
 expected if the /user/username directory already exists.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9013) UGI should not hardcode loginUser's authenticationType

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492443#comment-13492443
 ] 

Robert Joseph Evans commented on HADOOP-9013:
-

The change looks good to me too. +1

 UGI should not hardcode loginUser's authenticationType
 --

 Key: HADOOP-9013
 URL: https://issues.apache.org/jira/browse/HADOOP-9013
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9013.patch


 {{UGI.loginUser}} assumes that the user's auth type is kerberos when security 
 is on and simple when security is off. It should instead use the configured 
 auth type.
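
To make the contrast concrete, a small sketch (illustrative only, not the patch; the mapping below is simplified):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;

final class LoginAuthTypeSketch {
  // What the description says happens today: the auth type is derived only
  // from whether security is enabled.
  static AuthenticationMethod hardcoded(boolean securityEnabled) {
    return securityEnabled ? AuthenticationMethod.KERBEROS
                           : AuthenticationMethod.SIMPLE;
  }

  // What is wanted instead: honor the configured authentication value
  // ("simple", "kerberos", ...) rather than assuming it.
  static AuthenticationMethod configured(Configuration conf) {
    String value = conf.get("hadoop.security.authentication", "simple");
    return AuthenticationMethod.valueOf(value.toUpperCase());
  }
}
{code}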

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9013) UGI should not hardcode loginUser's authenticationType

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9013:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

 UGI should not hardcode loginUser's authenticationType
 --

 Key: HADOOP-9013
 URL: https://issues.apache.org/jira/browse/HADOOP-9013
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9013.patch


 {{UGI.loginUser}} assumes that the user's auth type is kerberos when security 
 is on and simple when security is off. It should instead use the configured 
 auth type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9014) Standardize creation of SaslRpcClients

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492452#comment-13492452
 ] 

Robert Joseph Evans commented on HADOOP-9014:
-

This change looks good +1.

 Standardize creation of SaslRpcClients
 --

 Key: HADOOP-9014
 URL: https://issues.apache.org/jira/browse/HADOOP-9014
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9014.patch


 To ease adding additional SASL support, need to change the chained 
 conditionals into a switch and make one standard call to createSaslClient.
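
As a rough sketch of that shape (names and mechanisms here are illustrative, not the actual patch):

{code}
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

final class SaslClientFactorySketch {
  enum AuthMethod { TOKEN, KERBEROS }  // illustrative subset

  static SaslClient create(AuthMethod method, String protocol, String serverId,
      Map<String, ?> props, CallbackHandler handler) throws SaslException {
    final String mechanism;
    switch (method) {
      case TOKEN:
        mechanism = "DIGEST-MD5";
        break;
      case KERBEROS:
        mechanism = "GSSAPI";
        break;
      default:
        throw new SaslException("Unsupported auth method: " + method);
    }
    // One standard createSaslClient call instead of one per branch.
    return Sasl.createSaslClient(new String[] { mechanism }, null /* authzId */,
        protocol, serverId, props, handler);
  }
}
{code}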

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9013) UGI should not hardcode loginUser's authenticationType

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492453#comment-13492453
 ] 

Hudson commented on HADOOP-9013:


Integrated in Hadoop-trunk-Commit #2971 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2971/])
HADOOP-9013. UGI should not hardcode loginUser's authenticationType (daryn 
via bobby) (Revision 1406684)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406684
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 UGI should not hardcode loginUser's authenticationType
 --

 Key: HADOOP-9013
 URL: https://issues.apache.org/jira/browse/HADOOP-9013
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9013.patch


 {{UGI.loginUser}} assumes that the user's auth type is kerberos when security 
 is on and simple when security is off. It should instead use the configured 
 auth type.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9014) Standardize creation of SaslRpcClients

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9014:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this in trunk and branch-2

 Standardize creation of SaslRpcClients
 --

 Key: HADOOP-9014
 URL: https://issues.apache.org/jira/browse/HADOOP-9014
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9014.patch


 To ease adding additional SASL support, need to change the chained 
 conditionals into a switch and make one standard call to createSaslClient.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9014) Standardize creation of SaslRpcClients

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492461#comment-13492461
 ] 

Hudson commented on HADOOP-9014:


Integrated in Hadoop-trunk-Commit #2972 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2972/])
HADOOP-9014. Standardize creation of SaslRpcClients (daryn via bobby) 
(Revision 1406689)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406689
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java


 Standardize creation of SaslRpcClients
 --

 Key: HADOOP-9014
 URL: https://issues.apache.org/jira/browse/HADOOP-9014
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 3.0.0, 2.0.3-alpha
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9014.patch


 To ease adding additional SASL support, need to change the chained 
 conditionals into a switch and make one standard call to createSaslClient.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-11-07 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-8589:
-

Attachment: Hadoop-8589-v2.patch

Updated patch - incorporates Nicholas's feedback and runs successfully against 
home dirs of /joe, /x/joe, /x/y/joe

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Sanjay Radia
 Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is deeper than 2 
 levels from /. This happens with the default 1-node installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.
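
A rough repro sketch of the mount table described above (the link targets are hypothetical, and whether the exception fires can depend on the order in which the links happen to be processed):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsConstants;
import org.apache.hadoop.fs.viewfs.ConfigUtil;

public class NestedMountLinkRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Two mount points where one is a prefix of the other, as happens when
    // the test dir and the home dir are nested (e.g. home dir base /var/lib).
    ConfigUtil.addLink(conf, "/var", new URI("file:///tmp/a"));
    ConfigUtil.addLink(conf, "/var/lib", new URI("file:///tmp/b"));
    // Building a viewfs over this mount table is the situation that leads to
    // the FileAlreadyExistsException quoted in the description.
    FileSystem.get(FsConstants.VIEWFS_URI, conf);
  }
}
{code}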

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-11-07 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8589:
---

Target Version/s: 0.23.4, 3.0.0, 2.0.3-alpha  (was: 3.0.0, 0.23.4, 
2.0.3-alpha)
Hadoop Flags: Reviewed

+1 the new patch looks good.

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Sanjay Radia
 Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is deeper than 2 
 levels from /. This happens with the default 1-node installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492631#comment-13492631
 ] 

Hadoop QA commented on HADOOP-8589:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552511/Hadoop-8589-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1718//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1718//console

This message is automatically generated.

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Sanjay Radia
 Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is deeper than 2 
 levels from /. This happens with the default 1-node installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but later it was reverted in HADOOP-8129.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9017) fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version

2012-11-07 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9017:
--

 Summary: fix hadoop-client-pom-template.xml and 
hadoop-client-pom-template.xml for version 
 Key: HADOOP-9017
 URL: https://issues.apache.org/jira/browse/HADOOP-9017
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.1.0, 1.0.4
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


hadoop-client-pom-template.xml and hadoop-client-pom-template.xml reference the 
project.version variable; instead they should refer to the @version token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9017) fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version

2012-11-07 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan updated HADOOP-9017:
---

Attachment: HADOOP-9017.patch

 fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for 
 version 
 --

 Key: HADOOP-9017
 URL: https://issues.apache.org/jira/browse/HADOOP-9017
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.4, 1.1.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Attachments: HADOOP-9017.patch


 hadoop-client-pom-template.xml and hadoop-client-pom-template.xml reference 
 the project.version variable; instead they should refer to the @version token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

Attachment: HADOOP-7115.patch

Patch for trunk. It revives the cache for ID-NAME. It has been refactored a 
bit to be used for both UIDs and GIDs. It has been tested in a secure cluster 
configured to use an external directory for user provisioning, running terasort 
jobs. Without this patch the same cluster cannot run a single terasort job for 
non-local users.
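
For readers following along, a much-simplified sketch of the caching idea (not the attached patch): a map from numeric ID to a name-plus-expiry entry, shared in shape between the UID and GID paths and consulted before falling back to the expensive native lookup.

{code}
import java.util.concurrent.ConcurrentHashMap;

final class IdNameCacheSketch {
  private static final long CACHE_TIMEOUT_MS = 4 * 60 * 60 * 1000L; // illustrative value

  private static class CachedName {
    final String name;
    final long expiresAt;
    CachedName(String name, long now) {
      this.name = name;
      this.expiresAt = now + CACHE_TIMEOUT_MS;
    }
  }

  // One map per kind of ID.
  private static final ConcurrentHashMap<Integer, CachedName> USER_CACHE =
      new ConcurrentHashMap<Integer, CachedName>();
  private static final ConcurrentHashMap<Integer, CachedName> GROUP_CACHE =
      new ConcurrentHashMap<Integer, CachedName>();

  // Stands in for the native getpwuid_r-style lookup.
  interface NativeLookup { String lookup(int id); }

  static String getName(ConcurrentHashMap<Integer, CachedName> cache, int id,
      NativeLookup nativeLookup) {
    long now = System.currentTimeMillis();
    CachedName cached = cache.get(id);
    if (cached != null && cached.expiresAt > now) {
      return cached.name;                   // fresh entry: skip the native call
    }
    String name = nativeLookup.lookup(id);  // expensive path
    cache.put(id, new CachedName(name, now));
    return name;
  }

  static String getUserName(int uid, NativeLookup l) { return getName(USER_CACHE, uid, l); }
  static String getGroupName(int gid, NativeLookup l) { return getName(GROUP_CACHE, gid, l); }
}
{code}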

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

Fix Version/s: 2.0.3-alpha
   Status: Patch Available  (was: Open)

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.4, 2.0.2-alpha, 0.22.0
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492717#comment-13492717
 ] 

Hadoop QA commented on HADOOP-7115:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552538/HADOOP-7115.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1719//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1719//console

This message is automatically generated.

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9015:


Attachment: HADOOP-9015.patch

Instead of having two switches and deferred instantiation of the sasl server, 
merge the two and immediately create the server once it's known to be needed.
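
The server-side shape this describes mirrors the client-side change in HADOOP-9014; roughly (illustrative names only, not the patch itself):

{code}
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

final class SaslServerFactorySketch {
  enum AuthMethod { TOKEN, KERBEROS }  // illustrative subset

  // Single switch: resolve the mechanism and create the server in one place,
  // as soon as it is known that a SASL server is actually needed.
  static SaslServer create(AuthMethod method, String protocol, String serverId,
      Map<String, ?> props, CallbackHandler handler) throws SaslException {
    final String mechanism;
    switch (method) {
      case TOKEN:
        mechanism = "DIGEST-MD5";
        break;
      case KERBEROS:
        mechanism = "GSSAPI";
        break;
      default:
        throw new SaslException("Unsupported auth method: " + method);
    }
    return Sasl.createSaslServer(mechanism, protocol, serverId, props, handler);
  }
}
{code}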

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492733#comment-13492733
 ] 

Todd Lipcon commented on HADOOP-7115:
-

- why is cacheTimeout volatile? You're already synchronizing on the class 
inside {{ensureInitialized}}.
- instead of {{ensureInitialized}}, why not initialize the configuration in the 
existing static {...} block?
- instead of using ints for {{USER}} and {{GROUP}}, how about an enum like this:

{code}
enum IdCache {
  USERNAME,
  GROUP;

  ConcurrentHashMap<Integer, CachedName> cache =
      new ConcurrentHashMap<Integer, CachedName>();
}

...

private static String getName(IdCache cache, int id) {
  CachedName cachedName = cache.cache.get(id);
  ...
}
{code}

this way you get type safety, and you can just stringify the {{IdCache}} 
instance to get a printable name


Otherwise looks good to me

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-7115:
---

Attachment: HADOOP-7115.patch

Thanks for the review Todd, new patch addressing your comments. Minor tweak: 
converted the INT constants to an ENUM but did not move the cache into the 
ENUM, left it as before (not very keen on mutable ENUMs).

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492747#comment-13492747
 ] 

Alejandro Abdelnur commented on HADOOP-7115:


Forgot to mention before: the native part of the patch was done by Sandy 
Ryza (thx Sandy).

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492750#comment-13492750
 ] 

Hadoop QA commented on HADOOP-9015:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552543/HADOOP-9015.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1720//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1720//console

This message is automatically generated.

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492754#comment-13492754
 ] 

Todd Lipcon commented on HADOOP-7115:
-

{code}
+cacheTimeout = new Configuration().getLong(
+  CommonConfigurationKeys.HADOOP_SECURITY_UID_NAME_CACHE_TIMEOUT_KEY,
+  CommonConfigurationKeys.HADOOP_SECURITY_UID_NAME_CACHE_TIMEOUT_DEFAULT) *
+  1000;
+LOG.debug("Initialized cache for IDs to User/Group mapping with a " +
+    "cache timeout of " + cacheTimeout/1000 + " seconds.");
{code}

This should move up into the try block above -- if the native library fails to 
load, then it shouldn't log this debug message (since the cache won't ever be 
used). Additionally you can reuse the Configuration instance that's already 
constructed above.


- Just noticed an addition of {{syslog.h}} to the native code - doesn't seem to 
be used.


Otherwise looks good.


 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2012-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13492770#comment-13492770
 ] 

Hadoop QA commented on HADOOP-7115:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552549/HADOOP-7115.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1721//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1721//console

This message is automatically generated.

 Add a cache for getpwuid_r and getpwgid_r calls
 ---

 Key: HADOOP-7115
 URL: https://issues.apache.org/jira/browse/HADOOP-7115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.22.0, 2.0.2-alpha, 0.23.4
Reporter: Arun C Murthy
Assignee: Alejandro Abdelnur
 Fix For: 0.22.1, 2.0.3-alpha

 Attachments: h-7115.1.patch, hadoop-7115-0.22.patch, 
 hadoop-7115-0.22.patch, HADOOP-7115.patch, HADOOP-7115.patch


 As discussed in HADOOP-6978, a cache helps a lot.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8953) Shell PathData parsing failures on Windows

2012-11-07 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492772#comment-13492772
 ] 

Suresh Srinivas commented on HADOOP-8953:
-

Given that this change is needed to make progress on getting 100% of the unit 
tests passing on Windows, I recommend creating a follow-on jira to complete the 
remaining discussion. Arpit, can you create another jira?

 Shell PathData parsing failures on Windows
 --

 Key: HADOOP-8953
 URL: https://issues.apache.org/jira/browse/HADOOP-8953
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-8953-branch-trunk-win-3.patch, 
 HADOOP-8953-branch-trunk-win-4.patch, HADOOP-8953-branch-trunk-win-6.patch, 
 HADOOP-8953-branch-trunk-win.patch


 Several test suites fail on Windows, apparently due to Windows-specific path 
 parsing bugs in PathData.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9018) Reject invalid Windows URIs

2012-11-07 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-9018:
-

 Summary: Reject invalid Windows URIs
 Key: HADOOP-9018
 URL: https://issues.apache.org/jira/browse/HADOOP-9018
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: trunk-win
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


This JIRA is to make the handling of improperly constructed file URIs for 
Windows local paths more rigorous, e.g. rejecting file:///c:\\Windows 

Valid file URI syntax explained at 
http://blogs.msdn.com/b/ie/archive/2006/12/06/file-uris-in-windows.aspx.

Also see https://issues.apache.org/jira/browse/HADOOP-8953
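
(Illustrative aside, not part of this JIRA's patch: {{java.net.URI}} already refuses raw backslashes, so a check along these lines is one possible way to reject such inputs early. The class and method names below are hypothetical.)

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsUriCheckSketch {
  // Returns the parsed URI, or throws if the string is not a legal URI
  // (for example when it contains raw backslashes, as in file:///c:\Windows).
  static URI requireValidUri(String s) {
    try {
      return new URI(s);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException("Invalid URI: " + s, e);
    }
  }

  public static void main(String[] args) {
    System.out.println(requireValidUri("file:///c:/Windows"));   // accepted
    System.out.println(requireValidUri("file:///c:\\Windows"));  // throws
  }
}
{code}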

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492783#comment-13492783
 ] 

Robert Joseph Evans commented on HADOOP-9015:
-

The changes look OK to me.  I am +1. I'll check them in.

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.
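
As a rough illustration of the intended shape (a hand-written sketch, not the actual Server.java change; the enum, method and handler names are made up): one switch selects the mechanism and callback handler, followed by a single {{Sasl.createSaslServer}} call.

{code}
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

final class SaslServerFactorySketch {
  enum AuthMethod { TOKEN, KERBEROS }

  static SaslServer create(AuthMethod auth, String protocol, String serverId,
      Map<String, String> saslProps, CallbackHandler tokenHandler,
      CallbackHandler gssapiHandler) throws SaslException {
    final String mechanism;
    final CallbackHandler handler;
    switch (auth) {                 // one switch picks mechanism + callback handler
      case TOKEN:
        mechanism = "DIGEST-MD5";
        handler = tokenHandler;
        break;
      case KERBEROS:
        mechanism = "GSSAPI";
        handler = gssapiHandler;
        break;
      default:
        throw new SaslException("Unsupported authentication method: " + auth);
    }
    // ...and a single call creates the SASL server for whichever mechanism was chosen.
    return Sasl.createSaslServer(mechanism, protocol, serverId, saslProps, handler);
  }
}
{code}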

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492787#comment-13492787
 ] 

Hudson commented on HADOOP-9015:


Integrated in Hadoop-trunk-Commit #2976 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2976/])
HADOOP-9015. Standardize creation of SaslRpcServers (daryn via bobby) 
(Revision 1406851)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1406851
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java


 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9015) Standardize creation of SaslRpcServers

2012-11-07 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-9015:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks Daryn,

I put this into trunk and branch-2

 Standardize creation of SaslRpcServers
 --

 Key: HADOOP-9015
 URL: https://issues.apache.org/jira/browse/HADOOP-9015
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-9015.patch


 To ease adding additional SASL support, need to merge the multiple switches 
 for mechanism type and server creation into a single switch with a single 
 call to createSaslServer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8953) Shell PathData parsing failures on Windows

2012-11-07 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492793#comment-13492793
 ] 

Arpit Agarwal commented on HADOOP-8953:
---

Daryn suggested a few further improvements to make Windows URI handling more 
rigorous, and I have filed HADOOP-9018 to track them.

I would like to avoid inflating the scope of HADOOP-8953 so that we can keep 
making forward progress on getting Windows support working in trunk-win.

 Shell PathData parsing failures on Windows
 --

 Key: HADOOP-8953
 URL: https://issues.apache.org/jira/browse/HADOOP-8953
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Attachments: HADOOP-8953-branch-trunk-win-3.patch, 
 HADOOP-8953-branch-trunk-win-4.patch, HADOOP-8953-branch-trunk-win-6.patch, 
 HADOOP-8953-branch-trunk-win.patch


 Several test suites fail on Windows, apparently due to Windows-specific path 
 parsing bugs in PathData.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8560) Change TestFTPFileSystem to use non-SNAPSHOT dependencies

2012-11-07 Thread Yu Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Gao updated HADOOP-8560:
---

Attachment: test-TestFTPFileSystem.result
test-patch.result

 Change TestFTPFileSystem to use non-SNAPSHOT dependencies
 -

 Key: HADOOP-8560
 URL: https://issues.apache.org/jira/browse/HADOOP-8560
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Gao
Priority: Minor
 Attachments: hadoop-8560-branch-1.patch, test-patch.result, 
 test-TestFTPFileSystem.result


 It would be good if the stable Hadoop release didn't depend on SNAPSHOT 
 ftpserver artifacts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8560) Change TestFTPFileSystem to use non-SNAPSHOT dependencies

2012-11-07 Thread Yu Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492838#comment-13492838
 ] 

Yu Gao commented on HADOOP-8560:


Attached the ant test and test-patch results; the overall results are also pasted below:

ant test-patch:
 [exec] +1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] +1 tests included.  The patch appears to include 3 new or 
modified tests.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] +1 findbugs.  The patch does not introduce any new Findbugs 
(version 2.0.1) warnings.

ant test -Dtestcase=TestFTPFileSystem
Testsuite: org.apache.hadoop.fs.ftp.TestFTPFileSystem
Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 20.243 sec


 Change TestFTPFileSystem to use non-SNAPSHOT dependencies
 -

 Key: HADOOP-8560
 URL: https://issues.apache.org/jira/browse/HADOOP-8560
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Gao
Priority: Minor
 Attachments: hadoop-8560-branch-1.patch, test-patch.result, 
 test-TestFTPFileSystem.result


 It would be good if the stable Hadoop release didn't depend on SNAPSHOT 
 ftpserver artifacts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8560) Change TestFTPFileSystem to use non-SNAPSHOT dependencies

2012-11-07 Thread Yu Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492839#comment-13492839
 ] 

Yu Gao commented on HADOOP-8560:


The SNAPSHOT ftpserver and mina jars should be removed from src/test/lib as 
well.

 Change TestFTPFileSystem to use non-SNAPSHOT dependencies
 -

 Key: HADOOP-8560
 URL: https://issues.apache.org/jira/browse/HADOOP-8560
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Gao
Priority: Minor
 Attachments: hadoop-8560-branch-1.patch, test-patch.result, 
 test-TestFTPFileSystem.result


 It would be good if the stable Hadoop release didn't depend on SNAPSHOT 
 ftpserver artifacts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-07 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-9008:
--

Attachment: HADOOP-9008-branch-trunk-win.patch

The attached patch ports the sh scripting in the distribution build to Python.  
It wasn't possible to use only Maven plugins (like maven-antrun-plugin with a 
tar task), because they mishandled permissions and symlinks in the built 
tarballs.

I tested all of the following build variations:

Windows: mvn -Pnative-win -Pdist -Dtar -DskipTests clean package
Mac: mvn -Pdist -Dtar -DskipTests clean package
Ubuntu: mvn -Pnative -Pdist -Dtar -DskipTests clean package
Ubuntu: mvn -Pnative -Pdist -Dtar -Drequire.snappy -Dbundle.snappy 
-Dsnappy.lib=/usr/local/lib -DskipTests clean package

This works on Windows.  Additionally, on Mac and Ubuntu, I compared the built 
tarballs from before and after my changes.  I confirmed that the resulting 
tarballs have exactly the same contents, including permissions and symlinks.


 Building hadoop tarball fails on Windows
 

 Key: HADOOP-9008
 URL: https://issues.apache.org/jira/browse/HADOOP-9008
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Ivan Mitic
Assignee: Chris Nauroth
 Attachments: HADOOP-9008-branch-trunk-win.patch


 Trying to build the Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
 -Dtar}} fails on Windows.
 The build system generates sh scripts that execute build tasks, which does not 
 work on Windows without Cygwin. It might make sense to apply the same pattern 
 as in HADOOP-8924 and use Python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8977) multiple FsShell test failures on Windows

2012-11-07 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492852#comment-13492852
 ] 

Chris Nauroth commented on HADOOP-8977:
---

Hi, Daryn.  Do you have any additional feedback on the new version of the patch 
that I uploaded on 10/31?  Thank you.

 multiple FsShell test failures on Windows
 -

 Key: HADOOP-8977
 URL: https://issues.apache.org/jira/browse/HADOOP-8977
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8977-branch-trunk-win.patch, 
 HADOOP-8977-branch-trunk-win.patch


 Multiple FsShell-related tests fail on Windows.  Commands are returning 
 non-zero exit status.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8820) Backport HADOOP-8469 and HADOOP-8470: add NodeGroup layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup)

2012-11-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492890#comment-13492890
 ] 

Junping Du commented on HADOOP-8820:


Thanks, Nicholas, for reviewing. I will address your comments in HADOOP-8817, 
where I put all 4 patches together; hopefully that makes them easier to review 
and maintain.

 Backport HADOOP-8469 and HADOOP-8470: add NodeGroup layer in new 
 NetworkTopology (also known as NetworkTopologyWithNodeGroup)
 ---

 Key: HADOOP-8820
 URL: https://issues.apache.org/jira/browse/HADOOP-8820
 Project: Hadoop Common
  Issue Type: New Feature
  Components: net
Affects Versions: 1.0.0
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-8820.patch


 This patch backports HADOOP-8469 and HADOOP-8470 to branch-1 and includes:
 1. Making the NetworkTopology class pluggable for extension.
 2. Implementing a 4-layer NetworkTopology class (named 
 NetworkTopologyWithNodeGroup) for use in virtualized environments (or other 
 situations with an additional layer between host and rack).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8860) Split MapReduce and YARN sections in documentation navigation

2012-11-07 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492896#comment-13492896
 ] 

Andy Isaacson commented on HADOOP-8860:
---

The patch seems reasonable to me. +1.

 Split MapReduce and YARN sections in documentation navigation
 -

 Key: HADOOP-8860
 URL: https://issues.apache.org/jira/browse/HADOOP-8860
 Project: Hadoop Common
  Issue Type: Task
  Components: documentation
Affects Versions: 2.0.1-alpha
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-8860.patch, HADOOP-8860.sh


 This JIRA is to change the navigation on 
 http://hadoop.apache.org/docs/r2.0.1-alpha/ to reflect the fact that 
 MapReduce and YARN are separate modules/sub-projects.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8953) Shell PathData parsing failures on Windows

2012-11-07 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8953.
-

   Resolution: Fixed
Fix Version/s: trunk-win
 Hadoop Flags: Reviewed

I committed the patch to branch-trunk-win. Thank you, Arpit.

Please continue the additional work needed in HADOOP-9018.

 Shell PathData parsing failures on Windows
 --

 Key: HADOOP-8953
 URL: https://issues.apache.org/jira/browse/HADOOP-8953
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Arpit Agarwal
 Fix For: trunk-win

 Attachments: HADOOP-8953-branch-trunk-win-3.patch, 
 HADOOP-8953-branch-trunk-win-4.patch, HADOOP-8953-branch-trunk-win-6.patch, 
 HADOOP-8953-branch-trunk-win.patch


 Several test suites fail on Windows, apparently due to Windows-specific path 
 parsing bugs in PathData.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-11-07 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HADOOP-8589:
-

  Resolution: Fixed
Target Version/s: 0.23.4, 3.0.0, 2.0.3-alpha  (was: 3.0.0, 0.23.4, 
2.0.3-alpha)
  Status: Resolved  (was: Patch Available)

 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Sanjay Radia
 Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is more than 2 levels 
 deep from /. This happens with the default 1-node installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but was later reverted in HADOOP-8129.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2012-11-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493009#comment-13493009
 ] 

Hudson commented on HADOOP-8589:


Integrated in Hadoop-trunk-Commit #2977 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2977/])
HADOOP-8589 ViewFs tests fail when tests and home dirs are nested (sanjay 
Radia) (Revision 1406939)

 Result = SUCCESS
sradia : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1406939
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestChRootedFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestFcMainOperationsLocalFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemTestSetup.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsTestSetup.java


 ViewFs tests fail when tests and home dirs are nested
 -

 Key: HADOOP-8589
 URL: https://issues.apache.org/jira/browse/HADOOP-8589
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Sanjay Radia
 Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
 hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch


 TestFSMainOperationsLocalFileSystem fails when the test root directory is 
 under the user's home directory and the user's home dir is more than 2 levels 
 deep from /. This happens with the default 1-node installation of Jenkins. 
 This is the failure log:
 {code}
 org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
 dir; cannot create link here
   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
   at org.apache.hadoop.fs.viewfs.InodeTree.init(InodeTree.java:334)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem$1.init(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
   at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
   at 
 org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
   at 
 org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
 ...
 Standard Output
 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
 base /var/lib
 {code}
 The reason for the failure is that the code tries to mount links for both 
 /var and /var/lib, and it fails for the 2nd one because /var is already 
 mounted.
 The fix was provided in HADOOP-8036 but was later reverted in HADOOP-8129.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-07 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493011#comment-13493011
 ] 

Yu Li commented on HADOOP-8419:
---

More details about this issue:
In the IBM JDK, the GZIPOutputStream class calls the deflater's end() method 
as part of GZIPOutputStream.finish(), so the deflater's reset() cannot be 
called afterwards, while the Sun and OpenJDK implementations do not call this 
end() method.

To work around this issue, we need to override the finish() method of the 
corresponding classes that extend GZIPOutputStream, so that we do not depend 
on the implementation details of a particular JDK. And since the needed 
writeTrailer, writeInt and writeShort have all become private methods in JDK 6 
(Sun/IBM/OpenJDK), we also need to add these 3 methods in the patch. 
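
To make the shape of that workaround concrete, here is a rough hand-written sketch (the class name is made up and the details are simplified, so this is not the actual patch): the subclass finishes the deflater itself and writes the 8-byte gzip trailer (CRC-32 plus uncompressed size) with its own writeTrailer/writeInt/writeShort helpers, without ever calling the deflater's end().

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

class ResettableGZIPOutputStream extends GZIPOutputStream {

  ResettableGZIPOutputStream(OutputStream out) throws IOException {
    super(out);
  }

  @Override
  public void finish() throws IOException {
    if (!def.finished()) {
      def.finish();
      while (!def.finished()) {
        deflate();                  // drain any remaining compressed data
      }
      writeTrailer(buf, 0);         // gzip trailer: CRC-32 + uncompressed size
      out.write(buf, 0, 8);         // two little-endian 32-bit integers
      // Note: the deflater is NOT end()ed here, so reset() stays usable.
    }
  }

  // Re-implementations of the trailer helpers that are private in JDK 6.
  private void writeTrailer(byte[] b, int offset) throws IOException {
    writeInt((int) crc.getValue(), b, offset);      // CRC-32 of uncompressed data
    writeInt(def.getTotalIn(), b, offset + 4);      // number of uncompressed bytes
  }

  private void writeInt(int i, byte[] b, int offset) {
    writeShort(i & 0xffff, b, offset);
    writeShort((i >> 16) & 0xffff, b, offset + 2);
  }

  private void writeShort(int s, byte[] b, int offset) {
    b[offset] = (byte) (s & 0xff);
    b[offset + 1] = (byte) ((s >> 8) & 0xff);
  }
}
{code}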

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk

 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When the native zlib is loaded the codec creates a 
 CompressorOutputStream that doesn't have the problem, otherwise, the 
 GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2 (including the current JDK 6 SR10), 
 GZIPOutputStream#finish will release the underlying deflater, which causes an 
 NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
 don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-07 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-8419 started by Yu Li.

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk

 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When the native zlib is loaded the codec creates a 
 CompressorOutputStream that doesn't have the problem, otherwise, the 
 GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2 (including the current JDK 6 SR10), 
 GZIPOutputStream#finish will release the underlying deflater, which causes an 
 NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
 don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-07 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HADOOP-8419:
--

Attachment: HADOOP-8419-branch-1.patch

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk
 Attachments: HADOOP-8419-branch-1.patch


 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When the native zlib is loaded the codec creates a 
 CompressorOutputStream that doesn't have the problem, otherwise, the 
 GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2 (including the current JDK 6 SR10), 
 GZIPOutputStream#finish will release the underlying deflater, which causes an 
 NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
 don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2012-11-07 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493024#comment-13493024
 ] 

Yu Li commented on HADOOP-8419:
---

Attached the patch for branch-1

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk
 Attachments: HADOOP-8419-branch-1.patch


 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When the native zlib is loaded the codec creates a 
 CompressorOutputStream that doesn't have the problem, otherwise, the 
 GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2 (including the current JDK 6 SR10), 
 GZIPOutputStream#finish will release the underlying deflater, which causes an 
 NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
 don't have this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira