[jira] [Commented] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Raymie Stata (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958559#comment-13958559
 ] 

Raymie Stata commented on HADOOP-10428:
---

{code}
+// Get the password from the conf, if not present from the user's environment
+String pw = conf.get(KEYSTORE_PASSWORD_KEY,
+    System.getenv(KEYSTORE_PASSWORD_ENV_VAR));
{code}
Should the search order be env then conf (i.e., env overrides conf) instead?
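
For illustration, a minimal sketch of the reversed order being suggested (env
overrides conf); the constant names mirror the snippet above and the
surrounding context is assumed:
{code}
// Sketch only: the env var, when set, takes precedence over the config value.
String envPw = System.getenv(KEYSTORE_PASSWORD_ENV_VAR);
String pw = (envPw != null) ? envPw : conf.get(KEYSTORE_PASSWORD_KEY);
{code}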

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.





[jira] [Updated] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10428:


Attachment: HADOOP-10428.patch

pathc with configuration pointing to file with password.

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.





[jira] [Comment Edited] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958575#comment-13958575
 ] 

Alejandro Abdelnur edited comment on HADOOP-10428 at 4/3/14 6:50 AM:
-

patch with configuration pointing to file with password.


was (Author: tucu00):
pathc with configuration pointing to file with password.

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.





[jira] [Commented] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958580#comment-13958580
 ] 

Alejandro Abdelnur commented on HADOOP-10428:
-

[~raymie], sure we can. It was preserving backward compat, but all this is in 
trunk, so we can change it. I'll update the patch per your suggestion.

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.





[jira] [Commented] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958604#comment-13958604
 ] 

Hadoop QA commented on HADOOP-10428:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638435/HADOOP-10428.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3741//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3741//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3741//console

This message is automatically generated.

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.





[jira] [Updated] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10456:
---

Target Version/s: 2.5.0  (was: 2.3.0)

 Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
 

 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Nishkam Ravi
 Attachments: HADOOP-10456_nravi.patch


 The following exception occurs non-deterministically:
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
 at java.util.HashMap$KeyIterator.next(HashMap.java:960)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
 at java.util.HashSet.<init>(HashSet.java:117)
 at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
 at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
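
 For illustration, a standalone sketch of the underlying race (an assumption
 about the trace above, not the Hadoop code itself): copying a HashSet while
 another thread mutates it can throw ConcurrentModificationException, which is
 the pattern Configuration's copy constructor follows when it copies the
 source instance's internal sets.
 {code}
 import java.util.HashSet;
 import java.util.Set;

 public class CmeDemo {
   public static void main(String[] args) throws InterruptedException {
     final Set<String> shared = new HashSet<String>();
     Thread writer = new Thread(new Runnable() {
       public void run() {
         for (int i = 0; i < 1000000; i++) {
           shared.add("key" + i); // unsynchronized concurrent mutation
         }
       }
     });
     writer.start();
     try {
       while (writer.isAlive()) {
         new HashSet<String>(shared); // same shape as new Configuration(other)
       }
     } catch (java.util.ConcurrentModificationException e) {
       System.out.println("reproduced: " + e);
     }
     writer.join();
   }
 }
 {code}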





[jira] [Updated] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-03 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10456:
---

Hadoop Flags: Reviewed

+1 for the patch.  Thank you, Nishkam and Tsuyoshi.

I can't commit this right now.  I'll aim to commit tomorrow.  If I don't commit 
it in the next few days, please ping me again in case I forget.  :-)

 Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
 

 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Nishkam Ravi
 Attachments: HADOOP-10456_nravi.patch


 The following exception occurs non-deterministically:
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
 at java.util.HashMap$KeyIterator.next(HashMap.java:960)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
 at java.util.HashSet.<init>(HashSet.java:117)
 at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
 at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)





[jira] [Commented] (HADOOP-9907) Webapp http://hostname:port/metrics link is not working

2014-04-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958718#comment-13958718
 ] 

Akira AJISAKA commented on HADOOP-9907:
---

I verified that the /metrics link is not working and that /jmx?qry=Hadoop:* 
works with Hadoop 1.2.1.
Should I create a patch for branch-1 also?
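
For reference, the working alternative can be exercised from the command line
(host and port below are examples):
{code}
$ curl 'http://namenode-host:50070/jmx?qry=Hadoop:*'
{code}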

 Webapp http://hostname:port/metrics  link is not working 
 -

 Key: HADOOP-9907
 URL: https://issues.apache.org/jira/browse/HADOOP-9907
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Jian He
Assignee: Akira AJISAKA
Priority: Critical
 Attachments: HADOOP-9907.patch


 This link is not working which just shows a blank page.





[jira] [Commented] (HADOOP-10459) distcp V2 doesn't preserve root dir's attributes when -p is specified

2014-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958727#comment-13958727
 ] 

Hudson commented on HADOOP-10459:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #528 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/528/])
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584227)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSystem.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestFileBasedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestGlobbedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestUniformSizeInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java


 distcp V2 doesn't preserve root dir's attributes when -p is specified
 -

 Key: HADOOP-10459
 URL: https://issues.apache.org/jira/browse/HADOOP-10459
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Fix For: 2.5.0

 Attachments: HDFS-6152.001.patch, HDFS-6152.002.patch, 
 HDFS-6152.002.patch, HDFS-6152.003.patch


 Two issues were observed with distcp V2.
 ISSUE 1. When copying a source dir to a target dir with the -pu option, using 
 the command
   distcp -pu source-dir target-dir
 the source dir's owner is not preserved at the target dir. Similarly, other 
 attributes of the source dir are not preserved. They should be preserved when 
 neither -update nor -overwrite is specified.
 There are two scenarios with the above command:
 a. target-dir already exists. Issuing the above command results in 
 target-dir/source-dir (source-dir here refers to the last component of the 
 source-dir path on the command line) at the target file system, with all 
 contents of source-dir copied under target-dir/source-dir. The issue in this 
 case is that the attributes of source-dir are not preserved.
 b. target-dir doesn't exist. The command results in target-dir with all 
 contents of source-dir copied under target-dir. The issue in this case is 
 that the attributes of source-dir are not carried over to target-dir.
 For multiple-source cases, e.g., the command
   distcp -pu source-dir1 source-dir2 target-dir
 the multiple sources are copied under the target dir whether or not 
 target-dir exists (target-dir is created if it didn't exist), and their 
 attributes are preserved.
 ISSUE 2. With the following command:
   distcp source-dir target-dir
 when source-dir is an empty directory and target-dir doesn't exist, 
 source-dir is not copied; the command behaves like a no-op. However, when 
 source-dir is not empty, it is copied, resulting in target-dir at the target 
 file system containing a copy of source-dir's children.
 To be consistent, an empty source dir should be copied too. Basically, the 
 above distcp command should cause target-dir to get created at the target 
 file system, with source-dir's attributes preserved at target-dir when -p is 
 passed.
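
 To make the expected post-fix behavior concrete, a hedged illustration (the
 paths are examples):
 {code}
 $ hadoop distcp -pu hdfs://nn1/source-dir hdfs://nn2/target-dir
 # target-dir exists:        creates target-dir/source-dir and should preserve
 #                           source-dir's owner and other attributes on it
 # target-dir doesn't exist: creates target-dir itself with source-dir's
 #                           attributes preserved, including when source-dir
 #                           is empty
 {code}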





[jira] [Created] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Darek (JIRA)
Darek created HADOOP-10460:
--

 Summary: Please update on-line documentation for hadoop 2.3
 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek


Documentation on page:
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html

contains steps like:
 $ cp conf/*.xml input
but after checking out the repository, the conf directory does not exist (I 
guess it was moved to etc/hadoop).

A few lines below, in the Execution section, there are these steps:
  $ bin/hadoop namenode -format  - OK

  $ bin/start-all.sh  - this file has been removed
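
For what it's worth, a sketch of the 2.x equivalents (assuming the etc/hadoop
layout; script names per the 2.x sbin directory):
{code}
$ cp etc/hadoop/*.xml input
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh   # start-all.sh was split into start-dfs.sh and start-yarn.sh
$ sbin/start-yarn.sh
{code}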







[jira] [Commented] (HADOOP-10459) distcp V2 doesn't preserve root dir's attributes when -p is specified

2014-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958808#comment-13958808
 ] 

Hudson commented on HADOOP-10459:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1746 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1746/])
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584227)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSystem.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestFileBasedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestGlobbedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestUniformSizeInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java


 distcp V2 doesn't preserve root dir's attributes when -p is specified
 -

 Key: HADOOP-10459
 URL: https://issues.apache.org/jira/browse/HADOOP-10459
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Fix For: 2.5.0

 Attachments: HDFS-6152.001.patch, HDFS-6152.002.patch, 
 HDFS-6152.002.patch, HDFS-6152.003.patch


 Two issues were observed with distcp V2.
 ISSUE 1. When copying a source dir to a target dir with the -pu option, using 
 the command
   distcp -pu source-dir target-dir
 the source dir's owner is not preserved at the target dir. Similarly, other 
 attributes of the source dir are not preserved. They should be preserved when 
 neither -update nor -overwrite is specified.
 There are two scenarios with the above command:
 a. target-dir already exists. Issuing the above command results in 
 target-dir/source-dir (source-dir here refers to the last component of the 
 source-dir path on the command line) at the target file system, with all 
 contents of source-dir copied under target-dir/source-dir. The issue in this 
 case is that the attributes of source-dir are not preserved.
 b. target-dir doesn't exist. The command results in target-dir with all 
 contents of source-dir copied under target-dir. The issue in this case is 
 that the attributes of source-dir are not carried over to target-dir.
 For multiple-source cases, e.g., the command
   distcp -pu source-dir1 source-dir2 target-dir
 the multiple sources are copied under the target dir whether or not 
 target-dir exists (target-dir is created if it didn't exist), and their 
 attributes are preserved.
 ISSUE 2. With the following command:
   distcp source-dir target-dir
 when source-dir is an empty directory and target-dir doesn't exist, 
 source-dir is not copied; the command behaves like a no-op. However, when 
 source-dir is not empty, it is copied, resulting in target-dir at the target 
 file system containing a copy of source-dir's children.
 To be consistent, an empty source dir should be copied too. Basically, the 
 above distcp command should cause target-dir to get created at the target 
 file system, with source-dir's attributes preserved at target-dir when -p is 
 passed.





[jira] [Commented] (HADOOP-10459) distcp V2 doesn't preserve root dir's attributes when -p is specified

2014-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958837#comment-13958837
 ] 

Hudson commented on HADOOP-10459:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1720 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1720/])
HADOOP-10459. distcp V2 doesn't preserve root dir's attributes when -p is 
specified. Contributed by Yongjun Zhang. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584227)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpSystem.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpViewFs.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestFileBasedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestGlobbedCopyListing.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestIntegration.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestUniformSizeInputFormat.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/lib/TestDynamicInputFormat.java


 distcp V2 doesn't preserve root dir's attributes when -p is specified
 -

 Key: HADOOP-10459
 URL: https://issues.apache.org/jira/browse/HADOOP-10459
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Fix For: 2.5.0

 Attachments: HDFS-6152.001.patch, HDFS-6152.002.patch, 
 HDFS-6152.002.patch, HDFS-6152.003.patch


 Two issues were observed with distcp V2.
 ISSUE 1. When copying a source dir to a target dir with the -pu option, using 
 the command
   distcp -pu source-dir target-dir
 the source dir's owner is not preserved at the target dir. Similarly, other 
 attributes of the source dir are not preserved. They should be preserved when 
 neither -update nor -overwrite is specified.
 There are two scenarios with the above command:
 a. target-dir already exists. Issuing the above command results in 
 target-dir/source-dir (source-dir here refers to the last component of the 
 source-dir path on the command line) at the target file system, with all 
 contents of source-dir copied under target-dir/source-dir. The issue in this 
 case is that the attributes of source-dir are not preserved.
 b. target-dir doesn't exist. The command results in target-dir with all 
 contents of source-dir copied under target-dir. The issue in this case is 
 that the attributes of source-dir are not carried over to target-dir.
 For multiple-source cases, e.g., the command
   distcp -pu source-dir1 source-dir2 target-dir
 the multiple sources are copied under the target dir whether or not 
 target-dir exists (target-dir is created if it didn't exist), and their 
 attributes are preserved.
 ISSUE 2. With the following command:
   distcp source-dir target-dir
 when source-dir is an empty directory and target-dir doesn't exist, 
 source-dir is not copied; the command behaves like a no-op. However, when 
 source-dir is not empty, it is copied, resulting in target-dir at the target 
 file system containing a copy of source-dir's children.
 To be consistent, an empty source dir should be copied too. Basically, the 
 above distcp command should cause target-dir to get created at the target 
 file system, with source-dir's attributes preserved at target-dir when -p is 
 passed.





[jira] [Commented] (HADOOP-10433) Key Management Server based on KeyProvider API

2014-04-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958858#comment-13958858
 ] 

Larry McCay commented on HADOOP-10433:
--

I don't think that features are an issue at this point.
My larger concern is whether Hadoop common needs a KMS server offering of its 
own or not.
Given that the keystore provider is not likely a scalable provider for a KMS, 
this will lead us down the path of requiring an appropriate DB to do it right. 
This was my thinking as I started prototyping on an earlier version of the 
KeyProvider API as well.

It became reasonable to me that the KeyProvider API within Hadoop common was 
sufficient to enable deployments to plug in any number of KeyProvider 
implementations. This might be:
1. direct to KMIP
2. to an OpenStack Barbican server
3. to a Knox (or wherever) KMS

This would allow Hadoop common to not have to take on the weight of another 
server.

I'm not really arguing against putting it in common here, but recollecting my 
thought process on the matter.

Here is a question that I have had trouble answering with regard to the server 
and API being in common together...

Given the pluggable nature of the KeyProvider API, what value does adding a full 
KMS server with additional (and maybe redundant) pluggability to common provide?

I've had trouble reconciling that in my mind.

Now, take that same server implementation and move it out of common, and it 
easily makes sense to have pluggability for the Hadoop KeyProvider API, KMIP, 
and others.

So, when I said "as long as it makes sense to do so," I was really talking about 
whether it made sense to have it colocated with the KeyProvider API in common. 
I think the feature set is less of an issue.



 Key Management Server based on KeyProvider API
 --

 Key: HADOOP-10433
 URL: https://issues.apache.org/jira/browse/HADOOP-10433
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10433-v2.patch, HADOOP-10433-v3.patch, 
 HADOOP-10433.patch, KMS-ALL-PATCHES-v2.patch, KMS-ALL-PATCHES-v3.patch, 
 KMS-ALL-PATCHES.patch, KMS-doc.pdf


 (from HDFS-6134 proposal)
 Hadoop KMS is the gateway, for Hadoop and Hadoop clients, to the underlying 
 KMS. It provides an interface that works with existing Hadoop security 
 components (authentication, confidentiality).
 Hadoop KMS will be implemented leveraging the work being done in HADOOP-10141 
 and HADOOP-10177.
 Hadoop KMS will provide an additional implementation of the Hadoop 
 KeyProvider class. This implementation will be a client-server implementation.
 The client-server protocol will be secure:
 * Kerberos HTTP SPNEGO (authentication)
 * HTTPS for transport (confidentiality and integrity)
 * Hadoop ACLs (authorization)
 The Hadoop KMS implementation will not provide additional ACLs to access 
 encrypted files. For sophisticated access control requirements, HDFS ACLs 
 (HDFS-4685) should be used.
 Basic key administration will be supported by the Hadoop KMS via the already 
 available Hadoop KeyShell command-line tool.
 There are minor changes that must be done in Hadoop KeyProvider functionality:
 * The KeyProvider contract, and the existing implementations, must be 
 thread-safe
 * The KeyProvider API should have an API to generate the key material 
 internally
 * JavaKeyStoreProvider should use, if present, a password provided via 
 configuration
 * KeyProvider Option and Metadata should include a label (for easier 
 cross-referencing)
 To avoid overloading the underlying KeyProvider implementation, the Hadoop 
 KMS will cache keys using a TTL policy.
 Scalability and High Availability of the Hadoop KMS can be achieved by running 
 multiple instances behind a VIP/Load-Balancer. For High Availability, the 
 underlying KeyProvider implementation used by the Hadoop KMS must be Highly 
 Available.
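
 As a rough sketch of the pluggability described above (the provider URI and
 key name are examples, and exact signatures may differ by version):
 {code}
 Configuration conf = new Configuration();
 // Deployments select a provider (keystore, KMS, ...) purely via configuration:
 conf.set("hadoop.security.key.provider.path", "jceks://file/tmp/keys.jceks");
 List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
 KeyProvider provider = providers.get(0);
 KeyProvider.KeyVersion kv = provider.getCurrentVersion("mykey");
 {code}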





[jira] [Commented] (HADOOP-10431) Change visibility of KeyStore KeyVersion/Metadata/Options constructor and methods to public

2014-04-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958860#comment-13958860
 ] 

Larry McCay commented on HADOOP-10431:
--

I need to get my head above water so that I can review in detail what you need 
there.
In my early prototyping, the API was sufficient as it stood for my needs - 
though this was on an earlier version of the API.

I think that I actually created a different set of classes to represent the 
databinding within the REST server. This may be the difference.
I will try and look into that today.

Thanks for the clarification, tucu!

 Change visibility of KeyStore KeyVersion/Metadata/Options constructor and 
 methods to public
 ---

 Key: HADOOP-10431
 URL: https://issues.apache.org/jira/browse/HADOOP-10431
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10431.patch


 Making KeyVersion/Metadata/Options constructor and methods public will 
 facilitate {{KeyProvider}} implementations to use those classes.





[jira] [Commented] (HADOOP-10133) winutils detection on windows-cygwin fails

2014-04-03 Thread David Fleeman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958862#comment-13958862
 ] 

David Fleeman commented on HADOOP-10133:


I have confirmed that I am having the same issue with a Windows+cygwin setup.

 winutils detection on windows-cygwin fails
 --

 Key: HADOOP-10133
 URL: https://issues.apache.org/jira/browse/HADOOP-10133
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
 Environment: windows 7, cygwin
Reporter: Franjo Markovic
   Original Estimate: 1h
  Remaining Estimate: 1h

 java.io.IOException: Could not locate executable null\bin\winutils.exe in the 
 Hadoop binaries.
 at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
 at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
 at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
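
 A hedged note on the likely cause: the null prefix in the path suggests the
 Hadoop home directory was never resolved. Shell derives the winutils path
 from the hadoop.home.dir system property or the HADOOP_HOME environment
 variable, so a workaround sketch (paths are examples) is:
 {code}
 // Assumption: set before org.apache.hadoop.util.Shell is class-initialized.
 System.setProperty("hadoop.home.dir", "C:\\hadoop");
 // or set HADOOP_HOME=C:\hadoop in the environment, with winutils.exe
 // present under %HADOOP_HOME%\bin
 {code}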





[jira] [Updated] (HADOOP-10350) BUILDING.txt should mention openssl dependency required for hadoop-pipes

2014-04-03 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-10350:
---

Attachment: HADOOP-10350.patch

Attached the updated patch.
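
For context, a hedged example of the dependency in question (Debian/Ubuntu
package name; other distros differ):
{code}
$ sudo apt-get install libssl-dev   # openssl headers needed by hadoop-pipes
$ mvn clean install -Pnative
{code}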

 BUILDING.txt should mention openssl dependency required for hadoop-pipes
 

 Key: HADOOP-10350
 URL: https://issues.apache.org/jira/browse/HADOOP-10350
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HADOOP-10350.patch, HADOOP-10350.patch


 BUILDING.txt should mention openssl dependency required for hadoop-pipes



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:

SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);

Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which is a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:

SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);

Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).
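
 As a sketch of the kind of plugin this enables (an illustrative subclass, not
 the committed example; it would be wired in via the
 hadoop.security.saslproperties.resolver.class property):
 {code}
 import java.net.InetAddress;
 import java.util.HashMap;
 import java.util.Map;
 import javax.security.sasl.Sasl;
 import org.apache.hadoop.security.SaslPropertiesResolver;

 public class SubnetSaslPropertiesResolver extends SaslPropertiesResolver {
   @Override
   public Map<String, String> getServerProperties(InetAddress clientAddress) {
     Map<String, String> props =
         new HashMap<String, String>(getDefaultProperties());
     // Illustrative rule: force privacy (auth-conf) for one subnet.
     if (clientAddress.getHostAddress().startsWith("10.1.")) {
       props.put(Sasl.QOP, "auth-conf"); // QOP=privacy
     }
     return props;
   }
 }
 {code}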



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which is a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:

SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);

Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).





[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:

SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);

Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).





[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public field. Any use of this variable outside hadoop should be replaced with 
the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes _SaslRpcServer.SASL_PROPS_, which was 
a public variable. Any use of this variable outside hadoop should be replaced 
with the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).





[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes _SaslRpcServer.SASL_PROPS_, which was 
a public variable. Any use of this variable outside hadoop should be replaced 
with the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public variable. Any use of this variable outside hadoop should be replaced 
with the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).





[jira] [Updated] (HADOOP-10451) Remove unused field and imports from SaslRpcServer

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10451:
--

Release Note: 
SaslRpcServer.SASL_PROPS is removed.
Any use of this variable should be replaced with the following code: 
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf); 
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

 Remove unused field and imports from SaslRpcServer
 --

 Key: HADOOP-10451
 URL: https://issues.apache.org/jira/browse/HADOOP-10451
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10451.patch


 There were unused fields and imports remaining in SaslRpcServer.
 This jira is to clean up those fields and imports in SaslRpcServer.





[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public field. Any use of this variable should be replaced with the following 
code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public field. Any use of this variable outside hadoop should be replaced with 
the following code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOPs.
 Some connections need to be restricted to a specific set of QOPs based on 
 connection properties.
 E.g., connections from clients in a specific subnet need to be encrypted 
 (QOP=privacy).





[jira] [Updated] (HADOOP-10221) Add a plugin to specify SaslProperties for RPC protocol based on connection properties

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10221:
--

Release Note: 
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public field. Any use of this variable should be replaced with the following 
code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties();

  was:
SaslPropertiesResolver or its subclass is used to resolve the QOP used for a 
connection. The subclass can be specified via the 
hadoop.security.saslproperties.resolver.class configuration property. If not 
specified, the full set of values specified in hadoop.rpc.protection is used 
while determining the QOP used for the connection. If a class is specified, 
then the QOP values returned by the class will be used while determining the 
QOP used for the connection.

Note that this change effectively removes SaslRpcServer.SASL_PROPS, which was a 
public field. Any use of this variable should be replaced with the following 
code:
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf);
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties
 --

 Key: HADOOP-10221
 URL: https://issues.apache.org/jira/browse/HADOOP-10221
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 3.0.0, 2.4.0

 Attachments: HADOOP-10221.no-static.example, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, 
 HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch, HADOOP-10221.patch


 Add a plugin to specify SaslProperties for RPC protocol based on connection 
 properties.
 HADOOP-10211 enables client and server to specify and support multiple QOP. 
 Some connections need to be restricted to a specific set of QOP based on 
 connection properties. 
 E.g., connections from clients on a specific subnet need to be encrypted 
 (QOP=privacy).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10451) Remove unused field and imports from SaslRpcServer

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10451:
--

Release Note: 
SaslRpcServer.SASL_PROPS is removed.
Any use of this variable should be replaced with the following code: 
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf); 
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties();

  was:
SaslRpcServer.SASL_PROPS is removed.
Any use of this variable should be replaced with the following code: 
SaslPropertiesResolver saslPropsResolver = 
SaslPropertiesResolver.getInstance(conf); 
Map<String, String> sasl_props = saslPropsResolver.getDefaultProperties()


 Remove unused field and imports from SaslRpcServer
 --

 Key: HADOOP-10451
 URL: https://issues.apache.org/jira/browse/HADOOP-10451
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10451.patch


 There were unused fields and imports remaining in SaslRpcServer.
 This jira is to clean up those fields and imports from SaslRpcServer. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10428:


Attachment: HADOOP-10428.patch

New patch with the password loading order suggested by Raymie; fixed the rat failure.
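
One plausible reading of that order, as a minimal sketch (the env var and conf 
key names are hypothetical, and this is not necessarily the patch's exact code):

{code}
// Illustrative only: the env var overrides the conf value.
static String resolveKeystorePassword(org.apache.hadoop.conf.Configuration conf) {
  String pw = System.getenv("HADOOP_KEYSTORE_PASSWORD");  // hypothetical env var name
  if (pw == null) {
    pw = conf.get("hadoop.security.keystore.password");   // hypothetical conf key
  }
  return pw;
}
{code}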

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch, 
 HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-03 Thread Nishkam Ravi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13958976#comment-13958976
 ] 

Nishkam Ravi commented on HADOOP-10456:
---

Thanks Tsuyoshi and Chris.

 Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
 

 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Nishkam Ravi
 Attachments: HADOOP-10456_nravi.patch


 The following exception occurs non-deterministically:
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
 at java.util.HashMap$KeyIterator.next(HashMap.java:960)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
 at java.util.HashSet.<init>(HashSet.java:117)
 at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
 at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
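
 For context, a tiny standalone repro of this failure mode (illustrative only, 
 not the reported code path): building a HashSet from a HashMap's key set while 
 another thread mutates the map, mirroring the Configuration.<init> / 
 HashSet.<init> frames in the trace above.

{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

public class CmeRepro {
  public static void main(String[] args) {
    final Map<String, String> props = new HashMap<String, String>();
    new Thread(new Runnable() {
      public void run() {
        for (int i = 0; ; i++) {
          props.put("key" + i, "value");  // concurrent writer
        }
      }
    }).start();
    while (true) {
      // Copying the key set iterates it; a concurrent put typically
      // triggers ConcurrentModificationException, as in the trace above.
      new HashSet<String>(props.keySet());
    }
  }
}
{code}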



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10335) An ip whitelist based implementation to resolve Sasl properties per connection

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10335:
--

Status: Patch Available  (was: Open)

 An ip whitelist based implementation to resolve Sasl properties per connection
 --

 Key: HADOOP-10335
 URL: https://issues.apache.org/jira/browse/HADOOP-10335
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf


 As noted in HADOOP-10221, it is sometimes required for a Hadoop server to 
 communicate with some clients over an encrypted channel and with other 
 clients over an unencrypted channel. 
 HADOOP-10221 introduced an interface _SaslPropertiesResolver_ and the 
 changes required to plug in and use _SaslPropertiesResolver_ to identify the 
 SaslProperties to be used for a connection. 
 In this jira, an ip-whitelist based implementation of 
 _SaslPropertiesResolver_ is attempted.
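
 As a rough illustration of what such a resolver could look like (a 
 hypothetical sketch, not the attached patch; the hard-coded whitelist is a 
 stand-in for whatever file or conf based lookup the patch uses):

{code}
import java.net.InetAddress;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

import javax.security.sasl.Sasl;

import org.apache.hadoop.security.SaslPropertiesResolver;

public class WhitelistSaslPropertiesResolver extends SaslPropertiesResolver {
  // Stand-in whitelist of client addresses that may skip encryption.
  private static final Set<String> WHITELIST =
      new HashSet<String>(Arrays.asList("10.0.0.5", "10.0.0.6"));

  @Override
  public Map<String, String> getServerProperties(InetAddress clientAddress) {
    Map<String, String> props = new TreeMap<String, String>(getDefaultProperties());
    if (!WHITELIST.contains(clientAddress.getHostAddress())) {
      props.put(Sasl.QOP, "auth-conf");  // force QOP=privacy (encryption)
    }
    return props;
  }
}
{code}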



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10335) An ip whitelist based implementation to resolve Sasl properties per connection

2014-04-03 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10335:
--

Status: Open  (was: Patch Available)

 An ip whitelist based implementation to resolve Sasl properties per connection
 --

 Key: HADOOP-10335
 URL: https://issues.apache.org/jira/browse/HADOOP-10335
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10335.patch, HADOOP-10335.patch, HADOOP-10335.pdf


 As noted in HADOOP-10221, it is sometimes required for a Hadoop server to 
 communicate with some clients over an encrypted channel and with other 
 clients over an unencrypted channel. 
 HADOOP-10221 introduced an interface _SaslPropertiesResolver_ and the 
 changes required to plug in and use _SaslPropertiesResolver_ to identify the 
 SaslProperties to be used for a connection. 
 In this jira, an ip-whitelist based implementation of 
 _SaslPropertiesResolver_ is attempted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10379) Protect authentication cookies with the HttpOnly and Secure flags

2014-04-03 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959027#comment-13959027
 ] 

Jing Zhao commented on HADOOP-10379:


+1 for the branch-1 patch.

 Protect authentication cookies with the HttpOnly and Secure flags
 -

 Key: HADOOP-10379
 URL: https://issues.apache.org/jira/browse/HADOOP-10379
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.4.0

 Attachments: HADOOP-10379-branch-1.000.patch, HADOOP-10379.000.patch, 
 HADOOP-10379.001.patch, HADOOP-10379.002.patch


 Browser vendors have adopted proposals to enhance the security of HTTP 
 cookies. For example, the server can mark a cookie as {{Secure}} so that it 
 will not be transferred via the plain-text HTTP protocol, and the server can 
 mark a cookie as {{HttpOnly}} to prevent JavaScript from accessing that cookie.
 This jira proposes to adopt these flags in Hadoop to protect the HTTP cookie 
 used for authentication purposes.
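
 For illustration, a hedged sketch of setting both flags with the Servlet 3.0 
 API (the cookie name is an assumption):

{code}
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class AuthCookies {
  static void addAuthCookie(HttpServletResponse response, String token) {
    Cookie cookie = new Cookie("hadoop.auth", token);  // cookie name assumed
    cookie.setSecure(true);    // never sent over plain-text HTTP
    cookie.setHttpOnly(true);  // not readable from JavaScript (Servlet 3.0+)
    response.addCookie(cookie);
  }
}
{code}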



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10428) JavaKeyStoreProvider should accept keystore password via configuration falling back to ENV VAR

2014-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959031#comment-13959031
 ] 

Hadoop QA commented on HADOOP-10428:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638515/HADOOP-10428.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3742//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3742//console

This message is automatically generated.

   JavaKeyStoreProvider should accept keystore password via configuration 
 falling back to ENV VAR
 ---

 Key: HADOOP-10428
 URL: https://issues.apache.org/jira/browse/HADOOP-10428
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10428.patch, HADOOP-10428.patch, 
 HADOOP-10428.patch


 Currently the password for the {{JavaKeyStoreProvider}} must be set in an ENV 
 VAR.
 Allowing the password to be set via configuration enables applications to 
 interactively ask for the password before initializing the 
 {{JavaKeyStoreProvider}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-04-03 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959123#comment-13959123
 ] 

Todd Lipcon commented on HADOOP-10150:
--

A few questions here...

First, let me confirm my understanding of the key structure and storage:

- Client master key: this lives on the Key Management Server, and might be 
different from application to application. In many cases there may be just one 
per cluster, though in a multitenant cluster, perhaps we could have one per 
tenant.
- Data key: this is set per encrypted directory. This key is stored in the 
directory xattr on the NN, but encrypted by the client master key (which the NN 
doesn't know).

So, when a client wants to read a file, the following is the process:
1) Notices that the file is in an encrypted directory. Fetches the encrypted 
data key from the NN's xattr on the directory.
2) Somehow associates this encrypted data key with the master key that was used 
to encrypt it (perhaps it's tagged with some identifier). Fetches the 
appropriate master key from the key store.
2a) The keystore somehow authenticates and authorizes the client's access to 
this key
3) The client decrypts the data key using the master key, and is now able to 
set up a decrypting stream for the file itself. (I've ignored the IV here, but 
assume it's also stored in an xattr)
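
To make step 3 concrete, a hedged sketch of the data-key unwrap (the cipher, 
mode, and IV handling are assumptions for illustration; the design doc may 
specify something different):

{code}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EnvelopeKeys {
  // Decrypt the per-directory data key using the client master key.
  static byte[] decryptDataKey(byte[] encryptedDataKey, byte[] masterKey, byte[] iv)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE,
        new SecretKeySpec(masterKey, "AES"),
        new IvParameterSpec(iv));
    return cipher.doFinal(encryptedDataKey);
  }
}
{code}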

In terms of attack vectors:
- let's say that the NN disk is stolen. The thief now has access to a bunch of 
keys, but they're all encrypted by various master keys. So we're OK.
- let's say that a client is malicious. It can get whichever master keys it has 
access to from the KMS. If we only have one master key per cluster, then the 
combination of one malicious client plus stealing the fsimage will give up all 
the keys
- let's say that a client has escalated to root access on one of the slave 
nodes in the cluster, or otherwise has malicious access to a NodeManager 
process. By looking at a running MR task, it could steal whatever credentials 
the task is using to access the KMS, and/or dump the memory of the client 
process in order to give up the master key above.

Does the above look right? It would be nice to add to the design doc a clear 
description of the threat model here. Do we assume that the adversary will 
never have root on the cluster? Do we assume the adversary won't have access to 
the mapred user (or whoever runs the NM)?

How does the MR task in this context get the credentials to fetch keys from the 
KMS? If the KMS accepts the same authentication tokens as the NameNode, then is 
there any reason that this is more secure than having the NameNode supply the 
keys? Or is it just that decoupling the NameNode and the key server allows this 
approach to work for non-HDFS filesystems, at the expense of an additional 
daemon running a key distribution service?


 Hadoop cryptographic file system
 

 Key: HADOOP-10150
 URL: https://issues.apache.org/jira/browse/HADOOP-10150
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
  Labels: rhino
 Fix For: 3.0.0

 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
 system-V2.docx, HADOOP cryptographic file system.pdf, cfs.patch, extended 
 information based on INode feature.patch


 There is an increasing need for securing data when Hadoop customers use 
 various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
 on.
 HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
 on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
 transparent to upper layer applications. It’s configurable, scalable and fast.
 High level requirements:
 1. Transparent to and no modification required for upper layer 
 applications.
 2. “Seek”, “PositionedReadable” are supported for input stream of CFS if 
 the wrapped file system supports them.
 3. Very high performance for encryption and decryption, they will not 
 become bottleneck.
 4. Can decorate HDFS and all other file systems in Hadoop, and will not 
 modify existing structure of file system, such as namenode and datanode 
 structure if the wrapped file system is HDFS.
 5. Admin can configure encryption policies, such as which directory will 
 be encrypted.
 6. A robust key management framework.
 7. Support Pread and append operations if the wrapped file system supports 
 them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10454) Provide FileContext version of har file system

2014-04-03 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10454:


Target Version/s: 0.23.11, 2.5.0  (was: 2.5.0)

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.
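
 Once HarFs is registered (the commit below touches core-default.xml for 
 this), usage through FileContext might look like the following hedged sketch 
 (the archive path is illustrative):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class HarFsExample {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    for (FileStatus st : fc.util().listStatus(new Path("har:///user/foo/archive.har"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}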



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10454) Provide FileContext version of har file system

2014-04-03 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959322#comment-13959322
 ] 

Jonathan Eagles commented on HADOOP-10454:
--

+1. lgtm.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10454) Provide FileContext version of har file system

2014-04-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959343#comment-13959343
 ] 

Hudson commented on HADOOP-10454:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5453 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5453/])
HADOOP-10454. Provide FileContext version of har file system. (Kihwal Lee via 
jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1584431)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/HarFs.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10454) Provide FileContext version of har file system

2014-04-03 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959358#comment-13959358
 ] 

Kihwal Lee commented on HADOOP-10454:
-

Committed to branch-0.23 as well.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10454) Provide FileContext version of har file system

2014-04-03 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10454:


   Resolution: Fixed
Fix Version/s: 2.5.0
   0.23.11
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-0.23.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 3.0.0, 0.23.11, 2.5.0

 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam reassigned HADOOP-10409:
--

Assignee: Mohammad Kamrul Islam

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected.  This is not documented, 
 however, and the error message thrown from {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed.  It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam resolved HADOOP-10409.


Resolution: Won't Fix

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected.  This is not documented, 
 however, and the error message thrown from {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed.  It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-03 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959497#comment-13959497
 ] 

Mohammad Kamrul Islam commented on HADOOP-10409:


I found that we don't need any code changes for this. The JIRA that 
[~tthompso] created should take care of the documentation task.

Therefore, closing...


 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson
Assignee: Mohammad Kamrul Islam

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as expected.  This is not documented, 
 however, and the error message thrown from {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that perhaps libhadoop wasn't compiled 
 with the bzip2 headers installed.  It would also be nice if compile-time 
 dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959523#comment-13959523
 ] 

Akira AJISAKA commented on HADOOP-10460:


SingleNodeSetup.html is deprecated and not linked from the left-side menu page. 
You can use SingleCluster.html for single node setup.
SingleCluster.html is also deprecated; however, it will be updated in 
HADOOP-10139 (2.4.0). You can download 2.4.0-rc0 
(http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0/) or go to my 
documentation build 
(http://aajisaka.github.io/hadoop-project/hadoop-project-dist/hadoop-common/SingleCluster.html)
 to get the new document.

By the way, I suggest removing SingleNodeSetup.html, which is confusing to 
users. I'll create a patch shortly.

 Please update on-line documentation for hadoop 2.3
 --

 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek

 Documentation on page:
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
 contains steps like:
  $ cp conf/*.xml input
 but after checking out the repository, conf does not exist (I guess it was 
 moved to etc/hadoop).
 A few lines below, in the section Execution, there are steps:
   $ bin/hadoop namenode -format  - OK
   $ bin/start-all.sh  - this file has been removed



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-10460:
--

Assignee: Akira AJISAKA

 Please update on-line documentation for hadoop 2.3
 --

 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek
Assignee: Akira AJISAKA

 Documentation on page:
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
 contains steps like:
  $ cp conf/*.xml input
 but after checking out the repository, conf does not exist (I guess it was 
 moved to etc/hadoop).
 A few lines below, in the section Execution, there are steps:
   $ bin/hadoop namenode -format  - OK
   $ bin/start-all.sh  - this file has been removed



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10460:
---

  Labels: newbie  (was: )
Target Version/s: 2.4.1
  Status: Patch Available  (was: Open)

 Please update on-line documentation for hadoop 2.3
 --

 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10460.patch


 Documentation on page:
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
 contains steps like:
  $ cp conf/*.xml input
 but after checking out the repository, conf does not exist (I guess it was 
 moved to etc/hadoop).
 A few lines below, in the section Execution, there are steps:
   $ bin/hadoop namenode -format  - OK
   $ bin/start-all.sh  - this file has been removed



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10460:
---

Attachment: HADOOP-10460.patch

Attaching a patch.

 Please update on-line documentation for hadoop 2.3
 --

 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10460.patch


 Documentation on page:
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
 contains steps like:
  $ cp conf/*.xml input
 but after checking out the repository, conf does not exist (I guess it was 
 moved to etc/hadoop).
 A few lines below, in the section Execution, there are steps:
   $ bin/hadoop namenode -format  - OK
   $ bin/start-all.sh  - this file has been removed



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10460) Please update on-line documentation for hadoop 2.3

2014-04-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13959569#comment-13959569
 ] 

Hadoop QA commented on HADOOP-10460:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638608/HADOOP-10460.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3743//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3743//console

This message is automatically generated.

 Please update on-line documentation for hadoop 2.3
 --

 Key: HADOOP-10460
 URL: https://issues.apache.org/jira/browse/HADOOP-10460
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
 Environment: any
Reporter: Darek
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10460.patch


 Documentation on page:
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleNodeSetup.html
 contains steps like:
  $ cp conf/*.xml input
 but after checked out repository conf does not exists (I quess it was moved 
 to etc/hadoop)
 Few lines below in section Execution there are steps:
   $ bin/hadoop namenode -format  - OK
   $ bin/start-all.sh  - this file has been removed



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10461) Runtime DI based injector for FileSystem tests

2014-04-03 Thread jay vyas (JIRA)
jay vyas created HADOOP-10461:
-

 Summary: Runtime DI based injector for FileSystem tests
 Key: HADOOP-10461
 URL: https://issues.apache.org/jira/browse/HADOOP-10461
 Project: Hadoop Common
  Issue Type: Bug
Reporter: jay vyas
Priority: Minor


Currently a lot of manual inheritance and stub classes are required in order to 
run the FileSystemBaseContract and FSMainOperations tests. 

Let's provide a Guice or other DI based injector for HCFS tests which

1) Injects the file system at runtime.
2) Can easily be adopted for other FileSystems.
3) Can read in System properties to skip certain tests, thus providing support 
for the type of variability that we know FileSystem tests require.

Ideally, we could replace the RawLocalFileSystem tests with this injector in a 
second follow-up patch; it would probably reduce the overall amount of code 
required.
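
A hedged Guice sketch of the idea (module name, property name, and bindings are 
hypothetical, not a settled design):

{code}
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Provides;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HcfsTestModule extends AbstractModule {
  @Override
  protected void configure() {
    // Bindings for shared test fixtures would go here.
  }

  @Provides
  FileSystem provideFileSystem() throws Exception {
    Configuration conf = new Configuration();
    // Select the FileSystem under test at runtime via a system property.
    conf.set("fs.defaultFS", System.getProperty("test.fs.uri", "file:///"));
    return FileSystem.get(conf);
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = Guice.createInjector(new HcfsTestModule())
        .getInstance(FileSystem.class);
    System.out.println(fs.getUri());
  }
}
{code}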

--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10462) NameNodeResourceChecker prints 'null' mount point to the log

2014-04-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA moved HDFS-6073 to HADOOP-10462:
--

  Component/s: (was: namenode)
 Target Version/s:   (was: 2.4.0)
Affects Version/s: (was: 2.3.0)
   2.3.0
  Key: HADOOP-10462  (was: HDFS-6073)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NameNodeResourceChecker prints 'null' mount point to the log
 

 Key: HADOOP-10462
 URL: https://issues.apache.org/jira/browse/HADOOP-10462
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6073.2.patch, HDFS-6073.patch


 If the available space on the volume used for saving the fsimage is less than 
 100MB (the default), NameNodeResourceChecker logs the following:
 {code}
 Space available on volume 'null' is 92274688, which is below the configured 
 reserved amount 104857600
 {code}
 It should print an appropriate mount point instead of null.
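
 An illustrative fix sketch (not necessarily the attached patch): resolve the 
 volume's actual mount point, e.g. via org.apache.hadoop.fs.DF, before logging.

{code}
import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DF;

public class MountPointLookup {
  // Returns the mount point backing the given volume directory,
  // e.g. "/" instead of the 'null' seen in the log above.
  static String mountOf(File volume, Configuration conf) throws Exception {
    DF df = new DF(volume, conf);
    return df.getMount();
  }
}
{code}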



--
This message was sent by Atlassian JIRA
(v6.2#6252)