hadoop git commit: MAPREDUCE-7165. mapred-site.xml is misformatted in single node setup document. Contributed by Zhaohui Xin.

2018-11-29 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 0b83c95ff -> 298c76c48


MAPREDUCE-7165. mapred-site.xml is misformatted in single node setup document. 
Contributed by Zhaohui Xin.

(cherry picked from commit c9bfca217f4b15a3a367db51147d0dc2075ca274)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/298c76c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/298c76c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/298c76c4

Branch: refs/heads/branch-3.1
Commit: 298c76c48530e759ebdda46474ff44ff28ff9a81
Parents: 0b83c95
Author: Akira Ajisaka 
Authored: Fri Nov 30 13:13:23 2018 +0900
Committer: Akira Ajisaka 
Committed: Fri Nov 30 13:30:52 2018 +0900

--
 .../hadoop-common/src/site/markdown/SingleCluster.md.vm   | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/298c76c4/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm 
b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
index e7a0a08..18fb52d 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
@@ -190,9 +190,6 @@ The following instructions assume that 1. ~ 4. steps of [the above instructions]
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
 </property>
-</configuration>
-
-<configuration>
 <property>
 <name>mapreduce.application.classpath</name>
 <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>





hadoop git commit: MAPREDUCE-7165. mapred-site.xml is misformatted in single node setup document. Contributed by Zhaohui Xin.

2018-11-29 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 689555004 -> c9c4511a0


MAPREDUCE-7165. mapred-site.xml is misformatted in single node setup document. 
Contributed by Zhaohui Xin.

(cherry picked from commit c9bfca217f4b15a3a367db51147d0dc2075ca274)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c9c4511a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c9c4511a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c9c4511a

Branch: refs/heads/branch-3.0
Commit: c9c4511a0e8da82423cdae4ceba7ee809845314c
Parents: 6895550
Author: Akira Ajisaka 
Authored: Fri Nov 30 13:13:23 2018 +0900
Committer: Akira Ajisaka 
Committed: Fri Nov 30 13:31:13 2018 +0900

--
 .../hadoop-common/src/site/markdown/SingleCluster.md.vm   | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c9c4511a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm 
b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
index e7a0a08..18fb52d 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/SingleCluster.md.vm
@@ -190,9 +190,6 @@ The following instructions assume that 1. ~ 4. steps of [the above instructions]
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
 </property>
-</configuration>
-
-<configuration>
 <property>
 <name>mapreduce.application.classpath</name>
 <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>





hadoop git commit: HDFS-13870. WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc. Contributed by Siyao Meng.

2018-11-29 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk bad12031f -> 0e36e935d


HDFS-13870. WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT API doc. 
Contributed by Siyao Meng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e36e935
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e36e935
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e36e935

Branch: refs/heads/trunk
Commit: 0e36e935d909862401890d0a5410204504f48b31
Parents: bad1203
Author: Yiqun Lin 
Authored: Fri Nov 30 11:31:34 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Nov 30 11:31:34 2018 +0800

--
 .../hadoop-hdfs/src/site/markdown/WebHDFS.md| 24 
 1 file changed, 24 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e36e935/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
index 383eda0..8661659 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
@@ -64,6 +64,8 @@ The HTTP REST API supports the complete 
[FileSystem](../../api/org/apache/hadoop
 * [`SETTIMES`](#Set_Access_or_Modification_Time) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).setTimes)
 * [`RENEWDELEGATIONTOKEN`](#Renew_Delegation_Token) (see 
[DelegationTokenAuthenticator](../../api/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.html).renewDelegationToken)
 * [`CANCELDELEGATIONTOKEN`](#Cancel_Delegation_Token) (see 
[DelegationTokenAuthenticator](../../api/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.html).cancelDelegationToken)
+* [`ALLOWSNAPSHOT`](#Allow_Snapshot)
+* [`DISALLOWSNAPSHOT`](#Disallow_Snapshot)
 * [`CREATESNAPSHOT`](#Create_Snapshot) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).createSnapshot)
 * [`RENAMESNAPSHOT`](#Rename_Snapshot) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).renameSnapshot)
 * [`SETXATTR`](#Set_XAttr) (see 
[FileSystem](../../api/org/apache/hadoop/fs/FileSystem.html).setXAttr)
@@ -1302,6 +1304,28 @@ See also: 
[HDFSErasureCoding](./HDFSErasureCoding.html#Administrative_commands).
 Snapshot Operations
 ---
 
+### Allow Snapshot
+
+* Submit a HTTP PUT request.
+
+curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=ALLOWSNAPSHOT"
+
+The client receives a response with zero content length on success:
+
+HTTP/1.1 200 OK
+Content-Length: 0
+
+### Disallow Snapshot
+
+* Submit a HTTP PUT request.
+
+curl -i -X PUT "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DISALLOWSNAPSHOT"
+
+The client receives a response with zero content length on success:
+
+HTTP/1.1 200 OK
+Content-Length: 0
+
 ### Create Snapshot
 
 * Submit a HTTP PUT request.
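
The two new operations are also easy to drive from plain Java; a minimal
sketch using only the JDK, with a hypothetical NameNode address and path
standing in for the <HOST>:<PORT>/<PATH> placeholders:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class AllowSnapshotExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical NameNode HTTP address and path; substitute real values.
        URL url = new URL("http://namenode.example.com:9870/webhdfs/v1/"
            + "user/alice/data?op=ALLOWSNAPSHOT");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        // Success is a 200 response with Content-Length: 0, per the doc above.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
      }
    }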





hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2.0 fa48363e2 -> 542b224d1


HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.

(cherry picked from commit d0edd37269bb40290b409d583bcf3b70897c13e0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/542b224d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/542b224d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/542b224d

Branch: refs/heads/branch-3.2.0
Commit: 542b224d1060b77c6f486870ca678d4f6f371cc7
Parents: fa48363
Author: Steve Loughran 
Authored: Thu Nov 29 17:52:11 2018 +
Committer: Sunil G 
Committed: Fri Nov 30 05:36:03 2018 +0530

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/542b224d/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/542b224d/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
  @Test(timeout=60000)
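
The excerpt truncates below, but the behavioral flip is simple to
demonstrate in isolation; a minimal sketch, assuming a rule set like the
test's that only maps principals in realm BAR:

    import org.apache.hadoop.security.authentication.util.KerberosName;

    public class ShortNameExample {
      public static void main(String[] args) {
        // Rules that only map principals in realm BAR, as in the test above.
        KerberosName.setRules("RULE:[1:$1@$0](.*@BAR)s/@.*//\nDEFAULT");
        KerberosName kn = new KerberosName("bar@FOO");
        try {
          // With the revert applied, an unmatched principal throws again
          // instead of falling through to the full name "bar@FOO".
          System.out.println(kn.getShortName());
        } catch (Exception e) {
          System.out.println("no auth_to_local rule matched: " + e);
        }
      }
    }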

http://git-wip-us.apache.org/repos/asf/hadoop/blob/542b224d/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
 

hadoop git commit: YARN-9010. Fix the incorrect trailing slash deletion in constructor method of CGroupsHandlerImpl. (Zhankun Tang via wangda)

2018-11-29 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0081b02e3 -> bad12031f


YARN-9010. Fix the incorrect trailing slash deletion in constructor method of 
CGroupsHandlerImpl. (Zhankun Tang via wangda)

Change-Id: Iaecc66d57781cc10f19ead4647e47fc9556676da


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bad12031
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bad12031
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bad12031

Branch: refs/heads/trunk
Commit: bad12031f603347a701249a1e3ef5d879a5f1c8f
Parents: 0081b02
Author: Wangda Tan 
Authored: Thu Nov 29 14:56:07 2018 -0800
Committer: Wangda Tan 
Committed: Thu Nov 29 14:56:07 2018 -0800

--
 .../linux/resources/CGroupsHandlerImpl.java |  3 +-
 .../linux/resources/TestCGroupsHandlerImpl.java | 38 
 2 files changed, 40 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bad12031/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
index 050d0a8..1b2c780 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/CGroupsHandlerImpl.java
@@ -87,9 +87,10 @@ class CGroupsHandlerImpl implements CGroupsHandler {
   CGroupsHandlerImpl(Configuration conf, PrivilegedOperationExecutor
   privilegedOperationExecutor, String mtab)
   throws ResourceHandlerException {
+// Remove leading and trialing slash(es)
 this.cGroupPrefix = conf.get(YarnConfiguration.
 NM_LINUX_CONTAINER_CGROUPS_HIERARCHY, "/hadoop-yarn")
-.replaceAll("^/", "").replaceAll("$/", "");
+.replaceAll("^/+", "").replaceAll("/+$", "");
 this.enableCGroupMount = conf.getBoolean(YarnConfiguration.
 NM_LINUX_CONTAINER_CGROUPS_MOUNT, false);
 this.cGroupMountPath = conf.get(YarnConfiguration.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bad12031/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
index ea6fb52..70badaf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsHandlerImpl.java
@@ -598,4 +598,42 @@ public class TestCGroupsHandlerImpl {
   FileUtils.deleteQuietly(cpu);
 }
   }
+
+  // Remove leading and trailing slashes
+  @Test
+  public void testCgroupsHierarchySetting() throws ResourceHandlerException {
+YarnConfiguration conf = new YarnConfiguration();
+conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_MOUNT_PATH, tmpPath);
+conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY,
+"/hadoop-yarn");
+CGroupsHandlerImpl cGroupsHandler = new CGroupsHandlerImpl(conf, null);
+String expectedRelativePath = "hadoop-yarn/c1";
+Assert.assertEquals(expectedRelativePath,
+cGroupsHandler.getRelativePathForCGroup("c1"));
+
+conf.set(YarnConfiguration.NM_LINUX_CONTAINER_CGROUPS_HIERARCHY,
+"hadoop-yarn");
+cGroupsHandler = new CGroupsHandlerImpl(conf, null);
+
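
The regex change itself can be verified with nothing but the JDK; a short
sketch of why the old expression was wrong:

    public class TrimSlashExample {
      public static void main(String[] args) {
        String hierarchy = "//hadoop-yarn/";
        // Old expression: "$/" can never match ("$" anchors the end of input,
        // so nothing may follow it), and "^/" strips only a single slash.
        String old = hierarchy.replaceAll("^/", "").replaceAll("$/", "");
        // Fixed expression strips any run of leading or trailing slashes.
        String fixed = hierarchy.replaceAll("^/+", "").replaceAll("/+$", "");
        System.out.println(old);   // prints "/hadoop-yarn/"
        System.out.println(fixed); // prints "hadoop-yarn"
      }
    }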

hadoop git commit: HDDS-884. Fix merge issue that causes NPE OzoneManager#httpServer. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/HDDS-4 187bbbe68 -> e0a65bb08


HDDS-884. Fix merge issue that causes NPE OzoneManager#httpServer. Contributed 
by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0a65bb0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0a65bb0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0a65bb0

Branch: refs/heads/HDDS-4
Commit: e0a65bb08d6af67fdecddbd526e22384dcc02cfb
Parents: 187bbbe
Author: Xiaoyu Yao 
Authored: Thu Nov 29 13:39:30 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 13:39:30 2018 -0800

--
 .../src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java  | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0a65bb0/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
--
diff --git 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
index 1e49779..cb99b9e 100644
--- 
a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
+++ 
b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
@@ -702,6 +702,7 @@ public final class OzoneManager extends 
ServiceRuntimeInfoImpl
 keyManager.start(configuration);
 omRpcServer.start();
 try {
+  httpServer = new OzoneManagerHttpServer(configuration, this);
   httpServer.start();
 } catch (Exception ex) {
   // Allow OM to start as Http Server failure is not fatal.





hadoop git commit: HDFS-14112. Avoid recursive call to external authorizer for getContentSummary.

2018-11-29 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 e2fa9e8cd -> 9d508f719


HDFS-14112. Avoid recursive call to external authorizer for getContentSummary.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d508f71
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d508f71
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d508f71

Branch: refs/heads/branch-3.2
Commit: 9d508f719be9c1c40fbc133d19256613f4b8fd75
Parents: e2fa9e8
Author: Tsz Wo Nicholas Sze 
Authored: Thu Nov 29 13:55:21 2018 -0800
Committer: Tsz Wo Nicholas Sze 
Committed: Thu Nov 29 13:56:43 2018 -0800

--
 .../main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java   |  4 
 .../hdfs/server/namenode/FSDirStatAndListingOp.java   |  5 +
 .../apache/hadoop/hdfs/server/namenode/FSDirectory.java   |  7 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   | 10 ++
 4 files changed, 26 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d508f71/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d8024dc..c9f10e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -280,6 +280,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys 
{
   HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_DEFAULT;
   public static final String  DFS_PERMISSIONS_ENABLED_KEY =
   HdfsClientConfigKeys.DeprecatedKeys.DFS_PERMISSIONS_ENABLED_KEY;
+  public static final String  DFS_PERMISSIONS_CONTENT_SUMMARY_SUBACCESS_KEY
+  = "dfs.permissions.ContentSummary.subAccess";
+  public static final boolean DFS_PERMISSIONS_CONTENT_SUMMARY_SUBACCESS_DEFAULT
+  = false;
   public static final boolean DFS_PERMISSIONS_ENABLED_DEFAULT = true;
   public static final String  DFS_PERMISSIONS_SUPERUSERGROUP_KEY =
   HdfsClientConfigKeys.DeprecatedKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d508f71/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
index 01de236..052e522 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
@@ -128,6 +128,11 @@ class FSDirStatAndListingOp {
   static ContentSummary getContentSummary(
   FSDirectory fsd, FSPermissionChecker pc, String src) throws IOException {
 final INodesInPath iip = fsd.resolvePath(pc, src, DirOp.READ_LINK);
+if (fsd.isPermissionEnabled() && 
fsd.isPermissionContentSummarySubAccess()) {
+  fsd.checkPermission(pc, iip, false, null, null, null,
+  FsAction.READ_EXECUTE);
+  pc = null;
+}
 // getContentSummaryInt() call will check access (if enabled) when
 // traversing all sub directories.
 return getContentSummaryInt(fsd, pc, iip);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d508f71/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 2a976d2..c49e00f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -175,6 +175,7 @@ public class FSDirectory implements Closeable {
   private final ReentrantReadWriteLock dirLock;
 
   private final boolean isPermissionEnabled;
+  private final boolean isPermissionContentSummarySubAccess;
   /**
* Support for ACLs is controlled by a configuration flag. If the
* configuration flag is false, then the NameNode will reject all
@@ -274,6 +275,9 @@ public class FSDirectory implements 
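
The new key is a NameNode-side setting (normally placed in hdfs-site.xml);
a minimal sketch, using only the key and default shown above, of what
opting in looks like:

    import org.apache.hadoop.conf.Configuration;

    public class SubAccessConfExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // When true (and permissions are enabled), getContentSummary makes a
        // single READ_EXECUTE check on the target directory and skips the
        // per-subdirectory checks, so an external authorizer is consulted
        // once instead of recursively.
        conf.setBoolean("dfs.permissions.ContentSummary.subAccess", true);
        System.out.println(conf.getBoolean(
            "dfs.permissions.ContentSummary.subAccess", false));
      }
    }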

hadoop git commit: HDFS-13547. Add ingress port based sasl resolver. Contributed by Chen Liang.

2018-11-29 Thread cliang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3 28b780277 -> bf90a27b5


HDFS-13547. Add ingress port based sasl resolver. Contributed by Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf90a27b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf90a27b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf90a27b

Branch: refs/heads/branch-3
Commit: bf90a27b51b1f1ac102fa861eb28025d21aad19b
Parents: 28b7802
Author: Chen Liang 
Authored: Thu Nov 29 13:31:58 2018 -0800
Committer: Chen Liang 
Committed: Thu Nov 29 13:31:58 2018 -0800

--
 .../security/IngressPortBasedResolver.java  | 100 +++
 .../hadoop/security/SaslPropertiesResolver.java |  47 -
 .../hadoop/security/WhitelistBasedResolver.java |  20 +---
 .../security/TestIngressPortBasedResolver.java  |  59 +++
 4 files changed, 207 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf90a27b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
new file mode 100644
index 0000000..a30e4a8
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.net.InetAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation of SaslPropertiesResolver. Used on server side,
+ * returns SASL properties based on the port the client is connecting
+ * to. This should be used along with server side enabling multiple ports
+ * TODO: when NN multiple listener is enabled, automatically use this
+ * resolver without having to set in config.
+ *
+ * For configuration, for example if server runs on two ports 9000 and 9001,
+ * and we want to specify 9000 to use auth-conf and 9001 to use auth.
+ *
+ * We need to set the following configuration properties:
+ * ingress.port.sasl.configured.ports=9000,9001
+ * ingress.port.sasl.prop.9000=privacy
+ * ingress.port.sasl.prop.9001=authentication
+ *
+ * One note is that, if there is misconfiguration that a port, say, 9002 is
+ * given in ingress.port.sasl.configured.ports, but it's sasl prop is not
+ * set, a default of QOP of privacy (auth-conf) will be used. In addition,
+ * if a port is not given even in ingress.port.sasl.configured.ports, but
+ * is being checked in getServerProperties(), the default SASL prop will
+ * be returned. Both of these two cases are considered misconfiguration.
+ */
+public class IngressPortBasedResolver extends SaslPropertiesResolver {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(IngressPortBasedResolver.class.getName());
+
+  static final String INGRESS_PORT_SASL_PROP_PREFIX = "ingress.port.sasl.prop";
+
+  static final String INGRESS_PORT_SASL_CONFIGURED_PORTS =
+  "ingress.port.sasl.configured.ports";
+
+  // no need to concurrent map, because after setConf() it never change,
+  // only for read.
+  private HashMap<Integer, Map<String, String>> portPropMapping;
+
+  @Override
+  public void setConf(Configuration conf) {
+super.setConf(conf);
+portPropMapping = new HashMap<>();
+Collection<String> portStrings =
+conf.getTrimmedStringCollection(INGRESS_PORT_SASL_CONFIGURED_PORTS);
+for (String portString : portStrings) {
+  int port = Integer.parseInt(portString);
+  String configKey = INGRESS_PORT_SASL_PROP_PREFIX + "." + portString;
+  Map<String, String> props = 
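
Wiring the resolver up follows the javadoc above; a minimal sketch, where
the two-argument getServerProperties overload is an assumption read off
this patch's changes to SaslPropertiesResolver:

    import java.net.InetAddress;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.IngressPortBasedResolver;

    public class ResolverExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Mirrors the javadoc: 9000 negotiates privacy (auth-conf), 9001 auth.
        conf.set("ingress.port.sasl.configured.ports", "9000,9001");
        conf.set("ingress.port.sasl.prop.9000", "privacy");
        conf.set("ingress.port.sasl.prop.9001", "authentication");
        IngressPortBasedResolver resolver = new IngressPortBasedResolver();
        resolver.setConf(conf);
        // Assumed overload added by this patch to SaslPropertiesResolver.
        Map<String, String> props =
            resolver.getServerProperties(InetAddress.getLocalHost(), 9000);
        System.out.println(props); // expect javax.security.sasl.qop=auth-conf
      }
    }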

hadoop git commit: HDFS-13547. Add ingress port based sasl resolver. Contributed by Chen Liang.

2018-11-29 Thread cliang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1.1 2b9a8c1d3 -> 7caf768a8


HDFS-13547. Add ingress port based sasl resolver. Contributed by Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7caf768a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7caf768a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7caf768a

Branch: refs/heads/branch-3.1.1
Commit: 7caf768a8c9a639b6139b2cae8656c89e3d8c58d
Parents: 2b9a8c1d
Author: Chen Liang 
Authored: Thu Nov 29 13:17:01 2018 -0800
Committer: Chen Liang 
Committed: Thu Nov 29 13:17:01 2018 -0800

--
 .../security/IngressPortBasedResolver.java  | 100 +++
 .../hadoop/security/SaslPropertiesResolver.java |  47 -
 .../hadoop/security/WhitelistBasedResolver.java |  20 +---
 .../security/TestIngressPortBasedResolver.java  |  59 +++
 4 files changed, 207 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7caf768a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
new file mode 100644
index 0000000..a30e4a8
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/IngressPortBasedResolver.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.security;
+
+import com.google.common.annotations.VisibleForTesting;
+import java.net.InetAddress;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.conf.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * An implementation of SaslPropertiesResolver. Used on server side,
+ * returns SASL properties based on the port the client is connecting
+ * to. This should be used along with server side enabling multiple ports
+ * TODO: when NN multiple listener is enabled, automatically use this
+ * resolver without having to set in config.
+ *
+ * For configuration, for example if server runs on two ports 9000 and 9001,
+ * and we want to specify 9000 to use auth-conf and 9001 to use auth.
+ *
+ * We need to set the following configuration properties:
+ * ingress.port.sasl.configured.ports=9000,9001
+ * ingress.port.sasl.prop.9000=privacy
+ * ingress.port.sasl.prop.9001=authentication
+ *
+ * One note is that, if there is misconfiguration that a port, say, 9002 is
+ * given in ingress.port.sasl.configured.ports, but it's sasl prop is not
+ * set, a default of QOP of privacy (auth-conf) will be used. In addition,
+ * if a port is not given even in ingress.port.sasl.configured.ports, but
+ * is being checked in getServerProperties(), the default SASL prop will
+ * be returned. Both of these two cases are considered misconfiguration.
+ */
+public class IngressPortBasedResolver extends SaslPropertiesResolver {
+
+  public static final Logger LOG =
+  LoggerFactory.getLogger(IngressPortBasedResolver.class.getName());
+
+  static final String INGRESS_PORT_SASL_PROP_PREFIX = "ingress.port.sasl.prop";
+
+  static final String INGRESS_PORT_SASL_CONFIGURED_PORTS =
+  "ingress.port.sasl.configured.ports";
+
+  // no need to concurrent map, because after setConf() it never change,
+  // only for read.
+  private HashMap<Integer, Map<String, String>> portPropMapping;
+
+  @Override
+  public void setConf(Configuration conf) {
+super.setConf(conf);
+portPropMapping = new HashMap<>();
+Collection<String> portStrings =
+conf.getTrimmedStringCollection(INGRESS_PORT_SASL_CONFIGURED_PORTS);
+for (String portString : portStrings) {
+  int port = Integer.parseInt(portString);
+  String configKey = INGRESS_PORT_SASL_PROP_PREFIX + "." + portString;
+  Map<String, String> props = 

[16/50] [abbrv] hadoop git commit: HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fcd705a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fcd705a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fcd705a5

Branch: refs/heads/HDDS-4
Commit: fcd705a59420dcb5d6a05e2d49536a77ae4f1d5b
Parents: 2119be4
Author: Xiaoyu Yao 
Authored: Wed May 9 15:56:03 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:49 2018 -0800

--
 hadoop-hdds/common/src/main/resources/ozone-default.xml | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fcd705a5/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index d3e352b..4d79e8c 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -198,6 +198,7 @@
 
   </property>
   <property>
+<<<<<<< HEAD
 <name>dfs.ratis.client.request.timeout.duration</name>
 <value>3s</value>
 <tag>OZONE, RATIS, MANAGEMENT</tag>
@@ -255,6 +256,9 @@
   </property>
   <property>
 <name>hdds.container.report.interval</name>
+=======
+<name>ozone.container.report.interval</name>
+>>>>>>> HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.
 <value>60000ms</value>
 <tag>OZONE, CONTAINER, MANAGEMENT</tag>
 <description>Time interval of the datanode to send container report. Each





[13/50] [abbrv] hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread xyao
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0edd372
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0edd372
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0edd372

Branch: refs/heads/HDDS-4
Commit: d0edd37269bb40290b409d583bcf3b70897c13e0
Parents: 5e102f9
Author: Steve Loughran 
Authored: Thu Nov 29 17:52:11 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:52:11 2018 +

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
  @Test(timeout=60000)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
index c584fce..2db0df4 100644
--- 

[36/50] [abbrv] hadoop git commit: HDDS-8. Add OzoneManager Delegation Token support. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-8. Add OzoneManager Delegation Token support. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/59767be1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/59767be1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/59767be1

Branch: refs/heads/HDDS-4
Commit: 59767be13a37557b6d51cd5356c58ebdfad63896
Parents: 9c0baf0
Author: Ajay Kumar 
Authored: Thu Nov 15 12:18:19 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   4 +
 .../org/apache/hadoop/ozone/OzoneConsts.java|   1 +
 .../common/src/main/resources/ozone-default.xml |  33 +
 .../ozone/client/protocol/ClientProtocol.java   |  32 +
 .../hadoop/ozone/client/rest/RestClient.java|  41 ++
 .../hadoop/ozone/client/rpc/RpcClient.java  |  41 ++
 .../ozone/client/TestHddsClientUtils.java   |   2 +-
 .../java/org/apache/hadoop/ozone/OmUtils.java   |  17 +-
 .../apache/hadoop/ozone/om/OMConfigKeys.java|  13 +
 .../ozone/om/protocol/OzoneManagerProtocol.java |   2 +-
 .../protocol/OzoneManagerSecurityProtocol.java  |  67 +++
 ...neManagerProtocolClientSideTranslatorPB.java |  77 +++
 .../om/protocolPB/OzoneManagerProtocolPB.java   |   3 +
 .../hadoop/ozone/protocolPB/OMPBHelper.java |  38 ++
 .../security/OzoneDelegationTokenSelector.java  |   3 +-
 .../ozone/security/OzoneSecretManager.java  | 598 +++
 .../hadoop/ozone/security/OzoneSecretStore.java | 250 
 .../ozone/security/OzoneSecurityException.java  | 104 
 .../src/main/proto/OzoneManagerProtocol.proto   |  19 +
 .../ozone/security/TestOzoneSecretManager.java  | 216 +++
 .../hadoop/ozone/TestSecureOzoneCluster.java| 309 --
 .../apache/hadoop/ozone/om/OzoneManager.java| 300 --
 ...neManagerProtocolServerSideTranslatorPB.java |  59 ++
 23 files changed, 2150 insertions(+), 79 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/59767be1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index ac97422..742fe3a 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -332,6 +332,10 @@ public final class OzoneConfigKeys {
   public static final String OZONE_CONTAINER_COPY_WORKDIR =
   "hdds.datanode.replication.work.dir";
 
+  public static final String OZONE_MAX_KEY_LEN =
+  "ozone.max.key.len";
+  public static final int OZONE_MAX_KEY_LEN_DEFAULT = 1024 * 1024;
+
   /**
* Config properties to set client side checksum properties.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59767be1/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 096baee..e7c354c 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -99,6 +99,7 @@ public final class OzoneConsts {
   public static final String DN_CONTAINER_DB = "-dn-"+ CONTAINER_DB_SUFFIX;
   public static final String DELETED_BLOCK_DB = "deletedBlock.db";
   public static final String OM_DB_NAME = "om.db";
+  public static final String OZONE_MANAGER_TOKEN_DB_NAME = "om-token.db";
 
   public static final String STORAGE_DIR_CHUNKS = "chunks";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/59767be1/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 5fb594b..766f31d 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -984,6 +984,15 @@
  every principal specified in the keytab file.
    </description>
  </property>
+  <property>
+    <name>ozone.max.key.len</name>
+    <value>1048576</value>
+    <tag>OZONE, SECURITY</tag>
+    <description>
+      Maximum length of private key in Ozone. Used in Ozone delegation and
+      block tokens.
+    </description>
+  </property>
 
   
   
@@ -1492,4 +1501,28 @@
   Name of file which stores public key generated for SCM CA.
    </description>
  </property>
+  <property>
+    <name>ozone.manager.delegation.remover.scan.interval</name>
+    <value>3600000</value>
+    <description>
+      Time interval after 
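
Of the new keys, ozone.max.key.len is the easiest to exercise standalone;
a minimal sketch, assuming OzoneConfiguration loads ozone-default.xml as
usual:

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;

    public class MaxKeyLenExample {
      public static void main(String[] args) {
        OzoneConfiguration conf = new OzoneConfiguration();
        // Caps the private-key length used by the new delegation and block
        // tokens; 1048576 (1 MB) is the shipped default shown above.
        int maxKeyLen = conf.getInt("ozone.max.key.len", 1024 * 1024);
        System.out.println("max key length: " + maxKeyLen);
      }
    }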

[48/50] [abbrv] hadoop git commit: HDDS-696. Bootstrap genesis SCM(CA) with self-signed certificate. Contributed by Anu Engineer.

2018-11-29 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/87f51d23/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
index 459dce7..ded50f9 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyGenerator.java
@@ -28,7 +28,9 @@ import java.security.KeyPairGenerator;
 import java.security.NoSuchAlgorithmException;
 import java.security.NoSuchProviderException;
 
-/** A class to generate Key Pair for use with Certificates. */
+/**
+ * A class to generate Key Pair for use with Certificates.
+ */
 public class HDDSKeyGenerator {
   private static final Logger LOG =
   LoggerFactory.getLogger(HDDSKeyGenerator.class);
@@ -44,7 +46,17 @@ public class HDDSKeyGenerator {
   }
 
   /**
+   * Constructor that takes a SecurityConfig as the Argument.
+   *
+   * @param config - SecurityConfig
+   */
+  public HDDSKeyGenerator(SecurityConfig config) {
+this.securityConfig = config;
+  }
+
+  /**
* Returns the Security config used for this object.
+   *
* @return SecurityConfig
*/
   public SecurityConfig getSecurityConfig() {
@@ -55,10 +67,10 @@ public class HDDSKeyGenerator {
* Use Config to generate key.
*
* @return KeyPair
-   * @throws NoSuchProviderException - On Error, due to missing Java
-   * dependencies.
+   * @throws NoSuchProviderException  - On Error, due to missing Java
+   *  dependencies.
* @throws NoSuchAlgorithmException - On Error,  due to missing Java
-   * dependencies.
+   *  dependencies.
*/
   public KeyPair generateKey() throws NoSuchProviderException,
   NoSuchAlgorithmException {
@@ -71,10 +83,10 @@ public class HDDSKeyGenerator {
*
* @param size - int, valid key sizes.
* @return KeyPair
-   * @throws NoSuchProviderException - On Error, due to missing Java
-   * dependencies.
+   * @throws NoSuchProviderException  - On Error, due to missing Java
+   *  dependencies.
* @throws NoSuchAlgorithmException - On Error,  due to missing Java
-   * dependencies.
+   *  dependencies.
*/
   public KeyPair generateKey(int size) throws
   NoSuchProviderException, NoSuchAlgorithmException {
@@ -89,10 +101,10 @@ public class HDDSKeyGenerator {
* @param algorithm - Algorithm to use
* @param provider - Security provider.
* @return KeyPair.
-   * @throws NoSuchProviderException - On Error, due to missing Java
-   * dependencies.
+   * @throws NoSuchProviderException  - On Error, due to missing Java
+   *  dependencies.
* @throws NoSuchAlgorithmException - On Error,  due to missing Java
-   * dependencies.
+   *  dependencies.
*/
   public KeyPair generateKey(int size, String algorithm, String provider)
   throws NoSuchProviderException, NoSuchAlgorithmException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/87f51d23/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyPEMWriter.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyPEMWriter.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyPEMWriter.java
deleted file mode 100644
index 95be1c4..0000000
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/HDDSKeyPEMWriter.java
+++ /dev/null
@@ -1,255 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- *  with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- */
-package org.apache.hadoop.hdds.security.x509.keys;
-
-import com.google.common.annotations.VisibleForTesting;
-import com.google.common.base.Preconditions;
-import 
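
The new single-argument constructor shown above makes standalone key
generation straightforward; a minimal sketch, assuming the package
locations that appear in this diff:

    import java.security.KeyPair;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.security.x509.SecurityConfig;
    import org.apache.hadoop.hdds.security.x509.keys.HDDSKeyGenerator;

    public class KeyGenExample {
      public static void main(String[] args) throws Exception {
        // The new constructor accepts a SecurityConfig directly.
        SecurityConfig config = new SecurityConfig(new OzoneConfiguration());
        HDDSKeyGenerator keyGen = new HDDSKeyGenerator(config);
        // Algorithm, provider and key size come from the security config.
        KeyPair pair = keyGen.generateKey();
        System.out.println(pair.getPublic().getAlgorithm());
      }
    }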

[08/50] [abbrv] hadoop git commit: YARN-8948. PlacementRule interface should be for all YarnSchedulers. Contributed by Bibin A Chundatt.

2018-11-29 Thread xyao
YARN-8948. PlacementRule interface should be for all YarnSchedulers. 
Contributed by Bibin A Chundatt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a68d766e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a68d766e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a68d766e

Branch: refs/heads/HDDS-4
Commit: a68d766e876631d7ee2e1a6504d4120ba628d178
Parents: c1d24f8
Author: bibinchundatt 
Authored: Thu Nov 29 21:43:34 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 21:43:34 2018 +0530

--
 .../placement/AppNameMappingPlacementRule.java  | 12 ++--
 .../server/resourcemanager/placement/PlacementRule.java |  4 ++--
 .../placement/UserGroupMappingPlacementRule.java| 11 ++-
 3 files changed, 22 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68d766e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
index 2debade..7a46962 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
@@ -20,11 +20,12 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.placement;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager;
@@ -61,8 +62,15 @@ public class AppNameMappingPlacementRule extends 
PlacementRule {
   }
 
   @Override
-  public boolean initialize(CapacitySchedulerContext schedulerContext)
+  public boolean initialize(ResourceScheduler scheduler)
   throws IOException {
+if (!(scheduler instanceof CapacityScheduler)) {
+  throw new IOException(
+  "AppNameMappingPlacementRule can be configured only for "
+  + "CapacityScheduler");
+}
+CapacitySchedulerContext schedulerContext =
+(CapacitySchedulerContext) scheduler;
 CapacitySchedulerConfiguration conf = schedulerContext.getConfiguration();
 boolean overrideWithQueueMappings = conf.getOverrideWithQueueMappings();
 LOG.info(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68d766e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
index 21ab32a..0f3d43c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
@@ -22,7 +22,7 @@ import java.io.IOException;
 
 import 

[02/50] [abbrv] hadoop git commit: YARN-8974. Improve the assertion message in TestGPUResourceHandler. (Zhankun Tang via wangda)

2018-11-29 Thread xyao
YARN-8974. Improve the assertion message in TestGPUResourceHandler. (Zhankun 
Tang via wangda)

Change-Id: I4eb58e9d251d5f54e7feffc4fbb813b4f5ae4b1b


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8ebeda98
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8ebeda98
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8ebeda98

Branch: refs/heads/HDDS-4
Commit: 8ebeda98a9d3eac45598a33bae3d62e3ebb92cad
Parents: 9ed8756
Author: Wangda Tan 
Authored: Wed Nov 28 14:36:30 2018 -0800
Committer: Wangda Tan 
Committed: Wed Nov 28 14:36:30 2018 -0800

--
 .../linux/resources/gpu/TestGpuResourceHandler.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8ebeda98/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/TestGpuResourceHandler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/TestGpuResourceHandler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/TestGpuResourceHandler.java
index 10e5cd1..18785e1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/TestGpuResourceHandler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/gpu/TestGpuResourceHandler.java
@@ -465,7 +465,7 @@ public class TestGpuResourceHandler {
   caughtException = true;
 }
 Assert.assertTrue(
-"Should fail since requested device Id is not in allowed list",
+"Should fail since requested device Id is already assigned",
 caughtException);
 
 // Make sure internal state not changed.





[01/50] [abbrv] hadoop git commit: YARN-9061. Improve the GPU/FPGA module log message of container-executor. (Zhankun Tang via wangda) [Forced Update!]

2018-11-29 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/HDDS-4 278d4b9b7 -> 187bbbe68 (forced update)


YARN-9061. Improve the GPU/FPGA module log message of container-executor. 
(Zhankun Tang via wangda)

Change-Id: Iece9b47438357077a53984a820d4d6423f480518


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9ed87567
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9ed87567
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9ed87567

Branch: refs/heads/HDDS-4
Commit: 9ed87567ad0f1c26a263ce6d8fba56d066260c5c
Parents: 579ef4b
Author: Wangda Tan 
Authored: Wed Nov 28 14:31:31 2018 -0800
Committer: Wangda Tan 
Committed: Wed Nov 28 14:31:31 2018 -0800

--
 .../native/container-executor/impl/modules/fpga/fpga-module.c   | 5 +++--
 .../native/container-executor/impl/modules/gpu/gpu-module.c | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ed87567/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/fpga/fpga-module.c
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/fpga/fpga-module.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/fpga/fpga-module.c
index c1a2f83..e947d7c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/fpga/fpga-module.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/fpga/fpga-module.c
@@ -141,7 +141,7 @@ void reload_fpga_configuration() {
 /*
  * Format of FPGA request commandline:
  *
- * c-e fpga --excluded_fpgas 0,1,3 --container_id container_x_y
+ * c-e --module-fpga --excluded_fpgas 0,1,3 --container_id container_x_y
  */
 int handle_fpga_request(update_cgroups_parameters_function func,
 const char* module_name, int module_argc, char** module_argv) {
@@ -213,7 +213,8 @@ int handle_fpga_request(update_cgroups_parameters_function func,
 
   if (!minor_devices) {
  // Minor devices is null, skip following call.
- fprintf(ERRORFILE, "is not specified, skip cgroups call.\n");
+ fprintf(ERRORFILE,
+ "--excluded-fpgas is not specified, skip cgroups call.\n");
  goto cleanup;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ed87567/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/gpu/gpu-module.c
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/gpu/gpu-module.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/gpu/gpu-module.c
index 1a1b164..7522338 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/gpu/gpu-module.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/gpu/gpu-module.c
@@ -141,7 +141,7 @@ void reload_gpu_configuration() {
 /*
  * Format of GPU request commandline:
  *
- * c-e gpu --excluded_gpus 0,1,3 --container_id container_x_y
+ * c-e --module-gpu --excluded_gpus 0,1,3 --container_id container_x_y
  */
 int handle_gpu_request(update_cgroups_parameters_func func,
 const char* module_name, int module_argc, char** module_argv) {
@@ -213,7 +213,8 @@ int handle_gpu_request(update_cgroups_parameters_func func,
 
   if (!minor_devices) {
  // Minor devices is null, skip following call.
- fprintf(ERRORFILE, "is not specified, skip cgroups call.\n");
+ fprintf(ERRORFILE,
+ "--excluded_gpus is not specified, skip cgroups call.\n");
  goto cleanup;
   }
 





[40/50] [abbrv] hadoop git commit: HDDS-760. Add asf license to TestCertificateSignRequest. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-760. Add asf license to TestCertificateSignRequest. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cd470bfd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cd470bfd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cd470bfd

Branch: refs/heads/HDDS-4
Commit: cd470bfd27c878783666ebb151941ae4c2eed812
Parents: d0be865
Author: Ajay Kumar 
Authored: Tue Oct 30 09:10:21 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../certificates/TestCertificateSignRequest.java   | 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cd470bfd/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
--
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
index a9285df..0b9ef31 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.hadoop.hdds.security.x509.certificates;
 
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;





[14/50] [abbrv] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread xyao
HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f5347368
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f5347368
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f5347368

Branch: refs/heads/HDDS-4
Commit: f534736867eed962899615ca1b7eb68bcf591d17
Parents: d0edd37
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:18:27 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5347368/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map<String, OpType> SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5347368/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index ca1546c..7dd02bd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2845,6 +2845,8 @@ public class DistributedFileSystem extends FileSystem
*/
   public void setErasureCodingPolicy(final Path path,
   final String ecPolicyName) throws IOException {
+statistics.incrementWriteOps(1);
+
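The hunk is cut off here, but the remaining hunks all repeat one pattern: every erasure-coding entry point in DistributedFileSystem gets the same two-line instrumentation — bump the generic FileSystem write (or read) counter, then the matching per-operation counter from the new OpType entries above. A minimal sketch of one such method, assuming the statistics, storageStatistics, and dfs fields DistributedFileSystem already has; this is an outline of the pattern, not the literal patch:

    public void enableErasureCodingPolicy(String ecPolicyName) throws IOException {
      statistics.incrementWriteOps(1);                               // generic FileSystem stats
      storageStatistics.incrementOpCounter(OpType.ENABLE_EC_POLICY); // new per-op DFS counter
      dfs.enableErasureCodingPolicy(ecPolicyName);                   // delegate to DFSClient
    }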

[28/50] [abbrv] hadoop git commit: HDDS-548. Create a Self-Signed Certificate. Contributed by Anu Engineer.

2018-11-29 Thread xyao
HDDS-548. Create a Self-Signed Certificate. Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87acc150
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87acc150
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87acc150

Branch: refs/heads/HDDS-4
Commit: 87acc1508a4bafc8cd3d77ffa684bf9871f52698
Parents: d1c6ff7
Author: Ajay Kumar 
Authored: Fri Sep 28 06:52:56 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |  16 ++
 .../hdds/security/x509/HDDSKeyGenerator.java|  99 ---
 .../hdds/security/x509/HDDSKeyPEMWriter.java| 254 --
 .../hdds/security/x509/SecurityConfig.java  | 105 +---
 .../certificates/SelfSignedCertificate.java | 212 +++
 .../x509/certificates/package-info.java |  22 ++
 .../x509/exceptions/CertificateException.java   |  63 +
 .../x509/exceptions/SCMSecurityException.java   |  64 +
 .../security/x509/exceptions/package-info.java  |  23 ++
 .../security/x509/keys/HDDSKeyGenerator.java| 106 
 .../security/x509/keys/HDDSKeyPEMWriter.java| 255 ++
 .../hdds/security/x509/keys/package-info.java   |  23 ++
 .../security/x509/TestHDDSKeyGenerator.java |  81 --
 .../security/x509/TestHDDSKeyPEMWriter.java | 213 ---
 .../x509/certificates/TestRootCertificate.java  | 258 +++
 .../x509/certificates/package-info.java |  22 ++
 .../x509/keys/TestHDDSKeyGenerator.java |  87 +++
 .../x509/keys/TestHDDSKeyPEMWriter.java | 216 
 .../hdds/security/x509/keys/package-info.java   |  22 ++
 .../hadoop/hdds/security/x509/package-info.java |  22 ++
 20 files changed, 1484 insertions(+), 679 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87acc150/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 2a1404a..ef1194b 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -131,4 +131,20 @@ public final class HddsConfigKeys {
   public static final String HDDS_PUBLIC_KEY_FILE_NAME = "hdds.public.key.file"
   + ".name";
   public static final String HDDS_PUBLIC_KEY_FILE_NAME_DEFAULT = "public.pem";
+
+  /**
+   * Maximum duration of certificates issued by SCM including Self-Signed Roots.
+   * The formats accepted are based on the ISO-8601 duration format PnDTnHnMn.nS
+   * Default value is 5 years and written as P1865D.
+   */
+  public static final String HDDS_X509_MAX_DURATION = "hdds.x509.max.duration";
+  // Limit Certificate duration to a max value of 5 years.
+  public static final String HDDS_X509_MAX_DURATION_DEFAULT= "P1865D";
+
+  public static final String HDDS_X509_SIGNATURE_ALGO =
+  "hdds.x509.signature.algorithm";
+  public static final String HDDS_X509_SIGNATURE_ALGO_DEFAULT = "SHA256withRSA";
+
+
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/87acc150/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
deleted file mode 100644
index cb411b2..000
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *  http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- */
-package org.apache.hadoop.hdds.security.x509;
-
-import 
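The new SelfSignedCertificate builder itself is cut off above, so as background, here is a hedged sketch of the same idea in plain BouncyCastle: a root certificate is simply one whose issuer and subject DNs match and which is signed by its own key pair. It reuses the HddsConfigKeys defaults quoted above (SHA256withRSA, P1865D); the DN, serial number, and class name are illustrative, not the patch's API:

    import java.math.BigInteger;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.cert.X509Certificate;
    import java.time.Duration;
    import java.util.Date;
    import org.bouncycastle.asn1.x500.X500Name;
    import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
    import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
    import org.bouncycastle.operator.ContentSigner;
    import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

    public final class SelfSignedRootExample {
      public static X509Certificate newRootCert() throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        X500Name dn = new X500Name("CN=scm-root");      // illustrative DN; issuer == subject
        Date notBefore = new Date();
        Date notAfter = new Date(notBefore.getTime()
            + Duration.parse("P1865D").toMillis());     // HDDS_X509_MAX_DURATION_DEFAULT

        ContentSigner signer =
            new JcaContentSignerBuilder("SHA256withRSA").build(pair.getPrivate());
        return new JcaX509CertificateConverter().getCertificate(
            new JcaX509v3CertificateBuilder(dn, BigInteger.ONE, notBefore, notAfter,
                dn, pair.getPublic()).build(signer));
      }
    }

Note that java.time.Duration.parse accepts the "P1865D" form directly, which is why the config comment above points at the ISO-8601 duration format.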

[25/50] [abbrv] hadoop git commit: Fix merge conflicts

2018-11-29 Thread xyao
Fix merge conflicts


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b802cb5b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b802cb5b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b802cb5b

Branch: refs/heads/HDDS-4
Commit: b802cb5b039172111a815f9ba8c8415f2d402fb8
Parents: ed10fa6
Author: Xiaoyu Yao 
Authored: Tue Jul 31 18:17:29 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../conf/TestConfigurationFieldsBase.java   |  2 -
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   | 10 ++-
 .../apache/hadoop/ozone/OzoneConfigKeys.java|  1 +
 .../common/src/main/resources/ozone-default.xml | 21 ++---
 .../scm/server/StorageContainerManager.java |  9 ++-
 .../StorageContainerManagerHttpServer.java  |  4 +-
 .../src/test/compose/compose-secure/.env|  2 +-
 .../test/compose/compose-secure/docker-config   | 55 ++---
 .../apache/hadoop/ozone/ksm/KSMConfigKeys.java  | 84 
 .../apache/hadoop/ozone/MiniOzoneCluster.java   |  3 +-
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |  5 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java| 22 ++---
 12 files changed, 87 insertions(+), 131 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b802cb5b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
index bce1cd5..152159b 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
@@ -436,8 +436,6 @@ public abstract class TestConfigurationFieldsBase {
 // Create XML key/value map
 LOG_XML.debug("Reading XML property files\n");
 xmlKeyValueMap = extractPropertiesFromXml(xmlFilename);
-// Remove hadoop property set in ozone-default.xml
-xmlKeyValueMap.remove("hadoop.custom.tags");
 LOG_XML.debug("\n=\n");
 
 // Create default configuration variable key/value map

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b802cb5b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 376e6db..cb6672f 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -313,10 +313,12 @@ public final class ScmConfigKeys {
   public static final String HDDS_SCM_WATCHER_TIMEOUT_DEFAULT =
   "10m";
 
-  public static final String SCM_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY =
-  "ozone.scm.web.authentication.kerberos.principal";
-  public static final String SCM_WEB_AUTHENTICATION_KERBEROS_KEYTAB_FILE_KEY =
-  "ozone.scm.web.authentication.kerberos.keytab";
+  public static final String
+  HDDS_SCM_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY =
+  "hdds.scm.web.authentication.kerberos.principal";
+  public static final String
+  HDDS_SCM_WEB_AUTHENTICATION_KERBEROS_KEYTAB_FILE_KEY =
+  "hdds.scm.web.authentication.kerberos.keytab";
   /**
* Never constructed.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b802cb5b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 578a983..b47113b 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -334,6 +334,7 @@ public final class OzoneConfigKeys {
 
   public static final String OZONE_CONTAINER_COPY_WORKDIR =
   "hdds.datanode.replication.work.dir";
+
   /**
* Config properties to set client side checksum properties.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b802cb5b/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml

[07/50] [abbrv] hadoop git commit: HDFS-13713. Add specification of Multipart Upload API to FS specification, with contract tests.

2018-11-29 Thread xyao
HDFS-13713. Add specification of Multipart Upload API to FS specification, with contract tests.

Contributed by Ewan Higgs and Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1d24f84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1d24f84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1d24f84

Branch: refs/heads/HDDS-4
Commit: c1d24f848345f6d34a2ac2d570d49e9787a0df6a
Parents: b71cc7f
Author: Ewan Higgs 
Authored: Thu Nov 29 15:11:07 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 15:12:17 2018 +

--
 .../hadoop/fs/FileSystemMultipartUploader.java  |  36 +-
 .../org/apache/hadoop/fs/MultipartUploader.java |  88 ++-
 .../hadoop/fs/MultipartUploaderFactory.java |   7 +
 .../src/site/markdown/filesystem/index.md   |   1 +
 .../markdown/filesystem/multipartuploader.md| 235 
 .../AbstractContractMultipartUploaderTest.java  | 565 +++
 .../TestLocalFSContractMultipartUploader.java   |  10 +
 .../hdfs/TestHDFSContractMultipartUploader.java |  15 +
 .../hadoop/fs/s3a/S3AMultipartUploader.java |  31 +-
 .../s3a/ITestS3AContractMultipartUploader.java  |  64 ++-
 .../fs/s3a/TestS3AMultipartUploaderSupport.java |   2 +-
 11 files changed, 876 insertions(+), 178 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1d24f84/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
index 94c7861..b77c244 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemMultipartUploader.java
@@ -19,21 +19,23 @@ package org.apache.hadoop.fs;
 import java.io.IOException;
 import java.io.InputStream;
 import java.nio.ByteBuffer;
+import java.util.ArrayList;
 import java.util.Comparator;
 import java.util.List;
+import java.util.Map;
+import java.util.UUID;
 import java.util.stream.Collectors;
 
 import com.google.common.base.Charsets;
-import com.google.common.base.Preconditions;
 
 import org.apache.commons.compress.utils.IOUtils;
-import org.apache.commons.lang3.tuple.Pair;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.permission.FsPermission;
 
 import static org.apache.hadoop.fs.Path.mergePaths;
+import static org.apache.hadoop.io.IOUtils.cleanupWithLogger;
 
 /**
  * A MultipartUploader that uses the basic FileSystem commands.
@@ -70,7 +72,8 @@ public class FileSystemMultipartUploader extends MultipartUploader {
   public PartHandle putPart(Path filePath, InputStream inputStream,
   int partNumber, UploadHandle uploadId, long lengthInBytes)
   throws IOException {
-
+checkPutArguments(filePath, inputStream, partNumber, uploadId,
+lengthInBytes);
 byte[] uploadIdByteArray = uploadId.toByteArray();
 checkUploadId(uploadIdByteArray);
 Path collectorPath = new Path(new String(uploadIdByteArray, 0,
@@ -82,16 +85,17 @@ public class FileSystemMultipartUploader extends MultipartUploader {
 fs.createFile(partPath).build()) {
   IOUtils.copy(inputStream, fsDataOutputStream, 4096);
 } finally {
-  org.apache.hadoop.io.IOUtils.cleanupWithLogger(LOG, inputStream);
+  cleanupWithLogger(LOG, inputStream);
 }
 return BBPartHandle.from(ByteBuffer.wrap(
 partPath.toString().getBytes(Charsets.UTF_8)));
   }
 
   private Path createCollectorPath(Path filePath) {
+String uuid = UUID.randomUUID().toString();
 return mergePaths(filePath.getParent(),
 mergePaths(new Path(filePath.getName().split("\\.")[0]),
-mergePaths(new Path("_multipart"),
+mergePaths(new Path("_multipart_" + uuid),
 new Path(Path.SEPARATOR))));
   }
 
@@ -110,21 +114,16 @@ public class FileSystemMultipartUploader extends MultipartUploader {
 
   @Override
   @SuppressWarnings("deprecation") // rename w/ OVERWRITE
-  public PathHandle complete(Path filePath,
-  List> handles, UploadHandle multipartUploadId)
-  throws IOException {
+  public PathHandle complete(Path filePath, Map handleMap,
+  UploadHandle multipartUploadId) throws IOException {
 
 checkUploadId(multipartUploadId.toByteArray());
 
-if (handles.isEmpty()) {
-  throw new IOException("Empty upload");
-}
-// If 
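The diff is truncated here, but the post-patch client flow it specifies is short enough to sketch. The following assumes the complete(Path, Map, UploadHandle) signature visible above; the factory lookup, target path, and part payload are illustrative assumptions, so treat this as a sketch rather than the definitive API:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.*;

    public class MultipartUploadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        Path path = new Path("/tmp/mpu-demo/output.bin");             // illustrative target

        MultipartUploader mpu = MultipartUploaderFactory.get(fs, conf); // null if unsupported
        UploadHandle upload = mpu.initialize(path);

        Map<Integer, PartHandle> parts = new HashMap<>();
        byte[] data = "part one".getBytes(StandardCharsets.UTF_8);
        try (InputStream in = new ByteArrayInputStream(data)) {
          parts.put(1, mpu.putPart(path, in, 1, upload, data.length)); // parts are 1-indexed
        }
        PathHandle done = mpu.complete(path, parts, upload);          // stitch into final file
      }
    }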

[41/50] [abbrv] hadoop git commit: HDDS-753. SCM security protocol server is not starting. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-753. SCM security protocol server is not starting. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9128c665
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9128c665
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9128c665

Branch: refs/heads/HDDS-4
Commit: 9128c665ccdf26eb3152d6903b98e779585acb2d
Parents: cd470bf
Author: Ajay Kumar 
Authored: Thu Nov 1 15:48:52 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../scm/server/SCMSecurityProtocolServer.java   |  4 +-
 .../server/TestSCMSecurityProtocolServer.java   | 60 
 .../hadoop/ozone/TestSecureOzoneCluster.java|  3 +-
 3 files changed, 65 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9128c665/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
--
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
index e810c54..ab29c1e 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMSecurityProtocolServer.java
@@ -28,7 +28,7 @@ import org.apache.hadoop.hdds.protocolPB.SCMSecurityProtocolServerSideTranslator
 import org.apache.hadoop.hdds.scm.HddsServerUtil;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
-import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.KerberosInfo;
 import org.slf4j.Logger;
@@ -60,6 +60,8 @@ public class SCMSecurityProtocolServer implements SCMSecurityProtocol {
 rpcAddress = HddsServerUtil
 .getScmSecurityInetAddress(conf);
 // SCM security service RPC service.
+RPC.setProtocolEngine(conf, SCMSecurityProtocolPB.class,
+ProtobufRpcEngine.class);
 BlockingService secureProtoPbService =
 SCMSecurityProtocolProtos.SCMSecurityProtocolService
 .newReflectiveBlockingService(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9128c665/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
--
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
new file mode 100644
index 000..8e7d84c
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMSecurityProtocolServer.java
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.server;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+/**
+ * Test class for {@link SCMSecurityProtocolServer}.
+ * */
+public class TestSCMSecurityProtocolServer {
+  private SCMSecurityProtocolServer securityProtocolServer;
+  private OzoneConfiguration config;
+
+  @Rule
+  public Timeout timeout = new Timeout(1000 * 20);
+
+  @Before
+  public void setUp() throws Exception {
+config = new OzoneConfiguration();
+securityProtocolServer = new SCMSecurityProtocolServer(config, null);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+if (securityProtocolServer != null) {
+  securityProtocolServer.stop();
+  securityProtocolServer = null;
+}
+config = null;
+  }
+
+  @Test
+  public void testStart() {
+
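The root cause HDDS-753 fixes is visible in the first hunk: the server built its BlockingService without first registering ProtobufRpcEngine for SCMSecurityProtocolPB, so the IPC layer could not dispatch the protobuf service. The general Hadoop IPC sequence looks roughly like this; the bind address, port, handler count, and the translator's constructor argument are illustrative assumptions, not the committed code:

    // Register the engine *before* building an RPC.Server for this protocol.
    RPC.setProtocolEngine(conf, SCMSecurityProtocolPB.class, ProtobufRpcEngine.class);

    BlockingService secureService =
        SCMSecurityProtocolProtos.SCMSecurityProtocolService.newReflectiveBlockingService(
            new SCMSecurityProtocolServerSideTranslatorPB(impl));

    RPC.Server rpcServer = new RPC.Builder(conf)
        .setProtocol(SCMSecurityProtocolPB.class)
        .setInstance(secureService)
        .setBindAddress("0.0.0.0")   // illustrative
        .setPort(9961)               // illustrative
        .setNumHandlers(10)
        .build();
    rpcServer.start();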

[35/50] [abbrv] hadoop git commit: HDDS-8. Add OzoneManager Delegation Token support. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/59767be1/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
--
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
index 904e597..c1695f1 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
@@ -17,16 +17,23 @@
  */
 package org.apache.hadoop.ozone;
 
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED;
 import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SECURITY_ENABLED_KEY;
+import static org.slf4j.event.Level.INFO;
 
 import java.io.File;
 import java.io.IOException;
+import java.net.InetAddress;
 import java.nio.file.Path;
 import java.nio.file.Paths;
+import java.security.KeyPair;
+import java.security.PrivilegedExceptionAction;
 import java.util.Properties;
 import java.util.UUID;
 import java.util.concurrent.Callable;
+import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@@ -35,14 +42,29 @@ import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.scm.ScmInfo;
 import org.apache.hadoop.hdds.scm.server.SCMStorage;
 import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.hdds.security.x509.certificate.client.CertificateClient;
+import org.apache.hadoop.hdds.security.x509.keys.HDDSKeyGenerator;
+import org.apache.hadoop.hdds.security.x509.keys.HDDSKeyPEMWriter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.Server;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.OMStorage;
 import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
 import org.apache.hadoop.security.KerberosAuthException;
+import org.apache.hadoop.security.SaslRpcServer.AuthMethod;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.security.authentication.client.AuthenticationException;
-import org.apache.hadoop.security.authentication.util.KerberosUtil;
+import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
 import org.apache.hadoop.test.LambdaTestUtils;
@@ -52,6 +74,7 @@ import org.junit.Before;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
+import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -61,6 +84,8 @@ import org.slf4j.LoggerFactory;
 @InterfaceAudience.Private
 public final class TestSecureOzoneCluster {
 
+  private static final String TEST_USER = "testUgiUser";
+  private static final int CLIENT_TIMEOUT = 2 * 1000;
   private Logger LOGGER = LoggerFactory
   .getLogger(TestSecureOzoneCluster.class);
 
@@ -81,14 +106,24 @@ public final class TestSecureOzoneCluster {
   private static String clusterId;
   private static String scmId;
   private static String omId;
+  private OzoneManagerProtocolClientSideTranslatorPB omClient;
+  private KeyPair keyPair;
+  private Path metaDirPath;
 
   @Before
   public void init() {
 try {
   conf = new OzoneConfiguration();
+  conf.set(ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY, "localhost");
+  DefaultMetricsSystem.setMiniClusterMode(true);
+  final String path = GenericTestUtils
+  .getTempPath(UUID.randomUUID().toString());
+  metaDirPath = Paths.get(path, "om-meta");
+  conf.set(OZONE_METADATA_DIRS, metaDirPath.toString());
   startMiniKdc();
   setSecureConfig(conf);
   createCredentialsInKDC(conf, miniKdc);
+  generateKeyPair(conf);
 } catch (IOException e) {
   LOGGER.error("Failed to initialize TestSecureOzoneCluster", e);
 } catch (Exception e) {
@@ -106,6 +141,10 @@ public final class TestSecureOzoneCluster {
   if (om != null) {
 om.stop();
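Most of this setup boils down to the standard UGI Kerberos bootstrap that the truncated hunks above are wiring in. As a hedged outline — the principal name and keytab variable are placeholders, not the test's actual fixtures:

    // Sketch of the secure-test bootstrap this patch relies on:
    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "om/localhost@EXAMPLE.COM", omKeyTab.getAbsolutePath()); // placeholders
    ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // issue OzoneManager RPCs here as the authenticated principal,
      // e.g. request a delegation token naming a renewer
      return null;
    });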
 

[19/50] [abbrv] hadoop git commit: Revert "Bad merge with 996a627b289947af3894bf83e7b63ec702a665cd"

2018-11-29 Thread xyao
Revert "Bad merge with 996a627b289947af3894bf83e7b63ec702a665cd"

This reverts commit 996a627b289947af3894bf83e7b63ec702a665cd.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/074b8f4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/074b8f4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/074b8f4e

Branch: refs/heads/HDDS-4
Commit: 074b8f4ed52a9defaedaff788c517d7b3f3dd024
Parents: fcd705a
Author: Xiaoyu Yao 
Authored: Tue May 15 16:56:24 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:49 2018 -0800

--
 hadoop-hdds/common/src/main/resources/ozone-default.xml | 4 
 1 file changed, 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/074b8f4e/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 4d79e8c..d3e352b 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -198,7 +198,6 @@
 
   
   
-<<< HEAD
 dfs.ratis.client.request.timeout.duration
 3s
 OZONE, RATIS, MANAGEMENT
@@ -256,9 +255,6 @@
   
   
 hdds.container.report.interval
-=======
-ozone.container.report.interval
->>>>>>> HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.
 6ms
 OZONE, CONTAINER, MANAGEMENT
 Time interval of the datanode to send container report. Each





[46/50] [abbrv] hadoop git commit: Fix HDDS-4 after HDDS-751.

2018-11-29 Thread xyao
Fix HDDS-4 after HDDS-751.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bee23d2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bee23d2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bee23d2

Branch: refs/heads/HDDS-4
Commit: 3bee23d24f6d90592e715be35fa6213a2723e0a6
Parents: 18c9db9
Author: Xiaoyu Yao 
Authored: Wed Nov 21 08:38:30 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:48 2018 -0800

--
 .../main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java   | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bee23d2/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
--
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
index e355d0a..1bb9972 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
@@ -176,9 +176,11 @@ public final class HddsServerUtil {
 ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_ADDRESS_KEY);
 
 return NetUtils.createSocketAddr(
-host.or(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT) +
+host.orElse(ScmConfigKeys
+.OZONE_SCM_SECURITY_SERVICE_BIND_HOST_DEFAULT) +
 ":" + port
-.or(conf.getInt(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_KEY,
+.orElse(conf.getInt(ScmConfigKeys
+.OZONE_SCM_SECURITY_SERVICE_PORT_KEY,
 ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_DEFAULT)));
   }
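The change itself is mechanical: HDDS-751 moved these fields from Guava's Optional to java.util.Optional, whose fallback accessor is orElse rather than or. A minimal illustration with hypothetical values, not the patch's code:

    import java.util.Optional;

    public class OrElseSketch {
      public static void main(String[] args) {
        // Guava Optional:     host.or(defaultHost)
        // java.util.Optional: host.orElse(defaultHost)
        Optional<String> host = Optional.empty();       // e.g. key missing from config
        String bindHost = host.orElse("0.0.0.0");       // illustrative default
        Optional<Integer> port = Optional.empty();
        System.out.println(bindHost + ":" + port.orElse(9961)); // prints 0.0.0.0:9961
      }
    }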
 





[47/50] [abbrv] hadoop git commit: Fix HDDS-4 after HDDS-759

2018-11-29 Thread xyao
Fix HDDS-4 after HDDS-759


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/18c9db98
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/18c9db98
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/18c9db98

Branch: refs/heads/HDDS-4
Commit: 18c9db980f8d3bcd01e3a9fa0f59861e2689
Parents: 96bd574
Author: Xiaoyu Yao 
Authored: Wed Nov 21 08:21:44 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:48 2018 -0800

--
 .../java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java  | 2 +-
 .../security/x509/certificates/TestCertificateSignRequest.java | 2 +-
 .../hdds/security/x509/certificates/TestRootCertificate.java   | 2 +-
 .../hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java   | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c9db98/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
index 97627ca..2826e55 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
@@ -50,7 +50,7 @@ import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_X509_MAX_DURATION;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_X509_MAX_DURATION_DEFAULT;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_X509_SIGNATURE_ALGO;
 import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_X509_SIGNATURE_ALGO_DEFAULT;
-import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
 
 /**
  * A class that deals with all Security related configs in HDDS.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c9db98/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
--
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
index e8de1ea..9328c50 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestCertificateSignRequest.java
@@ -43,7 +43,7 @@ import java.security.NoSuchAlgorithmException;
 import java.security.NoSuchProviderException;
 import java.util.UUID;
 
-import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
 
 public class TestCertificateSignRequest {
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c9db98/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java
--
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java
index 16245a6..e581dc8 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/certificates/TestRootCertificate.java
@@ -46,7 +46,7 @@ import java.time.Instant;
 import java.util.Date;
 import java.util.UUID;
 
-import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
 
 /**
  * Test Class for Root Certificate generation.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/18c9db98/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java
--
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java
index f9541a2..08761f4 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyGenerator.java
+++ 

[24/50] [abbrv] hadoop git commit: HDDS-684. Fix HDDS-4 branch after HDDS-490 and HADOOP-15832. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-684. Fix HDDS-4 branch after HDDS-490 and HADOOP-15832. Contributed by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/72468986
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/72468986
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/72468986

Branch: refs/heads/HDDS-4
Commit: 724689864b6723a2ada9ccb925f9b828c19f305b
Parents: 950a473
Author: Xiaoyu Yao 
Authored: Wed Oct 24 15:46:34 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 hadoop-ozone/dist/dev-support/bin/dist-layout-stitching |  2 +-
 .../dist/src/main/compose/ozonesecure/docker-config |  1 +
 .../ozonesecure/docker-image/runner/scripts/starter.sh  |  4 ++--
 .../java/org/apache/hadoop/ozone/om/OzoneManager.java   | 12 ++--
 4 files changed, 10 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/72468986/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
--
diff --git a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
index e736065..2a4f0ad 100755
--- a/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
+++ b/hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
@@ -111,5 +111,5 @@ run cp "${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore
 cp -r "${ROOT}/hadoop-hdds/docs/target/classes/docs" ./
 
 #Copy docker compose files
-run cp -p -r "${ROOT}/hadoop-ozone/dist/src/main/compose" .
+run cp -p -R "${ROOT}/hadoop-ozone/dist/src/main/compose" .
 run cp -p -r "${ROOT}/hadoop-ozone/dist/src/main/smoketest" .

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72468986/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
--
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
index 704dc7b..36f05ae 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-config
@@ -37,6 +37,7 @@ HDFS-SITE.XML_dfs.datanode.kerberos.principal=dn/_h...@example.com
 HDFS-SITE.XML_dfs.datanode.keytab.file=/etc/security/keytabs/dn.keytab
 HDFS-SITE.XML_dfs.web.authentication.kerberos.principal=HTTP/_h...@example.com
 
HDFS-SITE.XML_dfs.web.authentication.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_hdds.datanode.dir=/data/hdds
 HDFS-SITE.XML_dfs.datanode.address=0.0.0.0:1019
 HDFS-SITE.XML_dfs.datanode.http.address=0.0.0.0:1012
 CORE-SITE.XML_dfs.data.transfer.protection=authentication

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72468986/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh
--
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh
index 04cd49d..eec7ce9 100755
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/runner/scripts/starter.sh
@@ -82,7 +82,7 @@ fi
 
 if [ -n "$ENSURE_SCM_INITIALIZED" ]; then
   if [ ! -f "$ENSURE_SCM_INITIALIZED" ]; then
-/opt/hadoop/bin/ozone scm -init
+/opt/hadoop/bin/ozone scm --init
   fi
 fi
 
@@ -92,7 +92,7 @@ if [ -n "$ENSURE_OM_INITIALIZED" ]; then
 # Could be removed after HDFS-13203
 echo "Waiting 15 seconds for SCM startup"
 sleep 15
-/opt/hadoop/bin/ozone om -createObjectStore
+/opt/hadoop/bin/ozone om --init
   fi
 fi
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72468986/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
--
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
index 7fc6aee..a5cec20 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
@@ -259,17 +259,17 @@ public final class OzoneManager extends ServiceRuntimeInfoImpl
 
 
   /**
-   * Login KSM service user if security and Kerberos are enabled.
+   * Login OM service user if security and Kerberos are enabled.
*
* @param  conf
* @throws IOException, AuthenticationException
*/
-  private static void loginKSMUser(OzoneConfiguration conf)
+  
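The renamed login method is cut off above. Its likely shape is the standard SecurityUtil keytab login; note this is a sketch only — the OMConfigKeys constant names and the OmUtils helper are assumptions from later releases and may differ in this patch:

    private static void loginOMUser(OzoneConfiguration conf)
        throws IOException, AuthenticationException {
      if (SecurityUtil.getAuthenticationMethod(conf)
          .equals(UserGroupInformation.AuthenticationMethod.KERBEROS)) {
        UserGroupInformation.setConfiguration(conf);
        InetSocketAddress omAddress = OmUtils.getOmAddress(conf);  // assumed helper
        SecurityUtil.login(conf, OMConfigKeys.OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
            OMConfigKeys.OZONE_OM_KERBEROS_PRINCIPAL_KEY, omAddress.getHostName());
      }
    }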

[10/50] [abbrv] hadoop git commit: HDDS-808. Simplify OMAction and DNAction classes used for AuditLogging. Contributed by Dinesh Chitlangia.

2018-11-29 Thread xyao
HDDS-808. Simplify OMAction and DNAction classes used for AuditLogging. Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/184cced5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/184cced5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/184cced5

Branch: refs/heads/HDDS-4
Commit: 184cced513c5599d7b33c9124692fbcd2e6d338e
Parents: 07142f5
Author: Ajay Kumar 
Authored: Thu Nov 29 08:35:02 2018 -0800
Committer: Ajay Kumar 
Committed: Thu Nov 29 08:35:20 2018 -0800

--
 .../org/apache/hadoop/ozone/audit/DNAction.java | 44 +++-
 .../apache/hadoop/ozone/audit/DummyAction.java  | 36 ++---
 .../org/apache/hadoop/ozone/audit/OMAction.java | 54 +---
 3 files changed, 58 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
index ce34c46..1c87f2b 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
@@ -21,34 +21,28 @@ package org.apache.hadoop.ozone.audit;
  */
 public enum DNAction implements AuditAction {
 
-  CREATE_CONTAINER("CREATE_CONTAINER"),
-  READ_CONTAINER("READ_CONTAINER"),
-  UPDATE_CONTAINER("UPDATE_CONTAINER"),
-  DELETE_CONTAINER("DELETE_CONTAINER"),
-  LIST_CONTAINER("LIST_CONTAINER"),
-  PUT_BLOCK("PUT_BLOCK"),
-  GET_BLOCK("GET_BLOCK"),
-  DELETE_BLOCK("DELETE_BLOCK"),
-  LIST_BLOCK("LIST_BLOCK"),
-  READ_CHUNK("READ_CHUNK"),
-  DELETE_CHUNK("DELETE_CHUNK"),
-  WRITE_CHUNK("WRITE_CHUNK"),
-  LIST_CHUNK("LIST_CHUNK"),
-  COMPACT_CHUNK("COMPACT_CHUNK"),
-  PUT_SMALL_FILE("PUT_SMALL_FILE"),
-  GET_SMALL_FILE("GET_SMALL_FILE"),
-  CLOSE_CONTAINER("CLOSE_CONTAINER"),
-  GET_COMMITTED_BLOCK_LENGTH("GET_COMMITTED_BLOCK_LENGTH");
-
-  private String action;
-
-  DNAction(String action) {
-this.action = action;
-  }
+  CREATE_CONTAINER,
+  READ_CONTAINER,
+  UPDATE_CONTAINER,
+  DELETE_CONTAINER,
+  LIST_CONTAINER,
+  PUT_BLOCK,
+  GET_BLOCK,
+  DELETE_BLOCK,
+  LIST_BLOCK,
+  READ_CHUNK,
+  DELETE_CHUNK,
+  WRITE_CHUNK,
+  LIST_CHUNK,
+  COMPACT_CHUNK,
+  PUT_SMALL_FILE,
+  GET_SMALL_FILE,
+  CLOSE_CONTAINER,
+  GET_COMMITTED_BLOCK_LENGTH;
 
   @Override
   public String getAction() {
-return this.action;
+return this.toString();
   }
 
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
--
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
index 76cd39a..d2da3e6 100644
--- a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
@@ -22,30 +22,24 @@ package org.apache.hadoop.ozone.audit;
  */
 public enum DummyAction implements AuditAction {
 
-  CREATE_VOLUME("CREATE_VOLUME"),
-  CREATE_BUCKET("CREATE_BUCKET"),
-  CREATE_KEY("CREATE_KEY"),
-  READ_VOLUME("READ_VOLUME"),
-  READ_BUCKET("READ_BUCKET"),
-  READ_KEY("READ_BUCKET"),
-  UPDATE_VOLUME("UPDATE_VOLUME"),
-  UPDATE_BUCKET("UPDATE_BUCKET"),
-  UPDATE_KEY("UPDATE_KEY"),
-  DELETE_VOLUME("DELETE_VOLUME"),
-  DELETE_BUCKET("DELETE_BUCKET"),
-  DELETE_KEY("DELETE_KEY"),
-  SET_OWNER("SET_OWNER"),
-  SET_QUOTA("SET_QUOTA");
-
-  private final String action;
-
-  DummyAction(String action) {
-this.action = action;
-  }
+  CREATE_VOLUME,
+  CREATE_BUCKET,
+  CREATE_KEY,
+  READ_VOLUME,
+  READ_BUCKET,
+  READ_KEY,
+  UPDATE_VOLUME,
+  UPDATE_BUCKET,
+  UPDATE_KEY,
+  DELETE_VOLUME,
+  DELETE_BUCKET,
+  DELETE_KEY,
+  SET_OWNER,
+  SET_QUOTA;
 
   @Override
   public String getAction() {
-return this.action;
+return this.toString();
   }
 
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
--
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
index 1d4d646..8794014 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
+++ 
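The simplification leans on a java.lang.Enum guarantee: the default toString() returns the constant's declared name, so the per-constant string field and constructor the diff deletes were pure redundancy. A trivial check of the invariant, not part of the patch:

    // java.lang.Enum.toString() defaults to name(), the declared constant name:
    assert DNAction.CREATE_CONTAINER.getAction().equals("CREATE_CONTAINER");
    assert DummyAction.CREATE_VOLUME.getAction().equals("CREATE_VOLUME");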

[20/50] [abbrv] hadoop git commit: HDDS-7. Enable kerberos auth for Ozone client in hadoop rpc. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-7. Enable kerberos auth for Ozone client in hadoop rpc. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5c821937
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5c821937
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5c821937

Branch: refs/heads/HDDS-4
Commit: 5c821937a4972c9a6001c18e18b7a58891198011
Parents: 074b8f4
Author: Xiaoyu Yao 
Authored: Fri May 18 13:09:17 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:49 2018 -0800

--
 .../src/test/compose/compose-secure/.env| 17 
 .../compose/compose-secure/docker-compose.yaml  | 66 ++
 .../test/compose/compose-secure/docker-config   | 66 ++
 .../acceptance/ozone-secure.robot   | 95 
 .../hadoop/ozone/client/rest/RestClient.java|  4 +-
 .../hadoop/ozone/client/rpc/RpcClient.java  |  6 +-
 6 files changed, 248 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c821937/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/.env
--
diff --git a/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/.env b/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/.env
new file mode 100644
index 000..3254735
--- /dev/null
+++ b/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/.env
@@ -0,0 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONEDIR=../../../hadoop-dist/target/ozone
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c821937/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/docker-compose.yaml
--
diff --git a/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/docker-compose.yaml b/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/docker-compose.yaml
new file mode 100644
index 000..2661163
--- /dev/null
+++ b/hadoop-ozone/acceptance-test/src/test/compose/compose-secure/docker-compose.yaml
@@ -0,0 +1,66 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   ozone.kdc:
+  image: ahadoop/kdc:v1
+   namenode:
+  image: ahadoop/ozone:v1
+  hostname: namenode
+  volumes:
+ - ${OZONEDIR}:/opt/hadoop
+  ports:
+ - 9000:9000
+  environment:
+  ENSURE_NAMENODE_DIR: /data/namenode
+  env_file:
+ - ./docker-config
+  command: ["/opt/hadoop/bin/hdfs","namenode"]
+   datanode:
+  image: ahadoop/ozone:v1
+  hostname: datanode
+  volumes:
+- ${OZONEDIR}:/opt/hadoop
+  ports:
+- 9874
+  env_file:
+- ./docker-config
+  command: ["/opt/hadoop/bin/ozone","datanode"]
+   ksm:
+  image: ahadoop/ozone:v1
+  hostname: ksm
+  volumes:
+ - ${OZONEDIR}:/opt/hadoop
+  ports:
+ - 9874:9874
+  environment:
+ ENSURE_KSM_INITIALIZED: /data/metadata/ksm/current/VERSION
+  env_file:
+  - ./docker-config
+  command: ["/opt/hadoop/bin/ozone","ksm"]
+   scm:
+  image: ahadoop/ozone:v1
+  hostname: scm
+  volumes:
+ - ${OZONEDIR}:/opt/hadoop
+  ports:
+ - 9876:9876

[45/50] [abbrv] hadoop git commit: HDDS-873. Fix TestSecureOzoneContainer NPE after HDDS-837. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-873. Fix TestSecureOzoneContainer NPE after HDDS-837. Contributed by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9b9b8e41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9b9b8e41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9b9b8e41

Branch: refs/heads/HDDS-4
Commit: 9b9b8e410f0d056ec3bae64da1f40013bbefde8b
Parents: 3bee23d
Author: Ajay Kumar 
Authored: Mon Nov 26 15:49:01 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:48 2018 -0800

--
 .../container/ozoneimpl/TestSecureOzoneContainer.java | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9b9b8e41/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
index 02d5e28..2224300 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
@@ -32,6 +32,8 @@ import org.apache.hadoop.hdds.scm.TestUtils;
 import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
 import org.apache.hadoop.hdds.scm.XceiverClientSpi;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
@@ -45,6 +47,7 @@ import org.junit.rules.TemporaryFolder;
 import org.junit.rules.Timeout;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
+import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -133,7 +136,7 @@ public class TestSecureOzoneContainer {
   OzoneConfigKeys.DFS_CONTAINER_IPC_RANDOM_PORT, false);
 
   DatanodeDetails dn = TestUtils.randomDatanodeDetails();
-  container = new OzoneContainer(dn, conf, null);
+  container = new OzoneContainer(dn, conf, getContext(dn));
   //Setting scmId, as we start manually ozone container.
   container.getDispatcher().setScmId(UUID.randomUUID().toString());
   container.start();
@@ -206,4 +209,13 @@ public class TestSecureOzoneContainer {
 Assert.assertNotNull(response);
 Assert.assertTrue(request.getTraceID().equals(response.getTraceID()));
   }
+
+  private StateContext getContext(DatanodeDetails datanodeDetails) {
+DatanodeStateMachine stateMachine = Mockito.mock(
+DatanodeStateMachine.class);
+StateContext context = Mockito.mock(StateContext.class);
+
Mockito.when(stateMachine.getDatanodeDetails()).thenReturn(datanodeDetails);
+Mockito.when(context.getParent()).thenReturn(stateMachine);
+return context;
+  }
 }
\ No newline at end of file


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
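
The fix above swaps the null third constructor argument for a mocked StateContext. A minimal
sketch of that stubbing pattern in isolation (class names as in the diff; assumes Mockito and
the Ozone datanode classes on the test classpath):

    import org.apache.hadoop.hdds.protocol.DatanodeDetails;
    import org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine;
    import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
    import org.mockito.Mockito;

    public final class MockStateContextSketch {
      private MockStateContextSketch() { }

      // Builds a StateContext whose parent state machine reports the given
      // datanode, so code calling context.getParent().getDatanodeDetails()
      // (presumably the chain that raised the NPE) gets a real value.
      public static StateContext mockContext(DatanodeDetails dn) {
        DatanodeStateMachine stateMachine =
            Mockito.mock(DatanodeStateMachine.class);
        StateContext context = Mockito.mock(StateContext.class);
        Mockito.when(stateMachine.getDatanodeDetails()).thenReturn(dn);
        Mockito.when(context.getParent()).thenReturn(stateMachine);
        return context;
      }
    }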



[29/50] [abbrv] hadoop git commit: HDDS-566. Move OzoneSecure docker-compose after HDDS-447. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-566. Move OzoneSecure docker-compose after HDDS-447. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/585c3448
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/585c3448
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/585c3448

Branch: refs/heads/HDDS-4
Commit: 585c3448684cdae3c9f6c9c175c23038dfef80fc
Parents: eddbe99
Author: Ajay Kumar 
Authored: Tue Oct 2 10:07:35 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 hadoop-dist/src/main/compose/ozonesecure/.env   |  18 ---
 .../compose/ozonesecure/docker-compose.yaml |  57 ---
 .../src/main/compose/ozonesecure/docker-config  | 103 -
 .../ozonesecure/docker-image/runner/Dockerfile  |  39 -
 .../ozonesecure/docker-image/runner/build.sh|  26 
 .../docker-image/runner/scripts/envtoconf.py| 115 --
 .../docker-image/runner/scripts/krb5.conf   |  38 -
 .../docker-image/runner/scripts/starter.sh  | 100 -
 .../runner/scripts/transformation.py| 150 ---
 .../dist/src/main/compose/ozonesecure/.env  |  18 +++
 .../compose/ozonesecure/docker-compose.yaml |  57 +++
 .../src/main/compose/ozonesecure/docker-config  | 103 +
 .../ozonesecure/docker-image/runner/Dockerfile  |  39 +
 .../ozonesecure/docker-image/runner/build.sh|  26 
 .../docker-image/runner/scripts/envtoconf.py| 115 ++
 .../docker-image/runner/scripts/krb5.conf   |  38 +
 .../docker-image/runner/scripts/starter.sh  | 100 +
 .../runner/scripts/transformation.py| 150 +++
 18 files changed, 646 insertions(+), 646 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/585c3448/hadoop-dist/src/main/compose/ozonesecure/.env
--
diff --git a/hadoop-dist/src/main/compose/ozonesecure/.env 
b/hadoop-dist/src/main/compose/ozonesecure/.env
deleted file mode 100644
index a494004..000
--- a/hadoop-dist/src/main/compose/ozonesecure/.env
+++ /dev/null
@@ -1,18 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-HDDS_VERSION=${hdds.version}
-SRC_VOLUME=../../

http://git-wip-us.apache.org/repos/asf/hadoop/blob/585c3448/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
--
diff --git a/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml 
b/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
deleted file mode 100644
index 42ab05e..000
--- a/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
+++ /dev/null
@@ -1,57 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-version: "3"
-services:
-   kdc:
-  image: ahadoop/kdc:v1
-  hostname: kdc
-  volumes:
-  - $SRC_VOLUME:/opt/hadoop
-   datanode:
-  image: ahadoop/runner:latest
-  volumes:
-- $SRC_VOLUME:/opt/hadoop
-  hostname: datanode
-  ports:
-- 9864
-  command: ["/opt/hadoop/bin/ozone","datanode"]
-  env_file:
-- ./docker-config
-   ozoneManager:
-  image: ahadoop/runner:latest
-  hostname: om
-  volumes:
- - 

[22/50] [abbrv] hadoop git commit: HDDS-100. SCM CA: generate public/private key pair for SCM/OM/DNs. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-100. SCM CA: generate public/private key pair for SCM/OM/DNs. Contributed 
by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ed10fa6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ed10fa6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ed10fa6f

Branch: refs/heads/HDDS-4
Commit: ed10fa6fb160f210bcf424e151f2eeead34bf184
Parents: c8af727
Author: Xiaoyu Yao 
Authored: Fri Jun 8 08:33:58 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:45 2018 -0800

--
 hadoop-hdds/common/pom.xml  |   6 +-
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |  19 ++
 .../hdds/security/x509/HDDSKeyGenerator.java|  99 
 .../hdds/security/x509/HDDSKeyPEMWriter.java| 254 +++
 .../hdds/security/x509/SecurityConfig.java  | 190 ++
 .../hadoop/hdds/security/x509/package-info.java |  25 ++
 .../common/src/main/resources/ozone-default.xml |  42 ++-
 .../security/x509/TestHDDSKeyGenerator.java |  81 ++
 .../security/x509/TestHDDSKeyPEMWriter.java | 213 
 .../ozone/TestOzoneConfigurationFields.java |   6 +
 10 files changed, 933 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed10fa6f/hadoop-hdds/common/pom.xml
--
diff --git a/hadoop-hdds/common/pom.xml b/hadoop-hdds/common/pom.xml
index 0345012..175500a 100644
--- a/hadoop-hdds/common/pom.xml
+++ b/hadoop-hdds/common/pom.xml
@@ -81,7 +81,6 @@
       <artifactId>rocksdbjni</artifactId>
       <version>5.14.2</version>
     </dependency>
-
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-common</artifactId>
@@ -110,6 +109,11 @@
       <version>2.6.0</version>
     </dependency>
 
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk15on</artifactId>
+      <version>1.49</version>
+    </dependency>
   </dependencies>
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed10fa6f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index f16503e..2a1404a 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -112,4 +112,23 @@ public final class HddsConfigKeys {
 
   public static final String HDDS_PROMETHEUS_ENABLED =
   "hdds.prometheus.endpoint.enabled";
+
+  public static final String HDDS_KEY_LEN = "hdds.key.len";
+  public static final int HDDS_DEFAULT_KEY_LEN = 2048;
+  public static final String HDDS_KEY_ALGORITHM = "hdds.key.algo";
+  public static final String HDDS_DEFAULT_KEY_ALGORITHM = "RSA";
+  public static final String HDDS_SECURITY_PROVIDER = "hdds.security.provider";
+  public static final String HDDS_DEFAULT_SECURITY_PROVIDER = "BC";
+  public static final String HDDS_KEY_DIR_NAME = "hdds.key.dir.name";
+  public static final String HDDS_KEY_DIR_NAME_DEFAULT = "keys";
+
+  // TODO : Talk to StorageIO classes and see if they can return a secure
+  // storage location for each node.
+  public static final String HDDS_METADATA_DIR_NAME = "hdds.metadata.dir";
+  public static final String HDDS_PRIVATE_KEY_FILE_NAME =
+  "hdds.priv.key.file.name";
+  public static final String HDDS_PRIVATE_KEY_FILE_NAME_DEFAULT = 
"private.pem";
+  public static final String HDDS_PUBLIC_KEY_FILE_NAME = "hdds.public.key.file"
+  + ".name";
+  public static final String HDDS_PUBLIC_KEY_FILE_NAME_DEFAULT = "public.pem";
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed10fa6f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
new file mode 100644
index 000..cb411b2
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/HDDSKeyGenerator.java
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by 
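
The generator source above is truncated. Stripped of the Hadoop configuration plumbing it
wraps, key generation with the defaults added to HddsConfigKeys in this diff (algorithm "RSA",
length 2048, provider "BC") reduces to standard JCA calls, roughly:

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public final class KeyGenSketch {
      public static void main(String[] args) throws Exception {
        // Register the "BC" provider (HDDS_DEFAULT_SECURITY_PROVIDER above);
        // the bcprov-jdk15on dependency is added to the pom in this commit.
        Security.addProvider(new BouncyCastleProvider());
        // HDDS_DEFAULT_KEY_ALGORITHM = "RSA", HDDS_DEFAULT_KEY_LEN = 2048.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA", "BC");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();
        System.out.println(pair.getPublic().getAlgorithm());
      }
    }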

[32/50] [abbrv] hadoop git commit: HDDS-588. SelfSignedCertificate#generateCertificate should sign the certificate with the configured security provider. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-588. SelfSignedCertificate#generateCertificate should sign the certificate 
with the configured security provider. Contributed by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/602aa807
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/602aa807
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/602aa807

Branch: refs/heads/HDDS-4
Commit: 602aa807483464ed3b601ace2a6c379719dff653
Parents: 812c07e
Author: Ajay Kumar 
Authored: Tue Oct 9 00:28:01 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../hdds/security/x509/certificates/SelfSignedCertificate.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/602aa807/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/SelfSignedCertificate.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/SelfSignedCertificate.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/SelfSignedCertificate.java
index fef7ac3..f221246 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/SelfSignedCertificate.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/SelfSignedCertificate.java
@@ -103,8 +103,8 @@ public final class SelfSignedCertificate {
 
 
 ContentSigner contentSigner =
-new JcaContentSignerBuilder(
-config.getSignatureAlgo()).build(key.getPrivate());
+new JcaContentSignerBuilder(config.getSignatureAlgo())
+.setProvider(config.getProvider()).build(key.getPrivate());
 
 // Please note: Since this is a root certificate we use "ONE" as the
 // serial number. Also note that skip enforcing locale or UTC. We are


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
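
A standalone sketch of the signing pattern after this one-line fix ("SHA256withRSA" and "BC"
stand in here for the values SecurityConfig would supply):

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;
    import org.bouncycastle.operator.ContentSigner;
    import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

    public final class ContentSignerSketch {
      public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA", "BC");
        gen.initialize(2048);
        KeyPair key = gen.generateKeyPair();
        // Without setProvider(...) the builder falls back to the default
        // provider lookup; pinning it matches the configured provider.
        ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA")
            .setProvider("BC")
            .build(key.getPrivate());
        System.out.println(signer.getAlgorithmIdentifier());
      }
    }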



[21/50] [abbrv] hadoop git commit: HDDS-70. Fix config names for secure ksm and scm. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-70. Fix config names for secure ksm and scm. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c8af727f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c8af727f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c8af727f

Branch: refs/heads/HDDS-4
Commit: c8af727fa5efb58e9bfaeb2816d7d63c53e0bd62
Parents: 5c82193
Author: Xiaoyu Yao 
Authored: Tue May 22 13:32:28 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:50 2018 -0800

--
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  6 +-
 .../scm/protocol/ScmBlockLocationProtocol.java  |  2 +-
 .../StorageContainerLocationProtocol.java   |  3 +-
 .../protocolPB/ScmBlockLocationProtocolPB.java  |  4 +-
 .../StorageContainerLocationProtocolPB.java |  2 +-
 .../apache/hadoop/ozone/OzoneConfigKeys.java|  1 -
 .../common/src/main/resources/ozone-default.xml | 31 +---
 .../StorageContainerDatanodeProtocol.java   |  2 +-
 .../StorageContainerDatanodeProtocolPB.java |  2 +-
 .../scm/server/StorageContainerManager.java | 18 ++---
 .../compose/compose-secure/docker-compose.yaml  |  6 +-
 .../test/compose/compose-secure/docker-config   | 12 +--
 .../acceptance/ozone-secure.robot   | 12 +--
 .../ozone/client/protocol/ClientProtocol.java   |  2 +-
 .../apache/hadoop/ozone/ksm/KSMConfigKeys.java  | 84 
 .../ozone/om/protocol/OzoneManagerProtocol.java |  4 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java| 21 +++--
 .../apache/hadoop/ozone/om/OzoneManager.java|  4 +-
 18 files changed, 151 insertions(+), 65 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8af727f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index e18fe91..376e6db 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -192,9 +192,9 @@ public final class ScmConfigKeys {
   "ozone.scm.http-address";
   public static final String OZONE_SCM_HTTPS_ADDRESS_KEY =
   "ozone.scm.https-address";
-  public static final String OZONE_SCM_KERBEROS_KEYTAB_FILE_KEY =
-  "ozone.scm.kerberos.keytab.file";
-  public static final String OZONE_SCM_KERBEROS_PRINCIPAL_KEY = 
"ozone.scm.kerberos.principal";
+  public static final String HDDS_SCM_KERBEROS_KEYTAB_FILE_KEY =
+  "hdds.scm.kerberos.keytab.file";
+  public static final String HDDS_SCM_KERBEROS_PRINCIPAL_KEY = 
"hdds.scm.kerberos.principal";
   public static final String OZONE_SCM_HTTP_BIND_HOST_DEFAULT = "0.0.0.0";
   public static final int OZONE_SCM_HTTP_BIND_PORT_DEFAULT = 9876;
   public static final int OZONE_SCM_HTTPS_BIND_PORT_DEFAULT = 9877;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8af727f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocol.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocol.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocol.java
index e17f1c2..2d46ae0 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocol.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/ScmBlockLocationProtocol.java
@@ -33,7 +33,7 @@ import java.util.List;
  * ScmBlockLocationProtocol is used by an HDFS node to find the set of nodes
  * to read/write a block.
  */
-@KerberosInfo(serverPrincipal = ScmConfigKeys.OZONE_SCM_KERBEROS_PRINCIPAL_KEY)
+@KerberosInfo(serverPrincipal = ScmConfigKeys.HDDS_SCM_KERBEROS_PRINCIPAL_KEY)
 public interface ScmBlockLocationProtocol {
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8af727f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
index 5bc2521..e21bc53 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
@@ 
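
The renamed hdds.scm.* keys are consumed through Hadoop's stock security utilities. A sketch
of a service login against them (SecurityUtil.login expands _HOST in the principal using the
supplied hostname; the SCM's actual wiring in StorageContainerManager.java is not shown here):

    import java.net.InetAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdds.scm.ScmConfigKeys;
    import org.apache.hadoop.security.SecurityUtil;
    import org.apache.hadoop.security.UserGroupInformation;

    public final class ScmLoginSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Logs in from the keytab/principal named by the renamed keys;
        // _HOST in the principal is replaced with the local FQDN.
        SecurityUtil.login(conf,
            ScmConfigKeys.HDDS_SCM_KERBEROS_KEYTAB_FILE_KEY,
            ScmConfigKeys.HDDS_SCM_KERBEROS_PRINCIPAL_KEY,
            InetAddress.getLocalHost().getCanonicalHostName());
      }
    }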

[27/50] [abbrv] hadoop git commit: HDDS-548. Create a Self-Signed Certificate. Contributed by Anu Engineer.

2018-11-29 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/87acc150/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyPEMWriter.java
--
diff --git 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyPEMWriter.java
 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyPEMWriter.java
new file mode 100644
index 000..db5d430
--- /dev/null
+++ 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/security/x509/keys/TestHDDSKeyPEMWriter.java
@@ -0,0 +1,216 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.security.x509.keys;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.HDDS_METADATA_DIR_NAME;
+
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.nio.file.attribute.PosixFilePermission;
+import java.security.KeyFactory;
+import java.security.KeyPair;
+import java.security.NoSuchAlgorithmException;
+import java.security.NoSuchProviderException;
+import java.security.PrivateKey;
+import java.security.PublicKey;
+import java.security.spec.InvalidKeySpecException;
+import java.security.spec.PKCS8EncodedKeySpec;
+import java.security.spec.X509EncodedKeySpec;
+import java.util.Set;
+import org.apache.commons.codec.binary.Base64;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+/**
+ * Test class for HDDS pem writer.
+ */
+public class TestHDDSKeyPEMWriter {
+
+  @Rule
+  public TemporaryFolder temporaryFolder = new TemporaryFolder();
+  private OzoneConfiguration configuration;
+  private HDDSKeyGenerator keyGenerator;
+  private String prefix;
+
+  @Before
+  public void init() throws IOException {
+configuration = new OzoneConfiguration();
+prefix = temporaryFolder.newFolder().toString();
+configuration.set(HDDS_METADATA_DIR_NAME, prefix);
+keyGenerator = new HDDSKeyGenerator(configuration);
+  }
+
+  /**
+   * Assert basic things like we are able to create a file, and the names are
+   * in expected format etc.
+   *
+   * @throws NoSuchProviderException - On Error, due to missing Java
+   * dependencies.
+   * @throws NoSuchAlgorithmException - On Error,  due to missing Java
+   * dependencies.
+   * @throws IOException - On I/O failure.
+   */
+  @Test
+  public void testWriteKey()
+  throws NoSuchProviderException, NoSuchAlgorithmException,
+  IOException, InvalidKeySpecException {
+KeyPair keys = keyGenerator.generateKey();
+HDDSKeyPEMWriter pemWriter = new HDDSKeyPEMWriter(configuration);
+pemWriter.writeKey(keys);
+
+// Assert that locations have been created.
+Path keyLocation = pemWriter.getSecurityConfig().getKeyLocation();
+Assert.assertTrue(keyLocation.toFile().exists());
+
+// Assert that locations are created in the locations that we specified
+// using the Config.
+Assert.assertTrue(keyLocation.toString().startsWith(prefix));
+Path privateKeyPath = Paths.get(keyLocation.toString(),
+pemWriter.getSecurityConfig().getPrivateKeyFileName());
+Assert.assertTrue(privateKeyPath.toFile().exists());
+Path publicKeyPath = Paths.get(keyLocation.toString(),
+pemWriter.getSecurityConfig().getPublicKeyFileName());
+Assert.assertTrue(publicKeyPath.toFile().exists());
+
+// Read the private key and test if the expected String in the PEM file
+// format exists.
+byte[] privateKey = Files.readAllBytes(privateKeyPath);
+String privateKeydata = new String(privateKey, StandardCharsets.UTF_8);
+Assert.assertTrue(privateKeydata.contains("PRIVATE KEY"));
+
+// Read the public key and test if the expected String in the PEM file
+// format exists.
+byte[] publicKey = Files.readAllBytes(publicKeyPath);
+
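
The test above is truncated while asserting on the PEM markers. Decoding such a PEM file back
into a key object uses only classes the test already imports; a rough sketch (the file path is
a placeholder):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.KeyFactory;
    import java.security.PrivateKey;
    import java.security.spec.PKCS8EncodedKeySpec;
    import org.apache.commons.codec.binary.Base64;

    public final class PemDecodeSketch {
      public static void main(String[] args) throws Exception {
        String pem = new String(
            Files.readAllBytes(Paths.get("/tmp/keys/private.pem")),
            StandardCharsets.UTF_8);
        // Strip the BEGIN/END markers and whitespace, Base64-decode the
        // DER body, then rebuild the key through a PKCS#8 key spec.
        String body = pem
            .replace("-----BEGIN PRIVATE KEY-----", "")
            .replace("-----END PRIVATE KEY-----", "")
            .replaceAll("\\s", "");
        byte[] der = Base64.decodeBase64(body);
        PrivateKey key = KeyFactory.getInstance("RSA")
            .generatePrivate(new PKCS8EncodedKeySpec(der));
        System.out.println(key.getFormat()); // PKCS#8
      }
    }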

[44/50] [abbrv] hadoop git commit: HDDS-778. Add an interface for CA and Clients for Certificate operations. Contributed by Anu Engineer.

2018-11-29 Thread xyao
HDDS-778. Add an interface for CA and Clients for Certificate operations.
Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4770e9de
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4770e9de
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4770e9de

Branch: refs/heads/HDDS-4
Commit: 4770e9dea8199753962c6517bff96fd39fbfd826
Parents: 8bbc95e
Author: Anu Engineer 
Authored: Thu Nov 8 09:54:27 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../authority/CertificateServer.java|  99 
 .../certificate/authority/package-info.java |  22 +++
 .../certificate/client/CertificateClient.java   | 159 +++
 .../x509/certificate/client/package-info.java   |  22 +++
 4 files changed, 302 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4770e9de/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
new file mode 100644
index 000..9332e5b
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/authority/CertificateServer.java
@@ -0,0 +1,99 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.security.x509.certificate.authority;
+
+import 
org.apache.hadoop.hdds.security.x509.certificates.CertificateSignRequest;
+import org.apache.hadoop.hdds.security.x509.exceptions.SCMSecurityException;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.bouncycastle.cert.X509CertificateHolder;
+
+import java.security.cert.X509Certificate;
+import java.util.concurrent.Future;
+
+/**
+ * Interface for Certificate Authority. This can be extended to talk to 
external
+ * CAs or HSMs later.
+ */
+public interface CertificateServer {
+  /**
+   * Initialize the Certificate Authority.
+   *
+   * @param securityConfig - Security Configuration.
+   * @param type - The Type of CertificateServer we are creating, we make this
+   * explicit so that when we read code it is visible to the users.
+   * @throws SCMSecurityException - Throws if the init fails.
+   */
+  void init(SecurityConfig securityConfig, CAType type)
+  throws SCMSecurityException;
+
+  /**
+   * Returns the CA Certificate for this CA.
+   *
+   * @return X509CertificateHolder - Certificate for this CA.
+   * @throws SCMSecurityException -- usually thrown if this CA is not
+   *  initialized.
+   */
+  X509CertificateHolder getCACertificate()
+  throws SCMSecurityException;
+
+  /**
+   * Request a Certificate based on Certificate Signing Request.
+   *
+   * @param csr - Certificate Signing Request.
+   * @return A future that will have this certificate when this request is
+   * approved.
+   * @throws SCMSecurityException - on Error.
+   */
+  Future requestCertificate(CertificateSignRequest csr,
+  CertificateApprover approver) throws SCMSecurityException;
+
+  /**
+   * Revokes a Certificate issued by this CertificateServer.
+   *
+   * @param certificate - Certificate to revoke
+   * @param approver - Approval process to follow.
+   * @return Future that tells us what happened.
+   * @throws SCMSecurityException - on Error.
+   */
+  Future revokeCertificate(X509Certificate certificate,
+  CertificateApprover approver) throws SCMSecurityException;
+
+  /**
+   * TODO : CRL, OCSP etc. Later. This is the start of a CertificateServer
+   * framework.
+   */
+
+  /**
+   * Approval Types for a certificate request.
+   */
+  enum CertificateApprover {
+KERBEROS_TRUSTED, /* The Request came from a DN using Kerberos Identity*/
+MANUAL, /* Wait 
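
The enum above is cut off in the archive. For orientation, a sketch of how a caller might
drive this interface as declared: CAType's constants are not visible here, so SELF_SIGNED_CA
is hypothetical, and the Future's element type (stripped generics) is left wild-carded.

    package org.apache.hadoop.hdds.security.x509.certificate.authority;

    import java.util.concurrent.Future;
    import org.apache.hadoop.hdds.security.x509.SecurityConfig;
    import org.apache.hadoop.hdds.security.x509.certificates.CertificateSignRequest;

    public final class CertificateServerCallerSketch {
      private CertificateServerCallerSketch() { }

      // Initializes the server, submits a CSR, and blocks on the returned
      // Future, exactly per the contract above. SELF_SIGNED_CA is a
      // hypothetical CAType constant used only for illustration.
      public static Object issue(CertificateServer ca, SecurityConfig config,
          CertificateSignRequest csr) throws Exception {
        ca.init(config, CAType.SELF_SIGNED_CA);
        Future<?> pending = ca.requestCertificate(csr,
            CertificateServer.CertificateApprover.KERBEROS_TRUSTED);
        return pending.get(); // resolves once the request is approved
      }
    }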

[17/50] [abbrv] hadoop git commit: HDDS-5. Enable OzoneManager kerberos auth. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-5. Enable OzoneManager kerberos auth. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2119be45
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2119be45
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2119be45

Branch: refs/heads/HDDS-4
Commit: 2119be450b6516fe79278870effd13d542fb3d35
Parents: aa7ca15
Author: Xiaoyu Yao 
Authored: Mon May 14 09:36:57 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:49 2018 -0800

--
 .../common/src/main/resources/ozone-default.xml |  32 +++-
 .../apache/hadoop/ozone/om/OMConfigKeys.java|   9 +
 .../ozone/om/protocol/OzoneManagerProtocol.java |   6 +
 .../om/protocolPB/OzoneManagerProtocolPB.java   |   4 +
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |   3 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java| 168 +++
 .../apache/hadoop/ozone/om/OzoneManager.java|  69 +++-
 .../hadoop/ozone/om/OzoneManagerHttpServer.java |   5 +-
 8 files changed, 246 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2119be45/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index a51443c..d3e352b 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1400,7 +1400,23 @@
     <name>ozone.scm.kerberos.principal</name>
     <value></value>
     <tag> OZONE, SECURITY</tag>
-    <description>The SCM service principal. Ex scm/_h...@realm.tld.</description>
+    <description>The SCM service principal. Ex scm/_h...@realm.com</description>
+  </property>
+
+  <property>
+    <name>ozone.om.kerberos.keytab.file</name>
+    <value></value>
+    <tag> OZONE, SECURITY</tag>
+    <description> The keytab file used by KSM daemon to login as its
+      service principal. The principal name is configured with
+      hdds.ksm.kerberos.principal.
+    </description>
+  </property>
+  <property>
+    <name>ozone.om.kerberos.principal</name>
+    <value></value>
+    <tag> OZONE, SECURITY</tag>
+    <description>The KSM service principal. Ex ksm/_h...@realm.com</description>
   </property>
 
   <property>
@@ -1412,4 +1428,18 @@
     <value>/etc/security/keytabs/HTTP.keytab</value>
   </property>
 
+  <property>
+    <name>ozone.om.http.kerberos.principal</name>
+    <value>HTTP/_h...@example.com</value>
+    <description>
+      KSM http server kerberos principal.
+    </description>
+  </property>
+  <property>
+    <name>ozone.om.http.kerberos.keytab.file</name>
+    <value>/etc/security/keytabs/HTTP.keytab</value>
+    <description>
+      KSM http server kerberos keytab.
+    </description>
+  </property>
 </configuration>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2119be45/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
--
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
index 739c75e..0119eb5 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
@@ -86,4 +86,13 @@ public final class OMConfigKeys {
   public static final String OZONE_OM_METRICS_SAVE_INTERVAL =
   "ozone.om.save.metrics.interval";
   public static final String OZONE_OM_METRICS_SAVE_INTERVAL_DEFAULT = "5m";
+
+  public static final String OZONE_OM_KERBEROS_KEYTAB_FILE_KEY = "ozone.om."
+  + "kerberos.keytab.file";
+  public static final String OZONE_OM_KERBEROS_PRINCIPAL_KEY = "ozone.om"
+  + ".kerberos.principal";
+  public static final String OZONE_OM_WEB_AUTHENTICATION_KERBEROS_KEYTAB_FILE =
+  "ozone.om.http.kerberos.keytab.file";
+  public static final String OZONE_OM_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY
+  = "ozone.om.http.kerberos.principal";
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2119be45/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
--
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
index e4cce65..2a4e864 100644
--- 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.ozone.om.protocol;
 
+import org.apache.hadoop.ozone.om.OMConfigKeys;
 import org.apache.hadoop.ozone.om.helpers.OmBucketArgs;
 import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
 import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
@@ -25,14 +26,19 @@ import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
 import 
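
The new ozone.om.* properties above feed the standard Hadoop UGI login path. A minimal sketch
of that path (the literals are placeholders for the configured principal and keytab; the
actual OzoneManager wiring is in this diff's OzoneManager.java, not shown here):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public final class OmLoginSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Values would come from ozone.om.kerberos.principal and
        // ozone.om.kerberos.keytab.file; these literals are placeholders.
        UserGroupInformation.loginUserFromKeytab(
            "om/om.example.com@EXAMPLE.COM",
            "/etc/security/keytabs/om.keytab");
        System.out.println(UserGroupInformation.getLoginUser());
      }
    }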

[38/50] [abbrv] hadoop git commit: HDDS-9. Add GRPC protocol interceptors for Ozone Block Token. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-9. Add GRPC protocol interceptors for Ozone Block Token. Contributed by 
Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96bd574d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96bd574d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96bd574d

Branch: refs/heads/HDDS-4
Commit: 96bd574dc9104ce017c889dea93792ac00900f01
Parents: 59767be
Author: Xiaoyu Yao 
Authored: Tue Nov 20 20:21:08 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../hdds/scm/ClientCredentialInterceptor.java   |  65 
 .../hadoop/hdds/scm/XceiverClientGrpc.java  |  64 +++-
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |   5 +-
 .../exception/SCMSecurityException.java |  52 +++
 .../hdds/security/exception/package-info.java   |  23 ++
 .../security/token/BlockTokenException.java |  53 
 .../hdds/security/token/BlockTokenVerifier.java | 113 +++
 .../token/OzoneBlockTokenIdentifier.java| 199 
 .../security/token/OzoneBlockTokenSelector.java |  55 
 .../hdds/security/token/TokenVerifier.java  |  38 +++
 .../hdds/security/token/package-info.java   |  22 ++
 .../hdds/security/x509/SecurityConfig.java  |  11 +
 .../authority/CertificateServer.java|   2 +-
 .../certificate/client/CertificateClient.java   |  11 +
 .../certificates/CertificateSignRequest.java|   2 +-
 .../certificates/SelfSignedCertificate.java |   2 +-
 .../x509/exceptions/CertificateException.java   |  14 +-
 .../x509/exceptions/SCMSecurityException.java   |  64 
 .../org/apache/hadoop/ozone/OzoneConsts.java|  14 +
 hadoop-hdds/common/src/main/proto/hdds.proto|   2 +-
 .../token/TestOzoneBlockTokenIdentifier.java| 313 +++
 .../hdds/security/token/package-info.java   |  22 ++
 .../TestCertificateSignRequest.java |   2 +-
 .../x509/certificates/TestRootCertificate.java  |   2 +-
 .../server/ServerCredentialInterceptor.java |  74 +
 .../transport/server/XceiverServerGrpc.java |  23 +-
 .../security/OzoneBlockTokenIdentifier.java | 178 ---
 .../ozone/security/OzoneBlockTokenSelector.java |  55 
 .../ozoneimpl/TestSecureOzoneContainer.java | 209 +
 .../security/TestOzoneBlockTokenIdentifier.java | 255 ---
 30 files changed, 1363 insertions(+), 581 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96bd574d/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/ClientCredentialInterceptor.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/ClientCredentialInterceptor.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/ClientCredentialInterceptor.java
new file mode 100644
index 000..7a15808
--- /dev/null
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/ClientCredentialInterceptor.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm;
+
+import org.apache.ratis.thirdparty.io.grpc.CallOptions;
+import org.apache.ratis.thirdparty.io.grpc.Channel;
+import org.apache.ratis.thirdparty.io.grpc.ClientCall;
+import org.apache.ratis.thirdparty.io.grpc.ClientInterceptor;
+import org.apache.ratis.thirdparty.io.grpc.ForwardingClientCall;
+import org.apache.ratis.thirdparty.io.grpc.Metadata;
+import org.apache.ratis.thirdparty.io.grpc.MethodDescriptor;
+
+import static org.apache.hadoop.ozone.OzoneConsts.OBT_METADATA_KEY;
+import static org.apache.hadoop.ozone.OzoneConsts.USER_METADATA_KEY;
+
+/**
+ * GRPC client interceptor for ozone block token.
+ */
+public class ClientCredentialInterceptor implements ClientInterceptor {
+
+  private final String user;
+  private final String token;
+
+  public ClientCredentialInterceptor(String user, String token) {
+this.user = user;
+this.token = token;
+  }
+
+  @Override
+  public  ClientCall 

[39/50] [abbrv] hadoop git commit: HDDS-103. SCM CA: Add new security protocol for SCM to expose security related functions. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-103. SCM CA: Add new security protocol for SCM to expose security related 
functions. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0be8650
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0be8650
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0be8650

Branch: refs/heads/HDDS-4
Commit: d0be86504421594ed70080ad6bc4faba52c8e437
Parents: 1842ced
Author: Ajay Kumar 
Authored: Sun Oct 28 22:44:41 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 hadoop-hdds/common/pom.xml  |   1 +
 .../hdds/protocol/SCMSecurityProtocol.java  |  44 +++
 ...MSecurityProtocolClientSideTranslatorPB.java |  99 +++
 .../hdds/protocolPB/SCMSecurityProtocolPB.java  |  35 ++
 ...MSecurityProtocolServerSideTranslatorPB.java |  66 ++
 .../hadoop/hdds/protocolPB/package-info.java|  22 
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  16 +++
 .../apache/hadoop/ozone/OzoneSecurityUtil.java  |  43 +++
 .../src/main/proto/SCMSecurityProtocol.proto|  66 ++
 .../apache/hadoop/hdds/scm/HddsServerUtil.java  |  22 
 .../hdds/scm/server/SCMBlockProtocolServer.java |   2 +-
 .../scm/server/SCMSecurityProtocolServer.java   | 121 +++
 .../scm/server/StorageContainerManager.java |  26 +++-
 13 files changed, 558 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0be8650/hadoop-hdds/common/pom.xml
--
diff --git a/hadoop-hdds/common/pom.xml b/hadoop-hdds/common/pom.xml
index 061158c..25774f8 100644
--- a/hadoop-hdds/common/pom.xml
+++ b/hadoop-hdds/common/pom.xml
@@ -239,6 +239,7 @@
                   <include>StorageContainerLocationProtocol.proto</include>
                   <include>hdds.proto</include>
                   <include>ScmBlockLocationProtocol.proto</include>
+                  <include>SCMSecurityProtocol.proto</include>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0be8650/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
new file mode 100644
index 000..f0ae41c
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocol/SCMSecurityProtocol.java
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.hdds.protocol;
+
+import java.io.IOException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.security.KerberosInfo;
+
+/**
+ * The protocol used to perform security related operations with SCM.
+ */
+@KerberosInfo(
+serverPrincipal = ScmConfigKeys.HDDS_SCM_KERBEROS_PRINCIPAL_KEY)
+@InterfaceAudience.Private
+public interface SCMSecurityProtocol {
+
+  /**
+   * Get SCM signed certificate for DataNode.
+   *
+   * @param dataNodeDetails - DataNode Details.
+   * @param certSignReq - Certificate signing request.
+   * @return byte[] - SCM signed certificate.
+   */
+  String getDataNodeCertificate(
+  DatanodeDetailsProto dataNodeDetails,
+  String certSignReq) throws IOException;
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0be8650/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/protocolPB/SCMSecurityProtocolClientSideTranslatorPB.java
 

[18/50] [abbrv] hadoop git commit: HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-6. Enable SCM kerberos auth. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa7ca153
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa7ca153
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa7ca153

Branch: refs/heads/HDDS-4
Commit: aa7ca153d8a9c9c9381f29f9ed739a56416db015
Parents: ae5fbdd
Author: Xiaoyu Yao 
Authored: Wed May 9 15:56:03 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:56:49 2018 -0800

--
 .../authentication/util/KerberosUtil.java   |   2 +-
 .../conf/TestConfigurationFieldsBase.java   |   2 +
 .../java/org/apache/hadoop/hdds/HddsUtils.java  |  13 +-
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |   9 +-
 .../scm/protocol/ScmBlockLocationProtocol.java  |   3 +
 .../StorageContainerLocationProtocol.java   |   4 +
 .../protocolPB/ScmBlockLocationProtocolPB.java  |   6 +
 .../StorageContainerLocationProtocolPB.java |   4 +
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   4 +
 .../common/src/main/resources/ozone-default.xml |  26 ++-
 .../StorageContainerDatanodeProtocol.java   |   4 +
 .../StorageContainerDatanodeProtocolPB.java |   6 +
 .../scm/server/StorageContainerManager.java |  51 -
 .../StorageContainerManagerHttpServer.java  |   5 +-
 .../hadoop/hdds/scm/block/TestBlockManager.java |   3 +-
 .../ozone/client/protocol/ClientProtocol.java   |   3 +
 hadoop-ozone/common/src/main/bin/start-ozone.sh |  16 +-
 hadoop-ozone/common/src/main/bin/stop-ozone.sh  |  13 +-
 hadoop-ozone/integration-test/pom.xml   |   6 +
 .../TestContainerStateManagerIntegration.java   |   5 +-
 .../apache/hadoop/ozone/MiniOzoneCluster.java   |   4 +-
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |  21 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java| 205 +++
 .../ozone/TestStorageContainerManager.java  |   8 +-
 24 files changed, 375 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa7ca153/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
index c011045..4459928 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
@@ -167,7 +167,7 @@ public class KerberosUtil {
   }
 
   /* Return fqdn of the current host */
-  static String getLocalHostName() throws UnknownHostException {
+  public static String getLocalHostName() throws UnknownHostException {
 return InetAddress.getLocalHost().getCanonicalHostName();
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa7ca153/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
index 152159b..bce1cd5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
@@ -436,6 +436,8 @@ public abstract class TestConfigurationFieldsBase {
 // Create XML key/value map
 LOG_XML.debug("Reading XML property files\n");
 xmlKeyValueMap = extractPropertiesFromXml(xmlFilename);
+// Remove hadoop property set in ozone-default.xml
+xmlKeyValueMap.remove("hadoop.custom.tags");
 LOG_XML.debug("\n=\n");
 
 // Create default configuration variable key/value map

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa7ca153/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 18637af..fd6a0e3 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -278,18 +278,7 @@ public final class HddsUtils {
   }
 
   public static boolean 
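
Principal patterns like scm/_HOST@REALM are expanded with the host name this change exposes
through the now-public KerberosUtil.getLocalHostName(). A sketch using Hadoop's SecurityUtil
helper (the principal pattern literal is a placeholder for the configured value):

    import org.apache.hadoop.security.SecurityUtil;
    import org.apache.hadoop.security.authentication.util.KerberosUtil;

    public final class PrincipalExpandSketch {
      public static void main(String[] args) throws Exception {
        // Made public by this change.
        String host = KerberosUtil.getLocalHostName();
        // Replaces _HOST in the configured pattern with the actual FQDN.
        String principal =
            SecurityUtil.getServerPrincipal("scm/_HOST@EXAMPLE.COM", host);
        System.out.println(principal);
      }
    }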

[23/50] [abbrv] hadoop git commit: HDDS-101. SCM CA: generate CSR for SCM CA clients. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-101. SCM CA: generate CSR for SCM CA clients.
Contributed by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1842ced1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1842ced1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1842ced1

Branch: refs/heads/HDDS-4
Commit: 1842ced1268acdf59accb54aec664043bf07da73
Parents: 7246898
Author: Anu Engineer 
Authored: Fri Oct 26 17:57:21 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../certificates/CertificateSignRequest.java| 245 +
 .../security/x509/keys/HDDSKeyGenerator.java|   2 +-
 .../hdds/security/x509/keys/SecurityUtil.java   |  79 ++
 .../TestCertificateSignRequest.java | 268 +++
 4 files changed, 593 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1842ced1/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/CertificateSignRequest.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/CertificateSignRequest.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/CertificateSignRequest.java
new file mode 100644
index 000..2e1f9df
--- /dev/null
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificates/CertificateSignRequest.java
@@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.hadoop.hdds.security.x509.certificates;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.hdds.security.x509.exceptions.CertificateException;
+import org.apache.hadoop.hdds.security.x509.exceptions.SCMSecurityException;
+import org.apache.hadoop.hdds.security.x509.keys.SecurityUtil;
+import org.apache.logging.log4j.util.Strings;
+import org.bouncycastle.asn1.DEROctetString;
+import org.bouncycastle.asn1.pkcs.PKCSObjectIdentifiers;
+import org.bouncycastle.asn1.x500.X500Name;
+import org.bouncycastle.asn1.x509.BasicConstraints;
+import org.bouncycastle.asn1.x509.Extension;
+import org.bouncycastle.asn1.x509.Extensions;
+import org.bouncycastle.asn1.x509.GeneralName;
+import org.bouncycastle.asn1.x509.GeneralNames;
+import org.bouncycastle.asn1.x509.KeyUsage;
+import org.bouncycastle.operator.ContentSigner;
+import org.bouncycastle.operator.OperatorCreationException;
+import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
+import org.bouncycastle.pkcs.PKCS10CertificationRequest;
+import org.bouncycastle.pkcs.PKCS10CertificationRequestBuilder;
+import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;
+
+import java.io.IOException;
+import java.security.KeyPair;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * A certificate sign request object that wraps operations to build a
+ * PKCS10CertificationRequest to CA.
+ */
+public final class CertificateSignRequest {
+  private final KeyPair keyPair;
+  private final SecurityConfig config;
+  private final Extensions extensions;
+  private String subject;
+  private String clusterID;
+  private String scmID;
+
+  /**
+   * Private Ctor for CSR.
+   *
+   * @param subject - Subject
+   * @param scmID - SCM ID
+   * @param clusterID - Cluster ID
+   * @param keyPair - KeyPair
+   * @param config - SCM Config
+   * @param extensions - CSR extensions
+   */
+  private CertificateSignRequest(String subject, String scmID, String 
clusterID,
+  KeyPair keyPair, SecurityConfig config, Extensions extensions) {
+this.subject = subject;
+this.clusterID = clusterID;
+this.scmID = scmID;
+this.keyPair = keyPair;
+this.config = config;
+this.extensions = extensions;
+  }
+
+  private PKCS10CertificationRequest generateCSR() throws
+  
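
The generateCSR method above is cut off. Underneath the builder it follows the standard
BouncyCastle PKCS#10 flow, roughly as below (the subject DN is illustrative only; extensions
are omitted):

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Security;
    import org.bouncycastle.asn1.x500.X500Name;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;
    import org.bouncycastle.operator.ContentSigner;
    import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
    import org.bouncycastle.pkcs.PKCS10CertificationRequest;
    import org.bouncycastle.pkcs.jcajce.JcaPKCS10CertificationRequestBuilder;

    public final class CsrSketch {
      public static void main(String[] args) throws Exception {
        Security.addProvider(new BouncyCastleProvider());
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA", "BC");
        gen.initialize(2048);
        KeyPair keyPair = gen.generateKeyPair();
        // The real builder composes CN plus OU/O from subject/scmID/clusterID;
        // this DN string is illustrative.
        X500Name subject = new X500Name("CN=datanode-0,OU=scm-1,O=cluster-1");
        ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA")
            .setProvider("BC").build(keyPair.getPrivate());
        PKCS10CertificationRequest csr =
            new JcaPKCS10CertificationRequestBuilder(subject, keyPair.getPublic())
                .build(signer);
        System.out.println(csr.getSubject());
      }
    }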

[34/50] [abbrv] hadoop git commit: HDDS-591. Adding ASF license header to kadm5.acl. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-591. Adding ASF license header to kadm5.acl. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b4c0280d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b4c0280d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b4c0280d

Branch: refs/heads/HDDS-4
Commit: b4c0280d1126eca2190dc40988695f7746429899
Parents: 602aa80
Author: Xiaoyu Yao 
Authored: Wed Oct 10 10:01:01 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../docker-image/docker-krb5/Dockerfile-krb5 |  1 +
 .../docker-image/docker-krb5/kadm5.acl   | 19 +++
 2 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4c0280d/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
--
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
index b5b931d..14532d4 100644
--- 
a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
+++ 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5
@@ -28,6 +28,7 @@ RUN kdb5_util create -s -P Welcome1
 RUN kadmin.local -q "addprinc -randkey admin/ad...@example.com"
 RUN kadmin.local -q "ktadd -k /tmp/admin.keytab admin/ad...@example.com"
 ADD launcher.sh .
+RUN chmod +x /opt/launcher.sh
 RUN mkdir -p /data
 ENTRYPOINT ["/usr/local/bin/dumb-init", "--", "/opt/launcher.sh"]
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4c0280d/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
--
diff --git 
a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
index 8fe9f69..f0cd660 100644
--- 
a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
+++ 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/kadm5.acl
@@ -1 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
 */ad...@example.com x


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[30/50] [abbrv] hadoop git commit: HDDS-547. Fix secure docker and configs. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-547. Fix secure docker and configs. Contributed by Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eddbe997
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eddbe997
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eddbe997

Branch: refs/heads/HDDS-4
Commit: eddbe997d86c54c21763b45de8d7f4a89a97dfa2
Parents: 87acc15
Author: Ajay Kumar 
Authored: Mon Oct 1 11:03:27 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 hadoop-dist/src/main/compose/ozonesecure/.env   |  18 +++
 .../compose/ozonesecure/docker-compose.yaml |  57 +++
 .../src/main/compose/ozonesecure/docker-config  | 103 +
 .../ozonesecure/docker-image/runner/Dockerfile  |  39 +
 .../ozonesecure/docker-image/runner/build.sh|  26 
 .../docker-image/runner/scripts/envtoconf.py| 115 ++
 .../docker-image/runner/scripts/krb5.conf   |  38 +
 .../docker-image/runner/scripts/starter.sh  | 100 +
 .../runner/scripts/transformation.py| 150 +++
 hadoop-hdds/common/pom.xml  |   6 +
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |   8 +-
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   3 -
 .../common/src/main/resources/ozone-default.xml |   6 +-
 .../hadoop/ozone/HddsDatanodeService.java   |  30 
 .../StorageContainerManagerHttpServer.java  |   4 +-
 .../src/test/compose/compose-secure/.env|  17 ---
 .../compose/compose-secure/docker-compose.yaml  |  66 
 .../test/compose/compose-secure/docker-config   |  99 
 .../apache/hadoop/ozone/om/OMConfigKeys.java|   4 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java|  11 +-
 .../hadoop/ozone/om/OzoneManagerHttpServer.java |   4 +-
 21 files changed, 701 insertions(+), 203 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eddbe997/hadoop-dist/src/main/compose/ozonesecure/.env
--
diff --git a/hadoop-dist/src/main/compose/ozonesecure/.env 
b/hadoop-dist/src/main/compose/ozonesecure/.env
new file mode 100644
index 000..a494004
--- /dev/null
+++ b/hadoop-dist/src/main/compose/ozonesecure/.env
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDDS_VERSION=${hdds.version}
+SRC_VOLUME=../../

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eddbe997/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
--
diff --git a/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml 
b/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
new file mode 100644
index 000..42ab05e
--- /dev/null
+++ b/hadoop-dist/src/main/compose/ozonesecure/docker-compose.yaml
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+   kdc:
+  image: ahadoop/kdc:v1
+  hostname: kdc
+  volumes:
+  - $SRC_VOLUME:/opt/hadoop
+   datanode:
+  image: ahadoop/runner:latest
+  volumes:
+- $SRC_VOLUME:/opt/hadoop
+  hostname: datanode
+  ports:
+- 9864
+  command: ["/opt/hadoop/bin/ozone","datanode"]
+  env_file:
+- ./docker-config
+   ozoneManager:
+   

[50/50] [abbrv] hadoop git commit: HDDS-804. Block token: Add secret token manager. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-804. Block token: Add secret token manager. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/187bbbe6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/187bbbe6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/187bbbe6

Branch: refs/heads/HDDS-4
Commit: 187bbbe68cc87729f327e5ed614474d1269b8d85
Parents: 87f51d2
Author: Ajay Kumar 
Authored: Thu Nov 29 08:00:41 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:58:55 2018 -0800

--
 .../hdds/security/x509/SecurityConfig.java  |   9 +
 .../security/OzoneBlockTokenSecretManager.java  | 191 +++
 .../OzoneDelegationTokenSecretManager.java  | 455 +
 .../ozone/security/OzoneSecretManager.java  | 498 ---
 .../TestOzoneBlockTokenSecretManager.java   | 146 ++
 .../TestOzoneDelegationTokenSecretManager.java  | 218 
 .../ozone/security/TestOzoneSecretManager.java  | 216 
 .../apache/hadoop/ozone/om/OzoneManager.java|  23 +-
 .../security/TestOzoneManagerBlockToken.java| 251 ++
 9 files changed, 1371 insertions(+), 636 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/187bbbe6/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
index ee20a21..b38ee7c 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hdds.security.x509;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.bouncycastle.jce.provider.BouncyCastleProvider;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -77,6 +78,7 @@ public class SecurityConfig {
   private final Duration certDuration;
   private final String x509SignatureAlgo;
   private final Boolean grpcBlockTokenEnabled;
+  private final int getMaxKeyLength;
   private final String certificateDir;
   private final String certificateFileName;
 
@@ -88,6 +90,9 @@ public class SecurityConfig {
   public SecurityConfig(Configuration configuration) {
 Preconditions.checkNotNull(configuration, "Configuration cannot be null");
 this.configuration = configuration;
+this.getMaxKeyLength = configuration.getInt(
+OzoneConfigKeys.OZONE_MAX_KEY_LEN,
+OzoneConfigKeys.OZONE_MAX_KEY_LEN_DEFAULT);
 this.size = this.configuration.getInt(HDDS_KEY_LEN, HDDS_DEFAULT_KEY_LEN);
 this.keyAlgo = this.configuration.get(HDDS_KEY_ALGORITHM,
 HDDS_DEFAULT_KEY_ALGORITHM);
@@ -289,4 +294,8 @@ public class SecurityConfig {
   throw new SecurityException("Unknown security provider:" + provider);
 }
   }
+
+  public int getMaxKeyLength() {
+return this.getMaxKeyLength;
+  }
 }
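
A minimal usage sketch for the new getter (not part of the patch; assumes an
unmodified OzoneConfiguration, so the value falls back to
OzoneConfigKeys.OZONE_MAX_KEY_LEN_DEFAULT as wired in the constructor above):

    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.security.x509.SecurityConfig;

    public class MaxKeyLenDemo {
      public static void main(String[] args) {
        // getMaxKeyLength() surfaces the configured limit for block-token keys.
        SecurityConfig secConf = new SecurityConfig(new OzoneConfiguration());
        System.out.println(secConf.getMaxKeyLength());
      }
    }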

http://git-wip-us.apache.org/repos/asf/hadoop/blob/187bbbe6/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
--
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
new file mode 100644
index 000..3b833cb
--- /dev/null
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
@@ -0,0 +1,191 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import 

[43/50] [abbrv] hadoop git commit: HDDS-836. Add TokenIdentifier Ozone for delegation token and block token. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-836. Add TokenIdentifier Ozone for delegation token and block token. 
Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c0baf0e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c0baf0e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c0baf0e

Branch: refs/heads/HDDS-4
Commit: 9c0baf0ea30a0dc38778c6030eca1468ed7e40ae
Parents: 4770e9d
Author: Xiaoyu Yao 
Authored: Wed Nov 14 14:26:33 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../hdds/security/x509/keys/SecurityUtil.java   |  59 
 hadoop-hdds/common/src/main/proto/hdds.proto|  24 ++
 .../security/OzoneBlockTokenIdentifier.java | 178 +++
 .../ozone/security/OzoneBlockTokenSelector.java |  55 
 .../security/OzoneDelegationTokenSelector.java  |  52 
 .../hadoop/ozone/security/OzoneSecretKey.java   | 195 
 .../ozone/security/OzoneTokenIdentifier.java| 217 ++
 .../hadoop/ozone/security/package-info.java |  21 ++
 .../src/main/proto/OzoneManagerProtocol.proto   |  20 ++
 .../security/TestOzoneBlockTokenIdentifier.java | 255 
 .../security/TestOzoneTokenIdentifier.java  | 300 +++
 .../hadoop/ozone/security/package-info.java |  21 ++
 12 files changed, 1397 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c0baf0e/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
index 2ca8825..6147d3a 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/keys/SecurityUtil.java
@@ -18,6 +18,15 @@
  */
 package org.apache.hadoop.hdds.security.x509.keys;
 
+import java.security.KeyFactory;
+import java.security.NoSuchAlgorithmException;
+import java.security.NoSuchProviderException;
+import java.security.PrivateKey;
+import java.security.PublicKey;
+import java.security.spec.InvalidKeySpecException;
+import java.security.spec.PKCS8EncodedKeySpec;
+import java.security.spec.X509EncodedKeySpec;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
 import org.apache.hadoop.hdds.security.x509.exceptions.CertificateException;
 import org.bouncycastle.asn1.ASN1ObjectIdentifier;
 import org.bouncycastle.asn1.ASN1Sequence;
@@ -76,4 +85,54 @@ public final class SecurityUtil {
 }
 throw new CertificateException("No PKCS#9 extension found in CSR");
   }
+
+  /*
+   * Returns private key created from encoded key.
+   * @return private key if successful else returns null.
+   */
+  public static PrivateKey getPrivateKey(byte[] encodedKey,
+  SecurityConfig secureConfig) {
+PrivateKey pvtKey = null;
+if (encodedKey == null || encodedKey.length == 0) {
+  return null;
+}
+
+try {
+  KeyFactory kf = null;
+
+  kf = KeyFactory.getInstance(secureConfig.getKeyAlgo(),
+  secureConfig.getProvider());
+  pvtKey = kf.generatePrivate(new PKCS8EncodedKeySpec(encodedKey));
+
+} catch (NoSuchAlgorithmException | InvalidKeySpecException |
+NoSuchProviderException e) {
+  return null;
+}
+return pvtKey;
+  }
+
+  /*
+   * Returns public key created from encoded key.
+   * @return public key if successful else returns null.
+   */
+  public static PublicKey getPublicKey(byte[] encodedKey,
+  SecurityConfig secureConfig) {
+PublicKey key = null;
+if (encodedKey == null || encodedKey.length == 0) {
+  return null;
+}
+
+try {
+  KeyFactory kf = null;
+  kf = KeyFactory.getInstance(secureConfig.getKeyAlgo(),
+  secureConfig.getProvider());
+  key = kf.generatePublic(new X509EncodedKeySpec(encodedKey));
+
+} catch (NoSuchAlgorithmException | InvalidKeySpecException |
+NoSuchProviderException e) {
+  return null;
+}
+return key;
+  }
+
 }
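
A round-trip sketch for the two helpers above (illustrative only; it assumes
SecurityConfig exposes getKeyAlgo() and getSize() getters for the configured
algorithm and key size):

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.security.x509.SecurityConfig;
    import org.apache.hadoop.hdds.security.x509.keys.SecurityUtil;

    public class KeyCodecRoundTrip {
      public static void main(String[] args) throws Exception {
        SecurityConfig secConf = new SecurityConfig(new OzoneConfiguration());
        KeyPairGenerator gen = KeyPairGenerator.getInstance(secConf.getKeyAlgo());
        gen.initialize(secConf.getSize());
        KeyPair pair = gen.generateKeyPair();
        // Private keys encode as PKCS#8 and public keys as X.509, which is
        // exactly what the helpers expect; null signals a decode failure.
        System.out.println(
            SecurityUtil.getPrivateKey(pair.getPrivate().getEncoded(), secConf) != null);
        System.out.println(
            SecurityUtil.getPublicKey(pair.getPublic().getEncoded(), secConf) != null);
      }
    }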

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c0baf0e/hadoop-hdds/common/src/main/proto/hdds.proto
--
diff --git a/hadoop-hdds/common/src/main/proto/hdds.proto 
b/hadoop-hdds/common/src/main/proto/hdds.proto
index cf3d6d4..8598fbf 100644
--- a/hadoop-hdds/common/src/main/proto/hdds.proto
+++ b/hadoop-hdds/common/src/main/proto/hdds.proto
@@ -189,6 +189,30 @@ message ContainerBlockID {
 required int64 localID = 2;
 }
 
+
+/**
+ * Information for the Hdds block token.
+ * When adding further fields, make sure 

[12/50] [abbrv] hadoop git commit: HDDS-850. ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine. Contributed by Shashikant Banerjee.

2018-11-29 Thread xyao
HDDS-850. ReadStateMachineData hits OverlappingFileLockException in 
ContainerStateMachine. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e102f9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e102f9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e102f9a

Branch: refs/heads/HDDS-4
Commit: 5e102f9aa54d3057ef5f0755d45428f22a24990b
Parents: 7eb0d3a
Author: Shashikant Banerjee 
Authored: Thu Nov 29 22:20:08 2018 +0530
Committer: Shashikant Banerjee 
Committed: Thu Nov 29 22:20:08 2018 +0530

--
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |   8 ++
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   9 ++
 .../main/proto/DatanodeContainerProtocol.proto  |   1 +
 .../common/src/main/resources/ozone-default.xml |   8 ++
 .../server/ratis/ContainerStateMachine.java | 134 +++
 .../server/ratis/XceiverServerRatis.java|  14 +-
 .../container/keyvalue/KeyValueHandler.java |   7 +-
 .../keyvalue/impl/ChunkManagerImpl.java |  11 +-
 .../keyvalue/interfaces/ChunkManager.java   |   5 +-
 .../keyvalue/TestChunkManagerImpl.java  |   6 +-
 .../common/impl/TestContainerPersistence.java   |  11 +-
 11 files changed, 143 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 6733b8e..062b101 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -93,6 +93,14 @@ public final class ScmConfigKeys {
   public static final String DFS_CONTAINER_RATIS_LOG_QUEUE_SIZE =
   "dfs.container.ratis.log.queue.size";
   public static final int DFS_CONTAINER_RATIS_LOG_QUEUE_SIZE_DEFAULT = 128;
+
+  // expiry interval stateMachineData cache entry inside containerStateMachine
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL =
+  "dfs.container.ratis.statemachine.cache.expiry.interval";
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT =
+  "10s";
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =
   "dfs.ratis.client.request.timeout.duration";
   public static final TimeDuration

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 879f773..df233f7 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -249,6 +249,15 @@ public final class OzoneConfigKeys {
   DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
   ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT;
 
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL =
+  ScmConfigKeys.
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL;
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT =
+  ScmConfigKeys.
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT;
+
   public static final String DFS_CONTAINER_RATIS_DATANODE_STORAGE_DIR =
   "dfs.container.ratis.datanode.storage.dir";
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
--
diff --git a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto 
b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
index 3695b6b..5237af8 100644
--- a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
+++ b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
@@ -392,6 +392,7 @@ message  WriteChunkResponseProto {
 message  ReadChunkRequestProto  {
   required DatanodeBlockID blockID = 1;
   required ChunkInfo chunkData = 2;
+  optional bool readFromTmpFile = 3 [default = false];
 }
 
 message  
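
The new keys drive a time-bounded cache of state-machine (chunk write) data
inside ContainerStateMachine, so reads no longer contend on the file lock. A
stand-alone sketch of that mechanism (the cache key and value types here are
illustrative, not the exact fields of the patch):

    import java.util.concurrent.TimeUnit;
    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import org.apache.hadoop.conf.Configuration;

    public class StateMachineDataCacheSketch {
      public static Cache<Long, byte[]> build(Configuration conf) {
        // Matches the "10s" default declared above.
        long expirySecs = conf.getTimeDuration(
            "dfs.container.ratis.statemachine.cache.expiry.interval",
            10, TimeUnit.SECONDS);
        // Entries keyed by Ratis log index age out instead of pinning chunk
        // data indefinitely.
        return CacheBuilder.newBuilder()
            .expireAfterAccess(expirySecs, TimeUnit.SECONDS)
            .build();
      }
    }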

[11/50] [abbrv] hadoop git commit: HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket(). Contributed by Gabor Bota.

2018-11-29 Thread xyao
HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket(). Contributed 
by Gabor Bota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7eb0d3a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7eb0d3a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7eb0d3a3

Branch: refs/heads/HDDS-4
Commit: 7eb0d3a32435da110dc9e6004dba8c5c9b082c35
Parents: 184cced
Author: Sean Mackrory 
Authored: Wed Nov 28 16:57:12 2018 -0700
Committer: Sean Mackrory 
Committed: Thu Nov 29 09:36:39 2018 -0700

--
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  | 38 
 1 file changed, 24 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eb0d3a3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
index 1316121..aea57a6 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
@@ -218,6 +218,27 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 format.addOptionWithValue(SECONDS_FLAG);
   }
 
+  protected void checkMetadataStoreUri(List<String> paths) throws IOException {
+// be sure that path is provided in params, so there's no IOoBE
+String s3Path = "";
+if(!paths.isEmpty()) {
+  s3Path = paths.get(0);
+}
+
+// Check if DynamoDB url is set from arguments.
+String metadataStoreUri = getCommandFormat().getOptValue(META_FLAG);
+if(metadataStoreUri == null || metadataStoreUri.isEmpty()) {
+  // If not set, check if filesystem is guarded by creating an
+  // S3AFileSystem and check if hasMetadataStore is true
+  try (S3AFileSystem s3AFileSystem = (S3AFileSystem)
+  S3AFileSystem.newInstance(toUri(s3Path), getConf())){
+Preconditions.checkState(s3AFileSystem.hasMetadataStore(),
+"The S3 bucket is unguarded. " + getName()
++ " can not be used on an unguarded bucket.");
+  }
+}
+  }
+
   /**
* Parse metadata store from command line option or HDFS configuration.
*
@@ -500,20 +521,7 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 public int run(String[] args, PrintStream out) throws Exception {
  List<String> paths = parseArgs(args);
  Map<String, String> options = new HashMap<>();
-  String s3Path = paths.get(0);
-
-  // Check if DynamoDB url is set from arguments.
-  String metadataStoreUri = getCommandFormat().getOptValue(META_FLAG);
-  if(metadataStoreUri == null || metadataStoreUri.isEmpty()) {
-// If not set, check if filesystem is guarded by creating an
-// S3AFileSystem and check if hasMetadataStore is true
-try (S3AFileSystem s3AFileSystem = (S3AFileSystem)
-S3AFileSystem.newInstance(toUri(s3Path), getConf())){
-  Preconditions.checkState(s3AFileSystem.hasMetadataStore(),
-  "The S3 bucket is unguarded. " + getName()
-  + " can not be used on an unguarded bucket.");
-}
-  }
+  checkMetadataStoreUri(paths);
 
   String readCap = getCommandFormat().getOptValue(READ_FLAG);
   if (StringUtils.isNotEmpty(readCap)) {
@@ -590,6 +598,8 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 throw e;
   }
 
+  checkMetadataStoreUri(paths);
+
   try {
 initMetadataStore(false);
   } catch (FileNotFoundException e) {





[15/50] [abbrv] hadoop git commit: HDDS-877. Ensure correct surefire version for Ozone test. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-877. Ensure correct surefire version for Ozone test. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ae5fbdd9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ae5fbdd9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ae5fbdd9

Branch: refs/heads/HDDS-4
Commit: ae5fbdd9ed6ef09b588637f2eadd7a04e8382289
Parents: f534736
Author: Xiaoyu Yao 
Authored: Thu Nov 29 11:37:36 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:37:36 2018 -0800

--
 hadoop-hdds/pom.xml  | 1 +
 hadoop-ozone/pom.xml | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae5fbdd9/hadoop-hdds/pom.xml
--
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 869ecbf..5537b3a 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -53,6 +53,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 0.5.1
 1.5.0.Final
 
+3.0.0-M1
 
   
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae5fbdd9/hadoop-ozone/pom.xml
--
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index 39c65d5..4c13bd6 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -37,8 +37,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 1.60
 Badlands
 ${ozone.version}
-
-
+3.0.0-M1
   
   
 common





[49/50] [abbrv] hadoop git commit: HDDS-696. Bootstrap genesis SCM(CA) with self-signed certificate. Contributed by Anu Engineer.

2018-11-29 Thread xyao
HDDS-696. Bootstrap genesis SCM(CA) with self-signed certificate.
Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87f51d23
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87f51d23
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87f51d23

Branch: refs/heads/HDDS-4
Commit: 87f51d23d9899e46ac31058b726b272f1a7880ba
Parents: 9b9b8e4
Author: Anu Engineer 
Authored: Tue Nov 27 15:02:07 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:58:54 2018 -0800

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |  57 ++-
 .../hdds/security/x509/SecurityConfig.java  |  84 -
 .../authority/CertificateServer.java|  11 +-
 .../certificate/authority/DefaultCAServer.java  | 373 +++
 .../certificate/client/CertificateClient.java   |  18 +-
 .../certificate/utils/CertificateCodec.java | 280 ++
 .../x509/certificate/utils/package-info.java|  22 ++
 .../certificates/CertificateSignRequest.java| 245 
 .../certificates/SelfSignedCertificate.java | 212 ---
 .../x509/certificates/package-info.java |  22 --
 .../utils/CertificateSignRequest.java   | 245 
 .../utils/SelfSignedCertificate.java| 238 
 .../x509/certificates/utils/package-info.java   |  22 ++
 .../security/x509/keys/HDDSKeyGenerator.java|  32 +-
 .../security/x509/keys/HDDSKeyPEMWriter.java| 255 -
 .../hdds/security/x509/keys/KeyCodec.java   | 337 +
 .../apache/hadoop/ozone/common/StorageInfo.java |   2 +-
 .../authority/TestDefaultCAServer.java  | 118 ++
 .../certificate/authority/package-info.java |  22 ++
 .../certificate/utils/TestCertificateCodec.java | 218 +++
 .../TestCertificateSignRequest.java |  19 +-
 .../x509/certificates/TestRootCertificate.java  |  46 +--
 .../x509/certificates/package-info.java |   4 +-
 .../x509/keys/TestHDDSKeyPEMWriter.java | 216 ---
 .../hdds/security/x509/keys/TestKeyCodec.java   | 216 +++
 .../hadoop/utils/db/TestDBStoreBuilder.java |   5 +-
 .../hadoop/ozone/TestSecureOzoneCluster.java|  15 +-
 27 files changed, 2267 insertions(+), 1067 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87f51d23/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 553f4aa..a02152d 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -1,19 +1,18 @@
 /**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
  * 
  * http://www.apache.org/licenses/LICENSE-2.0
  * 
  * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
  */
 package org.apache.hadoop.hdds;
 
@@ -25,61 +24,45 @@ import org.apache.hadoop.utils.db.DBProfile;
  */
 public final class HddsConfigKeys {
 
-  /**
-   * Do not instantiate.
-   */
-  private HddsConfigKeys() {
-  }
-
   public static final String HDDS_HEARTBEAT_INTERVAL =
   "hdds.heartbeat.interval";
   public static final String HDDS_HEARTBEAT_INTERVAL_DEFAULT =
   "30s";
-
   public static final String 

[31/50] [abbrv] hadoop git commit: HDDS-10. Add kdc docker image for secure ozone cluster. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-10. Add kdc docker image for secure ozone cluster. Contributed by Ajay 
Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/812c07ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/812c07ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/812c07ee

Branch: refs/heads/HDDS-4
Commit: 812c07ee4ffe110953d0d6b4ff3c1cb0287d569b
Parents: 585c344
Author: Xiaoyu Yao 
Authored: Thu Oct 4 13:20:09 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 .../dist/src/main/compose/ozonesecure/README.md | 22 +
 .../compose/ozonesecure/docker-compose.yaml | 94 
 .../docker-image/docker-krb5/Dockerfile-krb5| 33 +++
 .../docker-image/docker-krb5/README.md  | 34 +++
 .../docker-image/docker-krb5/kadm5.acl  |  1 +
 .../docker-image/docker-krb5/krb5.conf  | 40 +
 .../docker-image/docker-krb5/launcher.sh| 25 ++
 7 files changed, 210 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/812c07ee/hadoop-ozone/dist/src/main/compose/ozonesecure/README.md
--
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/README.md 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/README.md
new file mode 100644
index 000..0ce9a0a
--- /dev/null
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/README.md
@@ -0,0 +1,22 @@
+
+# Experimental UNSECURE krb5 Kerberos container.
+
+Only for development. Not for production.
+
+ Dockerfile for KDC:
+* ./docker-image/docker-krb5/Dockerfile-krb5
+
+ Dockerfile for SCM,OM and DataNode:
+* ./docker-image/runner/Dockerfile
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/812c07ee/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
--
diff --git a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml 
b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
index 42ab05e..fab5ba9 100644
--- a/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
+++ b/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-compose.yaml
@@ -16,42 +16,58 @@
 
 version: "3"
 services:
-   kdc:
-  image: ahadoop/kdc:v1
-  hostname: kdc
-  volumes:
-  - $SRC_VOLUME:/opt/hadoop
-   datanode:
-  image: ahadoop/runner:latest
-  volumes:
-- $SRC_VOLUME:/opt/hadoop
-  hostname: datanode
-  ports:
-- 9864
-  command: ["/opt/hadoop/bin/ozone","datanode"]
-  env_file:
-- ./docker-config
-   ozoneManager:
-  image: ahadoop/runner:latest
-  hostname: om
-  volumes:
- - $SRC_VOLUME:/opt/hadoop
-  ports:
- - 9874:9874
-  environment:
- ENSURE_OM_INITIALIZED: /data/metadata/ozoneManager/current/VERSION
-  env_file:
-  - ./docker-config
-  command: ["/opt/hadoop/bin/ozone","om"]
-   scm:
-  image: ahadoop/runner:latest
-  hostname: scm
-  volumes:
- - $SRC_VOLUME:/opt/hadoop
-  ports:
- - 9876:9876
-  env_file:
-  - ./docker-config
-  environment:
-  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
-  command: ["/opt/hadoop/bin/ozone","scm"]
+  kdc:
+build:
+  context: docker-image/docker-krb5
+  dockerfile: Dockerfile-krb5
+  args:
+buildno: 1
+hostname: kdc
+volumes:
+- $SRC_VOLUME:/opt/hadoop
+  datanode:
+build:
+  context: docker-image/runner
+  dockerfile: Dockerfile
+  args:
+buildno: 1
+volumes:
+- $SRC_VOLUME:/opt/hadoop
+hostname: datanode
+ports:
+- 9864
+command: ["/opt/hadoop/bin/ozone","datanode"]
+env_file:
+- docker-config
+  om:
+build:
+  context: docker-image/runner
+  dockerfile: Dockerfile
+  args:
+buildno: 1
+hostname: om
+volumes:
+- $SRC_VOLUME:/opt/hadoop
+ports:
+- 9874:9874
+environment:
+  ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
+env_file:
+- docker-config
+command: ["/opt/hadoop/bin/ozone","om"]
+  scm:
+build:
+  context: docker-image/runner
+  dockerfile: Dockerfile
+  args:
+buildno: 1
+hostname: scm
+volumes:
+- $SRC_VOLUME:/opt/hadoop
+ports:
+- 9876:9876
+env_file:
+- docker-config
+environment:
+  ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
+command: ["/opt/hadoop/bin/ozone","scm"]

http://git-wip-us.apache.org/repos/asf/hadoop/blob/812c07ee/hadoop-ozone/dist/src/main/compose/ozonesecure/docker-image/docker-krb5/Dockerfile-krb5

[26/50] [abbrv] hadoop git commit: HDDS-546. Resolve bouncy castle dependency for hadoop-hdds-common. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-546. Resolve bouncy castle dependency for hadoop-hdds-common. Contributed 
by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1c6ff7e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1c6ff7e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1c6ff7e

Branch: refs/heads/HDDS-4
Commit: d1c6ff7ea3f4f02df0d766860187a391975475d1
Parents: b802cb5
Author: Ajay Kumar 
Authored: Tue Sep 25 14:19:14 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 hadoop-hdds/common/pom.xml | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6ff7e/hadoop-hdds/common/pom.xml
--
diff --git a/hadoop-hdds/common/pom.xml b/hadoop-hdds/common/pom.xml
index 175500a..15ae307 100644
--- a/hadoop-hdds/common/pom.xml
+++ b/hadoop-hdds/common/pom.xml
@@ -59,6 +59,10 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
   io.dropwizard.metrics
   metrics-core
 
+
+  org.bouncycastle
+  bcprov-jdk15on
+
   
 
 
@@ -112,7 +116,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 
   org.bouncycastle
   bcprov-jdk15on
-  1.49
+  1.54
 
   
 





[33/50] [abbrv] hadoop git commit: HDDS-704. Fix the Dependency convergence issue on HDDS-4. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
HDDS-704. Fix the Dependency convergence issue on HDDS-4. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/950a4733
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/950a4733
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/950a4733

Branch: refs/heads/HDDS-4
Commit: 950a4733ac86a0284ec46a1060ea3862f15181d3
Parents: b4c0280
Author: Xiaoyu Yao 
Authored: Fri Oct 19 21:09:51 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:46 2018 -0800

--
 hadoop-hdds/common/pom.xml | 7 ---
 1 file changed, 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/950a4733/hadoop-hdds/common/pom.xml
--
diff --git a/hadoop-hdds/common/pom.xml b/hadoop-hdds/common/pom.xml
index 30dcc58..061158c 100644
--- a/hadoop-hdds/common/pom.xml
+++ b/hadoop-hdds/common/pom.xml
@@ -112,13 +112,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
   commons-pool2
   2.6.0
 
-
-
-  org.bouncycastle
-  bcprov-jdk15on
-  1.54
-
-
 
   org.bouncycastle
   bcpkix-jdk15on





[09/50] [abbrv] hadoop git commit: YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. Contributed by Bilwa S T.

2018-11-29 Thread xyao
YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. 
Contributed by Bilwa S T.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07142f54
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07142f54
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07142f54

Branch: refs/heads/HDDS-4
Commit: 07142f54a8c7f70857e99c041f3a2a5189c809b5
Parents: a68d766
Author: bibinchundatt 
Authored: Thu Nov 29 22:02:59 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 22:02:59 2018 +0530

--
 .../yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07142f54/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
index 163f707..ede0d15 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
@@ -54,6 +54,8 @@ public class SchedulerInfo {
   this.schedulerName = "Fair Scheduler";
 } else if (rs instanceof FifoScheduler) {
   this.schedulerName = "Fifo Scheduler";
+} else {
+  this.schedulerName = rs.getClass().getSimpleName();
 }
 this.minAllocResource = new 
ResourceInfo(rs.getMinimumResourceCapability());
 this.maxAllocResource = new 
ResourceInfo(rs.getMaximumResourceCapability());
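
A stand-alone sketch of the new fallback behavior (MyCustomScheduler is
hypothetical and the built-in instanceof checks are elided):

    public class SchedulerNameDemo {
      static class MyCustomScheduler { }

      static String displayName(Object rs) {
        // With the patch, a scheduler that matches none of the known types
        // now reports its simple class name instead of leaving the field unset.
        return rs.getClass().getSimpleName();
      }

      public static void main(String[] args) {
        System.out.println(displayName(new MyCustomScheduler())); // MyCustomScheduler
      }
    }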





[42/50] [abbrv] hadoop git commit: HDDS-592. Fix ozone-secure.robot test. Contributed by Ajay Kumar.

2018-11-29 Thread xyao
HDDS-592. Fix ozone-secure.robot test. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8bbc95ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8bbc95ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8bbc95ee

Branch: refs/heads/HDDS-4
Commit: 8bbc95ee536caae21f28f9513fe624955aa122a0
Parents: 9128c66
Author: Xiaoyu Yao 
Authored: Tue Nov 6 16:53:04 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:57:47 2018 -0800

--
 .../acceptance/ozone-secure.robot   |  95 
 .../dist/src/main/compose/ozonesecure/.env  |   1 -
 .../compose/ozonesecure/docker-compose.yaml |  22 ++--
 .../ozonesecure/docker-image/runner/Dockerfile  |   4 +-
 .../main/smoketest/security/ozone-secure.robot  | 111 +++
 hadoop-ozone/dist/src/main/smoketest/test.sh|   2 +
 6 files changed, 126 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8bbc95ee/hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-secure.robot
--
diff --git 
a/hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-secure.robot
 
b/hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-secure.robot
deleted file mode 100644
index 7fc1088..000
--- 
a/hadoop-ozone/acceptance-test/src/test/robotframework/acceptance/ozone-secure.robot
+++ /dev/null
@@ -1,95 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-*** Settings ***
-Documentation   Smoke test to start cluster with docker-compose 
environments.
-Library OperatingSystem
-Suite Setup Startup Ozone Cluster
-Suite Teardown  Teardown Ozone Cluster
-
-*** Variables ***
-${COMMON_REST_HEADER}   -H "x-ozone-user: bilbo" -H "x-ozone-version: v1" -H  
"Date: Mon, 26 Jun 2017 04:23:30 GMT" -H "Authorization:OZONE root"
-${version}
-
-*** Test Cases ***
-
-Daemons are running
-Is daemon running   om
-Is daemon running   scm
-Is daemon running   datanode
-Is daemon running   ozone.kdc
-
-Check if datanode is connected to the scm
-Wait Until Keyword Succeeds   3min   5sec   Have healthy datanodes   1
-
-Test rest interface
-${result} = Execute on  0   datanode   curl -i -X POST 
${COMMON_RESTHEADER} "http://localhost:9880/volume1"
-Should contain  ${result}   201 Created
-${result} = Execute on  0   datanode   curl -i -X POST 
${COMMON_RESTHEADER} "http://localhost:9880/volume1/bucket1"
-Should contain  ${result}   201 Created
-${result} = Execute on  0   datanode   curl -i -X DELETE 
${COMMON_RESTHEADER} "http://localhost:9880/volume1/bucket1"
-Should contain  ${result}   200 OK
-${result} = Execute on  0   datanode   curl -i -X DELETE 
${COMMON_RESTHEADER} "http://localhost:9880/volume1"
-Should contain  ${result}   200 OK
-
-Test ozone cli
-${result} = Execute on  1   datanode   ozone oz -createVolume 
o3://om/hive -user bilbo -quota 100TB -root
-Should contain  ${result}   Client cannot 
authenticate via
-# Authenticate testuser
-Execute on  0   datanode   kinit -k 
testuser/datan...@example.com -t /etc/security/keytabs/testuser.keytab
-Execute on  0   datanode   ozone oz -createVolume 
o3://om/hive -user bilbo -quota 100TB -root
-${result} = Execute on  0   datanode   ozone oz -listVolume 
o3://om/ -user bilbo | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r 
'.[] | select(.volumeName=="hive")'
-Should contain  ${result}   createdOn
-Execute on  0   datanode   ozone oz -updateVolume 
o3://om/hive -user bill -quota 10TB
-${result} = Execute on  0 

[37/50] [abbrv] hadoop git commit: HDDS-9. Add GRPC protocol interceptors for Ozone Block Token. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
http://git-wip-us.apache.org/repos/asf/hadoop/blob/96bd574d/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
new file mode 100644
index 000..02d5e28
--- /dev/null
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
@@ -0,0 +1,209 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.ozoneimpl;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
+import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.container.ContainerTestHelper;
+import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.XceiverClientGrpc;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Time;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.junit.rules.Timeout;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.security.PrivilegedAction;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.EnumSet;
+import java.util.UUID;
+
+import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.hdds.scm.ScmConfigKeys.HDDS_DATANODE_DIR_KEY;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Tests ozone containers via secure grpc/netty.
+ */
+@RunWith(Parameterized.class)
+public class TestSecureOzoneContainer {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestSecureOzoneContainer.class);
+  /**
+   * Set the timeout for every test.
+   */
+  @Rule
+  public Timeout testTimeout = new Timeout(30);
+
+  @Rule
+  public TemporaryFolder tempFolder = new TemporaryFolder();
+
+  private OzoneConfiguration conf;
+  private SecurityConfig secConfig;
+  private Boolean requireBlockToken;
+  private Boolean hasBlockToken;
+  private Boolean blockTokeExpired;
+
+
+  public TestSecureOzoneContainer(Boolean requireBlockToken,
+  Boolean hasBlockToken, Boolean blockTokenExpired) {
+this.requireBlockToken = requireBlockToken;
+this.hasBlockToken = hasBlockToken;
+this.blockTokeExpired = blockTokenExpired;
+  }
+
+  @Parameterized.Parameters
+  public static Collection<Object[]> blockTokenOptions() {
+return Arrays.asList(new Object[][] {
+{true, true, false},
+{true, true, true},
+{true, false, false},
+{false, true, false},
+{false, false, false}});
+  }
+
+  @Before
+  public void setup() throws IOException{
+conf = new OzoneConfiguration();
+String ozoneMetaPath =
+GenericTestUtils.getTempPath("ozoneMeta");
+conf.set(OZONE_METADATA_DIRS, ozoneMetaPath);
+
+secConfig = new SecurityConfig(conf);
+
+  }
+
+  @Test
+  public void testCreateOzoneContainer() throws Exception {
+LOG.info("Test case: requireBlockToken: {} hasBlockToken: {} " +

[06/50] [abbrv] hadoop git commit: HDDS-642. Add chill mode exit condition for pipeline availability. Contributed by Yiqun Lin.

2018-11-29 Thread xyao
HDDS-642. Add chill mode exit condition for pipeline availability. Contributed 
by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b71cc7f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b71cc7f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b71cc7f3

Branch: refs/heads/HDDS-4
Commit: b71cc7f33edbbf6a98d1efb330f1c748b5dd6e75
Parents: efc4d91
Author: Ajay Kumar 
Authored: Wed Nov 28 17:45:46 2018 -0800
Committer: Ajay Kumar 
Committed: Wed Nov 28 17:47:57 2018 -0800

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |   5 +
 .../common/src/main/resources/ozone-default.xml |   9 ++
 .../scm/chillmode/PipelineChillModeRule.java| 108 +++
 .../hdds/scm/chillmode/SCMChillModeManager.java |  19 +++-
 .../scm/server/StorageContainerManager.java |   5 +-
 .../scm/chillmode/TestSCMChillModeManager.java  |  81 --
 6 files changed, 213 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b71cc7f3/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 2d28a5b..f16503e 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -87,6 +87,11 @@ public final class HddsConfigKeys {
   "hdds.scm.chillmode.min.datanode";
   public static final int HDDS_SCM_CHILLMODE_MIN_DATANODE_DEFAULT = 1;
 
+  public static final String HDDS_SCM_CHILLMODE_PIPELINE_AVAILABILITY_CHECK =
+  "hdds.scm.chillmode.pipeline-availability.check";
+  public static final boolean
+  HDDS_SCM_CHILLMODE_PIPELINE_AVAILABILITY_CHECK_DEFAULT = false;
+
   // % of containers which should have at least one reported replica
   // before SCM comes out of chill mode.
   public static final String HDDS_SCM_CHILLMODE_THRESHOLD_PCT =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b71cc7f3/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 9f3d7e1..aa22b2b 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1232,6 +1232,15 @@
   </property>
 
   <property>
+    <name>hdds.scm.chillmode.pipeline-availability.check</name>
+    <value>false</value>
+    <tag>HDDS,SCM,OPERATION</tag>
+    <description>
+      Boolean value to enable pipeline availability check during SCM chill mode.
+    </description>
+  </property>
+
+  <property>
     <name>hdds.container.action.max.limit</name>
     <value>20</value>
     <tag>DATANODE</tag>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b71cc7f3/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/PipelineChillModeRule.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/PipelineChillModeRule.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/PipelineChillModeRule.java
new file mode 100644
index 000..f9a6e59
--- /dev/null
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/chillmode/PipelineChillModeRule.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.chillmode;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineReport;
+import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.PipelineReportsProto;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import 

[03/50] [abbrv] hadoop git commit: YARN-8975. [Submarine] Use predefined Charset object StandardCharsets.UTF_8 instead of String UTF-8. (Zhankun Tang via wangda)

2018-11-29 Thread xyao
YARN-8975. [Submarine] Use predefined Charset object StandardCharsets.UTF_8 
instead of String UTF-8. (Zhankun Tang via wangda)

Change-Id: If6c7904aa17895e543cfca245264249eb7328bdc


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/89764392
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/89764392
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/89764392

Branch: refs/heads/HDDS-4
Commit: 897643928c534062d45d00a36b95fd99b4f6
Parents: 8ebeda9
Author: Wangda Tan 
Authored: Wed Nov 28 14:39:06 2018 -0800
Committer: Wangda Tan 
Committed: Wed Nov 28 14:39:06 2018 -0800

--
 .../submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/89764392/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
index b58ad77..2e84c96 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
@@ -49,6 +49,7 @@ import java.io.IOException;
 import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
 import java.io.Writer;
+import java.nio.charset.StandardCharsets;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
@@ -218,7 +219,8 @@ public class YarnServiceJobSubmitter implements 
JobSubmitter {
   private String generateCommandLaunchScript(RunJobParameters parameters,
   TaskType taskType, Component comp) throws IOException {
 File file = File.createTempFile(taskType.name() + "-launch-script", ".sh");
-Writer w = new OutputStreamWriter(new FileOutputStream(file), "UTF-8");
+Writer w = new OutputStreamWriter(new FileOutputStream(file),
+StandardCharsets.UTF_8);
 PrintWriter pw = new PrintWriter(w);
 
 try {
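
Why the Charset overload is preferable, in a self-contained sketch: the String
form forces a checked UnsupportedEncodingException and a charset lookup at run
time, while StandardCharsets.UTF_8 does neither.

    import java.io.ByteArrayOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class Utf8WriterDemo {
      public static void main(String[] args) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
          w.write("héllo"); // the non-ASCII é encodes as two UTF-8 bytes
        }
        System.out.println(out.toByteArray().length); // 6
      }
    }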





[05/50] [abbrv] hadoop git commit: YARN-9067. YARN Resource Manager is running OOM because of leak of Configuration Object. Contributed by Eric Yang.

2018-11-29 Thread xyao
YARN-9067. YARN Resource Manager is running OOM because of leak of 
Configuration Object. Contributed by Eric Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/efc4d91c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/efc4d91c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/efc4d91c

Branch: refs/heads/HDDS-4
Commit: efc4d91cbeab8a13f6d61cb0e56443adb2d77559
Parents: fe7dab8
Author: Weiwei Yang 
Authored: Thu Nov 29 09:34:14 2018 +0800
Committer: Weiwei Yang 
Committed: Thu Nov 29 09:34:14 2018 +0800

--
 .../hadoop/yarn/service/webapp/ApiServer.java   | 209 +++
 .../hadoop/yarn/service/ServiceClientTest.java  |   2 +-
 .../yarn/service/client/ServiceClient.java  |   1 +
 3 files changed, 126 insertions(+), 86 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/efc4d91c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
index db831ba..88aeefd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
@@ -118,10 +118,13 @@ public class ApiServer {
   @Override
   public Void run() throws YarnException, IOException {
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-sc.actionBuild(service);
-sc.close();
+try {
+  sc.init(YARN_CONFIG);
+  sc.start();
+  sc.actionBuild(service);
+} finally {
+  sc.close();
+}
 return null;
   }
 });
@@ -133,11 +136,14 @@ public class ApiServer {
   @Override
   public ApplicationId run() throws IOException, YarnException {
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-ApplicationId applicationId = sc.actionCreate(service);
-sc.close();
-return applicationId;
+try {
+  sc.init(YARN_CONFIG);
+  sc.start();
+  ApplicationId applicationId = sc.actionCreate(service);
+  return applicationId;
+} finally {
+  sc.close();
+}
   }
 });
 serviceStatus.setDiagnostics("Application ID: " + applicationId);
@@ -245,29 +251,32 @@ public class ApiServer {
   public Integer run() throws Exception {
 int result = 0;
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-Exception stopException = null;
 try {
-  result = sc.actionStop(appName, destroy);
-  if (result == EXIT_SUCCESS) {
-LOG.info("Successfully stopped service {}", appName);
-  }
-} catch (Exception e) {
-  LOG.info("Got exception stopping service", e);
-  stopException = e;
-}
-if (destroy) {
-  result = sc.actionDestroy(appName);
-  if (result == EXIT_SUCCESS) {
-LOG.info("Successfully deleted service {}", appName);
+  sc.init(YARN_CONFIG);
+  sc.start();
+  Exception stopException = null;
+  try {
+result = sc.actionStop(appName, destroy);
+if (result == EXIT_SUCCESS) {
+  LOG.info("Successfully stopped service {}", appName);
+}
+  } catch (Exception e) {
+LOG.info("Got exception stopping service", e);
+stopException = e;
   }
-} else {
-  if (stopException != null) {
-throw stopException;
+  if (destroy) {
+result = sc.actionDestroy(appName);
+if (result == EXIT_SUCCESS) {
+  LOG.info("Successfully deleted service {}", appName);
+}
+  } else {
+if (stopException != null) {
+  throw stopException;
+
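
The fix above is the classic try/finally cleanup pattern: close() must run even when the action throws, otherwise every failed request leaks the client's Configuration and whatever resources hang off it. A minimal, self-contained sketch of the same pattern; "Client" here is a hypothetical stand-in for ServiceClient, not a Hadoop API:

// Sketch of the cleanup pattern in the fix: close() must run even when the
// action throws. "Client" is a hypothetical stand-in for ServiceClient.
public class TryFinallyCleanup {

  static class Client implements AutoCloseable {
    void init()   { System.out.println("init: loads a Configuration"); }
    void start()  { System.out.println("start"); }
    void action() { throw new RuntimeException("action failed"); }
    @Override
    public void close() { System.out.println("close: releases resources"); }
  }

  public static void main(String[] args) {
    Client sc = new Client();
    try {
      sc.init();
      sc.start();
      sc.action();        // may throw
    } catch (RuntimeException e) {
      System.out.println("caught: " + e.getMessage());
    } finally {
      sc.close();         // before the patch, a throw above skipped this line
    }
  }
}

Since the client in this sketch is AutoCloseable, try-with-resources would give the same guarantee with less ceremony; the patch keeps an explicit try/finally, presumably to match the surrounding code style.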

[04/50] [abbrv] hadoop git commit: YARN-8989. [YARN-8851] Move DockerCommandPlugin volume related APIs' invocation from DockerLinuxContainerRuntime#prepareContainer to #launchContainer. (Zhankun Tang via wangda)

2018-11-29 Thread xyao
YARN-8989. [YARN-8851] Move DockerCommandPlugin volume related APIs' invocation 
from DockerLinuxContainerRuntime#prepareContainer to #launchContainer. (Zhankun 
Tang via wangda)

Change-Id: Ia6d532c687168448416dfdf46f0ac34bff20e6ca


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe7dab8e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe7dab8e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe7dab8e

Branch: refs/heads/HDDS-4
Commit: fe7dab8ef55f08cf18c2d62c782c1ab8930a5a15
Parents: 8976439
Author: Wangda Tan 
Authored: Wed Nov 28 14:55:16 2018 -0800
Committer: Wangda Tan 
Committed: Wed Nov 28 15:03:06 2018 -0800

--
 .../runtime/DockerLinuxContainerRuntime.java| 44 
 .../runtime/TestDockerContainerRuntime.java | 15 ---
 2 files changed, 24 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe7dab8e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 15ff0ff..225bc19 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -456,32 +456,6 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
   @Override
   public void prepareContainer(ContainerRuntimeContext ctx)
   throws ContainerExecutionException {
-Container container = ctx.getContainer();
-
-// Create volumes when needed.
-if (nmContext != null
-&& nmContext.getResourcePluginManager().getNameToPlugins() != null) {
-  for (ResourcePlugin plugin : nmContext.getResourcePluginManager()
-  .getNameToPlugins().values()) {
-DockerCommandPlugin dockerCommandPlugin =
-plugin.getDockerCommandPluginInstance();
-if (dockerCommandPlugin != null) {
-  DockerVolumeCommand dockerVolumeCommand =
-  dockerCommandPlugin.getCreateDockerVolumeCommand(
-  ctx.getContainer());
-  if (dockerVolumeCommand != null) {
-runDockerVolumeCommand(dockerVolumeCommand, container);
-
-// After volume created, run inspect to make sure volume properly
-// created.
-if (dockerVolumeCommand.getSubCommand().equals(
-DockerVolumeCommand.VOLUME_CREATE_SUB_COMMAND)) {
-  checkDockerVolumeCreated(dockerVolumeCommand, container);
-}
-  }
-}
-  }
-}
   }
 
   private void checkDockerVolumeCreated(
@@ -1034,14 +1008,30 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
   }
 }
 
-// use plugins to update docker run command.
+// use plugins to create volume and update docker run command.
 if (nmContext != null
 && nmContext.getResourcePluginManager().getNameToPlugins() != null) {
   for (ResourcePlugin plugin : nmContext.getResourcePluginManager()
   .getNameToPlugins().values()) {
 DockerCommandPlugin dockerCommandPlugin =
 plugin.getDockerCommandPluginInstance();
+
 if (dockerCommandPlugin != null) {
+  // Create volumes when needed.
+  DockerVolumeCommand dockerVolumeCommand =
+  dockerCommandPlugin.getCreateDockerVolumeCommand(
+  ctx.getContainer());
+  if (dockerVolumeCommand != null) {
+runDockerVolumeCommand(dockerVolumeCommand, container);
+
+// After volume created, run inspect to make sure volume properly
+// created.
+if (dockerVolumeCommand.getSubCommand().equals(
+DockerVolumeCommand.VOLUME_CREATE_SUB_COMMAND)) {
+  checkDockerVolumeCreated(dockerVolumeCommand, container);
+}
+  }
+  // Update cmd
   dockerCommandPlugin.updateDockerRunCommand(runCommand, container);
 }
   }
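
In effect, the patch collapses volume creation and run-command rewriting into a single pass over the resource plugins inside launchContainer, so prepareContainer no longer touches Docker volumes at all. A rough sketch of the restructured loop; VolumeCommand and CommandPlugin are illustrative stand-ins for DockerVolumeCommand and DockerCommandPlugin, not the real YARN interfaces:

import java.util.List;

// Rough sketch of the restructured launchContainer pass, under hypothetical
// stand-in types for the NodeManager plugin interfaces.
public class LaunchWithPlugins {

  interface VolumeCommand { boolean isCreate(); }

  interface CommandPlugin {
    VolumeCommand getCreateVolumeCommand();     // may return null
    void updateRunCommand(StringBuilder runCmd);
  }

  static void launchContainer(List<CommandPlugin> plugins, StringBuilder runCmd) {
    for (CommandPlugin plugin : plugins) {
      VolumeCommand volume = plugin.getCreateVolumeCommand();
      if (volume != null) {
        runVolumeCommand(volume);               // create the volume first
        if (volume.isCreate()) {
          verifyVolumeCreated(volume);          // inspect to confirm it exists
        }
      }
      plugin.updateRunCommand(runCmd);          // then rewrite the run command
    }
  }

  static void runVolumeCommand(VolumeCommand c)    { /* docker volume create ... */ }
  static void verifyVolumeCreated(VolumeCommand c) { /* docker volume inspect ... */ }
}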


hadoop git commit: HDDS-877. Ensure correct surefire version for Ozone test. Contributed by Xiaoyu Yao.

2018-11-29 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk f53473686 -> ae5fbdd9e


HDDS-877. Ensure correct surefire version for Ozone test. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ae5fbdd9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ae5fbdd9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ae5fbdd9

Branch: refs/heads/trunk
Commit: ae5fbdd9ed6ef09b588637f2eadd7a04e8382289
Parents: f534736
Author: Xiaoyu Yao 
Authored: Thu Nov 29 11:37:36 2018 -0800
Committer: Xiaoyu Yao 
Committed: Thu Nov 29 11:37:36 2018 -0800

--
 hadoop-hdds/pom.xml  | 1 +
 hadoop-ozone/pom.xml | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae5fbdd9/hadoop-hdds/pom.xml
--
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 869ecbf..5537b3a 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -53,6 +53,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
 0.5.1
 1.5.0.Final
 
+3.0.0-M1
 
   
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae5fbdd9/hadoop-ozone/pom.xml
--
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index 39c65d5..4c13bd6 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -37,8 +37,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
 1.60
 Badlands
 ${ozone.version}
-
-
+3.0.0-M1
   
   
 common





[2/2] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread brahma
HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by 
Ayush Saxena.

(cherry picked from commit f534736867eed962899615ca1b7eb68bcf591d17)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2fa9e8c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2fa9e8c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2fa9e8c

Branch: refs/heads/branch-3.2
Commit: e2fa9e8cddb95789e210e0400a38a676242de968
Parents: a8f67ad
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:28:04 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2fa9e8c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2fa9e8c/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index ca1546c..7dd02bd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2845,6 +2845,8 @@ public class DistributedFileSystem extends FileSystem
*/
   public void setErasureCodingPolicy(final Path path,
   final String ecPolicyName) throws IOException {
+
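
Once these OpType entries exist, each EC client call bumps a named counter that is visible through the standard storage-statistics registry. A hedged sketch of reading those counters after issuing an EC call; the cluster URI, path, and policy name are illustrative, and the op_* names match the symbols added above:

import java.net.URI;
import java.util.Iterator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageStatistics;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class EcOpStatsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf);

    if (fs instanceof DistributedFileSystem) {
      // Each EC call below now increments its op_* counter.
      ((DistributedFileSystem) fs)
          .setErasureCodingPolicy(new Path("/ec-data"), "RS-6-3-1024k");
    }

    // Counters surface through the global storage-statistics registry.
    for (StorageStatistics stats : FileSystem.getGlobalStorageStatistics()) {
      Iterator<StorageStatistics.LongStatistic> it = stats.getLongStatistics();
      while (it.hasNext()) {
        StorageStatistics.LongStatistic s = it.next();
        if (s.getName().startsWith("op_") && s.getName().contains("_ec_")) {
          System.out.println(stats.getScheme() + ": "
              + s.getName() + " = " + s.getValue());
        }
      }
    }
  }
}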

[1/2] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 1b731de94 -> 0b83c95ff
  refs/heads/branch-3.2 a8f67ad7c -> e2fa9e8cd


HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by 
Ayush Saxena.

(cherry picked from commit f534736867eed962899615ca1b7eb68bcf591d17)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b83c95f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b83c95f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b83c95f

Branch: refs/heads/branch-3.1
Commit: 0b83c95ff43e043af58150bf7e23bf091f6d7fe7
Parents: 1b731de
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:27:37 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b83c95f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b83c95f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 65d211c..3553428 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2846,6 +2846,8 @@ public class DistributedFileSystem extends 

[1/2] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 065a1e72f -> 689555004
  refs/heads/trunk d0edd3726 -> f53473686


HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by 
Ayush Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f5347368
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f5347368
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f5347368

Branch: refs/heads/trunk
Commit: f534736867eed962899615ca1b7eb68bcf591d17
Parents: d0edd37
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:18:27 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5347368/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5347368/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index ca1546c..7dd02bd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2845,6 +2845,8 @@ public class DistributedFileSystem extends FileSystem
*/
   public void setErasureCodingPolicy(final Path path,
   final 

[2/2] hadoop git commit: HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by Ayush Saxena.

2018-11-29 Thread brahma
HDFS-14095. EC: Track Erasure Coding commands in DFS statistics. Contributed by 
Ayush Saxena.

(cherry picked from commit f534736867eed962899615ca1b7eb68bcf591d17)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/68955500
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/68955500
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/68955500

Branch: refs/heads/branch-3.0
Commit: 6895550043ea75d54600673102b36c1bfabb69d5
Parents: 065a1e7
Author: Brahma Reddy Battula 
Authored: Fri Nov 30 00:18:27 2018 +0530
Committer: Brahma Reddy Battula 
Committed: Fri Nov 30 00:21:07 2018 +0530

--
 .../hadoop/hdfs/DFSOpsCountStatistics.java  |  9 +++
 .../hadoop/hdfs/DistributedFileSystem.java  | 18 ++
 .../hadoop/hdfs/TestDistributedFileSystem.java  | 63 +++-
 3 files changed, 89 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/68955500/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
index 3dcf13b..b9852ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
@@ -41,6 +41,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 
   /** This is for counting distributed file system operations. */
   public enum OpType {
+ADD_EC_POLICY("op_add_ec_policy"),
 ALLOW_SNAPSHOT("op_allow_snapshot"),
 APPEND(CommonStatisticNames.OP_APPEND),
 CONCAT("op_concat"),
@@ -51,10 +52,15 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 CREATE_SYM_LINK("op_create_symlink"),
 DELETE(CommonStatisticNames.OP_DELETE),
 DELETE_SNAPSHOT("op_delete_snapshot"),
+DISABLE_EC_POLICY("op_disable_ec_policy"),
 DISALLOW_SNAPSHOT("op_disallow_snapshot"),
+ENABLE_EC_POLICY("op_enable_ec_policy"),
 EXISTS(CommonStatisticNames.OP_EXISTS),
 GET_BYTES_WITH_FUTURE_GS("op_get_bytes_with_future_generation_stamps"),
 GET_CONTENT_SUMMARY(CommonStatisticNames.OP_GET_CONTENT_SUMMARY),
+GET_EC_CODECS("op_get_ec_codecs"),
+GET_EC_POLICY("op_get_ec_policy"),
+GET_EC_POLICIES("op_get_ec_policies"),
 GET_FILE_BLOCK_LOCATIONS("op_get_file_block_locations"),
 GET_FILE_CHECKSUM(CommonStatisticNames.OP_GET_FILE_CHECKSUM),
 GET_FILE_LINK_STATUS("op_get_file_link_status"),
@@ -76,11 +82,13 @@ public class DFSOpsCountStatistics extends 
StorageStatistics {
 REMOVE_ACL(CommonStatisticNames.OP_REMOVE_ACL),
 REMOVE_ACL_ENTRIES(CommonStatisticNames.OP_REMOVE_ACL_ENTRIES),
 REMOVE_DEFAULT_ACL(CommonStatisticNames.OP_REMOVE_DEFAULT_ACL),
+REMOVE_EC_POLICY("op_remove_ec_policy"),
 REMOVE_XATTR("op_remove_xattr"),
 RENAME(CommonStatisticNames.OP_RENAME),
 RENAME_SNAPSHOT("op_rename_snapshot"),
 RESOLVE_LINK("op_resolve_link"),
 SET_ACL(CommonStatisticNames.OP_SET_ACL),
+SET_EC_POLICY("op_set_ec_policy"),
 SET_OWNER(CommonStatisticNames.OP_SET_OWNER),
 SET_PERMISSION(CommonStatisticNames.OP_SET_PERMISSION),
 SET_REPLICATION("op_set_replication"),
@@ -90,6 +98,7 @@ public class DFSOpsCountStatistics extends StorageStatistics {
 GET_SNAPSHOT_DIFF("op_get_snapshot_diff"),
 GET_SNAPSHOTTABLE_DIRECTORY_LIST("op_get_snapshottable_directory_list"),
 TRUNCATE(CommonStatisticNames.OP_TRUNCATE),
+UNSET_EC_POLICY("op_unset_ec_policy"),
 UNSET_STORAGE_POLICY("op_unset_storage_policy");
 
 private static final Map SYMBOL_MAP =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/68955500/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 5f63f81..4bf73fb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2628,6 +2628,8 @@ public class DistributedFileSystem extends FileSystem
*/
   public void setErasureCodingPolicy(final Path path,
   final String ecPolicyName) throws IOException {
+

hadoop git commit: YARN-9067. Fixed Resource Manager resource leak via YARN service. Contributed by Eric Yang

2018-11-29 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 65e0d6ff4 -> 1b731de94


YARN-9067.  Fixed Resource Manager resource leak via YARN service.
Contributed by Eric Yang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b731de9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b731de9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b731de9

Branch: refs/heads/branch-3.1
Commit: 1b731de94403c05743f698b6fee3dbaf1a2540a3
Parents: 65e0d6f
Author: Eric Yang 
Authored: Thu Nov 29 13:50:06 2018 -0500
Committer: Eric Yang 
Committed: Thu Nov 29 13:51:39 2018 -0500

--
 .../hadoop/yarn/service/webapp/ApiServer.java   | 194 +++
 .../hadoop/yarn/service/ServiceClientTest.java  |   2 +-
 .../yarn/service/client/ServiceClient.java  |   1 +
 3 files changed, 117 insertions(+), 80 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b731de9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
index c4e3317..51ad00a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
@@ -117,10 +117,13 @@ public class ApiServer {
   @Override
   public Void run() throws YarnException, IOException {
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-sc.actionBuild(service);
-sc.close();
+try {
+  sc.init(YARN_CONFIG);
+  sc.start();
+  sc.actionBuild(service);
+} finally {
+  sc.close();
+}
 return null;
   }
 });
@@ -132,11 +135,14 @@ public class ApiServer {
   @Override
   public ApplicationId run() throws IOException, YarnException {
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-ApplicationId applicationId = sc.actionCreate(service);
-sc.close();
-return applicationId;
+try {
+  sc.init(YARN_CONFIG);
+  sc.start();
+  ApplicationId applicationId = sc.actionCreate(service);
+  return applicationId;
+} finally {
+  sc.close();
+}
   }
 });
 serviceStatus.setDiagnostics("Application ID: " + applicationId);
@@ -244,29 +250,32 @@ public class ApiServer {
   public Integer run() throws Exception {
 int result = 0;
 ServiceClient sc = getServiceClient();
-sc.init(YARN_CONFIG);
-sc.start();
-Exception stopException = null;
 try {
-  result = sc.actionStop(appName, destroy);
-  if (result == EXIT_SUCCESS) {
-LOG.info("Successfully stopped service {}", appName);
-  }
-} catch (Exception e) {
-  LOG.info("Got exception stopping service", e);
-  stopException = e;
-}
-if (destroy) {
-  result = sc.actionDestroy(appName);
-  if (result == EXIT_SUCCESS) {
-LOG.info("Successfully deleted service {}", appName);
+  sc.init(YARN_CONFIG);
+  sc.start();
+  Exception stopException = null;
+  try {
+result = sc.actionStop(appName, destroy);
+if (result == EXIT_SUCCESS) {
+  LOG.info("Successfully stopped service {}", appName);
+}
+  } catch (Exception e) {
+LOG.info("Got exception stopping service", e);
+stopException = e;
   }
-} else {
-  if (stopException != null) {
-throw stopException;
+  if (destroy) {
+result = sc.actionDestroy(appName);
+if (result == EXIT_SUCCESS) {
+  LOG.info("Successfully deleted service {}", appName);
+}
+  } else {
+if 

svn commit: r1847745 - /hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

2018-11-29 Thread xiao
Author: xiao
Date: Thu Nov 29 18:20:01 2018
New Revision: 1847745

URL: http://svn.apache.org/viewvc?rev=1847745=rev
Log:
update xiao

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1847745=1847744=1847745=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml 
(original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Thu 
Nov 29 18:20:01 2018
@@ -626,8 +626,8 @@

  xiao
  Xiao Chen
- Cloudera
- HDFS
+ Netflix
+ 
  -8

 
@@ -1719,8 +1719,8 @@

  xiao
  Xiao Chen
- Cloudera
- HDFS
+ Netflix
+ 
  -8

 






[1/4] hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 b42c2679a -> 065a1e72f
  refs/heads/branch-3.1 a7d3f22b4 -> 65e0d6ff4
  refs/heads/branch-3.2 183ec39c4 -> 1a448565a
  refs/heads/trunk 5e102f9aa -> d0edd3726


HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0edd372
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0edd372
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0edd372

Branch: refs/heads/trunk
Commit: d0edd37269bb40290b409d583bcf3b70897c13e0
Parents: 5e102f9
Author: Steve Loughran 
Authored: Thu Nov 29 17:52:11 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:52:11 2018 +

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
   @Test(timeout=6)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0edd372/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
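
The behavioral effect of the revert: a principal that no auth_to_local rule maps now makes getShortName() fail with NoMatchingRule again, instead of logging and passing the full principal through as HADOOP-12751 did. A small sketch of that contract; the realms and the rule string are illustrative:

import java.io.IOException;

import org.apache.hadoop.security.authentication.util.KerberosName;

public class AuthToLocalExample {
  public static void main(String[] args) throws Exception {
    // One rule mapping EXAMPLE.COM principals to their first component,
    // plus DEFAULT for the local realm. Realms and rule are illustrative.
    KerberosName.setRules(
        "RULE:[1:$1@$0](.*@EXAMPLE\\.COM)s/@.*//\nDEFAULT");

    // Matches the rule: resolves to "alice".
    System.out.println(new KerberosName("alice@EXAMPLE.COM").getShortName());

    try {
      // No rule matches this realm: post-revert this throws again
      // (NoMatchingRule, an IOException) instead of returning "bob@OTHER.ORG".
      new KerberosName("bob@OTHER.ORG").getShortName();
    } catch (IOException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}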

[4/4] hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread stevel
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.

(cherry picked from commit d0edd37269bb40290b409d583bcf3b70897c13e0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/065a1e72
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/065a1e72
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/065a1e72

Branch: refs/heads/branch-3.0
Commit: 065a1e72f268766a76af43103fcf39532a7873e6
Parents: b42c267
Author: Steve Loughran 
Authored: Thu Nov 29 17:58:15 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:58:15 2018 +

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/065a1e72/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/065a1e72/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
   @Test(timeout=6)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/065a1e72/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
index c584fce..2db0df4 

[3/4] hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread stevel
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.

(cherry picked from commit d0edd37269bb40290b409d583bcf3b70897c13e0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65e0d6ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65e0d6ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65e0d6ff

Branch: refs/heads/branch-3.1
Commit: 65e0d6ff46c4d891784d85c72abf9eff38d0d77b
Parents: a7d3f22
Author: Steve Loughran 
Authored: Thu Nov 29 17:57:24 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:57:24 2018 +

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65e0d6ff/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65e0d6ff/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
   @Test(timeout=6)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/65e0d6ff/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
index c584fce..2db0df4 

[2/4] hadoop git commit: HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple"

2018-11-29 Thread stevel
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop incorrectly 
assumes names with '@' to be non-simple"

This reverts commit 829a2e4d271f05afb209ddc834cd4a0e85492eda.

(cherry picked from commit d0edd37269bb40290b409d583bcf3b70897c13e0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a448565
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a448565
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a448565

Branch: refs/heads/branch-3.2
Commit: 1a448565a85e13572795696ed5e5c6643de62431
Parents: 183ec39
Author: Steve Loughran 
Authored: Thu Nov 29 17:53:23 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:53:23 2018 +

--
 .../authentication/util/KerberosName.java   |  9 ++--
 .../TestKerberosAuthenticationHandler.java  |  7 ++-
 .../authentication/util/TestKerberosName.java   | 17 ++--
 .../java/org/apache/hadoop/security/KDiag.java  | 46 +---
 .../src/site/markdown/SecureMode.md |  6 ---
 .../org/apache/hadoop/security/TestKDiag.java   | 16 ---
 .../security/TestUserGroupInformation.java  | 27 
 7 files changed, 33 insertions(+), 95 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a448565/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
index 4e7ee3c..287bb13 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
@@ -324,8 +324,8 @@ public class KerberosName {
 }
   }
   if (result != null && nonSimplePattern.matcher(result).find()) {
-LOG.info("Non-simple name {} after auth_to_local rule {}",
-result, this);
+throw new NoMatchingRule("Non-simple name " + result +
+ " after auth_to_local rule " + this);
   }
   if (toLowerCase && result != null) {
 result = result.toLowerCase(Locale.ENGLISH);
@@ -378,7 +378,7 @@ public class KerberosName {
   /**
* Get the translation of the principal name into an operating system
* user name.
-   * @return the user name
+   * @return the short name
* @throws IOException throws if something is wrong with the rules
*/
   public String getShortName() throws IOException {
@@ -398,8 +398,7 @@ public class KerberosName {
 return result;
   }
 }
-LOG.info("No auth_to_local rules applied to {}", this);
-return toString();
+throw new NoMatchingRule("No rules applied to " + toString());
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a448565/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
index e672391..8b4bc15 100644
--- 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
+++ 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
@@ -108,7 +108,12 @@ public class TestKerberosAuthenticationHandler
 kn = new KerberosName("bar@BAR");
 Assert.assertEquals("bar", kn.getShortName());
 kn = new KerberosName("bar@FOO");
-Assert.assertEquals("bar@FOO", kn.getShortName());
+try {
+  kn.getShortName();
+  Assert.fail();
+}
+catch (Exception ex) {  
+}
   }
 
   @Test(timeout=6)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a448565/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
 
b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
index c584fce..2db0df4 

hadoop git commit: HADOOP-15932. Oozie unable to create sharelib in s3a filesystem.

2018-11-29 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 8be2d16b9 -> a7d3f22b4


HADOOP-15932. Oozie unable to create sharelib in s3a filesystem.

Contributed by Steve Loughran.

(cherry picked from commit 4c106fca0ca91536e288f11052568406a0b84300)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a7d3f22b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a7d3f22b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a7d3f22b

Branch: refs/heads/branch-3.1
Commit: a7d3f22b4ffee3015c36e5c8f74115f416ef99a7
Parents: 8be2d16
Author: Steve Loughran 
Authored: Thu Nov 29 17:56:29 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 29 17:56:29 2018 +

--
 .../java/org/apache/hadoop/fs/s3a/S3AFileSystem.java |  8 +++-
 .../hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java | 15 +++
 2 files changed, 18 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a7d3f22b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 4b0c208..db5b88b 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -2336,7 +2336,10 @@ public class S3AFileSystem extends FileSystem implements 
StreamCapabilities {
   @Override
   public void copyFromLocalFile(boolean delSrc, boolean overwrite, Path src,
   Path dst) throws IOException {
-innerCopyFromLocalFile(delSrc, overwrite, src, dst);
+entryPoint(INVOCATION_COPY_FROM_LOCAL_FILE);
+LOG.debug("Copying local file from {} to {}", src, dst);
+//innerCopyFromLocalFile(delSrc, overwrite, src, dst);
+super.copyFromLocalFile(delSrc, overwrite, src, dst);
   }
 
   /**
@@ -2346,6 +2349,9 @@ public class S3AFileSystem extends FileSystem implements 
StreamCapabilities {
* This version doesn't need to create a temporary file to calculate the md5.
* Sadly this doesn't seem to be used by the shell cp :(
*
+   * HADOOP-15932: this method has been unwired from
+   * {@link #copyFromLocalFile(boolean, boolean, Path, Path)} until
+   * it is extended to list and copy whole directories.
* delSrc indicates if the source should be removed
* @param delSrc whether to delete the src
* @param overwrite whether to overwrite an existing file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a7d3f22b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
index 7dc286d..668e129 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
@@ -22,23 +22,27 @@ import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 
+import org.junit.Ignore;
 import org.junit.Test;
 
-import org.apache.commons.io.Charsets;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.io.IOUtils;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
 
 import static org.apache.hadoop.test.LambdaTestUtils.intercept;
 
 /**
  * Test {@link S3AFileSystem#copyFromLocalFile(boolean, boolean, Path, Path)}.
+ * Some of the tests have been disabled pending a fix for HADOOP-15932 and
+ * recursive directory copying; the test cases themselves may be obsolete.
  */
 public class ITestS3ACopyFromLocalFile extends AbstractS3ATestBase {
-  private static final Charset ASCII = Charsets.US_ASCII;
+  private static final Charset ASCII = StandardCharsets.US_ASCII;
 
   private File file;
 
@@ -80,7 +84,8 @@ public class ITestS3ACopyFromLocalFile extends 
AbstractS3ATestBase {
   public void testCopyFileNoOverwrite() throws Throwable {
 file = createTempFile("hello");
 Path dest = upload(file, true);
-intercept(FileAlreadyExistsException.class,
+// HADOOP-15932: the exception type changes here
+intercept(PathExistsException.class,
 () -> upload(file, false));
   }
 
@@ -95,6 +100,7 @@ public class 
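
The net effect is that S3AFileSystem#copyFromLocalFile now routes through the generic FileSystem implementation, which walks directories recursively, which is exactly what Oozie's sharelib install needs. A hedged usage sketch; the bucket and paths are illustrative:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShareLibUpload {
  public static void main(String[] args) throws Exception {
    FileSystem s3 = FileSystem.get(new URI("s3a://example-bucket/"),
        new Configuration());

    Path localShareLib  = new Path("file:///tmp/oozie/share/lib");
    Path remoteShareLib = new Path("s3a://example-bucket/user/oozie/share/lib");

    // delSrc=false keeps the local tree; overwrite=true replaces existing files.
    // With this patch the call copies the directory recursively via the
    // generic FileSystem code path instead of failing on a directory source.
    s3.copyFromLocalFile(false, true, localShareLib, remoteShareLib);
  }
}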

hadoop git commit: HDDS-850. ReadStateMachineData hits OverlappingFileLockException in ContainerStateMachine. Contributed by Shashikant Banerjee.

2018-11-29 Thread shashikant
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7eb0d3a32 -> 5e102f9aa


HDDS-850. ReadStateMachineData hits OverlappingFileLockException in 
ContainerStateMachine. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e102f9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e102f9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e102f9a

Branch: refs/heads/trunk
Commit: 5e102f9aa54d3057ef5f0755d45428f22a24990b
Parents: 7eb0d3a
Author: Shashikant Banerjee 
Authored: Thu Nov 29 22:20:08 2018 +0530
Committer: Shashikant Banerjee 
Committed: Thu Nov 29 22:20:08 2018 +0530

--
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |   8 ++
 .../apache/hadoop/ozone/OzoneConfigKeys.java|   9 ++
 .../main/proto/DatanodeContainerProtocol.proto  |   1 +
 .../common/src/main/resources/ozone-default.xml |   8 ++
 .../server/ratis/ContainerStateMachine.java | 134 +++
 .../server/ratis/XceiverServerRatis.java|  14 +-
 .../container/keyvalue/KeyValueHandler.java |   7 +-
 .../keyvalue/impl/ChunkManagerImpl.java |  11 +-
 .../keyvalue/interfaces/ChunkManager.java   |   5 +-
 .../keyvalue/TestChunkManagerImpl.java  |   6 +-
 .../common/impl/TestContainerPersistence.java   |  11 +-
 11 files changed, 143 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 6733b8e..062b101 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -93,6 +93,14 @@ public final class ScmConfigKeys {
   public static final String DFS_CONTAINER_RATIS_LOG_QUEUE_SIZE =
   "dfs.container.ratis.log.queue.size";
   public static final int DFS_CONTAINER_RATIS_LOG_QUEUE_SIZE_DEFAULT = 128;
+
+  // expiry interval stateMachineData cache entry inside containerStateMachine
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL =
+  "dfs.container.ratis.statemachine.cache.expiry.interval";
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT =
+  "10s";
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =
   "dfs.ratis.client.request.timeout.duration";
   public static final TimeDuration

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 879f773..df233f7 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -249,6 +249,15 @@ public final class OzoneConfigKeys {
   DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
   ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT;
 
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL =
+  ScmConfigKeys.
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL;
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT =
+  ScmConfigKeys.
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_CACHE_EXPIRY_INTERVAL_DEFAULT;
+
   public static final String DFS_CONTAINER_RATIS_DATANODE_STORAGE_DIR =
   "dfs.container.ratis.datanode.storage.dir";
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e102f9a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
--
diff --git a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto 
b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
index 3695b6b..5237af8 100644
--- a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
+++ b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
@@ -392,6 +392,7 @@ message  WriteChunkResponseProto {
 message  ReadChunkRequestProto  {
   required DatanodeBlockID blockID = 1;
   required ChunkInfo chunkData 

hadoop git commit: YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. Contributed by Bilwa S T.

2018-11-29 Thread bibinchundatt
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 ee9deb6e9 -> 183ec39c4


YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. 
Contributed by Bilwa S T.

(cherry picked from commit 07142f54a8c7f70857e99c041f3a2a5189c809b5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/183ec39c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/183ec39c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/183ec39c

Branch: refs/heads/branch-3.2
Commit: 183ec39c4bb3a132a8207fedfedd65f399529150
Parents: ee9deb6
Author: bibinchundatt 
Authored: Thu Nov 29 22:02:59 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 22:16:32 2018 +0530

--
 .../yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/183ec39c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
index 163f707..ede0d15 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
@@ -54,6 +54,8 @@ public class SchedulerInfo {
   this.schedulerName = "Fair Scheduler";
 } else if (rs instanceof FifoScheduler) {
   this.schedulerName = "Fifo Scheduler";
+} else {
+  this.schedulerName = rs.getClass().getSimpleName();
 }
 this.minAllocResource = new 
ResourceInfo(rs.getMinimumResourceCapability());
 this.maxAllocResource = new 
ResourceInfo(rs.getMaximumResourceCapability());
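
With the added else branch, a scheduler that is none of the built-ins reports its class's simple name instead of leaving schedulerName unset. A self-contained sketch of that fallback (the demo types below are stand-ins, not the actual YARN classes):

    public class SchedulerNameDemo {
      interface ResourceScheduler { }
      static class FifoScheduler implements ResourceScheduler { }
      static class MyCustomScheduler implements ResourceScheduler { }

      static String schedulerName(ResourceScheduler rs) {
        if (rs instanceof FifoScheduler) {
          return "Fifo Scheduler";
        } else {
          // The fallback this patch adds for custom schedulers.
          return rs.getClass().getSimpleName();
        }
      }

      public static void main(String[] args) {
        System.out.println(schedulerName(new FifoScheduler()));     // Fifo Scheduler
        System.out.println(schedulerName(new MyCustomScheduler())); // MyCustomScheduler
      }
    }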


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. Contributed by Bilwa S T.

2018-11-29 Thread bibinchundatt
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 4c238b50d -> b42c2679a


YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. 
Contributed by Bilwa S T.

(cherry picked from commit 07142f54a8c7f70857e99c041f3a2a5189c809b5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b42c2679
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b42c2679
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b42c2679

Branch: refs/heads/branch-3.0
Commit: b42c2679ad24f0a5f9a48490b002351db7b7a6a1
Parents: 4c238b5
Author: bibinchundatt 
Authored: Thu Nov 29 22:02:59 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 22:08:48 2018 +0530

--
 .../yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b42c2679/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
index 81491b1..ae6b3d3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
@@ -53,6 +53,8 @@ public class SchedulerInfo {
   this.schedulerName = "Fair Scheduler";
 } else if (rs instanceof FifoScheduler) {
   this.schedulerName = "Fifo Scheduler";
+} else {
+  this.schedulerName = rs.getClass().getSimpleName();
 }
 this.minAllocResource = new 
ResourceInfo(rs.getMinimumResourceCapability());
 this.maxAllocResource = new 
ResourceInfo(rs.getMaximumResourceCapability());


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. Contributed by Bilwa S T.

2018-11-29 Thread bibinchundatt
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 e7fa638fe -> 8be2d16b9


YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. 
Contributed by Bilwa S T.

(cherry picked from commit 07142f54a8c7f70857e99c041f3a2a5189c809b5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8be2d16b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8be2d16b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8be2d16b

Branch: refs/heads/branch-3.1
Commit: 8be2d16b940929653f9ee507e637df92ded1fa65
Parents: e7fa638
Author: bibinchundatt 
Authored: Thu Nov 29 22:02:59 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 22:08:35 2018 +0530

--
 .../yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8be2d16b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
index 81491b1..ae6b3d3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
@@ -53,6 +53,8 @@ public class SchedulerInfo {
   this.schedulerName = "Fair Scheduler";
 } else if (rs instanceof FifoScheduler) {
   this.schedulerName = "Fifo Scheduler";
+} else {
+  this.schedulerName = rs.getClass().getSimpleName();
 }
 this.minAllocResource = new 
ResourceInfo(rs.getMinimumResourceCapability());
 this.maxAllocResource = new 
ResourceInfo(rs.getMaximumResourceCapability());


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket(). Contributed by Gabor Bota.

2018-11-29 Thread mackrorysd
Repository: hadoop
Updated Branches:
  refs/heads/trunk 184cced51 -> 7eb0d3a32


HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket(). Contributed 
by Gabor Bota.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7eb0d3a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7eb0d3a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7eb0d3a3

Branch: refs/heads/trunk
Commit: 7eb0d3a32435da110dc9e6004dba8c5c9b082c35
Parents: 184cced
Author: Sean Mackrory 
Authored: Wed Nov 28 16:57:12 2018 -0700
Committer: Sean Mackrory 
Committed: Thu Nov 29 09:36:39 2018 -0700

--
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  | 38 
 1 file changed, 24 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eb0d3a3/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
index 1316121..aea57a6 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
@@ -218,6 +218,27 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 format.addOptionWithValue(SECONDS_FLAG);
   }
 
+  protected void checkMetadataStoreUri(List<String> paths) throws IOException {
+// be sure that path is provided in params, so there's no IOoBE
+String s3Path = "";
+if(!paths.isEmpty()) {
+  s3Path = paths.get(0);
+}
+
+// Check if DynamoDB url is set from arguments.
+String metadataStoreUri = getCommandFormat().getOptValue(META_FLAG);
+if(metadataStoreUri == null || metadataStoreUri.isEmpty()) {
+  // If not set, check if filesystem is guarded by creating an
+  // S3AFileSystem and check if hasMetadataStore is true
+  try (S3AFileSystem s3AFileSystem = (S3AFileSystem)
+  S3AFileSystem.newInstance(toUri(s3Path), getConf())){
+Preconditions.checkState(s3AFileSystem.hasMetadataStore(),
+"The S3 bucket is unguarded. " + getName()
++ " can not be used on an unguarded bucket.");
+  }
+}
+  }
+
   /**
* Parse metadata store from command line option or HDFS configuration.
*
@@ -500,20 +521,7 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 public int run(String[] args, PrintStream out) throws Exception {
   List<String> paths = parseArgs(args);
   Map<String, String> options = new HashMap<>();
-  String s3Path = paths.get(0);
-
-  // Check if DynamoDB url is set from arguments.
-  String metadataStoreUri = getCommandFormat().getOptValue(META_FLAG);
-  if(metadataStoreUri == null || metadataStoreUri.isEmpty()) {
-// If not set, check if filesystem is guarded by creating an
-// S3AFileSystem and check if hasMetadataStore is true
-try (S3AFileSystem s3AFileSystem = (S3AFileSystem)
-S3AFileSystem.newInstance(toUri(s3Path), getConf())){
-  Preconditions.checkState(s3AFileSystem.hasMetadataStore(),
-  "The S3 bucket is unguarded. " + getName()
-  + " can not be used on an unguarded bucket.");
-}
-  }
+  checkMetadataStoreUri(paths);
 
   String readCap = getCommandFormat().getOptValue(READ_FLAG);
   if (StringUtils.isNotEmpty(readCap)) {
@@ -590,6 +598,8 @@ public abstract class S3GuardTool extends Configured 
implements Tool {
 throw e;
   }
 
+  checkMetadataStoreUri(paths);
+
   try {
 initMetadataStore(false);
   } catch (FileNotFoundException e) {
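
Besides reuse across the two commands, the extracted helper guards against calling paths.get(0) on an empty argument list; the patch's own comment ("so there's no IOoBE") points at the IndexOutOfBoundsException behind the testDestroyNoBucket() failures. The guard in isolation (hypothetical demo class, not the S3Guard code):

    import java.util.Collections;
    import java.util.List;

    public class EmptyPathGuardDemo {
      static String firstPathOrEmpty(List<String> paths) {
        // Before the fix, paths.get(0) threw IndexOutOfBoundsException on an empty list.
        return paths.isEmpty() ? "" : paths.get(0);
      }

      public static void main(String[] args) {
        System.out.println(firstPathOrEmpty(Collections.emptyList())); // prints an empty line
      }
    }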


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-808. Simplify OMAction and DNAction classes used for AuditLogging. Contributed by Dinesh Chitlangia.

2018-11-29 Thread ajay
Repository: hadoop
Updated Branches:
  refs/heads/trunk 07142f54a -> 184cced51


HDDS-808. Simplify OMAction and DNAction classes used for AuditLogging. 
Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/184cced5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/184cced5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/184cced5

Branch: refs/heads/trunk
Commit: 184cced513c5599d7b33c9124692fbcd2e6d338e
Parents: 07142f5
Author: Ajay Kumar 
Authored: Thu Nov 29 08:35:02 2018 -0800
Committer: Ajay Kumar 
Committed: Thu Nov 29 08:35:20 2018 -0800

--
 .../org/apache/hadoop/ozone/audit/DNAction.java | 44 +++-
 .../apache/hadoop/ozone/audit/DummyAction.java  | 36 ++---
 .../org/apache/hadoop/ozone/audit/OMAction.java | 54 +---
 3 files changed, 58 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
index ce34c46..1c87f2b 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/audit/DNAction.java
@@ -21,34 +21,28 @@ package org.apache.hadoop.ozone.audit;
  */
 public enum DNAction implements AuditAction {
 
-  CREATE_CONTAINER("CREATE_CONTAINER"),
-  READ_CONTAINER("READ_CONTAINER"),
-  UPDATE_CONTAINER("UPDATE_CONTAINER"),
-  DELETE_CONTAINER("DELETE_CONTAINER"),
-  LIST_CONTAINER("LIST_CONTAINER"),
-  PUT_BLOCK("PUT_BLOCK"),
-  GET_BLOCK("GET_BLOCK"),
-  DELETE_BLOCK("DELETE_BLOCK"),
-  LIST_BLOCK("LIST_BLOCK"),
-  READ_CHUNK("READ_CHUNK"),
-  DELETE_CHUNK("DELETE_CHUNK"),
-  WRITE_CHUNK("WRITE_CHUNK"),
-  LIST_CHUNK("LIST_CHUNK"),
-  COMPACT_CHUNK("COMPACT_CHUNK"),
-  PUT_SMALL_FILE("PUT_SMALL_FILE"),
-  GET_SMALL_FILE("GET_SMALL_FILE"),
-  CLOSE_CONTAINER("CLOSE_CONTAINER"),
-  GET_COMMITTED_BLOCK_LENGTH("GET_COMMITTED_BLOCK_LENGTH");
-
-  private String action;
-
-  DNAction(String action) {
-this.action = action;
-  }
+  CREATE_CONTAINER,
+  READ_CONTAINER,
+  UPDATE_CONTAINER,
+  DELETE_CONTAINER,
+  LIST_CONTAINER,
+  PUT_BLOCK,
+  GET_BLOCK,
+  DELETE_BLOCK,
+  LIST_BLOCK,
+  READ_CHUNK,
+  DELETE_CHUNK,
+  WRITE_CHUNK,
+  LIST_CHUNK,
+  COMPACT_CHUNK,
+  PUT_SMALL_FILE,
+  GET_SMALL_FILE,
+  CLOSE_CONTAINER,
+  GET_COMMITTED_BLOCK_LENGTH;
 
   @Override
   public String getAction() {
-return this.action;
+return this.toString();
   }
 
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
--
diff --git 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
index 76cd39a..d2da3e6 100644
--- 
a/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
+++ 
b/hadoop-hdds/common/src/test/java/org/apache/hadoop/ozone/audit/DummyAction.java
@@ -22,30 +22,24 @@ package org.apache.hadoop.ozone.audit;
  */
 public enum DummyAction implements AuditAction {
 
-  CREATE_VOLUME("CREATE_VOLUME"),
-  CREATE_BUCKET("CREATE_BUCKET"),
-  CREATE_KEY("CREATE_KEY"),
-  READ_VOLUME("READ_VOLUME"),
-  READ_BUCKET("READ_BUCKET"),
-  READ_KEY("READ_BUCKET"),
-  UPDATE_VOLUME("UPDATE_VOLUME"),
-  UPDATE_BUCKET("UPDATE_BUCKET"),
-  UPDATE_KEY("UPDATE_KEY"),
-  DELETE_VOLUME("DELETE_VOLUME"),
-  DELETE_BUCKET("DELETE_BUCKET"),
-  DELETE_KEY("DELETE_KEY"),
-  SET_OWNER("SET_OWNER"),
-  SET_QUOTA("SET_QUOTA");
-
-  private final String action;
-
-  DummyAction(String action) {
-this.action = action;
-  }
+  CREATE_VOLUME,
+  CREATE_BUCKET,
+  CREATE_KEY,
+  READ_VOLUME,
+  READ_BUCKET,
+  READ_KEY,
+  UPDATE_VOLUME,
+  UPDATE_BUCKET,
+  UPDATE_KEY,
+  DELETE_VOLUME,
+  DELETE_BUCKET,
+  DELETE_KEY,
+  SET_OWNER,
+  SET_QUOTA;
 
   @Override
   public String getAction() {
-return this.action;
+return this.toString();
   }
 
 }
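
Dropping the per-constant constructor removes a source of copy-paste drift: the removed READ_KEY("READ_BUCKET") line above shows how the stored string could silently diverge from the constant's name, while an enum's toString() already yields the declared name. The simplified pattern in miniature (demo enum, not the Ozone classes):

    public class EnumNameDemo {
      enum Action {
        CREATE_VOLUME,
        READ_KEY;

        // No per-constant string field needed; toString() returns the declared name.
        public String getAction() {
          return this.toString();
        }
      }

      public static void main(String[] args) {
        System.out.println(Action.READ_KEY.getAction()); // READ_KEY
      }
    }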

http://git-wip-us.apache.org/repos/asf/hadoop/blob/184cced5/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
--
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
index 1d4d646..8794014 100644
--- 

hadoop git commit: YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. Contributed by Bilwa S T.

2018-11-29 Thread bibinchundatt
Repository: hadoop
Updated Branches:
  refs/heads/trunk a68d766e8 -> 07142f54a


YARN-9069. Fix SchedulerInfo#getSchedulerType for custom schedulers. 
Contributed by Bilwa S T.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07142f54
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07142f54
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07142f54

Branch: refs/heads/trunk
Commit: 07142f54a8c7f70857e99c041f3a2a5189c809b5
Parents: a68d766
Author: bibinchundatt 
Authored: Thu Nov 29 22:02:59 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 22:02:59 2018 +0530

--
 .../yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07142f54/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
index 163f707..ede0d15 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/SchedulerInfo.java
@@ -54,6 +54,8 @@ public class SchedulerInfo {
   this.schedulerName = "Fair Scheduler";
 } else if (rs instanceof FifoScheduler) {
   this.schedulerName = "Fifo Scheduler";
+} else {
+  this.schedulerName = rs.getClass().getSimpleName();
 }
 this.minAllocResource = new 
ResourceInfo(rs.getMinimumResourceCapability());
 this.maxAllocResource = new 
ResourceInfo(rs.getMaximumResourceCapability());


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8948. PlacementRule interface should be for all YarnSchedulers. Contributed by Bibin A Chundatt.

2018-11-29 Thread bibinchundatt
Repository: hadoop
Updated Branches:
  refs/heads/trunk c1d24f848 -> a68d766e8


YARN-8948. PlacementRule interface should be for all YarnSchedulers. 
Contributed by Bibin A Chundatt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a68d766e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a68d766e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a68d766e

Branch: refs/heads/trunk
Commit: a68d766e876631d7ee2e1a6504d4120ba628d178
Parents: c1d24f8
Author: bibinchundatt 
Authored: Thu Nov 29 21:43:34 2018 +0530
Committer: bibinchundatt 
Committed: Thu Nov 29 21:43:34 2018 +0530

--
 .../placement/AppNameMappingPlacementRule.java  | 12 ++--
 .../server/resourcemanager/placement/PlacementRule.java |  4 ++--
 .../placement/UserGroupMappingPlacementRule.java| 11 ++-
 3 files changed, 22 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68d766e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
index 2debade..7a46962 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
@@ -20,11 +20,12 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.placement;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnException;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager;
@@ -61,8 +62,15 @@ public class AppNameMappingPlacementRule extends 
PlacementRule {
   }
 
   @Override
-  public boolean initialize(CapacitySchedulerContext schedulerContext)
+  public boolean initialize(ResourceScheduler scheduler)
   throws IOException {
+if (!(scheduler instanceof CapacityScheduler)) {
+  throw new IOException(
+  "AppNameMappingPlacementRule can be configured only for "
+  + "CapacityScheduler");
+}
+CapacitySchedulerContext schedulerContext =
+(CapacitySchedulerContext) scheduler;
 CapacitySchedulerConfiguration conf = schedulerContext.getConfiguration();
 boolean overrideWithQueueMappings = conf.getOverrideWithQueueMappings();
 LOG.info(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a68d766e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
index 21ab32a..0f3d43c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/PlacementRule.java
+++ 
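
The PlacementRule.java hunk is cut off above, but the AppNameMappingPlacementRule change shows the shape of the refactor: initialize() now accepts the scheduler-agnostic ResourceScheduler, and each rule validates and downcasts to the scheduler it actually supports. A minimal sketch of that guard-and-cast pattern (hypothetical types, not the YARN classes):

    import java.io.IOException;

    public class PlacementRuleInitDemo {
      interface ResourceScheduler { }
      static class CapacityScheduler implements ResourceScheduler { }
      static class FairScheduler implements ResourceScheduler { }

      static void initialize(ResourceScheduler scheduler) throws IOException {
        // Reject schedulers this rule cannot be configured for, as the patch does.
        if (!(scheduler instanceof CapacityScheduler)) {
          throw new IOException(
              "this rule can be configured only for CapacityScheduler");
        }
        CapacityScheduler cs = (CapacityScheduler) scheduler;
        System.out.println("initialized for " + cs.getClass().getSimpleName());
      }

      public static void main(String[] args) throws IOException {
        initialize(new CapacityScheduler());
        try {
          initialize(new FairScheduler());
        } catch (IOException expected) {
          System.out.println(expected.getMessage());
        }
      }
    }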

[2/2] hadoop git commit: HDDS-804. Block token: Add secret token manager. Contributed by Ajay Kumar.

2018-11-29 Thread ajay
HDDS-804. Block token: Add secret token manager. Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/278d4b9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/278d4b9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/278d4b9b

Branch: refs/heads/HDDS-4
Commit: 278d4b9b7bb2cd52d8708870602be292be189359
Parents: 1d3d40b
Author: Ajay Kumar 
Authored: Thu Nov 29 08:00:41 2018 -0800
Committer: Ajay Kumar 
Committed: Thu Nov 29 08:00:41 2018 -0800

--
 .../hdds/security/x509/SecurityConfig.java  |   9 +
 .../security/OzoneBlockTokenSecretManager.java  | 191 +++
 .../OzoneDelegationTokenSecretManager.java  | 455 +
 .../ozone/security/OzoneSecretManager.java  | 498 ---
 .../TestOzoneBlockTokenSecretManager.java   | 146 ++
 .../TestOzoneDelegationTokenSecretManager.java  | 218 
 .../ozone/security/TestOzoneSecretManager.java  | 216 
 .../apache/hadoop/ozone/om/OzoneManager.java|  23 +-
 .../security/TestOzoneManagerBlockToken.java| 251 ++
 9 files changed, 1371 insertions(+), 636 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/278d4b9b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
index ee20a21..b38ee7c 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/SecurityConfig.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hdds.security.x509;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.bouncycastle.jce.provider.BouncyCastleProvider;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -77,6 +78,7 @@ public class SecurityConfig {
   private final Duration certDuration;
   private final String x509SignatureAlgo;
   private final Boolean grpcBlockTokenEnabled;
+  private final int getMaxKeyLength;
   private final String certificateDir;
   private final String certificateFileName;
 
@@ -88,6 +90,9 @@ public class SecurityConfig {
   public SecurityConfig(Configuration configuration) {
 Preconditions.checkNotNull(configuration, "Configuration cannot be null");
 this.configuration = configuration;
+this.getMaxKeyLength = configuration.getInt(
+OzoneConfigKeys.OZONE_MAX_KEY_LEN,
+OzoneConfigKeys.OZONE_MAX_KEY_LEN_DEFAULT);
 this.size = this.configuration.getInt(HDDS_KEY_LEN, HDDS_DEFAULT_KEY_LEN);
 this.keyAlgo = this.configuration.get(HDDS_KEY_ALGORITHM,
 HDDS_DEFAULT_KEY_ALGORITHM);
@@ -289,4 +294,8 @@ public class SecurityConfig {
   throw new SecurityException("Unknown security provider:" + provider);
 }
   }
+
+  public int getMaxKeyLength() {
+return this.getMaxKeyLength;
+  }
 }
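
SecurityConfig now resolves the maximum key length once at construction from OzoneConfigKeys.OZONE_MAX_KEY_LEN. A sketch of that read-once pattern with Hadoop's Configuration, where the literal key name and default below are assumptions standing in for the OzoneConfigKeys constants:

    import org.apache.hadoop.conf.Configuration;

    public class MaxKeyLenDemo {
      private final int maxKeyLength;

      MaxKeyLenDemo(Configuration conf) {
        // Resolve once in the constructor and cache, as SecurityConfig does.
        // "ozone.max.key.length" and 1024 * 1024 are assumed values for the demo.
        this.maxKeyLength = conf.getInt("ozone.max.key.length", 1024 * 1024);
      }

      int getMaxKeyLength() {
        return maxKeyLength;
      }

      public static void main(String[] args) {
        System.out.println(new MaxKeyLenDemo(new Configuration()).getMaxKeyLength());
      }
    }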

http://git-wip-us.apache.org/repos/asf/hadoop/blob/278d4b9b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
--
diff --git 
a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
new file mode 100644
index 000..3b833cb
--- /dev/null
+++ 
b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/security/OzoneBlockTokenSecretManager.java
@@ -0,0 +1,191 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.security;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import 

[1/2] [hadoop] Git Push Summary

2018-11-29 Thread ajay
Repository: hadoop
Updated Branches:
  refs/heads/HDDS-4 1d3d40b9c -> 278d4b9b7

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org