hadoop git commit: HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

2016-10-06 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 29afba8a1 -> 7296999c8


HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

(cherry picked from commit bf372173d0f7cb97b62556cbd199a075254b96e6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7296999c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7296999c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7296999c

Branch: refs/heads/branch-2.8
Commit: 7296999c8992c22cd3fb9aca6ef7d06dd49bf6ee
Parents: 29afba8
Author: Andrew Wang 
Authored: Thu Oct 6 15:08:24 2016 -0700
Committer: Andrew Wang 
Committed: Thu Oct 6 15:08:30 2016 -0700

--
 hadoop-project-dist/pom.xml | 16 
 1 file changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7296999c/hadoop-project-dist/pom.xml
--
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index d3649a4..edc6950 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -86,22 +86,6 @@
 
   
   
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-source-plugin</artifactId>
-        <executions>
-          <execution>
-            <phase>prepare-package</phase>
-            <goals>
-              <goal>jar</goal>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <attach>true</attach>
-        </configuration>
-      </plugin>
-      <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>findbugs-maven-plugin</artifactId>
 





hadoop git commit: HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

2016-10-06 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 48b9d5fd2 -> bf372173d


HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf372173
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf372173
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf372173

Branch: refs/heads/trunk
Commit: bf372173d0f7cb97b62556cbd199a075254b96e6
Parents: 48b9d5f
Author: Andrew Wang 
Authored: Thu Oct 6 15:08:24 2016 -0700
Committer: Andrew Wang 
Committed: Thu Oct 6 15:08:24 2016 -0700

--
 hadoop-project-dist/pom.xml | 16 
 1 file changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf372173/hadoop-project-dist/pom.xml
--
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index e64f173..4423d94 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -88,22 +88,6 @@
 
   
   
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-source-plugin</artifactId>
-        <executions>
-          <execution>
-            <phase>prepare-package</phase>
-            <goals>
-              <goal>jar</goal>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <attach>true</attach>
-        </configuration>
-      </plugin>
-      <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>findbugs-maven-plugin</artifactId>
 





hadoop git commit: HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

2016-10-06 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e3a9666d2 -> 931524210


HADOOP-13689. Do not attach javadoc and sources jars during non-dist build.

(cherry picked from commit bf372173d0f7cb97b62556cbd199a075254b96e6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/93152421
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/93152421
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/93152421

Branch: refs/heads/branch-2
Commit: 931524210928a0c7a2fbccae409ebb672ccd
Parents: e3a9666
Author: Andrew Wang 
Authored: Thu Oct 6 15:08:24 2016 -0700
Committer: Andrew Wang 
Committed: Thu Oct 6 15:08:27 2016 -0700

--
 hadoop-project-dist/pom.xml | 16 
 1 file changed, 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/93152421/hadoop-project-dist/pom.xml
--
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index 07e73e7..5dd31a3 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -86,22 +86,6 @@
 
   
   
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-source-plugin</artifactId>
-        <executions>
-          <execution>
-            <phase>prepare-package</phase>
-            <goals>
-              <goal>jar</goal>
-              <goal>test-jar</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <attach>true</attach>
-        </configuration>
-      </plugin>
-      <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>findbugs-maven-plugin</artifactId>
 





hadoop git commit: HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.

2016-10-06 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 e36b665f9 -> 29afba8a1


HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.

(cherry picked from commit e3a9666d285c62afb0d50abea74d9e2ffe2767a8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/29afba8a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/29afba8a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/29afba8a

Branch: refs/heads/branch-2.8
Commit: 29afba8a15c05f3fcdc0bbe705344141205e97b5
Parents: e36b665
Author: Kihwal Lee 
Authored: Thu Oct 6 16:37:53 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 16:37:53 2016 -0500

--
 .../hdfs/server/namenode/FSDirAttrOp.java   | 110 ---
 .../hdfs/server/namenode/FSEditLogLoader.java   |  62 +++
 .../hdfs/server/namenode/FSNamesystem.java  |   3 +-
 3 files changed, 83 insertions(+), 92 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/29afba8a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index b7b8804..6c3f6ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -50,9 +50,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_STORAGE_POLICY_ENABLED_KE
 
 public class FSDirAttrOp {
   static HdfsFileStatus setPermission(
-  FSDirectory fsd, final String srcArg, FsPermission permission)
+  FSDirectory fsd, final String src, FsPermission permission)
   throws IOException {
-String src = srcArg;
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
@@ -61,13 +60,12 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
-  unprotectedSetPermission(fsd, src, permission);
+  unprotectedSetPermission(fsd, iip, permission);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetPermissions(src, permission);
+fsd.getEditLog().logSetPermissions(iip.getPath(), permission);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -82,7 +80,6 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
@@ -92,11 +89,11 @@ public class FSDirAttrOp {
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  unprotectedSetOwner(fsd, iip, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+fsd.getEditLog().logSetOwner(iip.getPath(), username, group);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -109,20 +106,18 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   // Write access is required to set access and modification times
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
   final INode inode = iip.getLastINode();
   if (inode == null) {
-throw new FileNotFoundException("File/Directory " + src +
+throw new FileNotFoundException("File/Directory " + iip.getPath() +
 " does not exist.");
   }
-  boolean changed = unprotectedSetTimes(fsd, inode, mtime, atime, true,
-iip.getLatestSnapshotId());
+  boolean changed = unprotectedSetTimes(fsd, iip, mtime, atime, true);
   if (changed) {
-fsd.getEditLog().logTimes(src, mtime, atime);
+fsd.getEditLog().logTimes(iip.getPath(), mtime, atime);
   }
 } finally {
   fsd.writeUnlock();
@@ -139,16 +134,15 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   final INodesInPath iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
 
-  final BlockInfo[] blocks = unprotectedSetReplication(fsd, src,
+  
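
The refactoring pattern repeated across these hunks: resolve the caller's string path into an INodesInPath (IIP) exactly once, then pass the IIP itself to the unprotected* helpers and the edit log instead of re-deriving and reassigning the path string at each step. A minimal, self-contained Java sketch of that pattern follows; ResolvedPath and the helpers are hypothetical stand-ins, not the real HDFS classes.

// Illustrative sketch only: ResolvedPath stands in for INodesInPath; the
// helpers stand in for unprotectedSetPermission / logSetPermissions.
public class ResolveOnceSketch {
  static final class ResolvedPath {
    private final String normalized;
    ResolvedPath(String raw) { this.normalized = raw.replaceAll("/+", "/"); }
    String getPath() { return normalized; }
  }

  // The caller resolves once...
  static ResolvedPath resolve(String raw) { return new ResolvedPath(raw); }

  // ...and every downstream step reuses the resolved object.
  static void setPermission(ResolvedPath iip, int perm) {
    System.out.println("chmod " + Integer.toOctalString(perm) + " " + iip.getPath());
  }

  static void logEdit(ResolvedPath iip) {
    System.out.println("edit-log: " + iip.getPath());
  }

  public static void main(String[] args) {
    ResolvedPath iip = resolve("//user//alice/file");  // resolved exactly once
    setPermission(iip, 0644);
    logEdit(iip);
  }
}

Besides dropping the repeated "src = iip.getPath()" reassignments, this keeps a single authoritative resolution for the permission check, the mutation, and the edit-log entry.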

hadoop git commit: HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.

2016-10-06 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a28ffd0fd -> e3a9666d2


HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e3a9666d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e3a9666d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e3a9666d

Branch: refs/heads/branch-2
Commit: e3a9666d285c62afb0d50abea74d9e2ffe2767a8
Parents: a28ffd0
Author: Kihwal Lee 
Authored: Thu Oct 6 16:31:29 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 16:32:12 2016 -0500

--
 .../hdfs/server/namenode/FSDirAttrOp.java   | 110 ---
 .../hdfs/server/namenode/FSEditLogLoader.java   |  62 +++
 .../hdfs/server/namenode/FSNamesystem.java  |   3 +-
 3 files changed, 83 insertions(+), 92 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e3a9666d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index b7b8804..6c3f6ab 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -50,9 +50,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_STORAGE_POLICY_ENABLED_KE
 
 public class FSDirAttrOp {
   static HdfsFileStatus setPermission(
-  FSDirectory fsd, final String srcArg, FsPermission permission)
+  FSDirectory fsd, final String src, FsPermission permission)
   throws IOException {
-String src = srcArg;
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
@@ -61,13 +60,12 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
-  unprotectedSetPermission(fsd, src, permission);
+  unprotectedSetPermission(fsd, iip, permission);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetPermissions(src, permission);
+fsd.getEditLog().logSetPermissions(iip.getPath(), permission);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -82,7 +80,6 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
@@ -92,11 +89,11 @@ public class FSDirAttrOp {
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  unprotectedSetOwner(fsd, iip, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+fsd.getEditLog().logSetOwner(iip.getPath(), username, group);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -109,20 +106,18 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   // Write access is required to set access and modification times
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
   final INode inode = iip.getLastINode();
   if (inode == null) {
-throw new FileNotFoundException("File/Directory " + src +
+throw new FileNotFoundException("File/Directory " + iip.getPath() +
 " does not exist.");
   }
-  boolean changed = unprotectedSetTimes(fsd, inode, mtime, atime, true,
-iip.getLatestSnapshotId());
+  boolean changed = unprotectedSetTimes(fsd, iip, mtime, atime, true);
   if (changed) {
-fsd.getEditLog().logTimes(src, mtime, atime);
+fsd.getEditLog().logTimes(iip.getPath(), mtime, atime);
   }
 } finally {
   fsd.writeUnlock();
@@ -139,16 +134,15 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   final INodesInPath iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
 
-  final BlockInfo[] blocks = unprotectedSetReplication(fsd, src,
+  final BlockInfo[] blocks = unprotectedSetReplication(fsd, iip,
   

hadoop git commit: HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.

2016-10-06 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1d330fbaf -> 48b9d5fd2


HDFS-10955. Pass IIP for FSDirAttr methods. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48b9d5fd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48b9d5fd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48b9d5fd

Branch: refs/heads/trunk
Commit: 48b9d5fd2a96728b1118be217ca597c4098e99ca
Parents: 1d330fb
Author: Kihwal Lee 
Authored: Thu Oct 6 16:33:46 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 16:33:46 2016 -0500

--
 .../hdfs/server/namenode/FSDirAttrOp.java   | 110 ---
 .../hdfs/server/namenode/FSEditLogLoader.java   |  62 +++
 .../hdfs/server/namenode/FSNamesystem.java  |   3 +-
 3 files changed, 83 insertions(+), 92 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48b9d5fd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 4c5ecb1d..91d9bce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -50,9 +50,8 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_STORAGE_POLICY_ENABLED_KE
 
 public class FSDirAttrOp {
   static HdfsFileStatus setPermission(
-  FSDirectory fsd, final String srcArg, FsPermission permission)
+  FSDirectory fsd, final String src, FsPermission permission)
   throws IOException {
-String src = srcArg;
 if (FSDirectory.isExactReservedName(src)) {
   throw new InvalidPathException(src);
 }
@@ -61,13 +60,12 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
-  unprotectedSetPermission(fsd, src, permission);
+  unprotectedSetPermission(fsd, iip, permission);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetPermissions(src, permission);
+fsd.getEditLog().logSetPermissions(iip.getPath(), permission);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -82,7 +80,6 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
@@ -92,11 +89,11 @@ public class FSDirAttrOp {
   throw new AccessControlException("User does not belong to " + group);
 }
   }
-  unprotectedSetOwner(fsd, src, username, group);
+  unprotectedSetOwner(fsd, iip, username, group);
 } finally {
   fsd.writeUnlock();
 }
-fsd.getEditLog().logSetOwner(src, username, group);
+fsd.getEditLog().logSetOwner(iip.getPath(), username, group);
 return fsd.getAuditFileInfo(iip);
   }
 
@@ -109,20 +106,18 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   // Write access is required to set access and modification times
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
   final INode inode = iip.getLastINode();
   if (inode == null) {
-throw new FileNotFoundException("File/Directory " + src +
+throw new FileNotFoundException("File/Directory " + iip.getPath() +
 " does not exist.");
   }
-  boolean changed = unprotectedSetTimes(fsd, inode, mtime, atime, true,
-  iip.getLatestSnapshotId());
+  boolean changed = unprotectedSetTimes(fsd, iip, mtime, atime, true);
   if (changed) {
-fsd.getEditLog().logTimes(src, mtime, atime);
+fsd.getEditLog().logTimes(iip.getPath(), mtime, atime);
   }
 } finally {
   fsd.writeUnlock();
@@ -139,16 +134,15 @@ public class FSDirAttrOp {
 fsd.writeLock();
 try {
   final INodesInPath iip = fsd.resolvePathForWrite(pc, src);
-  src = iip.getPath();
   if (fsd.isPermissionEnabled()) {
 fsd.checkPathAccess(pc, iip, FsAction.WRITE);
   }
 
-  final BlockInfo[] blocks = unprotectedSetReplication(fsd, src,
+  final BlockInfo[] blocks = unprotectedSetReplication(fsd, iip,
  

[2/3] hadoop git commit: HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. Contributed by Chris Nauroth.

2016-10-06 Thread cnauroth
HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. 
Contributed by Chris Nauroth.

(cherry picked from commit 1d330fbaf6b50802750aa461640773fb788ef884)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a28ffd0f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a28ffd0f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a28ffd0f

Branch: refs/heads/branch-2
Commit: a28ffd0fdeff64a12612e60f4ebc5e6311b4b112
Parents: ecccb11
Author: Chris Nauroth 
Authored: Thu Oct 6 12:45:11 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 13:22:35 2016 -0700

--
 .../apache/hadoop/fs/permission/AclEntry.java   | 24 ++--
 .../hadoop/fs/permission/AclEntryScope.java |  2 +-
 .../hadoop/fs/permission/AclEntryType.java  | 23 ++-
 .../apache/hadoop/fs/permission/AclStatus.java  |  2 +-
 .../org/apache/hadoop/fs/shell/AclCommands.java |  6 ++---
 .../hdfs/web/resources/AclPermissionParam.java  | 23 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java|  2 +-
 7 files changed, 70 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a28ffd0f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index 45402f8..b42c365 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -36,7 +36,7 @@ import org.apache.hadoop.util.StringUtils;
  * to create a new instance.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
+@InterfaceStability.Stable
 public class AclEntry {
   private final AclEntryType type;
   private final String name;
@@ -100,13 +100,29 @@ public class AclEntry {
   }
 
   @Override
+  @InterfaceStability.Unstable
   public String toString() {
+// This currently just delegates to the stable string representation, but it
+// is permissible for the output of this method to change across versions.
+return toStringStable();
+  }
+
+  /**
+   * Returns a string representation guaranteed to be stable across versions to
+   * satisfy backward compatibility requirements, such as for shell command
+   * output or serialization.  The format of this string representation matches
+   * what is expected by the {@link #parseAclSpec(String, boolean)} and
+   * {@link #parseAclEntry(String, boolean)} methods.
+   *
+   * @return stable, backward compatible string representation
+   */
+  public String toStringStable() {
 StringBuilder sb = new StringBuilder();
 if (scope == AclEntryScope.DEFAULT) {
   sb.append("default:");
 }
 if (type != null) {
-  sb.append(StringUtils.toLowerCase(type.toString()));
+  sb.append(StringUtils.toLowerCase(type.toStringStable()));
 }
 sb.append(':');
 if (name != null) {
@@ -203,6 +219,8 @@ public class AclEntry {
   /**
* Parses a string representation of an ACL spec into a list of AclEntry
* objects. Example: "user::rwx,user:foo:rw-,group::r--,other::---"
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclSpec
*  String representation of an ACL spec.
@@ -228,6 +246,8 @@ public class AclEntry {
 
   /**
* Parses a string representation of an ACL into a AclEntry object.
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclStr
*  String representation of an ACL.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a28ffd0f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
index 6d941e7..64c70aa 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
@@ -24,7 +24,7 @@ import org.apache.hadoop.classification.InterfaceStability;
  * Specifies 

[1/3] hadoop git commit: HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. Contributed by Chris Nauroth.

2016-10-06 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ecccb114a -> a28ffd0fd
  refs/heads/branch-2.8 b9ce40e12 -> e36b665f9
  refs/heads/trunk f32e9fc8f -> 1d330fbaf


HADOOP-13150. Avoid use of toString() in output of HDFS ACL shell commands. 
Contributed by Chris Nauroth.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d330fba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d330fba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d330fba

Branch: refs/heads/trunk
Commit: 1d330fbaf6b50802750aa461640773fb788ef884
Parents: f32e9fc
Author: Chris Nauroth 
Authored: Thu Oct 6 12:45:11 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 13:19:16 2016 -0700

--
 .../apache/hadoop/fs/permission/AclEntry.java   | 24 ++--
 .../hadoop/fs/permission/AclEntryScope.java |  2 +-
 .../hadoop/fs/permission/AclEntryType.java  | 23 ++-
 .../apache/hadoop/fs/permission/AclStatus.java  |  2 +-
 .../org/apache/hadoop/fs/shell/AclCommands.java |  6 ++---
 .../hdfs/web/resources/AclPermissionParam.java  | 23 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java|  2 +-
 7 files changed, 70 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d330fba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index 45402f8..b42c365 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -36,7 +36,7 @@ import org.apache.hadoop.util.StringUtils;
  * to create a new instance.
  */
 @InterfaceAudience.Public
-@InterfaceStability.Evolving
+@InterfaceStability.Stable
 public class AclEntry {
   private final AclEntryType type;
   private final String name;
@@ -100,13 +100,29 @@ public class AclEntry {
   }
 
   @Override
+  @InterfaceStability.Unstable
   public String toString() {
+// This currently just delegates to the stable string representation, but it
+// is permissible for the output of this method to change across versions.
+return toStringStable();
+  }
+
+  /**
+   * Returns a string representation guaranteed to be stable across versions to
+   * satisfy backward compatibility requirements, such as for shell command
+   * output or serialization.  The format of this string representation matches
+   * what is expected by the {@link #parseAclSpec(String, boolean)} and
+   * {@link #parseAclEntry(String, boolean)} methods.
+   *
+   * @return stable, backward compatible string representation
+   */
+  public String toStringStable() {
 StringBuilder sb = new StringBuilder();
 if (scope == AclEntryScope.DEFAULT) {
   sb.append("default:");
 }
 if (type != null) {
-  sb.append(StringUtils.toLowerCase(type.toString()));
+  sb.append(StringUtils.toLowerCase(type.toStringStable()));
 }
 sb.append(':');
 if (name != null) {
@@ -203,6 +219,8 @@ public class AclEntry {
   /**
* Parses a string representation of an ACL spec into a list of AclEntry
* objects. Example: "user::rwx,user:foo:rw-,group::r--,other::---"
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclSpec
*  String representation of an ACL spec.
@@ -228,6 +246,8 @@ public class AclEntry {
 
   /**
* Parses a string representation of an ACL into a AclEntry object.
+   * The expected format of ACL entries in the string parameter is the same
+   * format produced by the {@link #toStringStable()} method.
* 
* @param aclStr
*  String representation of an ACL.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d330fba/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
index 6d941e7..64c70aa 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntryScope.java
+++ 

hadoop git commit: HDFS-10939. Reduce performance penalty of encryption zones. Contributed by Daryn Sharp.

2016-10-06 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk 72a2ae645 -> f32e9fc8f


HDFS-10939. Reduce performance penalty of encryption zones. Contributed by 
Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f32e9fc8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f32e9fc8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f32e9fc8

Branch: refs/heads/trunk
Commit: f32e9fc8f7150f0e889c0774b3ad712af26fbd65
Parents: 72a2ae6
Author: Kihwal Lee 
Authored: Thu Oct 6 15:11:14 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 6 15:11:14 2016 -0500

--
 .../namenode/EncryptionFaultInjector.java   |   6 +
 .../server/namenode/EncryptionZoneManager.java  |  25 +--
 .../server/namenode/FSDirEncryptionZoneOp.java  | 144 +---
 .../server/namenode/FSDirErasureCodingOp.java   |   2 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |   4 +-
 .../server/namenode/FSDirStatAndListingOp.java  |  20 +--
 .../hdfs/server/namenode/FSDirWriteFileOp.java  | 163 +--
 .../hdfs/server/namenode/FSDirXAttrOp.java  |  21 +--
 .../hdfs/server/namenode/FSDirectory.java   |   5 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |   3 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 115 ++---
 .../hdfs/server/namenode/XAttrStorage.java  |   7 +-
 .../apache/hadoop/hdfs/TestEncryptionZones.java |  50 --
 13 files changed, 295 insertions(+), 270 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f32e9fc8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
index 27d8f50..104d8c3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
@@ -35,5 +35,11 @@ public class EncryptionFaultInjector {
   }
 
   @VisibleForTesting
+  public void startFileNoKey() throws IOException {}
+
+  @VisibleForTesting
+  public void startFileBeforeGenerateKey() throws IOException {}
+
+  @VisibleForTesting
   public void startFileAfterGenerateKey() throws IOException {}
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f32e9fc8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 511c616..ceeccf6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -260,12 +260,14 @@ public class EncryptionZoneManager {
*
* @param srcIIP source IIP
* @param dstIIP destination IIP
-   * @param src source path, used for debugging
* @throws IOException if the src cannot be renamed to the dst
*/
-  void checkMoveValidity(INodesInPath srcIIP, INodesInPath dstIIP, String src)
+  void checkMoveValidity(INodesInPath srcIIP, INodesInPath dstIIP)
   throws IOException {
 assert dir.hasReadLock();
+if (!hasCreatedEncryptionZone()) {
+  return;
+}
 final EncryptionZoneInt srcParentEZI =
 getParentEncryptionZoneForPath(srcIIP);
 final EncryptionZoneInt dstParentEZI =
@@ -274,17 +276,17 @@ public class EncryptionZoneManager {
 final boolean dstInEZ = (dstParentEZI != null);
 if (srcInEZ && !dstInEZ) {
   throw new IOException(
-  src + " can't be moved from an encryption zone.");
+  srcIIP.getPath() + " can't be moved from an encryption zone.");
 } else if (dstInEZ && !srcInEZ) {
   throw new IOException(
-  src + " can't be moved into an encryption zone.");
+  srcIIP.getPath() + " can't be moved into an encryption zone.");
 }
 
 if (srcInEZ) {
   if (srcParentEZI != dstParentEZI) {
 final String srcEZPath = getFullPathName(srcParentEZI);
 final String dstEZPath = getFullPathName(dstParentEZI);
-final StringBuilder sb = new StringBuilder(src);
+final StringBuilder 
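
Part of the penalty reduction is the early return visible above: checkMoveValidity() now bails out before any parent-zone resolution when no encryption zone has ever been created. A minimal sketch of that guard, with a hypothetical stand-in for the EncryptionZoneManager state:

public class EzFastPathSketch {
  // Stand-in for the manager's zone bookkeeping; the real class tracks
  // zones in a map keyed by inode id.
  private int encryptionZoneCount = 0;

  boolean hasCreatedEncryptionZone() { return encryptionZoneCount > 0; }

  void checkMoveValidity(String src, String dst) {
    if (!hasCreatedEncryptionZone()) {
      return;  // common case: no zones exist, so renames skip zone lookups
    }
    // ...otherwise resolve the parent zone of src and dst and compare them,
    // as the patch does with getParentEncryptionZoneForPath().
  }
}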

hadoop git commit: HADOOP-13688. Stop bundling HTML source code in javadoc JARs.

2016-10-06 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2d46c3f6b -> 72a2ae645


HADOOP-13688. Stop bundling HTML source code in javadoc JARs.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/72a2ae64
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/72a2ae64
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/72a2ae64

Branch: refs/heads/trunk
Commit: 72a2ae6452e615c66d10829da38737896814e02b
Parents: 2d46c3f
Author: Andrew Wang 
Authored: Thu Oct 6 11:19:38 2016 -0700
Committer: Andrew Wang 
Committed: Thu Oct 6 11:19:38 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 1 -
 hadoop-project-dist/pom.xml| 1 -
 pom.xml| 1 -
 3 files changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/72a2ae64/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index df1d63b..0aa5fc1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -299,7 +299,6 @@
 
 site
 
-                <linksource>true</linksource>
                 <quiet>true</quiet>
                 <verbose>false</verbose>
                 <source>${maven.compile.source}</source>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72a2ae64/hadoop-project-dist/pom.xml
--
diff --git a/hadoop-project-dist/pom.xml b/hadoop-project-dist/pom.xml
index bf4fac7..e64f173 100644
--- a/hadoop-project-dist/pom.xml
+++ b/hadoop-project-dist/pom.xml
@@ -116,7 +116,6 @@
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-javadoc-plugin</artifactId>
         <configuration>
-          <linksource>true</linksource>
           <maxmemory>512m</maxmemory>
           <quiet>true</quiet>
           <verbose>false</verbose>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72a2ae64/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 250f5a1..1a3cd28 100644
--- a/pom.xml
+++ b/pom.xml
@@ -429,7 +429,6 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 aggregate
 
               <maxmemory>1024m</maxmemory>
-              <linksource>true</linksource>
               <quiet>true</quiet>
               <verbose>false</verbose>
               <source>${maven.compile.source}</source>





[3/3] hadoop git commit: HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed by Steve Loughran.

2016-10-06 Thread cnauroth
HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed 
by Steve Loughran.

(cherry picked from commit 2d46c3f6b7d55b6a2f124d07fe26d37359615df4)
(cherry picked from commit ecccb114ae1b09526d11385df7085b6dd3376e2d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9ce40e1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9ce40e1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9ce40e1

Branch: refs/heads/branch-2.8
Commit: b9ce40e12cb15022c7a6cbb2e35e3625e70004da
Parents: 319b101
Author: Chris Nauroth 
Authored: Thu Oct 6 10:57:01 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 10:57:15 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java  | 10 +-
 .../apache/hadoop/fs/TestFileSystemInitialization.java  | 12 
 2 files changed, 13 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9ce40e1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 054f86e..9825181 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -2764,7 +2764,15 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   ClassUtil.findContainingJar(fs.getClass()), e);
 }
   } catch (ServiceConfigurationError ee) {
-LOG.warn("Cannot load filesystem", ee);
+LOG.warn("Cannot load filesystem: " + ee);
+Throwable cause = ee.getCause();
+// print all the nested exception messages
+while (cause != null) {
+  LOG.warn(cause.toString());
+  cause = cause.getCause();
+}
+// and at debug: the full stack
+LOG.debug("Stack Trace", ee);
   }
 }
 FILE_SYSTEMS_LOADED = true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9ce40e1/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
index 18e8b01..4d627a5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
@@ -47,16 +47,12 @@ public class TestFileSystemInitialization {
 
   @Test
   public void testMissingLibraries() {
-boolean catched = false;
 try {
   Configuration conf = new Configuration();
-  FileSystem.getFileSystemClass("s3a", conf);
-} catch (Exception e) {
-  catched = true;
-} catch (ServiceConfigurationError e) {
-  // S3A shouldn't find AWS SDK and fail
-  catched = true;
+  Class<? extends FileSystem> fs = FileSystem.getFileSystemClass("s3a",
+  conf);
+  fail("Expected an exception, got a filesystem: " + fs);
+} catch (Exception | ServiceConfigurationError expected) {
 }
-assertTrue(catched);
   }
 }
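
The new logging shape is one WARN line for the error plus one WARN line per nested cause message, with the full stack trace deferred to DEBUG. A self-contained sketch of the same pattern; slf4j is used here only for illustration (the patch keeps Hadoop's existing LOG field):

import java.util.ServiceConfigurationError;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CauseChainLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CauseChainLoggingSketch.class);

  static void logLoadFailure(ServiceConfigurationError ee) {
    LOG.warn("Cannot load filesystem: " + ee);
    // One concise WARN line per nested cause, without its stack trace.
    for (Throwable cause = ee.getCause(); cause != null; cause = cause.getCause()) {
      LOG.warn(cause.toString());
    }
    // The full stack is still available, but only when DEBUG is enabled.
    LOG.debug("Stack Trace", ee);
  }
}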





[1/3] hadoop git commit: HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed by Steve Loughran.

2016-10-06 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7aee005c0 -> ecccb114a
  refs/heads/branch-2.8 319b101b7 -> b9ce40e12
  refs/heads/trunk 2cc841f16 -> 2d46c3f6b


HADOOP-13323. Downgrade stack trace on FS load from Warn to debug. Contributed 
by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d46c3f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d46c3f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d46c3f6

Branch: refs/heads/trunk
Commit: 2d46c3f6b7d55b6a2f124d07fe26d37359615df4
Parents: 2cc841f
Author: Chris Nauroth 
Authored: Thu Oct 6 10:57:01 2016 -0700
Committer: Chris Nauroth 
Committed: Thu Oct 6 10:57:01 2016 -0700

--
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java  | 10 +-
 .../apache/hadoop/fs/TestFileSystemInitialization.java  | 12 
 2 files changed, 13 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d46c3f6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index c36598f..cc062c4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -2858,7 +2858,15 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   ClassUtil.findContainingJar(fs.getClass()), e);
 }
   } catch (ServiceConfigurationError ee) {
-LOG.warn("Cannot load filesystem", ee);
+LOG.warn("Cannot load filesystem: " + ee);
+Throwable cause = ee.getCause();
+// print all the nested exception messages
+while (cause != null) {
+  LOG.warn(cause.toString());
+  cause = cause.getCause();
+}
+// and at debug: the full stack
+LOG.debug("Stack Trace", ee);
   }
 }
 FILE_SYSTEMS_LOADED = true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d46c3f6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
index 18e8b01..4d627a5 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileSystemInitialization.java
@@ -47,16 +47,12 @@ public class TestFileSystemInitialization {
 
   @Test
   public void testMissingLibraries() {
-boolean catched = false;
 try {
   Configuration conf = new Configuration();
-  FileSystem.getFileSystemClass("s3a", conf);
-} catch (Exception e) {
-  catched = true;
-} catch (ServiceConfigurationError e) {
-  // S3A shouldn't find AWS SDK and fail
-  catched = true;
+  Class<? extends FileSystem> fs = FileSystem.getFileSystemClass("s3a",
+  conf);
+  fail("Expected an exception, got a filesystem: " + fs);
+} catch (Exception | ServiceConfigurationError expected) {
 }
-assertTrue(catched);
   }
 }





hadoop git commit: HDFS-10745. Directly resolve paths into INodesInPath. Contributed by Daryn Sharp.

2016-10-06 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 23d435117 -> 47fcae7da


HDFS-10745. Directly resolve paths into INodesInPath. Contributed by Daryn 
Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47fcae7d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47fcae7d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47fcae7d

Branch: refs/heads/branch-2.7
Commit: 47fcae7da8fee97b619a2c967a035e79d0745ba0
Parents: 23d4351
Author: Zhe Zhang 
Authored: Wed Oct 5 16:01:02 2016 -0700
Committer: Zhe Zhang 
Committed: Thu Oct 6 09:39:59 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/server/namenode/FSDirAclOp.java |  29 ++--
 .../hdfs/server/namenode/FSDirAttrOp.java   |  20 +--
 .../hdfs/server/namenode/FSDirDeleteOp.java |   4 +-
 .../hdfs/server/namenode/FSDirMkdirOp.java  |   4 +-
 .../hdfs/server/namenode/FSDirRenameOp.java |  41 ++---
 .../server/namenode/FSDirStatAndListingOp.java  |  72 -
 .../hdfs/server/namenode/FSDirSymlinkOp.java|   4 +-
 .../hdfs/server/namenode/FSDirXAttrOp.java  |  25 ++-
 .../hdfs/server/namenode/FSDirectory.java   | 107 ++---
 .../hdfs/server/namenode/FSNamesystem.java  | 160 ++-
 .../hdfs/server/namenode/INodesInPath.java  |   8 +
 12 files changed, 246 insertions(+), 231 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47fcae7d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 4290133..3350509 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -47,6 +47,9 @@ Release 2.7.4 - UNRELEASED
 HDFS-9145. Tracking methods that hold FSNamesytemLock for too long.
 (Mingliang Liu via Haohui Mai)
 
+HDFS-10745. Directly resolve paths into INodesInPath.
+(Daryn Sharp via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-10896. Move lock logging logic from FSNamesystem into 
FSNamesystemLock.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47fcae7d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
index 296bed2..2153f02 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.protocol.AclException;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 
 import java.io.IOException;
@@ -39,11 +38,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = FSDirectory.resolveLastINode(iip);
   int snapshotId = iip.getLatestSnapshotId();
@@ -64,11 +63,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = FSDirectory.resolveLastINode(iip);
   int snapshotId = iip.getLatestSnapshotId();
@@ -88,11 +87,11 @@ class FSDirAclOp {
 String src = srcArg;
 checkAclsConfigFlag(fsd);
 FSPermissionChecker pc = fsd.getPermissionChecker();
-src = fsd.resolvePath(pc, src);
 INodesInPath iip;
 fsd.writeLock();
 try {
-  iip = fsd.getINodesInPath4Write(FSDirectory.normalizePath(src), true);
+  iip = fsd.resolvePathForWrite(pc, src);
+  src = iip.getPath();
   fsd.checkOwner(pc, iip);
   INode inode = 

[2/2] hadoop git commit: HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test. Contributed by Mingliang Liu

2016-10-06 Thread liuml07
HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for all 
commands test. Contributed by Mingliang Liu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7aee005c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7aee005c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7aee005c

Branch: refs/heads/branch-2
Commit: 7aee005c0389c009f7649531b0bf888abe4f6c11
Parents: 94a6f65
Author: Mingliang Liu 
Authored: Tue Sep 27 14:03:23 2016 -0700
Committer: Mingliang Liu 
Committed: Thu Oct 6 08:59:49 2016 -0700

--
 .../org/apache/hadoop/hdfs/TestDFSShell.java| 2107 --
 1 file changed, 908 insertions(+), 1199 deletions(-)
--






[1/2] hadoop git commit: HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test. Contributed by Mingliang Liu

2016-10-06 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 94a6f6598 -> 7aee005c0


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7aee005c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
index 6068978..88f0c95 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
@@ -66,6 +66,10 @@ import org.apache.hadoop.test.PathUtils;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.ToolRunner;
+import org.junit.rules.Timeout;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Rule;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
 import static org.apache.hadoop.fs.permission.AclEntryScope.ACCESS;
@@ -95,6 +99,37 @@ public class TestDFSShell {
   private static final byte[] RAW_A1_VALUE = new byte[]{0x32, 0x32, 0x32};
   private static final byte[] TRUSTED_A1_VALUE = new byte[]{0x31, 0x31, 0x31};
   private static final byte[] USER_A1_VALUE = new byte[]{0x31, 0x32, 0x33};
+  private static final int BLOCK_SIZE = 1024;
+
+  private static MiniDFSCluster miniCluster;
+  private static DistributedFileSystem dfs;
+
+  @BeforeClass
+  public static void setup() throws IOException {
+final Configuration conf = new Configuration();
+conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
+// set up the shared miniCluster directory so individual tests can launch
+// new clusters without conflict
+conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
+GenericTestUtils.getTestDir("TestDFSShell").getAbsolutePath());
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_KEY, true);
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
+
+miniCluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+miniCluster.waitActive();
+dfs = miniCluster.getFileSystem();
+  }
+
+  @AfterClass
+  public static void tearDown() {
+if (miniCluster != null) {
+  miniCluster.shutdown(true, true);
+}
+  }
+
+  @Rule
+  public Timeout globalTimeout= new Timeout(30 * 1000); // 30s
 
   static Path writeFile(FileSystem fs, Path f) throws IOException {
 DataOutputStream out = fs.create(f);
@@ -146,102 +181,74 @@ public class TestDFSShell {
 
  @Test (timeout = 30000)
   public void testZeroSizeFile() throws IOException {
-Configuration conf = new HdfsConfiguration();
-MiniDFSCluster cluster = new 
MiniDFSCluster.Builder(conf).numDataNodes(2).build();
-FileSystem fs = cluster.getFileSystem();
-assertTrue("Not a HDFS: "+fs.getUri(),
-   fs instanceof DistributedFileSystem);
-final DistributedFileSystem dfs = (DistributedFileSystem)fs;
-
-try {
-  //create a zero size file
-  final File f1 = new File(TEST_ROOT_DIR, "f1");
-  assertTrue(!f1.exists());
-  assertTrue(f1.createNewFile());
-  assertTrue(f1.exists());
-  assertTrue(f1.isFile());
-  assertEquals(0L, f1.length());
-  
-  //copy to remote
-  final Path root = mkdir(dfs, new Path("/test/zeroSizeFile"));
-  final Path remotef = new Path(root, "dst");
-  show("copy local " + f1 + " to remote " + remotef);
-  dfs.copyFromLocalFile(false, false, new Path(f1.getPath()), remotef);
-  
-  //getBlockSize() should not throw exception
-  show("Block size = " + dfs.getFileStatus(remotef).getBlockSize());
-
-  //copy back
-  final File f2 = new File(TEST_ROOT_DIR, "f2");
-  assertTrue(!f2.exists());
-  dfs.copyToLocalFile(remotef, new Path(f2.getPath()));
-  assertTrue(f2.exists());
-  assertTrue(f2.isFile());
-  assertEquals(0L, f2.length());
-  
-  f1.delete();
-  f2.delete();
-} finally {
-  try {dfs.close();} catch (Exception e) {}
-  cluster.shutdown();
-}
+//create a zero size file
+final File f1 = new File(TEST_ROOT_DIR, "f1");
+assertTrue(!f1.exists());
+assertTrue(f1.createNewFile());
+assertTrue(f1.exists());
+assertTrue(f1.isFile());
+assertEquals(0L, f1.length());
+
+//copy to remote
+final Path root = mkdir(dfs, new Path("/test/zeroSizeFile"));
+final Path remotef = new Path(root, "dst");
+show("copy local " + f1 + " to remote " + remotef);
+dfs.copyFromLocalFile(false, false, new Path(f1.getPath()), remotef);
+
+//getBlockSize() should not throw exception
+show("Block size = " + dfs.getFileStatus(remotef).getBlockSize());
+
+//copy back
+final 

hadoop git commit: HADOOP-13678 Update jackson from 1.9.13 to 2.x in hadoop-tools. Contributed by Akira Ajisaka.

2016-10-06 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4d2f380d7 -> 2cc841f16


HADOOP-13678 Update jackson from 1.9.13 to 2.x in hadoop-tools. Contributed by 
Akira Ajisaka.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cc841f1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cc841f1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cc841f1

Branch: refs/heads/trunk
Commit: 2cc841f16ec9aa5336495fc20ee781a1276fddc5
Parents: 4d2f380
Author: Steve Loughran 
Authored: Thu Oct 6 16:30:26 2016 +0100
Committer: Steve Loughran 
Committed: Thu Oct 6 16:31:00 2016 +0100

--
 hadoop-tools/hadoop-azure-datalake/pom.xml  |  4 +++
 ...ClientCredentialBasedAccesTokenProvider.java |  5 +--
 hadoop-tools/hadoop-azure/pom.xml   |  6 +++-
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 -
 hadoop-tools/hadoop-openstack/pom.xml   | 18 +-
 .../swift/auth/ApiKeyAuthenticationRequest.java |  2 +-
 .../fs/swift/auth/entities/AccessToken.java |  2 +-
 .../hadoop/fs/swift/auth/entities/Catalog.java  |  2 +-
 .../hadoop/fs/swift/auth/entities/Endpoint.java |  2 +-
 .../hadoop/fs/swift/auth/entities/Tenant.java   |  2 +-
 .../hadoop/fs/swift/auth/entities/User.java |  2 +-
 .../snative/SwiftNativeFileSystemStore.java |  3 +-
 .../apache/hadoop/fs/swift/util/JSONUtil.java   | 24 +
 hadoop-tools/hadoop-rumen/pom.xml   |  9 +
 .../apache/hadoop/tools/rumen/Anonymizer.java   | 23 ++---
 .../hadoop/tools/rumen/HadoopLogsAnalyzer.java  |  3 +-
 .../tools/rumen/JsonObjectMapperParser.java | 17 -
 .../tools/rumen/JsonObjectMapperWriter.java | 21 +---
 .../apache/hadoop/tools/rumen/LoggedJob.java|  2 +-
 .../hadoop/tools/rumen/LoggedLocation.java  |  2 +-
 .../tools/rumen/LoggedNetworkTopology.java  |  2 +-
 .../rumen/LoggedSingleRelativeRanking.java  |  4 +--
 .../apache/hadoop/tools/rumen/LoggedTask.java   |  2 +-
 .../hadoop/tools/rumen/LoggedTaskAttempt.java   |  2 +-
 .../hadoop/tools/rumen/datatypes/NodeName.java  |  2 +-
 .../rumen/serializers/BlockingSerializer.java   | 10 +++---
 .../DefaultAnonymizingRumenSerializer.java  |  8 ++---
 .../serializers/DefaultRumenSerializer.java |  9 ++---
 .../serializers/ObjectStringSerializer.java | 10 +++---
 .../apache/hadoop/tools/rumen/state/State.java  |  2 +-
 .../tools/rumen/state/StateDeserializer.java| 14 
 .../hadoop/tools/rumen/state/StatePool.java | 36 
 .../hadoop/tools/rumen/TestHistograms.java  | 13 +++
 hadoop-tools/hadoop-sls/pom.xml |  4 +++
 .../hadoop/yarn/sls/RumenToSLSConverter.java|  8 ++---
 .../org/apache/hadoop/yarn/sls/SLSRunner.java   |  7 ++--
 .../apache/hadoop/yarn/sls/utils/SLSUtils.java  | 10 +++---
 37 files changed, 151 insertions(+), 157 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cc841f1/hadoop-tools/hadoop-azure-datalake/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure-datalake/pom.xml 
b/hadoop-tools/hadoop-azure-datalake/pom.xml
index c07a1d7..e1a0bfe 100644
--- a/hadoop-tools/hadoop-azure-datalake/pom.xml
+++ b/hadoop-tools/hadoop-azure-datalake/pom.xml
@@ -181,5 +181,9 @@
   2.4.0
   test
 
+
+  com.fasterxml.jackson.core
+  jackson-databind
+
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cc841f1/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
--
diff --git 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
index 6dfc593..11d07e7 100644
--- 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
+++ 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/hdfs/web/oauth2/AzureADClientCredentialBasedAccesTokenProvider.java
@@ -18,6 +18,9 @@
  */
 package org.apache.hadoop.hdfs.web.oauth2;
 
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import com.fasterxml.jackson.databind.ObjectReader;
 import com.squareup.okhttp.OkHttpClient;
 import com.squareup.okhttp.Request;
 import com.squareup.okhttp.RequestBody;
@@ -29,8 +32,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.web.URLConnectionFactory;
 import org.apache.hadoop.util.Timer;
 import 
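
The mechanical part of the 1.9.13-to-2.x move is visible in the imports above: org.codehaus.jackson packages become com.fasterxml.jackson, and ad-hoc ObjectMapper calls give way to reusable ObjectReader/ObjectWriter instances. A small sketch of the 2.x idiom; the Config type is a made-up example, not from the patch:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

public class JacksonTwoSketch {
  // In Jackson 1.x these types lived under org.codehaus.jackson.map.
  private static final ObjectMapper MAPPER = new ObjectMapper();
  // Readers and writers are immutable and thread-safe: create once, reuse.
  private static final ObjectReader READER = MAPPER.readerFor(Config.class);
  private static final ObjectWriter WRITER = MAPPER.writerFor(Config.class);

  public static class Config {
    public String name;
    public int value;
  }

  public static void main(String[] args) throws Exception {
    Config c = READER.readValue("{\"name\":\"swift\",\"value\":1}");
    System.out.println(WRITER.writeValueAsString(c));
  }
}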

hadoop git commit: YARN-5101. YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order. Contributed by Sunil G.

2016-10-06 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 a4ee4f9e5 -> 319b101b7


YARN-5101. YARN_APPLICATION_UPDATED event is parsed in 
ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with 
reversed order. Contributed by Sunil G.

(cherry picked from commit 4d2f380d787a6145f45c87ba663079fedbf645b8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/319b101b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/319b101b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/319b101b

Branch: refs/heads/branch-2.8
Commit: 319b101b7ed8d9e06072f6d0da28906f26e185a2
Parents: a4ee4f9
Author: Rohith Sharma K S 
Authored: Thu Oct 6 18:16:48 2016 +0530
Committer: Rohith Sharma K S 
Committed: Thu Oct 6 20:44:39 2016 +0530

--
 .../ApplicationHistoryManagerOnTimelineStore.java | 14 +++---
 .../TestApplicationHistoryManagerOnTimelineStore.java | 14 +-
 .../yarn/server/resourcemanager/rmapp/RMAppImpl.java  |  2 +-
 3 files changed, 21 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/319b101b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 84d4543..feeafdd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -351,6 +351,7 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   }
 }
 List<TimelineEvent> events = entity.getEvents();
+long updatedTimeStamp = 0L;
 if (events != null) {
   for (TimelineEvent event : events) {
 if (event.getEventType().equals(
@@ -358,9 +359,16 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   createdTime = event.getTimestamp();
 } else if (event.getEventType().equals(
 ApplicationMetricsConstants.UPDATED_EVENT_TYPE)) {
-  // TODO: YARN-5101. This type of events are parsed in
-  // time-stamp descending order which means the previous event
-  // could override the information from the later same type of event.
+  // This type of events are parsed in time-stamp descending order
+  // which means the previous event could override the information
+  // from the later same type of event. Hence compare timestamp
+  // before over writing.
+  if (event.getTimestamp() > updatedTimeStamp) {
+updatedTimeStamp = event.getTimestamp();
+  } else {
+continue;
+  }
+
   Map<String, Object> eventInfo = event.getEventInfo();
   if (eventInfo == null) {
 continue;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/319b101b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
index 06f6ae3..a72f73f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
+++ 

hadoop git commit: YARN-5101. YARN_APPLICATION_UPDATED event is parsed in ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with reversed order. Contributed by Sunil G.

2016-10-06 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 caafa980a -> 94a6f6598


YARN-5101. YARN_APPLICATION_UPDATED event is parsed in 
ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport with 
reversed order. Contributed by Sunil G.

(cherry picked from commit 4d2f380d787a6145f45c87ba663079fedbf645b8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/94a6f659
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/94a6f659
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/94a6f659

Branch: refs/heads/branch-2
Commit: 94a6f65989b48dc6c409e8bc71492293520f707b
Parents: caafa98
Author: Rohith Sharma K S 
Authored: Thu Oct 6 18:16:48 2016 +0530
Committer: Rohith Sharma K S 
Committed: Thu Oct 6 20:43:47 2016 +0530

--
 .../ApplicationHistoryManagerOnTimelineStore.java | 14 +++---
 .../TestApplicationHistoryManagerOnTimelineStore.java | 14 +-
 .../yarn/server/resourcemanager/rmapp/RMAppImpl.java  |  2 +-
 3 files changed, 21 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/94a6f659/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 84d4543..feeafdd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -351,6 +351,7 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   }
 }
 List<TimelineEvent> events = entity.getEvents();
+long updatedTimeStamp = 0L;
 if (events != null) {
   for (TimelineEvent event : events) {
 if (event.getEventType().equals(
@@ -358,9 +359,16 @@ public class ApplicationHistoryManagerOnTimelineStore 
extends AbstractService
   createdTime = event.getTimestamp();
 } else if (event.getEventType().equals(
 ApplicationMetricsConstants.UPDATED_EVENT_TYPE)) {
-  // TODO: YARN-5101. This type of events are parsed in
-  // time-stamp descending order which means the previous event
-  // could override the information from the later same type of event.
+  // This type of events are parsed in time-stamp descending order
+  // which means the previous event could override the information
+  // from the later same type of event. Hence compare timestamp
+  // before over writing.
+  if (event.getTimestamp() > updatedTimeStamp) {
+updatedTimeStamp = event.getTimestamp();
+  } else {
+continue;
+  }
+
   Map<String, Object> eventInfo = event.getEventInfo();
   if (eventInfo == null) {
 continue;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/94a6f659/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
index b65b22b..dd1a453 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
+++ 
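The guard added above is easier to see in isolation. A minimal, runnable model of the fix (class and values invented for illustration): the store hands events back newest-first, so an UPDATED event may only be applied if its timestamp beats everything applied so far; otherwise the oldest update would win.

  import java.util.Arrays;
  import java.util.List;

  public class LatestUpdateWins {
    static final class Event {
      final long timestamp;
      final String queue;
      Event(long timestamp, String queue) { this.timestamp = timestamp; this.queue = queue; }
    }

    public static void main(String[] args) {
      // Newest-first, as the timeline store returns them.
      List<Event> events = Arrays.asList(
          new Event(300, "queueC"), new Event(200, "queueB"), new Event(100, "queueA"));
      long updatedTimeStamp = 0L;
      String queue = null;
      for (Event e : events) {
        if (e.timestamp <= updatedTimeStamp) {
          continue; // older update; without this guard it would clobber queueC
        }
        updatedTimeStamp = e.timestamp;
        queue = e.queue;
      }
      System.out.println(queue); // prints queueC, the most recent update
    }
  }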

[1/3] hadoop git commit: YARN-3139. Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler. Contributed by Wangda Tan

2016-10-06 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 69c1ab4ad -> caafa980a


http://git-wip-us.apache.org/repos/asf/hadoop/blob/caafa980/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 1c00fc0..254508f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -54,6 +54,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManage
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedContainerChangeRequest;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplication;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.AMState;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerHealth;
@@ -2234,6 +2235,22 @@ public class LeafQueue extends AbstractCSQueue {
 }
   }
 
+  public void updateApplicationPriority(SchedulerApplication<FiCaSchedulerApp> app,
+      Priority newAppPriority) {
+try {
+  writeLock.lock();
+  FiCaSchedulerApp attempt = app.getCurrentAppAttempt();
+  getOrderingPolicy().removeSchedulableEntity(attempt);
+
+  // Update new priority in SchedulerApplication
+  attempt.setPriority(newAppPriority);
+
+  getOrderingPolicy().addSchedulableEntity(attempt);
+} finally {
+  writeLock.unlock();
+}
+  }
+
  public OrderingPolicy<FiCaSchedulerApp> getPendingAppsOrderingPolicy() {
 return pendingOrderingPolicy;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/caafa980/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
index cd1aad4..e6851ed 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
@@ -19,6 +19,13 @@
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica;
 
 import com.google.common.annotations.VisibleForTesting;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
@@ -67,12 +74,6 @@ import 
org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
 /**
  * Represents an application attempt from the viewpoint of the FIFO or Capacity
  * scheduler.
@@ -665,6 +666,9 @@ public class FiCaSchedulerApp extends 
SchedulerApplicationAttempt {
 } finally {
   writeLock.unlock();
 }
+  }
 
+  public ReentrantReadWriteLock.WriteLock getWriteLock() {
+return this.writeLock;
   }
 }
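The getWriteLock() accessor added above exposes the lock that now guards the attempt, part of the YARN-3139 move from coarse synchronized methods to an explicit ReentrantReadWriteLock, so read-mostly calls stop serializing behind mutations. A minimal sketch of the pattern (names invented; the real schedulers protect far more state):

  import java.util.concurrent.locks.ReentrantReadWriteLock;

  public class LockPatternSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = lock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = lock.writeLock();
    private int state;

    public int read() {
      readLock.lock(); // many readers may hold this at once
      try {
        return state;
      } finally {
        readLock.unlock();
      }
    }

    public void write(int newState) {
      writeLock.lock(); // exclusive: blocks readers and other writers
      try {
        state = newState;
      } finally {
        writeLock.unlock();
      }
    }
  }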


[3/3] hadoop git commit: YARN-3139. Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler. Contributed by Wangda Tan

2016-10-06 Thread jianhe
YARN-3139. Improve locks in 
AbstractYarnScheduler/CapacityScheduler/FairScheduler. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/caafa980
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/caafa980
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/caafa980

Branch: refs/heads/branch-2
Commit: caafa980af9a19427855df1d4b1d5b7681c3944e
Parents: 69c1ab4
Author: Jian He 
Authored: Thu Oct 6 07:54:22 2016 -0700
Committer: Jian He 
Committed: Thu Oct 6 07:55:14 2016 -0700

--
 .../server/resourcemanager/RMServerUtils.java   |5 +-
 .../scheduler/AbstractYarnScheduler.java|  416 +++--
 .../scheduler/SchedulerApplicationAttempt.java  |8 +-
 .../scheduler/capacity/CapacityScheduler.java   | 1731 ++
 .../scheduler/capacity/LeafQueue.java   |   17 +
 .../scheduler/common/fica/FiCaSchedulerApp.java |   16 +-
 .../scheduler/fair/FairScheduler.java   | 1048 ++-
 7 files changed, 1755 insertions(+), 1486 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/caafa980/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index b90e499..b2a085a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -211,10 +211,7 @@ public class RMServerUtils {
   }
 
   /**
-   * Validate increase/decrease request. This function must be called under
-   * the queue lock to make sure that the access to container resource is
-   * atomic. Refer to LeafQueue.decreaseContainer() and
-   * CapacityScheduelr.updateIncreaseRequests()
+   * Validate increase/decrease request.
* 
* - Throw exception when any other error happens
* 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/caafa980/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 45415de..645e06d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -28,6 +28,7 @@ import java.util.Set;
 import java.util.Timer;
 import java.util.TimerTask;
 import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -72,8 +73,6 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerReco
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeCleanContainerEvent;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesManager;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity
-.LeafQueue;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.QueueEntitlement;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import com.google.common.annotations.VisibleForTesting;
@@ -94,7 +93,7 @@ public abstract class AbstractYarnScheduler
 
   protected Resource minimumAllocation;
 
-  protected RMContext rmContext;
+  protected volatile RMContext rmContext;
   
   private volatile Priority 
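Marking rmContext volatile is what lets getRMContext() and setRMContext() in this patch shed their synchronized keyword: a volatile write safely publishes the new reference to every thread. A toy sketch of the trade-off (types simplified; a compound check-then-act on the field would still need a lock):

  public class VolatileHandleSketch {
    private volatile Object context;

    public Object getContext() { return context; }          // no lock: reads see the latest write
    public void setContext(Object c) { this.context = c; }  // single write, safely published
  }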

[2/3] hadoop git commit: YARN-3139. Improve locks in AbstractYarnScheduler/CapacityScheduler/FairScheduler. Contributed by Wangda Tan

2016-10-06 Thread jianhe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/caafa980/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 5696c71..10df751 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -39,7 +39,6 @@ import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.LimitedPrivate;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
@@ -268,8 +267,7 @@ public class CapacityScheduler extends
   }
 
   @Override
-  public synchronized RMContainerTokenSecretManager 
-  getContainerTokenSecretManager() {
+  public RMContainerTokenSecretManager getContainerTokenSecretManager() {
 return this.rmContext.getContainerTokenSecretManager();
   }
 
@@ -294,52 +292,62 @@ public class CapacityScheduler extends
   }
 
   @Override
-  public synchronized RMContext getRMContext() {
+  public RMContext getRMContext() {
 return this.rmContext;
   }
 
   @Override
-  public synchronized void setRMContext(RMContext rmContext) {
+  public void setRMContext(RMContext rmContext) {
 this.rmContext = rmContext;
   }
 
-  private synchronized void initScheduler(Configuration configuration) throws
+  private void initScheduler(Configuration configuration) throws
   IOException {
-this.conf = loadCapacitySchedulerConfiguration(configuration);
-validateConf(this.conf);
-this.minimumAllocation = this.conf.getMinimumAllocation();
-initMaximumResourceCapability(this.conf.getMaximumAllocation());
-this.calculator = this.conf.getResourceCalculator();
-this.usePortForNodeName = this.conf.getUsePortForNodeName();
-this.applications = new ConcurrentHashMap<>();
-this.labelManager = rmContext.getNodeLabelManager();
-authorizer = YarnAuthorizationProvider.getInstance(yarnConf);
-this.activitiesManager = new ActivitiesManager(rmContext);
-activitiesManager.init(conf);
-initializeQueues(this.conf);
-this.isLazyPreemptionEnabled = conf.getLazyPreemptionEnabled();
-
-scheduleAsynchronously = this.conf.getScheduleAynschronously();
-asyncScheduleInterval =
-this.conf.getLong(ASYNC_SCHEDULER_INTERVAL,
-DEFAULT_ASYNC_SCHEDULER_INTERVAL);
-if (scheduleAsynchronously) {
-  asyncSchedulerThread = new AsyncScheduleThread(this);
-}
-
-LOG.info("Initialized CapacityScheduler with " +
-"calculator=" + getResourceCalculator().getClass() + ", " +
-"minimumAllocation=<" + getMinimumResourceCapability() + ">, " +
-"maximumAllocation=<" + getMaximumResourceCapability() + ">, " +
-"asynchronousScheduling=" + scheduleAsynchronously + ", " +
-"asyncScheduleInterval=" + asyncScheduleInterval + "ms");
-  }
-
-  private synchronized void startSchedulerThreads() {
-if (scheduleAsynchronously) {
-  Preconditions.checkNotNull(asyncSchedulerThread,
-  "asyncSchedulerThread is null");
-  asyncSchedulerThread.start();
+try {
+  writeLock.lock();
+  this.conf = loadCapacitySchedulerConfiguration(configuration);
+  validateConf(this.conf);
+  this.minimumAllocation = this.conf.getMinimumAllocation();
+  initMaximumResourceCapability(this.conf.getMaximumAllocation());
+  this.calculator = this.conf.getResourceCalculator();
+  this.usePortForNodeName = this.conf.getUsePortForNodeName();
+  this.applications = new ConcurrentHashMap<>();
+  this.labelManager = rmContext.getNodeLabelManager();
+  authorizer = YarnAuthorizationProvider.getInstance(yarnConf);
+  this.activitiesManager = new ActivitiesManager(rmContext);
+  activitiesManager.init(conf);
+  initializeQueues(this.conf);
+  this.isLazyPreemptionEnabled = conf.getLazyPreemptionEnabled();
+
+  scheduleAsynchronously = this.conf.getScheduleAynschronously();
+  

[2/3] hadoop git commit: HDFS-10957. Retire BKJM from trunk (Vinayakumar B)

2016-10-06 Thread vinayakumarb
http://git-wip-us.apache.org/repos/asf/hadoop/blob/31195488/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/BKJMUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/BKJMUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/BKJMUtil.java
deleted file mode 100644
index b1fc3d7..000
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/java/org/apache/hadoop/contrib/bkjournal/BKJMUtil.java
+++ /dev/null
@@ -1,184 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.contrib.bkjournal;
-
-import static org.junit.Assert.*;
-
-import java.net.URI;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.Watcher;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.KeeperException;
-
-import org.apache.bookkeeper.proto.BookieServer;
-import org.apache.bookkeeper.conf.ServerConfiguration;
-import org.apache.bookkeeper.util.LocalBookKeeper;
-
-import java.util.concurrent.CountDownLatch;
-import java.util.concurrent.TimeUnit;
-import java.util.List;
-
-import java.io.IOException;
-import java.io.File;
-
-/**
- * Utility class for setting up bookkeeper ensembles
- * and bringing individual bookies up and down
- */
-class BKJMUtil {
-  protected static final Log LOG = LogFactory.getLog(BKJMUtil.class);
-
-  int nextPort = 6000; // next port for additionally created bookies
-  private Thread bkthread = null;
-  private final static String zkEnsemble = "127.0.0.1:2181";
-  int numBookies;
-
-  BKJMUtil(final int numBookies) throws Exception {
-this.numBookies = numBookies;
-
-bkthread = new Thread() {
-public void run() {
-  try {
-String[] args = new String[1];
-args[0] = String.valueOf(numBookies);
-LOG.info("Starting bk");
-LocalBookKeeper.main(args);
-  } catch (InterruptedException e) {
-// go away quietly
-  } catch (Exception e) {
-LOG.error("Error starting local bk", e);
-  }
-}
-  };
-  }
-
-  void start() throws Exception {
-bkthread.start();
-if (!LocalBookKeeper.waitForServerUp(zkEnsemble, 1)) {
-  throw new Exception("Error starting zookeeper/bookkeeper");
-}
-assertEquals("Not all bookies started",
- numBookies, checkBookiesUp(numBookies, 10));
-  }
-
-  void teardown() throws Exception {
-if (bkthread != null) {
-  bkthread.interrupt();
-  bkthread.join();
-}
-  }
-
-  static ZooKeeper connectZooKeeper()
-  throws IOException, KeeperException, InterruptedException {
-final CountDownLatch latch = new CountDownLatch(1);
-
-ZooKeeper zkc = new ZooKeeper(zkEnsemble, 3600, new Watcher() {
-public void process(WatchedEvent event) {
-  if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
-latch.countDown();
-  }
-}
-  });
-if (!latch.await(3, TimeUnit.SECONDS)) {
-  throw new IOException("Zookeeper took too long to connect");
-}
-return zkc;
-  }
-
-  static URI createJournalURI(String path) throws Exception {
-return URI.create("bookkeeper://" + zkEnsemble + path);
-  }
-
-  static void addJournalManagerDefinition(Configuration conf) {
-conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_PLUGIN_PREFIX + ".bookkeeper",
- "org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager");
-  }
-
-  BookieServer newBookie() throws Exception {
-int port = nextPort++;
-ServerConfiguration bookieConf = new ServerConfiguration();
-bookieConf.setBookiePort(port);
-File tmpdir = File.createTempFile("bookie" + Integer.toString(port) + "_",
-  "test");
-tmpdir.delete();
-tmpdir.mkdir();
-
-

[2/2] hadoop git commit: HADOOP-13690. Fix typos in core-default.xml. Contributed by Yiqun Lin

2016-10-06 Thread brahma
HADOOP-13690. Fix typos in core-default.xml. Contributed by Yiqun Lin

(cherry picked from commit 35b9d7de9f0a95c277b63a3e50134cce4941b78d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/69c1ab4a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/69c1ab4a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/69c1ab4a

Branch: refs/heads/branch-2
Commit: 69c1ab4ad513cfcfcb42f21fce2b6301943bb420
Parents: d9e4ad7
Author: Brahma Reddy Battula 
Authored: Thu Oct 6 18:06:08 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 6 18:07:23 2016 +0530

--
 .../hadoop-common/src/main/resources/core-default.xml  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/69c1ab4a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 0888b5a..e8db5d7 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -102,7 +102,7 @@
   
 The name of the Network Interface from which the service should determine
 its host name for Kerberos login. e.g. eth2. In a multi-homed environment,
-the setting can be used to affect the _HOST subsitution in the service
+the setting can be used to affect the _HOST substitution in the service
 Kerberos principal. If this configuration value is not set, the service
 will use its default hostname as returned by
 InetAddress.getLocalHost().getCanonicalHostName().
@@ -409,7 +409,7 @@
 The number of levels to go up the group hierarchy when determining
 which groups a user is part of. 0 Will represent checking just the
 group that the user belongs to.  Each additional level will raise the
-time it takes to exectue a query by at most
+time it takes to execute a query by at most
 hadoop.security.group.mapping.ldap.directory.search.timeout.
 The default will usually be appropriate for all LDAP systems.
   
@@ -1985,7 +1985,7 @@
   dr.who=;
   
 Static mapping of user to groups. This will override the groups if
-available in the system for the specified user. In otherwords, groups
+available in the system for the specified user. In other words, groups
 look-up will not happen for these users, instead groups mapped in this
 configuration will be used.
 Mapping should be in this format.





[1/2] hadoop git commit: HADOOP-13690. Fix typos in core-default.xml. Contributed by Yiqun Lin

2016-10-06 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d9e4ad78c -> 69c1ab4ad
  refs/heads/trunk b90fc70d6 -> 35b9d7de9


HADOOP-13690. Fix typos in core-default.xml. Contributed by Yiqun Lin


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/35b9d7de
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/35b9d7de
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/35b9d7de

Branch: refs/heads/trunk
Commit: 35b9d7de9f0a95c277b63a3e50134cce4941b78d
Parents: b90fc70
Author: Brahma Reddy Battula 
Authored: Thu Oct 6 18:06:08 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 6 18:06:08 2016 +0530

--
 .../hadoop-common/src/main/resources/core-default.xml  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/35b9d7de/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 5b8d49d..4882728 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -93,7 +93,7 @@
   
 The name of the Network Interface from which the service should determine
 its host name for Kerberos login. e.g. eth2. In a multi-homed environment,
-the setting can be used to affect the _HOST subsitution in the service
+the setting can be used to affect the _HOST substitution in the service
 Kerberos principal. If this configuration value is not set, the service
 will use its default hostname as returned by
 InetAddress.getLocalHost().getCanonicalHostName().
@@ -400,7 +400,7 @@
 The number of levels to go up the group hierarchy when determining
 which groups a user is part of. 0 Will represent checking just the
 group that the user belongs to.  Each additional level will raise the
-time it takes to exectue a query by at most
+time it takes to execute a query by at most
 hadoop.security.group.mapping.ldap.directory.search.timeout.
 The default will usually be appropriate for all LDAP systems.
   
@@ -1939,7 +1939,7 @@
   dr.who=;
   
 Static mapping of user to groups. This will override the groups if
-available in the system for the specified user. In otherwords, groups
+available in the system for the specified user. In other words, groups
 look-up will not happen for these users, instead groups mapped in this
 configuration will be used.
 Mapping should be in this format.
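The format line that follows this description in core-default.xml is a semicolon-separated list of user=group1,group2 entries, where an empty right-hand side (as in the dr.who=; default above) means no groups. A small sketch of setting it programmatically (the user1 entry is invented; hadoop.user.group.static.mapping.overrides is the property this description belongs to):

  import org.apache.hadoop.conf.Configuration;

  public class StaticMappingSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      // dr.who resolves to no groups; user1 resolves to group1 and group2.
      // Neither user is looked up in the real group-mapping service.
      conf.set("hadoop.user.group.static.mapping.overrides",
          "dr.who=;user1=group1,group2");
    }
  }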





hadoop git commit: HADOOP-13670. Update CHANGES.txt to reflect all the changes in branch-2.7. Contributed by Brahma Reddy Battula

2016-10-06 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 039c3a735 -> 7e241cfcb


HADOOP-13670. Update CHANGES.txt to reflect all the changes in branch-2.7. 
Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7e241cfc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7e241cfc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7e241cfc

Branch: refs/heads/branch-2.7
Commit: 7e241cfcbac1a22d8248376f01147a578b60fcc0
Parents: 039c3a7
Author: Brahma Reddy Battula 
Authored: Thu Oct 6 17:56:45 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 6 17:56:45 2016 +0530

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  28 -
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 107 ++-
 hadoop-mapreduce-project/CHANGES.txt|   6 +-
 hadoop-yarn-project/CHANGES.txt |   5 +-
 4 files changed, 142 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e241cfc/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 1de102d..b7d309a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -37,11 +37,34 @@ Release 2.7.4 - UNRELEASED
 HADOOP-13579. Fix source-level compatibility after HADOOP-11252.
 (Tsuyoshi Ozawa via aajisaka)
 
+HADOOP-13494. ReconfigurableBase can log sensitive information.
+(Sean Mackrory via Andrew Wang)
+
+HADOOP-13512. ReloadingX509TrustManager should keep reloading in
+case of exception. (Mingliang Liu)
+
+HADOOP-12668. Support excluding weak Ciphers in HttpServer2 through
+ssl-server.conf. (Vijay Singh via Zhe Zhang)
+
+HADOOP-12765. HttpServer2 should switch to using the non-blocking
+SslSelectChannelConnector to prevent performance degradation when
+handling SSL connections.
+(Min Shen,Wei-Chiu Chuang via Zhe Zhang)
+
+HADOOP-13558. UserGroupInformation created from a Subject incorrectly
+tries to renew the Kerberos ticket. (Xiao Chen).
+
+HADOOP-13601. Fix a log message typo in 
AbstractDelegationTokenSecretManager
+. (Mehran Hassani via Mingliang Liu).
+
+HADOOP-11780. Prevent IPC reader thread death.
+(Daryn Sharp via kihwal).
+
 HADOOP-12597. In kms-site.xml configuration
 "hadoop.security.keystore.JavaKeyStoreProvider.password" should be updated 
with
 new name. (Contributed by Surendra Singh Lilhore via Brahma Reddy Battula)
 
-Release 2.7.3 - UNRELEASED
+Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES
 
@@ -188,6 +211,9 @@ Release 2.7.3 - UNRELEASED
 HADOOP-13312. Updated CHANGES.txt to reflect all the changes in branch-2.7.
 (Akira Ajisaka via vinodkv)
 
+HADOOP-13434. Add bash quoting to Shell class.
+(Owen O'Malley via Arpit Agarwal)
+
   INCOMPATIBLE CHANGES
 
   NEW FEATURES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e241cfc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2e9ce7f..4290133 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -4,12 +4,54 @@ Release 2.7.4 - UNRELEASED
 
   INCOMPATIBLE CHANGES
 
+HDFS-7933. fsck should also report decommissioning replicas.
+(Xiaoyu Yao via Chris Nauroth).
+
   NEW FEATURES
 
+HDFS-9804. Allow long-running Balancer to login with keytab.
+(Xiao Chen via Zhe Zhang)
+
   IMPROVEMENTS
 
+HDFS-8721. Add a metric for number of encryption zones.
+(Rakesh R via Chris Nauroth)
+
+HDFS-8883. NameNode Metrics : Add FSNameSystem lock Queue Length.
+(Anu Engineer via Xiaoyu Yao)
+
+HDFS-8818. Changes the global moveExecutor to per datanode executors and
+changes MAX_SIZE_TO_MOVE to be configurable.(Tsz Wo Nicholas Sze)
+
+HDFS-8200. Refactor FSDirStatAndListingOp. (Haohui Mai)
+
+HDFS-9621. Consolidate FSDirStatAndListingOp#createFileStatus to let
+its INodesInPath parameter always include the target INode.
+(Jing Zhao)
+
+HDFS-10656. Optimize conversion of byte arrays back to path string.
+(Daryn Sharp via kihwal)
+
+HDFS-10674. Optimize creating a full path from an inode.
+(Daryn Sharp via kihwal)
+
+HDFS-10662. Optimize UTF8 string/byte conversions.
+(Daryn Sharp via kihwal)
+
+HDFS-10673. Optimize FSPermissionChecker's internal path usage.
+(Daryn Sharp via kihwal)
+
+HDFS-10744. Internally 

[2/3] hadoop git commit: HDFS-10963. Reduce log level when network topology cannot find enough datanodes. Contributed by Xiao chen

2016-10-06 Thread brahma
HDFS-10963. Reduce log level when network topology cannot find enough 
datanodes. Contributed by Xiao chen

(cherry picked from commit b90fc70d671481564e468550c770c925f25d7db0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d9e4ad78
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d9e4ad78
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d9e4ad78

Branch: refs/heads/branch-2
Commit: d9e4ad78ca284fb92397bcb56e3abd98870e5039
Parents: 5f1432d
Author: Brahma Reddy Battula 
Authored: Thu Oct 6 17:47:31 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 6 17:48:49 2016 +0530

--
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java| 2 +-
 .../server/blockmanagement/BlockPlacementPolicyDefault.java | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9e4ad78/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index cf5b176..0e6c253 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -813,7 +813,7 @@ public class NetworkTopology {
   }
 }
 if (numOfDatanodes == 0) {
-  LOG.warn("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\").",
+  LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\").",
   String.valueOf(scope), String.valueOf(excludedScope));
   return null;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d9e4ad78/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index dc5ed9b..4bc479a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -48,8 +48,9 @@ import com.google.common.annotations.VisibleForTesting;
 public class BlockPlacementPolicyDefault extends BlockPlacementPolicy {
 
   private static final String enableDebugLogging =
-"For more information, please enable DEBUG log level on "
-+ BlockPlacementPolicy.class.getName();
+  "For more information, please enable DEBUG log level on "
+  + BlockPlacementPolicy.class.getName() + " and "
+  + NetworkTopology.class.getName();
 
  private static final ThreadLocal<StringBuilder> debugLoggingBuilder
      = new ThreadLocal<StringBuilder>() {





[1/3] hadoop git commit: HDFS-10963. Reduce log level when network topology cannot find enough datanodes. Contributed by Xiao chen

2016-10-06 Thread brahma
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5f1432d98 -> d9e4ad78c
  refs/heads/branch-2.8 c955a59d6 -> a4ee4f9e5
  refs/heads/trunk 272a21747 -> b90fc70d6


HDFS-10963. Reduce log level when network topology cannot find enough 
datanodes. Contributed by Xiao chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b90fc70d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b90fc70d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b90fc70d

Branch: refs/heads/trunk
Commit: b90fc70d671481564e468550c770c925f25d7db0
Parents: 272a217
Author: Brahma Reddy Battula 
Authored: Thu Oct 6 17:47:31 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 6 17:47:31 2016 +0530

--
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java| 2 +-
 .../server/blockmanagement/BlockPlacementPolicyDefault.java | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b90fc70d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
index cf5b176..0e6c253 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
@@ -813,7 +813,7 @@ public class NetworkTopology {
   }
 }
 if (numOfDatanodes == 0) {
-  LOG.warn("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\").",
+  LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\").",
   String.valueOf(scope), String.valueOf(excludedScope));
   return null;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b90fc70d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
index abfa782..3958c73 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
@@ -49,8 +49,9 @@ import com.google.common.annotations.VisibleForTesting;
 public class BlockPlacementPolicyDefault extends BlockPlacementPolicy {
 
   private static final String enableDebugLogging =
-"For more information, please enable DEBUG log level on "
-+ BlockPlacementPolicy.class.getName();
+  "For more information, please enable DEBUG log level on "
+  + BlockPlacementPolicy.class.getName() + " and "
+  + NetworkTopology.class.getName();
 
  private static final ThreadLocal<StringBuilder> debugLoggingBuilder
      = new ThreadLocal<StringBuilder>() {
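The "{}" placeholders in the patched call are SLF4J parameterized logging: the message string is only assembled when the level is enabled, which is what makes demoting this warn() to debug() essentially free on the block-placement hot path. A minimal sketch of the idiom (class name invented):

  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;

  public class LogLevelSketch {
    private static final Logger LOG = LoggerFactory.getLogger(LogLevelSketch.class);

    void report(String scope, String excludedScope) {
      // No string concatenation happens unless DEBUG is actually on.
      LOG.debug("Failed to find datanode (scope=\"{}\" excludedScope=\"{}\").",
          scope, excludedScope);
    }
  }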





[Hadoop Wiki] Trivial Update of "HowToContribute" by QwertyManiac

2016-10-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HowToContribute" page has been changed by QwertyManiac:
https://wiki.apache.org/hadoop/HowToContribute?action=diff=116=117

Comment:
Add gcc-c++ to RHEL instructions

  
  For RHEL (and hence also CentOS):
  {{{
- yum -y install  lzo-devel  zlib-devel  gcc autoconf automake libtool 
openssl-devel fuse-devel cmake
+ yum -y install  lzo-devel  zlib-devel  gcc gcc-c++ autoconf automake libtool 
openssl-devel fuse-devel cmake
  }}}
  
  For Debian and Ubuntu:




[Hadoop Wiki] Trivial Update of "HowToContribute" by QwertyManiac

2016-10-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HowToContribute" page has been changed by QwertyManiac:
https://wiki.apache.org/hadoop/HowToContribute?action=diff=116=117

Comment:
Add missing cmake to RHEL instructions

  
  For RHEL (and hence also CentOS):
  {{{
- yum -y install  lzo-devel  zlib-devel  gcc autoconf automake libtool 
openssl-devel fuse-devel cmake
+ yum -y install  lzo-devel  zlib-devel  gcc gcc-c++ autoconf automake libtool 
openssl-devel fuse-devel cmake
  }}}
  
  For Debian and Ubuntu:




hadoop git commit: MAPREDUCE-6789. Fix TestAMWebApp failure. Contributed by Daniel Templeton.

2016-10-06 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a868cf404 -> 5f1432d98


MAPREDUCE-6789. Fix TestAMWebApp failure. Contributed by Daniel Templeton.

(cherry picked from commit 272a21747e8a89b6daccc19b71c21de3d17b8d62)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5f1432d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5f1432d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5f1432d9

Branch: refs/heads/branch-2
Commit: 5f1432d98e286259b7ab7bf1cb45f7a4ba1671c6
Parents: a868cf4
Author: Akira Ajisaka 
Authored: Thu Oct 6 15:57:15 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 6 15:59:19 2016 +0900

--
 .../mapreduce/v2/app/webapp/TestAMWebApp.java   |  8 +--
 .../yarn/server/webproxy/ProxyUriUtils.java | 53 
 2 files changed, 48 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5f1432d9/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
index acb31bd..21d37c8 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
@@ -247,9 +247,11 @@ public class TestAMWebApp {
   HttpURLConnection conn = (HttpURLConnection) httpUrl.openConnection();
   conn.setInstanceFollowRedirects(false);
   conn.connect();
-  String expectedURL =
-  scheme + conf.get(YarnConfiguration.PROXY_ADDRESS)
-  + ProxyUriUtils.getPath(app.getAppID(), "/mapreduce");
+
+  // Because we're not calling from the proxy's address, we'll be 
redirected
+  String expectedURL = scheme + conf.get(YarnConfiguration.PROXY_ADDRESS)
+  + ProxyUriUtils.getPath(app.getAppID(), "/mapreduce", true);
+
   Assert.assertEquals(expectedURL,
 conn.getHeaderField(HttpHeaders.LOCATION));
   Assert.assertEquals(HttpStatus.SC_MOVED_TEMPORARILY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5f1432d9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
index e130225..c656742 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
@@ -40,6 +40,8 @@ public class ProxyUriUtils {
   public static final String PROXY_SERVLET_NAME = "proxy";
   /**Base path where the proxy servlet will handle requests.*/
   public static final String PROXY_BASE = "/proxy/";
+  /**Path component added when the proxy redirects the connection.*/
+  public static final String REDIRECT = "redirect/";
   /**Path Specification for the proxy servlet.*/
   public static final String PROXY_PATH_SPEC = PROXY_BASE+"*";
   /**Query Parameter indicating that the URI was approved.*/
@@ -57,27 +59,58 @@ public class ProxyUriUtils {
   
   /**
* Get the proxied path for an application.
-   * @param id the application id to use.
-   * @return the base path to that application through the proxy.
+   *
+   * @param id the application id to use
+   * @return the base path to that application through the proxy
*/
   public static String getPath(ApplicationId id) {
-if(id == null) {
+return getPath(id, false);
+  }
+
+  /**
+   * Get the proxied path for an application.
+   *
+   * @param id the application id to use
+   * @param redirected whether the path should contain the redirect component
+   * @return the base path to that application through the proxy
+   */
+  public static String 

hadoop git commit: MAPREDUCE-6789. Fix TestAMWebApp failure. Contributed by Daniel Templeton.

2016-10-06 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 202325485 -> 272a21747


MAPREDUCE-6789. Fix TestAMWebApp failure. Contributed by Daniel Templeton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/272a2174
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/272a2174
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/272a2174

Branch: refs/heads/trunk
Commit: 272a21747e8a89b6daccc19b71c21de3d17b8d62
Parents: 2023254
Author: Akira Ajisaka 
Authored: Thu Oct 6 15:57:15 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Oct 6 15:57:15 2016 +0900

--
 .../mapreduce/v2/app/webapp/TestAMWebApp.java   |  8 +--
 .../yarn/server/webproxy/ProxyUriUtils.java | 53 
 2 files changed, 48 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/272a2174/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
index acb31bd..21d37c8 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebApp.java
@@ -247,9 +247,11 @@ public class TestAMWebApp {
   HttpURLConnection conn = (HttpURLConnection) httpUrl.openConnection();
   conn.setInstanceFollowRedirects(false);
   conn.connect();
-  String expectedURL =
-  scheme + conf.get(YarnConfiguration.PROXY_ADDRESS)
-  + ProxyUriUtils.getPath(app.getAppID(), "/mapreduce");
+
+  // Because we're not calling from the proxy's address, we'll be 
redirected
+  String expectedURL = scheme + conf.get(YarnConfiguration.PROXY_ADDRESS)
+  + ProxyUriUtils.getPath(app.getAppID(), "/mapreduce", true);
+
   Assert.assertEquals(expectedURL,
 conn.getHeaderField(HttpHeaders.LOCATION));
   Assert.assertEquals(HttpStatus.SC_MOVED_TEMPORARILY,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/272a2174/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
index e130225..c656742 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/ProxyUriUtils.java
@@ -40,6 +40,8 @@ public class ProxyUriUtils {
   public static final String PROXY_SERVLET_NAME = "proxy";
   /**Base path where the proxy servlet will handle requests.*/
   public static final String PROXY_BASE = "/proxy/";
+  /**Path component added when the proxy redirects the connection.*/
+  public static final String REDIRECT = "redirect/";
   /**Path Specification for the proxy servlet.*/
   public static final String PROXY_PATH_SPEC = PROXY_BASE+"*";
   /**Query Parameter indicating that the URI was approved.*/
@@ -57,27 +59,58 @@ public class ProxyUriUtils {
   
   /**
* Get the proxied path for an application.
-   * @param id the application id to use.
-   * @return the base path to that application through the proxy.
+   *
+   * @param id the application id to use
+   * @return the base path to that application through the proxy
*/
   public static String getPath(ApplicationId id) {
-if(id == null) {
+return getPath(id, false);
+  }
+
+  /**
+   * Get the proxied path for an application.
+   *
+   * @param id the application id to use
+   * @param redirected whether the path should contain the redirect component
+   * @return the base path to that application through the proxy
+   */
+  public static String getPath(ApplicationId id, boolean redirected) {
+if (id == null) {
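Given the constants above (PROXY_BASE = "/proxy/", REDIRECT = "redirect/"), the new boolean simply splices the redirect component into the generated path, which is why the test now builds its expected URL with getPath(app.getAppID(), "/mapreduce", true). A string-level sketch (the application id is invented for illustration):

  public class ProxyPathSketch {
    public static void main(String[] args) {
      final String PROXY_BASE = "/proxy/";
      final String REDIRECT = "redirect/";
      String appId = "application_1475712345678_0001";
      System.out.println(PROXY_BASE + appId);            // plain proxied path
      System.out.println(PROXY_BASE + REDIRECT + appId); // path after a proxy redirect
    }
  }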
   

hadoop git commit: MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval. (Haibo Chen via kasha)

2016-10-06 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 a2cdd2215 -> c955a59d6


MAPREDUCE-6740. Enforce mapreduce.task.timeout to be at least 
mapreduce.task.progress-report.interval. (Haibo Chen via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c955a59d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c955a59d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c955a59d

Branch: refs/heads/branch-2.8
Commit: c955a59d630bc2e52a8c716af6ec0c52673d2c1a
Parents: a2cdd22
Author: Karthik Kambatla 
Authored: Wed Oct 5 23:12:45 2016 -0700
Committer: Karthik Kambatla 
Committed: Wed Oct 5 23:12:45 2016 -0700

--
 .../mapreduce/v2/app/TaskHeartbeatHandler.java  | 24 ++-
 .../v2/app/TestTaskHeartbeatHandler.java| 67 
 .../java/org/apache/hadoop/mapred/Task.java |  8 ++-
 .../apache/hadoop/mapreduce/MRJobConfig.java|  9 ++-
 .../hadoop/mapreduce/util/MRJobConfUtil.java| 16 +
 5 files changed, 113 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c955a59d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
index 303b4c1..6a716c7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/TaskHeartbeatHandler.java
@@ -23,10 +23,12 @@ import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.util.MRJobConfUtil;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
 import 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptDiagnosticsUpdateEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEvent;
@@ -67,7 +69,7 @@ public class TaskHeartbeatHandler extends AbstractService {
   //received from a task.
   private Thread lostTaskCheckerThread;
   private volatile boolean stopped;
-  private int taskTimeOut = 5 * 60 * 1000;// 5 mins
+  private long taskTimeOut;
   private int taskTimeOutCheckInterval = 30 * 1000; // 30 seconds.
 
   private final EventHandler eventHandler;
@@ -87,7 +89,19 @@ public class TaskHeartbeatHandler extends AbstractService {
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
 super.serviceInit(conf);
-taskTimeOut = conf.getInt(MRJobConfig.TASK_TIMEOUT, 5 * 60 * 1000);
+taskTimeOut = conf.getLong(
+MRJobConfig.TASK_TIMEOUT, MRJobConfig.DEFAULT_TASK_TIMEOUT_MILLIS);
+
+// enforce task timeout is at least twice as long as task report interval
+long taskProgressReportIntervalMillis = MRJobConfUtil.
+getTaskProgressReportInterval(conf);
+long minimumTaskTimeoutAllowed = taskProgressReportIntervalMillis * 2;
+if(taskTimeOut < minimumTaskTimeoutAllowed) {
+  taskTimeOut = minimumTaskTimeoutAllowed;
+  LOG.info("Task timeout must be at least twice as long as the task " +
+  "status report interval. Setting task timeout to " + taskTimeOut);
+}
+
 taskTimeOutCheckInterval =
 conf.getInt(MRJobConfig.TASK_TIMEOUT_CHECK_INTERVAL_MS, 30 * 1000);
   }
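
Read in isolation, the enforcement in serviceInit() above reduces to a max(): the effective timeout is never allowed below two progress-report intervals, so a task that reports on schedule cannot be declared lost between two consecutive heartbeats. A hedged restatement with illustrative names (not the MRJobConfUtil API):

  // Effective timeout: the configured value, floored at twice the interval.
  static long effectiveTaskTimeout(long configuredTimeoutMs,
      long progressReportIntervalMs) {
    return Math.max(configuredTimeoutMs, 2 * progressReportIntervalMs);
  }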
@@ -140,7 +154,7 @@ public class TaskHeartbeatHandler extends AbstractService {
 
 while (iterator.hasNext()) {
   Map.Entry entry = iterator.next();
-  boolean taskTimedOut = (taskTimeOut > 0) && 
+  boolean taskTimedOut = (taskTimeOut > 0) &&
   (currentTime > (entry.getValue().getLastProgress() + 
taskTimeOut));

   if(taskTimedOut) {
@@ -163,4 +177,8 @@ public class TaskHeartbeatHandler extends AbstractService {
 }
   }
 
+  @VisibleForTesting
+  public long getTaskTimeOut() {
+return taskTimeOut;
+  }
 }
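
For reference, the expiry test in the checker loop above boils down to a single comparison; a hedged standalone rendering (names illustrative), in which a non-positive timeout disables expiry entirely:

  // A task attempt is lost once taskTimeOut has elapsed since its last
  // reported progress; taskTimeOut <= 0 turns the check off.
  static boolean isTimedOut(long taskTimeOutMs, long lastProgressMs, long nowMs) {
    return taskTimeOutMs > 0 && nowMs > lastProgressMs + taskTimeOutMs;
  }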


[2/2] hadoop git commit: Revert "HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test. Contributed by Mingliang Liu"

2016-10-06 Thread liuml07
Revert "HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for
all commands test. Contributed by Mingliang Liu"

This reverts commit 14bacd2b999c91a3f7a19b38fd63404e6f91c4a0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a868cf40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a868cf40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a868cf40

Branch: refs/heads/branch-2
Commit: a868cf40469f70e42e18f07c6ac5ff16adbb2722
Parents: 14bacd2
Author: Mingliang Liu 
Authored: Wed Oct 5 23:10:28 2016 -0700
Committer: Mingliang Liu 
Committed: Wed Oct 5 23:10:28 2016 -0700

--
 .../org/apache/hadoop/hdfs/TestDFSShell.java| 2107 ++
 1 file changed, 1199 insertions(+), 908 deletions(-)
--
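
The refactor being reverted traded per-test MiniDFSCluster instances for a single cluster shared across the class. A hedged JUnit 4 sketch of the two fixture styles (class and test names are illustrative, not TestDFSShell itself):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class ClusterFixtureSketch {
  // Shared-fixture style (what the refactor added): one cluster per class,
  // faster overall but tests can interfere through shared state.
  private static MiniDFSCluster shared;

  @BeforeClass
  public static void setup() throws Exception {
    shared = new MiniDFSCluster.Builder(new Configuration())
        .numDataNodes(2).build();
    shared.waitActive();
  }

  @AfterClass
  public static void tearDown() {
    if (shared != null) {
      shared.shutdown();
    }
  }

  // Per-test style (what the revert restores): each test owns and shuts down
  // its cluster, isolating tests at the cost of repeated startup.
  @Test
  public void perTestCluster() throws Exception {
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(new Configuration()).numDataNodes(2).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      // ... exercise shell commands against fs ...
    } finally {
      cluster.shutdown();
    }
  }
}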



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: Revert "HDFS-10893. Refactor TestDFSShell by setting up MiniDFSCluster once for all commands test. Contributed by Mingliang Liu"

2016-10-06 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 14bacd2b9 -> a868cf404


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a868cf40/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
index 88f0c95..6068978 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
@@ -66,10 +66,6 @@ import org.apache.hadoop.test.PathUtils;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.ToolRunner;
-import org.junit.rules.Timeout;
-import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Rule;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
 import static org.apache.hadoop.fs.permission.AclEntryScope.ACCESS;
@@ -99,37 +95,6 @@ public class TestDFSShell {
   private static final byte[] RAW_A1_VALUE = new byte[]{0x32, 0x32, 0x32};
   private static final byte[] TRUSTED_A1_VALUE = new byte[]{0x31, 0x31, 0x31};
   private static final byte[] USER_A1_VALUE = new byte[]{0x31, 0x32, 0x33};
-  private static final int BLOCK_SIZE = 1024;
-
-  private static MiniDFSCluster miniCluster;
-  private static DistributedFileSystem dfs;
-
-  @BeforeClass
-  public static void setup() throws IOException {
-final Configuration conf = new Configuration();
-conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
-conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
-// set up the shared miniCluster directory so individual tests can launch
-// new clusters without conflict
-conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR,
-GenericTestUtils.getTestDir("TestDFSShell").getAbsolutePath());
-conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_KEY, true);
-conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
-
-miniCluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
-miniCluster.waitActive();
-dfs = miniCluster.getFileSystem();
-  }
-
-  @AfterClass
-  public static void tearDown() {
-if (miniCluster != null) {
-  miniCluster.shutdown(true, true);
-}
-  }
-
-  @Rule
-  public Timeout globalTimeout= new Timeout(30 * 1000); // 30s
 
   static Path writeFile(FileSystem fs, Path f) throws IOException {
 DataOutputStream out = fs.create(f);
@@ -181,74 +146,102 @@ public class TestDFSShell {
 
  @Test (timeout = 30000)
   public void testZeroSizeFile() throws IOException {
-//create a zero size file
-final File f1 = new File(TEST_ROOT_DIR, "f1");
-assertTrue(!f1.exists());
-assertTrue(f1.createNewFile());
-assertTrue(f1.exists());
-assertTrue(f1.isFile());
-assertEquals(0L, f1.length());
-
-//copy to remote
-final Path root = mkdir(dfs, new Path("/test/zeroSizeFile"));
-final Path remotef = new Path(root, "dst");
-show("copy local " + f1 + " to remote " + remotef);
-dfs.copyFromLocalFile(false, false, new Path(f1.getPath()), remotef);
-
-//getBlockSize() should not throw exception
-show("Block size = " + dfs.getFileStatus(remotef).getBlockSize());
-
-//copy back
-final File f2 = new File(TEST_ROOT_DIR, "f2");
-assertTrue(!f2.exists());
-dfs.copyToLocalFile(remotef, new Path(f2.getPath()));
-assertTrue(f2.exists());
-assertTrue(f2.isFile());
-assertEquals(0L, f2.length());
-
-f1.delete();
-f2.delete();
+Configuration conf = new HdfsConfiguration();
+MiniDFSCluster cluster = new 
MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+FileSystem fs = cluster.getFileSystem();
+assertTrue("Not a HDFS: "+fs.getUri(),
+   fs instanceof DistributedFileSystem);
+final DistributedFileSystem dfs = (DistributedFileSystem)fs;
+
+try {
+  //create a zero size file
+  final File f1 = new File(TEST_ROOT_DIR, "f1");
+  assertTrue(!f1.exists());
+  assertTrue(f1.createNewFile());
+  assertTrue(f1.exists());
+  assertTrue(f1.isFile());
+  assertEquals(0L, f1.length());
+  
+  //copy to remote
+  final Path root = mkdir(dfs, new Path("/test/zeroSizeFile"));
+  final Path remotef = new Path(root, "dst");
+  show("copy local " + f1 + " to remote " + remotef);
+  dfs.copyFromLocalFile(false, false, new Path(f1.getPath()), remotef);
+  
+  //getBlockSize() should not throw exception
+  show("Block size = " + dfs.getFileStatus(remotef).getBlockSize());
+
+  //copy back
+  final File f2 = new File(TEST_ROOT_DIR, "f2");
+  assertTrue(!f2.exists());
+  dfs.copyToLocalFile(remotef, new