hadoop git commit: HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed by Grant Sohn.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 08a7253bc -> 7b988e889


HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed 
by Grant Sohn.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b988e88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b988e88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b988e88

Branch: refs/heads/trunk
Commit: 7b988e88992528a0cac2ca8893652c5d4a90c6b9
Parents: 08a7253
Author: Andrew Wang 
Authored: Mon Dec 5 23:18:18 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:18:18 2016 -0800

--
 .../security/authentication/util/ZKSignerSecretProvider.java | 2 +-
 .../src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java| 2 +-
 .../src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java  | 2 +-
 .../org/apache/hadoop/io/erasurecode/rawcoder/util/GF256.java| 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/TFile.java | 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/Utils.java | 2 +-
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java | 4 ++--
 .../src/main/java/org/apache/hadoop/security/KDiag.java  | 2 +-
 .../apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java  | 2 +-
 9 files changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b988e88/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
index 1d16b2d..48dfaaa 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
@@ -258,7 +258,7 @@ public class ZKSignerSecretProvider extends 
RolloverSignerSecretProvider {
 } catch (KeeperException.BadVersionException bve) {
   LOG.debug("Unable to push to znode; another server already did it");
 } catch (Exception ex) {
-  LOG.error("An unexpected exception occured pushing data to ZooKeeper",
+  LOG.error("An unexpected exception occurred pushing data to ZooKeeper",
   ex);
 }
   }
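
The corrected call also shows the two-argument logging form: the Throwable is
passed separately so the logger emits the full stack trace. A minimal
standalone sketch of the pattern (class name and the zkWrite callback are
hypothetical; SLF4J is used for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ZkPushSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ZkPushSketch.class);

  void push(Runnable zkWrite) {
    try {
      zkWrite.run();
    } catch (Exception ex) {
      // Passing ex as its own argument preserves the stack trace;
      // concatenating it into the message string would reduce it to
      // ex.toString().
      LOG.error("An unexpected exception occurred pushing data to ZooKeeper",
          ex);
    }
  }
}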

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b988e88/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
index b14e1f0..1ed01ea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
@@ -525,7 +525,7 @@ public class LocalDirAllocator {
 try {
   advance();
 } catch (IOException ie) {
-  throw new RuntimeException("Can't check existance of " + next, ie);
+  throw new RuntimeException("Can't check existence of " + next, ie);
 }
 if (result == null) {
   throw new NoSuchElementException();
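
For context, Iterator.next() cannot declare checked exceptions, which is why
the IOException from advance() is wrapped in a RuntimeException above. A
minimal sketch of the pattern (PathIterator and its advance() body are
hypothetical stand-ins):

import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

class PathIterator implements Iterator<String> {
  private String next;

  // Moves 'next' to the following candidate; may touch the filesystem.
  private void advance() throws IOException {
    next = null;  // placeholder body for the sketch
  }

  @Override
  public boolean hasNext() {
    return next != null;
  }

  @Override
  public String next() {
    String result = next;
    try {
      advance();
    } catch (IOException ie) {
      // next() declares no checked exceptions, so wrap and rethrow.
      throw new RuntimeException("Can't check existence of " + next, ie);
    }
    if (result == null) {
      throw new NoSuchElementException();
    }
    return result;
  }
}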

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b988e88/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
index 0aa3d65..bf30b22 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
@@ -248,7 +248,7 @@ public class CommandFormat {
 private static final long serialVersionUID = 0L;
 
 public DuplicatedOptionException(String duplicatedOption) {
-  super("option " + duplicatedOption + " already exsits!");
+  super("option " + duplicatedOption + " already exists!");
 }
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b988e88/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GF256.java

hadoop git commit: HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed by Grant Sohn.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6a2f239d9 -> 6bd600eef


HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed 
by Grant Sohn.

(cherry picked from commit 7b988e88992528a0cac2ca8893652c5d4a90c6b9)

 Conflicts:

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GF256.java

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6bd600ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6bd600ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6bd600ee

Branch: refs/heads/branch-2
Commit: 6bd600eef2f4a2ab1515947bd4e909c9954d6acf
Parents: 6a2f239
Author: Andrew Wang 
Authored: Mon Dec 5 23:18:18 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:21:03 2016 -0800

--
 .../security/authentication/util/ZKSignerSecretProvider.java | 2 +-
 .../src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java| 2 +-
 .../src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java  | 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/TFile.java | 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/Utils.java | 2 +-
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java | 4 ++--
 .../apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java  | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6bd600ee/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
index 1d16b2d..48dfaaa 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
@@ -258,7 +258,7 @@ public class ZKSignerSecretProvider extends 
RolloverSignerSecretProvider {
 } catch (KeeperException.BadVersionException bve) {
   LOG.debug("Unable to push to znode; another server already did it");
 } catch (Exception ex) {
-  LOG.error("An unexpected exception occured pushing data to ZooKeeper",
+  LOG.error("An unexpected exception occurred pushing data to ZooKeeper",
   ex);
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6bd600ee/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
index b14e1f0..1ed01ea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
@@ -525,7 +525,7 @@ public class LocalDirAllocator {
 try {
   advance();
 } catch (IOException ie) {
-  throw new RuntimeException("Can't check existance of " + next, ie);
+  throw new RuntimeException("Can't check existence of " + next, ie);
 }
 if (result == null) {
   throw new NoSuchElementException();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6bd600ee/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
index c262231..6a9cdfd 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
@@ -248,7 +248,7 @@ public class CommandFormat {
 private static final long serialVersionUID = 0L;
 
 public DuplicatedOptionException(String duplicatedOption) {
-  super("option " + duplicatedOption + " already exsits!");
+  super("option " + duplicatedOption + " already exists!");
 }
   }
 }


hadoop git commit: HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed by Grant Sohn.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 1a7b2ab9b -> 092d0602c


HADOOP-13861. Spelling errors in logging and exceptions for code. Contributed 
by Grant Sohn.

(cherry picked from commit 7b988e88992528a0cac2ca8893652c5d4a90c6b9)

 Conflicts:

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GF256.java

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java

(cherry picked from commit 6bd600eef2f4a2ab1515947bd4e909c9954d6acf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/092d0602
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/092d0602
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/092d0602

Branch: refs/heads/branch-2.8
Commit: 092d0602c21d9d981fd77c6f66c71d2f4da6f7ce
Parents: 1a7b2ab
Author: Andrew Wang 
Authored: Mon Dec 5 23:18:18 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:21:17 2016 -0800

--
 .../security/authentication/util/ZKSignerSecretProvider.java | 2 +-
 .../src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java| 2 +-
 .../src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java  | 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/TFile.java | 2 +-
 .../src/main/java/org/apache/hadoop/io/file/tfile/Utils.java | 2 +-
 .../src/main/java/org/apache/hadoop/net/NetworkTopology.java | 4 ++--
 .../apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java  | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/092d0602/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
index 1d16b2d..48dfaaa 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java
@@ -258,7 +258,7 @@ public class ZKSignerSecretProvider extends 
RolloverSignerSecretProvider {
 } catch (KeeperException.BadVersionException bve) {
   LOG.debug("Unable to push to znode; another server already did it");
 } catch (Exception ex) {
-  LOG.error("An unexpected exception occured pushing data to ZooKeeper",
+  LOG.error("An unexpected exception occurred pushing data to ZooKeeper",
   ex);
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/092d0602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
index b14e1f0..1ed01ea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
@@ -525,7 +525,7 @@ public class LocalDirAllocator {
 try {
   advance();
 } catch (IOException ie) {
-  throw new RuntimeException("Can't check existance of " + next, ie);
+  throw new RuntimeException("Can't check existence of " + next, ie);
 }
 if (result == null) {
   throw new NoSuchElementException();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/092d0602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
index c262231..6a9cdfd 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java
@@ -248,7 +248,7 @@ public class CommandFormat {
 private static final long serialVersionUID = 0L;
 
 public DuplicatedOptionException(String duplicatedOption) {
-  super("option " + duplicatedOption + " already exsits!");
+  super("option 

[hadoop] Git Push Summary

2016-12-05 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5673 [created] 08a7253bc




hadoop git commit: Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang"

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 79f830ed2 -> 1a7b2ab9b


Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. 
Contributed by Weiwei Yang"

This reverts commit 3b80424d4f2ff073310c9628f4645aab6cfb6d4d.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a7b2ab9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a7b2ab9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a7b2ab9

Branch: refs/heads/branch-2.8
Commit: 1a7b2ab9bb596dcb26c264f354f42cc0aa8a1bfe
Parents: 79f830e
Author: Andrew Wang 
Authored: Mon Dec 5 23:09:30 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:09:30 2016 -0800

--
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  | 32 
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  | 13 ++---
 .../hadoop/hdfs/web/resources/GetOpParam.java   | 12 +
 .../web/resources/NamenodeWebHdfsMethods.java   | 17 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 30 
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 51 
 6 files changed, 4 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a7b2ab9/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 6031609..1166991 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hdfs.web;
 
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
-import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.ContentSummary.Builder;
 import org.apache.hadoop.fs.FileChecksum;
@@ -546,35 +545,4 @@ class JsonUtilClient {
 lastLocatedBlock, isLastBlockComplete, null);
   }
 
-  /** Convert a Json map to BlockLocation. **/
-  static BlockLocation toBlockLocation(Map<?, ?> m)
-  throws IOException{
-long length = ((Number) m.get("length")).longValue();
-long offset = ((Number) m.get("offset")).longValue();
-boolean corrupt = Boolean.
-getBoolean(m.get("corrupt").toString());
-String[] storageIds = toStringArray(getList(m, "storageIds"));
-String[] cachedHosts = toStringArray(getList(m, "cachedHosts"));
-String[] hosts = toStringArray(getList(m, "hosts"));
-String[] names = toStringArray(getList(m, "names"));
-String[] topologyPaths = toStringArray(getList(m, "topologyPaths"));
-StorageType[] storageTypes = toStorageTypeArray(
-getList(m, "storageTypes"));
-return new BlockLocation(names, hosts, cachedHosts,
-topologyPaths, storageIds, storageTypes,
-offset, length, corrupt);
-  }
-
-  static String[] toStringArray(List<?> list) {
-if (list == null) {
-  return null;
-} else {
-  final String[] array = new String[list.size()];
-  int i = 0;
-  for (Object object : list) {
-array[i++] = object.toString();
-  }
-  return array;
-}
-  }
 }
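
An aside on the removed toBlockLocation helper, separate from whatever
motivated the revert: Boolean.getBoolean(String) looks up a system property
of that name rather than parsing its argument, so it behaves quite
differently from Boolean.parseBoolean. A small demonstration:

public class BooleanParsingDemo {
  public static void main(String[] args) {
    // Boolean.getBoolean(name) reads the system property called 'name'
    // and checks whether it equals "true"; it does not parse 'name'.
    System.out.println(Boolean.getBoolean("true"));      // false
    // Boolean.parseBoolean(s) parses the string itself.
    System.out.println(Boolean.parseBoolean("true"));    // true
    System.setProperty("corrupt", "true");
    System.out.println(Boolean.getBoolean("corrupt"));   // true
  }
}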

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a7b2ab9/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 444d77b..8c6718e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1573,20 +1573,13 @@ public class WebHdfsFileSystem extends FileSystem
 statistics.incrementReadOps(1);
 storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
 
-final HttpOpParam.Op op = GetOpParam.Op.GETFILEBLOCKLOCATIONS;
+final HttpOpParam.Op op = GetOpParam.Op.GET_BLOCK_LOCATIONS;
return new FsPathResponseRunner<BlockLocation[]>(op, p,
 new OffsetParam(offset), new LengthParam(length)) {
   @Override
-  @SuppressWarnings("unchecked")
  BlockLocation[] decodeResponse(Map<?, ?> json) throws IOException {
-List list = 
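
Independent of the reverted WebHDFS-specific op, block locations remain
reachable through the generic FileSystem API. A minimal usage sketch (the
URI and path are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationDemo {
  public static void main(String[] args) throws Exception {
    Path p = new Path("webhdfs://namenode:50070/user/example/data.txt");
    FileSystem fs = FileSystem.get(p.toUri(), new Configuration());
    FileStatus st = fs.getFileStatus(p);
    // Resolve which hosts hold each block in the requested byte range.
    BlockLocation[] locs = fs.getFileBlockLocations(st, 0, st.getLen());
    for (BlockLocation loc : locs) {
      System.out.println(loc);  // offset, length and host list per block
    }
  }
}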

hadoop git commit: Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang"

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 54c5880cf -> 6a2f239d9


Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. 
Contributed by Weiwei Yang"

This reverts commit be969e591883aa6cdd69bb62cea4e8904ece65f1.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a2f239d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a2f239d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a2f239d

Branch: refs/heads/branch-2
Commit: 6a2f239d9eade4de4974e99979363a4b6b34b99d
Parents: 54c5880
Author: Andrew Wang 
Authored: Mon Dec 5 23:09:14 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:09:14 2016 -0800

--
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  | 32 
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  | 13 ++---
 .../hadoop/hdfs/web/resources/GetOpParam.java   | 12 +
 .../web/resources/NamenodeWebHdfsMethods.java   | 17 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 30 
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 51 
 6 files changed, 4 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a2f239d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 2e7372b..fbc4324 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.hdfs.web;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
-import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.ContentSummary.Builder;
 import org.apache.hadoop.fs.FileChecksum;
@@ -589,35 +588,4 @@ class JsonUtilClient {
 lastLocatedBlock, isLastBlockComplete, null);
   }
 
-  /** Convert a Json map to BlockLocation. **/
-  static BlockLocation toBlockLocation(Map<?, ?> m)
-  throws IOException{
-long length = ((Number) m.get("length")).longValue();
-long offset = ((Number) m.get("offset")).longValue();
-boolean corrupt = Boolean.
-getBoolean(m.get("corrupt").toString());
-String[] storageIds = toStringArray(getList(m, "storageIds"));
-String[] cachedHosts = toStringArray(getList(m, "cachedHosts"));
-String[] hosts = toStringArray(getList(m, "hosts"));
-String[] names = toStringArray(getList(m, "names"));
-String[] topologyPaths = toStringArray(getList(m, "topologyPaths"));
-StorageType[] storageTypes = toStorageTypeArray(
-getList(m, "storageTypes"));
-return new BlockLocation(names, hosts, cachedHosts,
-topologyPaths, storageIds, storageTypes,
-offset, length, corrupt);
-  }
-
-  static String[] toStringArray(List<?> list) {
-if (list == null) {
-  return null;
-} else {
-  final String[] array = new String[list.size()];
-  int i = 0;
-  for (Object object : list) {
-array[i++] = object.toString();
-  }
-  return array;
-}
-  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a2f239d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index cd7ca74..c0d6de9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1597,20 +1597,13 @@ public class WebHdfsFileSystem extends FileSystem
 statistics.incrementReadOps(1);
 storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
 
-final HttpOpParam.Op op = GetOpParam.Op.GETFILEBLOCKLOCATIONS;
+final HttpOpParam.Op op = GetOpParam.Op.GET_BLOCK_LOCATIONS;
return new FsPathResponseRunner<BlockLocation[]>(op, p,
 new OffsetParam(offset), new LengthParam(length)) {
   @Override
-  @SuppressWarnings("unchecked")
  BlockLocation[] decodeResponse(Map<?, ?> json) throws IOException {

hadoop git commit: Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang"

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk b2a3d6c51 -> 08a7253bc


Revert "HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. 
Contributed by Weiwei Yang"

This reverts commit c7ff34f8dcca3a2024230c5383abd9299daa1b20.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08a7253b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08a7253b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08a7253b

Branch: refs/heads/trunk
Commit: 08a7253bc0eb6c9155457feecb9c5cdc17c3a814
Parents: b2a3d6c
Author: Andrew Wang 
Authored: Mon Dec 5 23:08:49 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 23:09:35 2016 -0800

--
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  | 32 
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  | 13 ++---
 .../hadoop/hdfs/web/resources/GetOpParam.java   | 12 +
 .../web/resources/NamenodeWebHdfsMethods.java   | 17 ---
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 30 
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 51 
 6 files changed, 4 insertions(+), 151 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08a7253b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index 12899f4..a75f4f1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -22,7 +22,6 @@ import com.fasterxml.jackson.databind.ObjectReader;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
-import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.ContentSummary.Builder;
 import org.apache.hadoop.fs.FileChecksum;
@@ -589,35 +588,4 @@ class JsonUtilClient {
 lastLocatedBlock, isLastBlockComplete, null, null);
   }
 
-  /** Convert a Json map to BlockLocation. **/
-  static BlockLocation toBlockLocation(Map<?, ?> m)
-  throws IOException{
-long length = ((Number) m.get("length")).longValue();
-long offset = ((Number) m.get("offset")).longValue();
-boolean corrupt = Boolean.
-getBoolean(m.get("corrupt").toString());
-String[] storageIds = toStringArray(getList(m, "storageIds"));
-String[] cachedHosts = toStringArray(getList(m, "cachedHosts"));
-String[] hosts = toStringArray(getList(m, "hosts"));
-String[] names = toStringArray(getList(m, "names"));
-String[] topologyPaths = toStringArray(getList(m, "topologyPaths"));
-StorageType[] storageTypes = toStorageTypeArray(
-getList(m, "storageTypes"));
-return new BlockLocation(names, hosts, cachedHosts,
-topologyPaths, storageIds, storageTypes,
-offset, length, corrupt);
-  }
-
-  static String[] toStringArray(List<?> list) {
-if (list == null) {
-  return null;
-} else {
-  final String[] array = new String[list.size()];
-  int i = 0;
-  for (Object object : list) {
-array[i++] = object.toString();
-  }
-  return array;
-}
-  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/08a7253b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index e82e9f6..23804b7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1610,20 +1610,13 @@ public class WebHdfsFileSystem extends FileSystem
 statistics.incrementReadOps(1);
 storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
 
-final HttpOpParam.Op op = GetOpParam.Op.GETFILEBLOCKLOCATIONS;
+final HttpOpParam.Op op = GetOpParam.Op.GET_BLOCK_LOCATIONS;
return new FsPathResponseRunner<BlockLocation[]>(op, p,
 new OffsetParam(offset), new LengthParam(length)) {
   @Override
-  @SuppressWarnings("unchecked")
  BlockLocation[] decodeResponse(Map<?, ?> json) throws IOException {

hadoop git commit: HADOOP-13793. S3guard: add inconsistency injection, integration tests. Contributed by Aaron Fabbri

2016-12-05 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13345 cfd0fbf13 -> 013a3c454


HADOOP-13793. S3guard: add inconsistency injection, integration tests. 
Contributed by Aaron Fabbri


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/013a3c45
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/013a3c45
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/013a3c45

Branch: refs/heads/HADOOP-13345
Commit: 013a3c4540f2622fc8182a214e19cdb407f21b8b
Parents: cfd0fbf
Author: Mingliang Liu 
Authored: Mon Dec 5 22:10:14 2016 -0800
Committer: Mingliang Liu 
Committed: Mon Dec 5 22:53:54 2016 -0800

--
 .../org/apache/hadoop/fs/s3a/Constants.java |   2 +-
 .../hadoop/fs/s3a/DefaultS3ClientFactory.java   | 223 +++
 .../fs/s3a/InconsistentAmazonS3Client.java  | 189 
 .../apache/hadoop/fs/s3a/S3ClientFactory.java   | 186 
 .../fs/s3a/ITestS3GuardListConsistency.java |  79 +++
 .../fs/s3a/InconsistentS3ClientFactory.java |  35 +++
 6 files changed, 527 insertions(+), 187 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/013a3c45/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
index 518bd33..c102460 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
@@ -280,7 +280,7 @@ public final class Constants {
   @InterfaceStability.Unstable
  public static final Class<? extends S3ClientFactory>
   DEFAULT_S3_CLIENT_FACTORY_IMPL =
-  S3ClientFactory.DefaultS3ClientFactory.class;
+  DefaultS3ClientFactory.class;
 
   /**
* Maximum number of partitions in a multipart upload: {@value}.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/013a3c45/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
new file mode 100644
index 000..a43a746
--- /dev/null
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import com.amazonaws.ClientConfiguration;
+import com.amazonaws.Protocol;
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.services.s3.AmazonS3;
+import com.amazonaws.services.s3.AmazonS3Client;
+import com.amazonaws.services.s3.S3ClientOptions;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.util.VersionInfo;
+import org.slf4j.Logger;
+
+import java.io.IOException;
+import java.net.URI;
+
+import static org.apache.hadoop.fs.s3a.Constants.*;
+import static org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet;
+import static org.apache.hadoop.fs.s3a.S3AUtils.intOption;
+
+/**
+ * The default factory implementation, which calls the AWS SDK to configure
+ * and create an {@link AmazonS3Client} that communicates with the S3 service.
+ */
+public class DefaultS3ClientFactory extends Configured implements
+S3ClientFactory {
+
+  private static final Logger LOG = S3AFileSystem.LOG;
+
+  @Override
+  public AmazonS3 createS3Client(URI name, URI uri) throws IOException {
+Configuration conf = getConf();
+AWSCredentialsProvider credentials =
+createAWSCredentialProviderSet(name, conf, uri);
+ClientConfiguration awsConf = new ClientConfiguration();
+
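
The factory shown here is resolved through Hadoop's reflective configuration
mechanism. A minimal sketch of that wiring, with the interface, default
class and key name as stand-ins for S3ClientFactory, DefaultS3ClientFactory
and the constant in Constants (the key string below is an assumption):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

public class FactoryLookupSketch {
  interface ClientFactory {                  // stand-in for S3ClientFactory
    Object createClient(URI uri);
  }

  static class DefaultFactory implements ClientFactory {
    public Object createClient(URI uri) { return new Object(); }
  }

  static ClientFactory loadFactory(Configuration conf) {
    // getClass resolves a class name stored under the key, falling back to
    // the default; ReflectionUtils instantiates it and, when the class is
    // Configurable (e.g. extends Configured), injects the Configuration.
    Class<? extends ClientFactory> cls = conf.getClass(
        "fs.s3a.s3.client.factory.impl",     // assumed configuration key
        DefaultFactory.class, ClientFactory.class);
    return ReflectionUtils.newInstance(cls, conf);
  }
}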

hadoop git commit: YARN-5921. Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState. Contributed by Varun Saxena

2016-12-05 Thread naganarasimha_gr
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2fdae7324 -> 54c5880cf


YARN-5921. Incorrect synchronization in 
RMContextImpl#setHAServiceState/getHAServiceState. Contributed by Varun Saxena

(cherry picked from commit f3b8ff54ab08545d7093bf8861b44ec9912e8dc3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54c5880c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54c5880c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54c5880c

Branch: refs/heads/branch-2
Commit: 54c5880cf7bec7aef7729519b36187ee03807410
Parents: 2fdae73
Author: Naganarasimha 
Authored: Tue Dec 6 06:53:38 2016 +0530
Committer: Naganarasimha 
Committed: Tue Dec 6 11:01:14 2016 +0530

--
 .../hadoop/yarn/server/resourcemanager/RMContextImpl.java | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54c5880c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
index d0b1625..1f2485f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
@@ -79,6 +79,8 @@ public class RMContextImpl implements RMContext {
 
   private QueueLimitCalculator queueLimitCalculator;
 
+  private final Object haServiceStateLock = new Object();
+
   /**
* Default constructor. To be used in conjunction with setter methods for
* individual fields.
@@ -253,9 +255,9 @@ public class RMContextImpl implements RMContext {
 this.isHAEnabled = isHAEnabled;
   }
 
-  void setHAServiceState(HAServiceState haServiceState) {
-synchronized (haServiceState) {
-  this.haServiceState = haServiceState;
+  void setHAServiceState(HAServiceState serviceState) {
+synchronized (haServiceStateLock) {
+  this.haServiceState = serviceState;
 }
   }
 
@@ -351,7 +353,7 @@ public class RMContextImpl implements RMContext {
 
   @Override
   public HAServiceState getHAServiceState() {
-synchronized (haServiceState) {
+synchronized (haServiceStateLock) {
   return haServiceState;
 }
   }
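
Restated outside the patch: synchronizing on a field that the critical
section reassigns lets two threads lock different objects, so the guard is
ineffective; a dedicated final lock object restores mutual exclusion. A
self-contained sketch (the holder class is hypothetical):

import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

public class HaStateHolder {
  // Never synchronize on haServiceState itself: once a writer reassigns
  // the field, later threads would lock the new object, not the old one.
  private final Object haServiceStateLock = new Object();
  private HAServiceState haServiceState = HAServiceState.INITIALIZING;

  void setHAServiceState(HAServiceState serviceState) {
    synchronized (haServiceStateLock) {
      this.haServiceState = serviceState;
    }
  }

  HAServiceState getHAServiceState() {
    synchronized (haServiceStateLock) {
      return haServiceState;
    }
  }
}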





[3/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
index 5f33843..9f12986 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
@@ -92,8 +92,6 @@
 
   
 <exclude>src/main/native/testData/*</exclude>
-
-<exclude>src/main/native/gtest/**/*</exclude>
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
index ee02062..d9cec1a 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt
@@ -21,6 +21,9 @@ cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
 list(APPEND CMAKE_MODULE_PATH 
${CMAKE_SOURCE_DIR}/../../../../hadoop-common-project/hadoop-common/)
 include(HadoopCommon)
 
+# Set gtest path
+set(GTEST_SRC_DIR 
${CMAKE_SOURCE_DIR}/../../../../hadoop-common-project/hadoop-common/src/main/native/gtest)
+
 # Add extra compiler and linker flags.
 # -Wno-sign-compare
 hadoop_add_compiler_flags("-DNDEBUG -DSIMPLE_MEMCPY -fno-strict-aliasing")
@@ -94,9 +97,10 @@ include_directories(
 ${CMAKE_BINARY_DIR}
 ${JNI_INCLUDE_DIRS}
 ${SNAPPY_INCLUDE_DIR}
+${GTEST_SRC_DIR}/include
 )
 # add gtest as system library to suppress gcc warnings
-include_directories(SYSTEM ${SRC}/gtest/include)
+include_directories(SYSTEM ${GTEST_SRC_DIR}/include)
 
 set(CMAKE_MACOSX_RPATH TRUE)
 set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
@@ -155,7 +159,7 @@ hadoop_add_dual_library(nativetask
 
 target_link_libraries(nativetask ${NT_DEPEND_LIBRARY})
 
-add_library(gtest ${SRC}/gtest/gtest-all.cc)
+add_library(gtest ${GTEST_SRC_DIR}/gtest-all.cc)
 set_target_properties(gtest PROPERTIES COMPILE_FLAGS "-w")
 add_executable(nttest
 ${SRC}/test/lib/TestByteArray.cc





[6/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. 
Contributed by Varun Vasudev.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b2a3d6c5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b2a3d6c5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b2a3d6c5

Branch: refs/heads/trunk
Commit: b2a3d6c519d83283a49b0d2172dcf1de97f9c4bc
Parents: 8e63fa9
Author: Akira Ajisaka 
Authored: Tue Dec 6 14:01:47 2016 +0900
Committer: Akira Ajisaka 
Committed: Tue Dec 6 14:01:47 2016 +0900

--
 LICENSE.txt | 2 +-
 hadoop-common-project/hadoop-common/pom.xml | 1 +
 .../src/main/native/gtest/gtest-all.cc  | 10403 
 .../src/main/native/gtest/include/gtest/gtest.h | 21192 +
 .../hadoop-mapreduce-client-nativetask/pom.xml  | 2 -
 .../src/CMakeLists.txt  | 8 +-
 .../src/main/native/gtest/gtest-all.cc  | 10403 
 .../src/main/native/gtest/include/gtest/gtest.h | 21192 -
 8 files changed, 31603 insertions(+), 31600 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 04d2daa..2183f0e 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -289,7 +289,7 @@ For 
src/main/native/src/org/apache/hadoop/io/compress/lz4/{lz4.h,lz4.c,lz4hc.h,l
 */
 
 
-For 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest
+For hadoop-common-project/hadoop-common/src/main/native/gtest
 -
 Copyright 2008, Google Inc.
 All rights reserved.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-common-project/hadoop-common/pom.xml
--
diff --git a/hadoop-common-project/hadoop-common/pom.xml 
b/hadoop-common-project/hadoop-common/pom.xml
index 1d45452..c9b282f 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -523,6 +523,7 @@
 
 <exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.h</exclude>
 <exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc.c</exclude>
 <exclude>src/main/native/src/org/apache/hadoop/io/compress/lz4/lz4hc_encoder.h</exclude>
+<exclude>src/main/native/gtest/**/*</exclude>
 src/test/resources/test-untar.tgz
 src/test/resources/test.har/_SUCCESS
 src/test/resources/test.har/_index





[1/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8e63fa98e -> b2a3d6c51


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include/gtest/gtest.h
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include/gtest/gtest.h
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include/gtest/gtest.h
deleted file mode 100644
index c04205d..000
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include/gtest/gtest.h
+++ /dev/null
@@ -1,21192 +0,0 @@
-// Copyright 2005, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: w...@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
-//
-// This header file defines the public API for Google Test.  It should be
-// included by any test program that uses Google Test.
-//
-// IMPORTANT NOTE: Due to limitation of the C++ language, we have to
-// leave some internal implementation details in this header file.
-// They are clearly marked by comments like this:
-//
-//   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
-//
-// Such code is NOT meant to be used by a user directly, and is subject
-// to CHANGE WITHOUT NOTICE.  Therefore DO NOT DEPEND ON IT in a user
-// program!
-//
-// Acknowledgment: Google Test borrowed the idea of automatic test
-// registration from Barthelemy Dagenais' (barthel...@prologique.com)
-// easyUnit framework.
-
-#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
-#define GTEST_INCLUDE_GTEST_GTEST_H_
-
-#include <limits>
-#include <ostream>
-#include <vector>
-
-// Copyright 2005, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Authors: w...@google.com (Zhanyong Wan), eef...@gmail.com (Sean Mcafee)

[5/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc 
b/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc
new file mode 100644
index 000..4f8c08a
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc
@@ -0,0 +1,10403 @@
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: mhe...@google.com (Markus Heule)
+//
+// Google C++ Testing Framework (Google Test)
+//
+// Sometimes it's desirable to build Google Test by compiling a single file.
+// This file serves this purpose.
+
+// This line ensures that gtest.h can be compiled on its own, even
+// when it's fused.
+#include "gtest/gtest.h"
+
+// The following lines pull in the real gtest *.cc files.
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+//
+// The Google C++ Testing Framework (Google Test)
+
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS

[4/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
 
b/hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
new file mode 100644
index 000..c04205d
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
@@ -0,0 +1,21192 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+//
+// The Google C++ Testing Framework (Google Test)
+//
+// This header file defines the public API for Google Test.  It should be
+// included by any test program that uses Google Test.
+//
+// IMPORTANT NOTE: Due to limitation of the C++ language, we have to
+// leave some internal implementation details in this header file.
+// They are clearly marked by comments like this:
+//
+//   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
+//
+// Such code is NOT meant to be used by a user directly, and is subject
+// to CHANGE WITHOUT NOTICE.  Therefore DO NOT DEPEND ON IT in a user
+// program!
+//
+// Acknowledgment: Google Test borrowed the idea of automatic test
+// registration from Barthelemy Dagenais' (barthel...@prologique.com)
+// easyUnit framework.
+
+#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
+#define GTEST_INCLUDE_GTEST_GTEST_H_
+
+#include <limits>
+#include <ostream>
+#include <vector>
+
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: w...@google.com (Zhanyong Wan), eef...@gmail.com (Sean Mcafee)
+//
+// The Google C++ Testing Framework (Google Test)
+//
+// This header file declares functions and macros used internally by
+// Google Test.  They are subject to change without notice.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_

[2/6] hadoop git commit: HADOOP-13835. Move Google Test Framework code from mapreduce to hadoop-common. Contributed by Varun Vasudev.

2016-12-05 Thread aajisaka
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2a3d6c5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/gtest-all.cc
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/gtest-all.cc
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/gtest-all.cc
deleted file mode 100644
index 4f8c08a..000
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/gtest-all.cc
+++ /dev/null
@@ -1,10403 +0,0 @@
-// Copyright 2008, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: mhe...@google.com (Markus Heule)
-//
-// Google C++ Testing Framework (Google Test)
-//
-// Sometimes it's desirable to build Google Test by compiling a single file.
-// This file serves this purpose.
-
-// This line ensures that gtest.h can be compiled on its own, even
-// when it's fused.
-#include "gtest/gtest.h"
-
-// The following lines pull in the real gtest *.cc files.
-// Copyright 2005, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of Google Inc. nor the names of its
-// contributors may be used to endorse or promote products derived from
-// this software without specific prior written permission.
-//
-// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: w...@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
-
-// Copyright 2007, Google Inc.
-// All rights reserved.
-//
-// Redistribution and use in source and binary forms, with or without
-// modification, are permitted provided that the following conditions are
-// met:
-//
-// * Redistributions of source code must retain the above copyright
-// notice, this list of conditions and the following disclaimer.
-// * Redistributions in binary form must reproduce the above
-// copyright notice, this list of conditions and the following disclaimer
-// in the documentation and/or other materials provided with the
-// distribution.
-// * Neither the name of 

hadoop git commit: HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk a2b5d6022 -> 8e63fa98e


HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8e63fa98
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8e63fa98
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8e63fa98

Branch: refs/heads/trunk
Commit: 8e63fa98eabac55bdb2254306584ad1e759c79eb
Parents: a2b5d60
Author: Andrew Wang 
Authored: Mon Dec 5 18:13:53 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 18:13:53 2016 -0800

--
 .../hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8e63fa98/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index b0db3a1..13569fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -352,6 +352,7 @@
 
 Decommissioning
 
+{?DecomNodes}
 
   
 
@@ -370,6 +371,9 @@
   
   {/DecomNodes}
 
+{:else}
+No nodes are decommissioning
+{/DecomNodes}
 
 
 





hadoop git commit: HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 802a1fb2f -> 79f830ed2


HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.

(cherry picked from commit 8e63fa98eabac55bdb2254306584ad1e759c79eb)
(cherry picked from commit 2fdae7324b1c98a2262d0de8e73c0c9099ca6ff0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79f830ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79f830ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79f830ed

Branch: refs/heads/branch-2.8
Commit: 79f830ed2259fce85865c1bb3a416a1b55cb4d54
Parents: 802a1fb
Author: Andrew Wang 
Authored: Mon Dec 5 18:13:53 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 18:14:09 2016 -0800

--
 .../hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79f830ed/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index b0db3a1..13569fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -352,6 +352,7 @@
 
 Decommissioning
 
+{?DecomNodes}
 
   
 
@@ -370,6 +371,9 @@
   
   {/DecomNodes}
 
+{:else}
+No nodes are decommissioning
+{/DecomNodes}
 
 
 





hadoop git commit: HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.

2016-12-05 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 89eaa94c5 -> 2fdae7324


HDFS-10581. Hide redundant table on NameNode WebUI when no nodes are decommissioning. Contributed by Weiwei Yang.

(cherry picked from commit 8e63fa98eabac55bdb2254306584ad1e759c79eb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2fdae732
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2fdae732
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2fdae732

Branch: refs/heads/branch-2
Commit: 2fdae7324b1c98a2262d0de8e73c0c9099ca6ff0
Parents: 89eaa94
Author: Andrew Wang 
Authored: Mon Dec 5 18:13:53 2016 -0800
Committer: Andrew Wang 
Committed: Mon Dec 5 18:14:06 2016 -0800

--
 .../hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2fdae732/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index b0db3a1..13569fe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -352,6 +352,7 @@
 
 Decommissioning
 
+{?DecomNodes}
 
   
 
@@ -370,6 +371,9 @@
   
   {/DecomNodes}
 
+{:else}
+No nodes are decommissioning
+{/DecomNodes}
 
 
 





hadoop git commit: HADOOP-13864. KMS should not require truststore password. Contributed by Mike Yoder.

2016-12-05 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk f3b8ff54a -> a2b5d6022


HADOOP-13864. KMS should not require truststore password. Contributed by Mike Yoder.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a2b5d602
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a2b5d602
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a2b5d602

Branch: refs/heads/trunk
Commit: a2b5d602201a4f619f6a68ec2168a884190d8de6
Parents: f3b8ff5
Author: Xiao Chen 
Authored: Mon Dec 5 12:19:26 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 17:36:00 2016 -0800

--
 .../security/ssl/FileBasedKeyStoresFactory.java   |  6 --
 .../security/ssl/ReloadingX509TrustManager.java   |  2 +-
 .../ssl/TestReloadingX509TrustManager.java| 18 ++
 3 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2b5d602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
index 4e59010..a01d11a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
@@ -202,8 +202,10 @@ public class FileBasedKeyStoresFactory implements KeyStoresFactory {
   SSL_TRUSTSTORE_PASSWORD_TPL_KEY);
   String truststorePassword = getPassword(conf, passwordProperty, "");
   if (truststorePassword.isEmpty()) {
-throw new GeneralSecurityException("The property '" + passwordProperty +
-"' has not been set in the ssl configuration file.");
+// An empty trust store password is legal; the trust store password
+// is only required when writing to a trust store. Otherwise it's
+// an optional integrity check.
+truststorePassword = null;
   }
   long truststoreReloadInterval =
   conf.getLong(

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2b5d602/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
index 597f8d7..2d3afea 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java
@@ -167,7 +167,7 @@ public final class ReloadingX509TrustManager
 KeyStore ks = KeyStore.getInstance(type);
 FileInputStream in = new FileInputStream(file);
 try {
-  ks.load(in, password.toCharArray());
+  ks.load(in, (password == null) ? null : password.toCharArray());
   lastLoaded = file.lastModified();
   LOG.debug("Loaded truststore '" + file + "'");
 } finally {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2b5d602/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java
index bf058cd..3fb203e 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java
@@ -199,4 +199,22 @@ public class TestReloadingX509TrustManager {
 }, reloadInterval, 10 * 1000);
   }
 
+  /** No password when accessing a trust store is legal. */
+  @Test
+  public void testNoPassword() throws Exception {
+KeyPair kp = generateKeyPair("RSA");
+cert1 = generateCertificate("CN=Cert1", kp, 30, "SHA1withRSA");
+cert2 = generateCertificate("CN=Cert2", kp, 30, "SHA1withRSA");
+String truststoreLocation = BASEDIR + "/testreload.jks";
+createTrustStore(truststoreLocation, "password", "cert1", 

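The patch works because java.security.KeyStore.load() accepts a null password: the password only verifies the store's integrity checksum (and is required for writes), so a read-only trust store can be opened without one. Below is a minimal self-contained Java sketch of that JDK behavior; the class and helper names are illustrative, not part of the Hadoop code.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class TrustStoreLoadDemo {
  // Loads a trust store for reading. A null password skips the optional
  // integrity check instead of failing, which is what the patch above
  // does when the truststore password property is not configured.
  static KeyStore load(String path, String password) throws Exception {
    KeyStore ks = KeyStore.getInstance("jks");
    try (InputStream in = new FileInputStream(path)) {
      ks.load(in, (password == null || password.isEmpty())
          ? null : password.toCharArray());
    }
    return ks;
  }

  public static void main(String[] args) throws Exception {
    // Usage: java TrustStoreLoadDemo /path/to/truststore.jks [password]
    KeyStore ks = load(args[0], args.length > 1 ? args[1] : null);
    System.out.println("Trusted entries: " + ks.size());
  }
}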
hadoop git commit: YARN-5921. Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState. Contributed by Varun Saxena

2016-12-05 Thread naganarasimha_gr
Repository: hadoop
Updated Branches:
  refs/heads/trunk dcedb72af -> f3b8ff54a


YARN-5921. Incorrect synchronization in RMContextImpl#setHAServiceState/getHAServiceState. Contributed by Varun Saxena


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3b8ff54
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3b8ff54
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3b8ff54

Branch: refs/heads/trunk
Commit: f3b8ff54ab08545d7093bf8861b44ec9912e8dc3
Parents: dcedb72
Author: Naganarasimha 
Authored: Tue Dec 6 06:53:38 2016 +0530
Committer: Naganarasimha 
Committed: Tue Dec 6 06:53:38 2016 +0530

--
 .../hadoop/yarn/server/resourcemanager/RMContextImpl.java | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3b8ff54/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
index dc8f7d1..3f17ac6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
@@ -80,6 +80,8 @@ public class RMContextImpl implements RMContext {
 
   private QueueLimitCalculator queueLimitCalculator;
 
+  private final Object haServiceStateLock = new Object();
+
   /**
* Default constructor. To be used in conjunction with setter methods for
* individual fields.
@@ -254,9 +256,9 @@ public class RMContextImpl implements RMContext {
 this.isHAEnabled = isHAEnabled;
   }
 
-  void setHAServiceState(HAServiceState haServiceState) {
-synchronized (haServiceState) {
-  this.haServiceState = haServiceState;
+  void setHAServiceState(HAServiceState serviceState) {
+synchronized (haServiceStateLock) {
+  this.haServiceState = serviceState;
 }
   }
 
@@ -352,7 +354,7 @@ public class RMContextImpl implements RMContext {
 
   @Override
   public HAServiceState getHAServiceState() {
-synchronized (haServiceState) {
+synchronized (haServiceStateLock) {
   return haServiceState;
 }
   }

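The bug fixed here is the classic pitfall of synchronizing on a mutable field: the monitor is whatever object the field currently references, so reassigning the field silently changes the lock. A compilable before/after sketch follows; the names are illustrative, not the actual RM code.

public class HaStateHolder {
  public enum HAServiceState { INITIALIZING, ACTIVE, STANDBY }

  // A dedicated, final lock object: readers and writers always share the
  // same monitor for the lifetime of this instance.
  private final Object stateLock = new Object();
  private HAServiceState state = HAServiceState.INITIALIZING;

  // The broken variant this patch removes looked like:
  //   synchronized (state) { state = newState; }
  // That locks the enum constant the field happens to reference right now;
  // once the field is reassigned, concurrent callers can hold different
  // monitors, so the "synchronization" excludes nothing.
  void setState(HAServiceState newState) {
    synchronized (stateLock) {
      state = newState;
    }
  }

  HAServiceState getState() {
    synchronized (stateLock) {
      return state;
    }
  }
}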




hadoop git commit: YARN-5694. ZKRMStateStore can prevent the transition to standby if the ZK node is unreachable. Contributed by Daniel Templeton

2016-12-05 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 c2d936f30 -> db716c27b


YARN-5694. ZKRMStateStore can prevent the transition to standby if the ZK node is unreachable. Contributed by Daniel Templeton


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db716c27
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db716c27
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db716c27

Branch: refs/heads/branch-2.6
Commit: db716c27baf65b4f7c402077b23017df776324b7
Parents: c2d936f
Author: Jian He 
Authored: Mon Dec 5 13:49:03 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 13:49:03 2016 -0800

--
 .../recovery/ZKRMStateStore.java|  9 +-
 .../recovery/TestZKRMStateStore.java| 94 +++-
 2 files changed, 98 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db716c27/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 5343a8b..aaef538 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -389,7 +389,7 @@ public class ZKRMStateStore extends RMStateStore {
   }
 
   @Override
-  protected synchronized void closeInternal() throws Exception {
+  protected void closeInternal() throws Exception {
 if (verifyActiveStatusThread != null) {
   verifyActiveStatusThread.interrupt();
   verifyActiveStatusThread.join(1000);
@@ -976,9 +976,12 @@ public class ZKRMStateStore extends RMStateStore {
   /**
* Helper method that creates fencing node, executes the passed operations,
* and deletes the fencing node.
+   *
+   * @param opList the list of ZK operations to perform
+   * @throws Exception if any of the ZK operations fail
*/
-  private synchronized void doMultiWithRetries(
-  final List opList) throws Exception {
+  @VisibleForTesting
+  synchronized void doMultiWithRetries(final List opList) throws Exception {
 final List execOpList = new ArrayList(opList.size() + 2);
 execOpList.add(createFencingNodePathOp);
 execOpList.addAll(opList);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db716c27/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
index 6af0edd..9adebb7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
@@ -23,6 +23,9 @@ import static org.junit.Assert.assertTrue;
 
 import java.io.IOException;
 import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -34,12 +37,15 @@ import org.apache.hadoop.yarn.conf.HAUtil;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.records.Version;
 import org.apache.hadoop.yarn.server.records.impl.pb.VersionPBImpl;
+import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;

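The key change above is dropping `synchronized` from closeInternal(), so shutdown cannot block behind a ZK operation that is stuck on an unreachable quorum; the close path only interrupts the background thread and waits a bounded time for it. A self-contained sketch of that interrupt-and-bounded-join pattern follows; the class and thread names are illustrative.

public class StoreWithWorker {
  // Stand-in for ZKRMStateStore's background verify thread; the loop body
  // here just sleeps where the real thread performs blocking ZK calls.
  private final Thread worker = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        Thread.sleep(1000);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the flag and exit
      }
    }
  }, "verify-active-status");

  void start() {
    worker.start();
  }

  // Deliberately NOT synchronized: if close() shared a monitor with the
  // worker's in-flight operations, shutdown (and hence the RM's transition
  // to standby) could block indefinitely behind an unreachable ZK quorum.
  void close() throws InterruptedException {
    worker.interrupt();
    worker.join(1000); // bounded wait, mirroring the patch's join(1000)
  }

  public static void main(String[] args) throws InterruptedException {
    StoreWithWorker s = new StoreWithWorker();
    s.start();
    s.close();
  }
}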
hadoop git commit: YARN-5694. ZKRMStateStore can prevent the transition to standby if the ZK node is unreachable. Contributed by Daniel Templeton

2016-12-05 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 6d8df4e81 -> c95ab6b89


YARN-5694. ZKRMStateStore can prevent the transition to standby if the ZK node is unreachable. Contributed by Daniel Templeton


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c95ab6b8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c95ab6b8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c95ab6b8

Branch: refs/heads/branch-2.7
Commit: c95ab6b895828952d0aaed3e89d119d607ae26b0
Parents: 6d8df4e
Author: Jian He 
Authored: Mon Dec 5 13:49:49 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 13:49:49 2016 -0800

--
 .../recovery/ZKRMStateStore.java|   7 +-
 .../recovery/TestZKRMStateStore.java| 101 ++-
 2 files changed, 103 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c95ab6b8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 9e4eec2..8a7fa4e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -405,7 +405,7 @@ public class ZKRMStateStore extends RMStateStore {
   }
 
   @Override
-  protected synchronized void closeInternal() throws Exception {
+  protected void closeInternal() throws Exception {
 if (verifyActiveStatusThread != null) {
   verifyActiveStatusThread.interrupt();
   verifyActiveStatusThread.join(1000);
@@ -963,8 +963,9 @@ public class ZKRMStateStore extends RMStateStore {
* Helper method that creates fencing node, executes the passed operations,
* and deletes the fencing node.
*/
-  private synchronized void doStoreMultiWithRetries(
-  final List opList) throws Exception {
+  @VisibleForTesting
+  synchronized void doStoreMultiWithRetries(final List opList)
+  throws Exception {
 final List execOpList = new ArrayList(opList.size() + 2);
 execOpList.add(createFencingNodePathOp);
 execOpList.addAll(opList);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c95ab6b8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
index ea66c14..f17bccc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestZKRMStateStore.java
@@ -65,14 +65,17 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptM
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;
 import org.apache.hadoop.yarn.server.resourcemanager.security.ClientToAMTokenSecretManagerInRM;
 import org.apache.hadoop.yarn.util.ConverterUtils;
+import java.util.concurrent.atomic.AtomicBoolean;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.ZooKeeper;
 import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.Op;
 import org.apache.zookeeper.data.ACL;
 import org.apache.zookeeper.data.Stat;
 import org.junit.Assert;
 import org.junit.Test;
+import static org.junit.Assert.assertFalse;
 
 public class TestZKRMStateStore extends RMStateStoreTestBase {
 
@@ -80,7 +83,6 @@ public class TestZKRMStateStore 

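The javadoc added in this patch describes the fencing idiom: the caller's ZooKeeper operations are wrapped between creation and deletion of a fencing znode, so the whole batch commits atomically only while this ResourceManager holds the fence. A hedged sketch of that pattern using the plain ZooKeeper multi() API follows; the fencing path and ACLs here are illustrative, not the actual RM store layout.

import java.util.ArrayList;
import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Op;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

final class FencedMulti {
  static void doMultiWithFence(ZooKeeper zk, String fencingPath, List<Op> ops)
      throws KeeperException, InterruptedException {
    List<Op> execOps = new ArrayList<>(ops.size() + 2);
    // If another writer has already created the fence node, this create
    // fails and the whole transaction is rejected atomically.
    execOps.add(Op.create(fencingPath, new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT));
    execOps.addAll(ops);
    // Deleting the fence within the same multi() releases it together
    // with the committed updates.
    execOps.add(Op.delete(fencingPath, -1));
    zk.multi(execOps);
  }
}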
hadoop git commit: Revert "HADOOP-10930. Refactor: Wrap Datanode IO related operations. Contributed by Xiaoyu Yao."

2016-12-05 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 15dd1f338 -> dcedb72af


Revert "HADOOP-10930. Refactor: Wrap Datanode IO related operations. 
Contributed by Xiaoyu Yao."

This reverts commit aeecfa24f4fb6af289920cbf8830c394e66bd78e.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dcedb72a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dcedb72a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dcedb72a

Branch: refs/heads/trunk
Commit: dcedb72af468128458e597f08d22f5c34b744ae5
Parents: 15dd1f3
Author: Xiaoyu Yao 
Authored: Mon Dec 5 12:08:48 2016 -0800
Committer: Xiaoyu Yao 
Committed: Mon Dec 5 12:44:20 2016 -0800

--
 .../hdfs/server/datanode/BlockReceiver.java |  66 ---
 .../hdfs/server/datanode/BlockSender.java   | 105 +++
 .../hadoop/hdfs/server/datanode/DNConf.java |   4 -
 .../hdfs/server/datanode/DataStorage.java   |   5 -
 .../hdfs/server/datanode/LocalReplica.java  | 179 ++-
 .../server/datanode/LocalReplicaInPipeline.java |  30 ++--
 .../hdfs/server/datanode/ReplicaInPipeline.java |   4 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   3 +-
 .../datanode/fsdataset/ReplicaInputStreams.java | 102 +--
 .../fsdataset/ReplicaOutputStreams.java | 107 +--
 .../datanode/fsdataset/impl/BlockPoolSlice.java |  32 ++--
 .../impl/FsDatasetAsyncDiskService.java |   7 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |   5 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java   |   5 +-
 .../org/apache/hadoop/hdfs/TestFileAppend.java  |   2 +-
 .../server/datanode/SimulatedFSDataset.java |  13 +-
 .../hdfs/server/datanode/TestBlockRecovery.java |   2 +-
 .../server/datanode/TestSimulatedFSDataset.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |   4 +-
 .../extdataset/ExternalReplicaInPipeline.java   |   6 +-
 20 files changed, 238 insertions(+), 445 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcedb72a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
index f372072..39419c1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
@@ -24,7 +24,10 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.EOFException;
+import java.io.FileDescriptor;
+import java.io.FileOutputStream;
 import java.io.IOException;
+import java.io.OutputStream;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
 import java.nio.ByteBuffer;
@@ -50,6 +53,7 @@ import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.util.DataTransferThrottler;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.util.Daemon;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.hadoop.util.StringUtils;
@@ -84,6 +88,8 @@ class BlockReceiver implements Closeable {
* the DataNode needs to recalculate checksums before writing.
*/
   private final boolean needsChecksumTranslation;
+  private OutputStream out = null; // to block file at local disk
+  private FileDescriptor outFd;
   private DataOutputStream checksumOut = null; // to crc file at local disk
   private final int bytesPerChecksum;
   private final int checksumSize;
@@ -244,8 +250,7 @@ class BlockReceiver implements Closeable {
   
   final boolean isCreate = isDatanode || isTransfer 
   || stage == BlockConstructionStage.PIPELINE_SETUP_CREATE;
-  streams = replicaInfo.createStreams(isCreate, requestedChecksum,
-  datanodeSlowLogThresholdMs);
+  streams = replicaInfo.createStreams(isCreate, requestedChecksum);
   assert streams != null : "null streams!";
 
   // read checksum meta information
@@ -255,6 +260,13 @@ class BlockReceiver implements Closeable {
   this.bytesPerChecksum = diskChecksum.getBytesPerChecksum();
   this.checksumSize = diskChecksum.getChecksumSize();
 
+  this.out = streams.getDataOut();
+  if (out instanceof FileOutputStream) {
+this.outFd = ((FileOutputStream)out).getFD();
+  } else {
+LOG.warn("Could not get file descriptor for 

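The reverted code above recovers a FileDescriptor from the block file's OutputStream so the DataNode can force data to disk; the instanceof check is needed because only a FileOutputStream exposes its descriptor, and a wrapped or buffered stream does not. A standalone Java sketch of that probe follows; the class and file names are illustrative.

import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public final class FdProbe {
  // Returns the file descriptor when the stream is backed by a real file,
  // or null when the descriptor is unreachable. Callers needing
  // fsync-style durability must handle the null case.
  static FileDescriptor fdOf(OutputStream out) throws IOException {
    if (out instanceof FileOutputStream) {
      return ((FileOutputStream) out).getFD();
    }
    return null;
  }

  public static void main(String[] args) throws IOException {
    try (FileOutputStream fos = new FileOutputStream("probe.tmp")) {
      fos.write(42);
      FileDescriptor fd = fdOf(fos);
      if (fd != null) {
        fd.sync(); // force buffered bytes to the storage device
      }
    }
  }
}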
[2/6] hadoop git commit: HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.

2016-12-05 Thread asuresh
HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/291df5c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/291df5c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/291df5c7

Branch: refs/heads/YARN-5085
Commit: 291df5c7fb713d5442ee29eb3f272127afb05a3c
Parents: c51bfd2
Author: Xiao Chen 
Authored: Mon Dec 5 09:34:39 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 09:35:17 2016 -0800

--
 .../apache/hadoop/crypto/key/KeyProviderCryptoExtension.java  | 5 +++--
 .../org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java| 7 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 1ecd9f6..0543222 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -427,8 +427,9 @@ public class KeyProviderCryptoExtension extends
 
   @Override
   public void close() throws IOException {
-if (getKeyProvider() != null) {
-  getKeyProvider().close();
+KeyProvider provider = getKeyProvider();
+if (provider != null && provider != this) {
+  provider.close();
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index cd773dd..40ae19f 100644
--- a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -40,9 +40,9 @@ import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
 
 import java.io.File;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URL;
-import java.util.List;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
@@ -215,6 +215,11 @@ public class KMSWebApp implements ServletContextListener {
 
   @Override
   public void contextDestroyed(ServletContextEvent sce) {
+try {
+  keyProviderCryptoExtension.close();
+} catch (IOException ioe) {
+  LOG.error("Error closing KeyProviderCryptoExtension", ioe);
+}
 kmsAudit.shutdown();
 kmsAcls.stopReloader();
 jmxReporter.stop();


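Two details carry this fix: contextDestroyed() now closes the crypto extension (logging rather than rethrowing, so listener shutdown continues), and close() in the extension guards with provider != this so a provider that is its own delegate cannot recurse into itself. A minimal Java sketch of that guard follows; the types are illustrative, not the KMS classes.

import java.io.Closeable;
import java.io.IOException;

// A decorator closes its delegate, but guards against the degenerate case
// where the delegate IS the decorator itself, which would otherwise call
// close() on itself forever.
abstract class ClosingDecorator implements Closeable {
  protected abstract Closeable getDelegate();

  @Override
  public void close() throws IOException {
    Closeable delegate = getDelegate();
    if (delegate != null && delegate != this) {
      delegate.close();
    }
  }
}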



[1/2] hadoop git commit: HADOOP-13675. Bug in return value for delete() calls in WASB. Contributed by Dushyanth

2016-12-05 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0f6fbfc0d -> 89eaa94c5
  refs/heads/trunk 8c4680852 -> 15dd1f338


HADOOP-13675. Bug in return value for delete() calls in WASB. Contributed by Dushyanth


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15dd1f33
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15dd1f33
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15dd1f33

Branch: refs/heads/trunk
Commit: 15dd1f3381069c5fdc6690e3ab1907a133ba14bf
Parents: 8c46808
Author: Mingliang Liu 
Authored: Mon Dec 5 12:04:07 2016 -0800
Committer: Mingliang Liu 
Committed: Mon Dec 5 12:04:07 2016 -0800

--
 .../fs/azure/AzureNativeFileSystemStore.java|  31 +++--
 .../hadoop/fs/azure/NativeAzureFileSystem.java  |  25 ++--
 .../hadoop/fs/azure/NativeFileSystemStore.java  |  23 +++-
 ...estNativeAzureFileSystemConcurrencyLive.java | 119 +++
 4 files changed, 176 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15dd1f33/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
--
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 3e864a4..ac6c514 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -2045,10 +2045,10 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
*  The key to search for.
* @return The wanted directory, or null if not found.
*/
-  private static FileMetadata getDirectoryInList(
+  private static FileMetadata getFileMetadataInList(
   final Iterable list, String key) {
 for (FileMetadata current : list) {
-  if (current.isDir() && current.getKey().equals(key)) {
+  if (current.getKey().equals(key)) {
 return current;
   }
 }
@@ -2114,7 +2114,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the metadata to the list, but remove any existing duplicate
   // entries first that we may have added by finding nested files.
-  FileMetadata existing = getDirectoryInList(fileMetadata, blobKey);
+  FileMetadata existing = getFileMetadataInList(fileMetadata, blobKey);
   if (existing != null) {
 fileMetadata.remove(existing);
   }
@@ -2141,7 +2141,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the directory metadata to the list only if it's not already
   // there.
-  if (getDirectoryInList(fileMetadata, dirKey) == null) {
+  if (getFileMetadataInList(fileMetadata, dirKey) == null) {
 fileMetadata.add(directoryMetadata);
   }
 
@@ -2249,7 +2249,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the directory metadata to the list only if it's not already
   // there.
-  FileMetadata existing = getDirectoryInList(aFileMetadataList, blobKey);
+  FileMetadata existing = getFileMetadataInList(aFileMetadataList, blobKey);
   if (existing != null) {
 aFileMetadataList.remove(existing);
   }
@@ -2278,7 +2278,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 // absolute path is being used or not.
 String dirKey = normalizeKey(directory);
 
-if (getDirectoryInList(aFileMetadataList, dirKey) == null) {
+if (getFileMetadataInList(aFileMetadataList, dirKey) == null) {
   // Reached the targeted listing depth. Return metadata for the
   // directory using default permissions.
   //
@@ -2376,18 +2376,24 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 }
   }
 
+  /**
+   * API implementation to delete a blob in the back end azure storage.
+   */
   @Override
-  public void delete(String key, SelfRenewingLease lease) throws IOException {
+  public boolean delete(String key, SelfRenewingLease lease) throws IOException {
 try {
   if (checkContainer(ContainerAccessType.ReadThenWrite) == ContainerState.DoesntExist) {
 // Container doesn't exist, no need to do anything
-return;
+return true;
   }
 
   // Get the blob reference and delete it.
   CloudBlobWrapper blob = getBlobReference(key);
   

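The interface change above turns delete() from void into boolean so NativeAzureFileSystem can report whether a blob was actually removed, which matters when concurrent clients race to delete the same key (the scenario TestNativeAzureFileSystemConcurrencyLive exercises). Below is an illustrative in-memory model of that contract, not the WASB implementation itself.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class InMemoryBlobStore {
  private final Map<String, byte[]> blobs = new ConcurrentHashMap<>();

  void put(String key, byte[] data) {
    blobs.put(key, data);
  }

  // Reports whether this call removed the blob. When two clients race to
  // delete the same key, exactly one of them observes true, so the
  // FileSystem layer can return an accurate result to each caller.
  boolean delete(String key) {
    return blobs.remove(key) != null;
  }
}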
[4/6] hadoop git commit: Revert "HDFS-11201. Spelling errors in the logging, help, assertions and exception messages. Contributed by Grant Sohn."

2016-12-05 Thread asuresh
Revert "HDFS-11201. Spelling errors in the logging, help, assertions and 
exception messages. Contributed by Grant Sohn."

This reverts commit b9522e86a55564c2ccb5ca3f1ca871965cbe74de.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b5cceaf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b5cceaf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b5cceaf

Branch: refs/heads/YARN-5085
Commit: 1b5cceaffbdde50a87ede81552dc380832db8e79
Parents: b9522e8
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 10:54:43 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 10:54:43 2016 -0800

--
 .../src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java | 4 ++--
 .../main/java/org/apache/hadoop/lib/server/ServerException.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java   | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/DFSUtil.java| 2 +-
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java  | 2 +-
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java  | 2 +-
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java | 2 +-
 .../hadoop/hdfs/server/diskbalancer/command/QueryCommand.java| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java  | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java   | 4 ++--
 .../server/namenode/web/resources/NamenodeWebHdfsMethods.java| 2 +-
 13 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index aabd6fd..5783f90 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1052,7 +1052,7 @@ public class DFSInputStream extends FSInputStream
 reader.getNetworkDistance(), nread);
 if (nread != len) {
   throw new IOException("truncated return from reader.read(): " +
-  "expected " + len + ", got " + nread);
+  "excpected " + len + ", got " + nread);
 }
 DFSClientFaultInjector.get().readFromDatanodeDelay();
 return;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
index db064e4..51ad08f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
@@ -57,11 +57,11 @@ public class LongBitFormat implements Serializable {
   public long combine(long value, long record) {
 if (value < MIN) {
   throw new IllegalArgumentException(
-  "Illegal value: " + NAME + " = " + value + " < MIN = " + MIN);
+  "Illagal value: " + NAME + " = " + value + " < MIN = " + MIN);
 }
 if (value > MAX) {
   throw new IllegalArgumentException(
-  "Illegal value: " + NAME + " = " + value + " > MAX = " + MAX);
+  "Illagal value: " + NAME + " = " + value + " > MAX = " + MAX);
 }
 return (record & ~MASK) | (value << OFFSET);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
index fdca64e..e3759ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
@@ 

[2/2] hadoop git commit: HADOOP-13675. Bug in return value for delete() calls in WASB. Contributed by Dushyanth

2016-12-05 Thread liuml07
HADOOP-13675. Bug in return value for delete() calls in WASB. Contributed by Dushyanth

(cherry picked from commit 15dd1f3381069c5fdc6690e3ab1907a133ba14bf)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/89eaa94c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/89eaa94c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/89eaa94c

Branch: refs/heads/branch-2
Commit: 89eaa94c59681e8d3d8ff8ce758d31908e26c5c8
Parents: 0f6fbfc
Author: Mingliang Liu 
Authored: Mon Dec 5 12:04:07 2016 -0800
Committer: Mingliang Liu 
Committed: Mon Dec 5 12:06:35 2016 -0800

--
 .../fs/azure/AzureNativeFileSystemStore.java|  31 +++--
 .../hadoop/fs/azure/NativeAzureFileSystem.java  |  25 ++--
 .../hadoop/fs/azure/NativeFileSystemStore.java  |  23 +++-
 ...estNativeAzureFileSystemConcurrencyLive.java | 119 +++
 4 files changed, 176 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/89eaa94c/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
--
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index eaca82e..dc49596 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -2045,10 +2045,10 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
*  The key to search for.
* @return The wanted directory, or null if not found.
*/
-  private static FileMetadata getDirectoryInList(
+  private static FileMetadata getFileMetadataInList(
   final Iterable list, String key) {
 for (FileMetadata current : list) {
-  if (current.isDir() && current.getKey().equals(key)) {
+  if (current.getKey().equals(key)) {
 return current;
   }
 }
@@ -2114,7 +2114,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the metadata to the list, but remove any existing duplicate
   // entries first that we may have added by finding nested files.
-  FileMetadata existing = getDirectoryInList(fileMetadata, blobKey);
+  FileMetadata existing = getFileMetadataInList(fileMetadata, blobKey);
   if (existing != null) {
 fileMetadata.remove(existing);
   }
@@ -2141,7 +2141,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the directory metadata to the list only if it's not already
   // there.
-  if (getDirectoryInList(fileMetadata, dirKey) == null) {
+  if (getFileMetadataInList(fileMetadata, dirKey) == null) {
 fileMetadata.add(directoryMetadata);
   }
 
@@ -2249,7 +2249,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 
   // Add the directory metadata to the list only if it's not already
   // there.
-  FileMetadata existing = getDirectoryInList(aFileMetadataList, blobKey);
+  FileMetadata existing = getFileMetadataInList(aFileMetadataList, blobKey);
   if (existing != null) {
 aFileMetadataList.remove(existing);
   }
@@ -2278,7 +2278,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 // absolute path is being used or not.
 String dirKey = normalizeKey(directory);
 
-if (getDirectoryInList(aFileMetadataList, dirKey) == null) {
+if (getFileMetadataInList(aFileMetadataList, dirKey) == null) {
   // Reached the targeted listing depth. Return metadata for the
   // directory using default permissions.
   //
@@ -2376,18 +2376,24 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
 }
   }
 
+  /**
+   * API implementation to delete a blob in the back end azure storage.
+   */
   @Override
-  public void delete(String key, SelfRenewingLease lease) throws IOException {
+  public boolean delete(String key, SelfRenewingLease lease) throws IOException {
 try {
   if (checkContainer(ContainerAccessType.ReadThenWrite) == ContainerState.DoesntExist) {
 // Container doesn't exist, no need to do anything
-return;
+return true;
   }
 
   // Get the blob reference and delete it.
   CloudBlobWrapper blob = getBlobReference(key);
   if (blob.exists(getInstrumentedContext())) {

[3/6] hadoop git commit: HDFS-11201. Spelling errors in the logging, help, assertions and exception messages. Contributed by Grant Sohn.

2016-12-05 Thread asuresh
HDFS-11201. Spelling errors in the logging, help, assertions and exception messages. Contributed by Grant Sohn.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9522e86
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9522e86
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9522e86

Branch: refs/heads/YARN-5085
Commit: b9522e86a55564c2ccb5ca3f1ca871965cbe74de
Parents: 291df5c
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 09:37:12 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 10:48:25 2016 -0800

--
 .../src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java | 4 ++--
 .../main/java/org/apache/hadoop/lib/server/ServerException.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java   | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/DFSUtil.java| 2 +-
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java  | 2 +-
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java  | 2 +-
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java | 2 +-
 .../hadoop/hdfs/server/diskbalancer/command/QueryCommand.java| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java  | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java   | 4 ++--
 .../server/namenode/web/resources/NamenodeWebHdfsMethods.java| 2 +-
 13 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 5783f90..aabd6fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1052,7 +1052,7 @@ public class DFSInputStream extends FSInputStream
 reader.getNetworkDistance(), nread);
 if (nread != len) {
   throw new IOException("truncated return from reader.read(): " +
-  "excpected " + len + ", got " + nread);
+  "expected " + len + ", got " + nread);
 }
 DFSClientFaultInjector.get().readFromDatanodeDelay();
 return;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
index 51ad08f..db064e4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
@@ -57,11 +57,11 @@ public class LongBitFormat implements Serializable {
   public long combine(long value, long record) {
 if (value < MIN) {
   throw new IllegalArgumentException(
-  "Illagal value: " + NAME + " = " + value + " < MIN = " + MIN);
+  "Illegal value: " + NAME + " = " + value + " < MIN = " + MIN);
 }
 if (value > MAX) {
   throw new IllegalArgumentException(
-  "Illagal value: " + NAME + " = " + value + " > MAX = " + MAX);
+  "Illegal value: " + NAME + " = " + value + " > MAX = " + MAX);
 }
 return (record & ~MASK) | (value << OFFSET);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
index e3759ce..fdca64e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
@@ -38,7 +38,7 @@ public class ServerException extends XException {
 

[6/6] hadoop git commit: HDFS-11094. Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter. Contributed by Eric Badger

2016-12-05 Thread asuresh
HDFS-11094. Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter. Contributed by Eric Badger


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8c468085
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8c468085
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8c468085

Branch: refs/heads/YARN-5085
Commit: 8c4680852b20ad0e65e77dd123c9ba5bb6f2fa39
Parents: 43ebff2
Author: Mingliang Liu 
Authored: Mon Dec 5 11:34:13 2016 -0800
Committer: Mingliang Liu 
Committed: Mon Dec 5 11:48:58 2016 -0800

--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 76 +---
 .../hdfs/server/datanode/BPOfferService.java| 10 ++-
 .../hdfs/server/datanode/BPServiceActor.java|  4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  8 ++-
 .../hdfs/server/protocol/NamespaceInfo.java | 26 +++
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto |  2 +
 .../server/datanode/TestBPOfferService.java | 31 
 .../hdfs/server/namenode/TestFSNamesystem.java  | 21 ++
 8 files changed, 148 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c468085/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 78371f5..1e6d882 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -26,7 +26,7 @@ import com.google.protobuf.ByteString;
 
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
-import org.apache.hadoop.ha.proto.HAServiceProtocolProtos;
+import org.apache.hadoop.ha.proto.HAServiceProtocolProtos.HAServiceStateProto;
 import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
@@ -338,7 +338,8 @@ public class PBHelper {
 StorageInfoProto storage = info.getStorageInfo();
 return new NamespaceInfo(storage.getNamespceID(), storage.getClusterID(),
 info.getBlockPoolID(), storage.getCTime(), info.getBuildVersion(),
-info.getSoftwareVersion(), info.getCapabilities());
+info.getSoftwareVersion(), info.getCapabilities(),
+convert(info.getState()));
   }
 
   public static NamenodeCommand convert(NamenodeCommandProto cmd) {
@@ -744,43 +745,68 @@ public class PBHelper {
   }
   
   public static NamespaceInfoProto convert(NamespaceInfo info) {
-return NamespaceInfoProto.newBuilder()
-.setBlockPoolID(info.getBlockPoolID())
+NamespaceInfoProto.Builder builder = NamespaceInfoProto.newBuilder();
+builder.setBlockPoolID(info.getBlockPoolID())
 .setBuildVersion(info.getBuildVersion())
 .setUnused(0)
 .setStorageInfo(PBHelper.convert((StorageInfo)info))
 .setSoftwareVersion(info.getSoftwareVersion())
-.setCapabilities(info.getCapabilities())
-.build();
+.setCapabilities(info.getCapabilities());
+HAServiceState state = info.getState();
+if(state != null) {
+  builder.setState(convert(info.getState()));
+}
+return builder.build();
   }
 
-  public static NNHAStatusHeartbeat convert(NNHAStatusHeartbeatProto s) {
-if (s == null) return null;
-switch (s.getState()) {
+  public static HAServiceState convert(HAServiceStateProto s) {
+if (s == null) {
+  return null;
+}
+switch (s) {
+case INITIALIZING:
+  return HAServiceState.INITIALIZING;
 case ACTIVE:
-  return new NNHAStatusHeartbeat(HAServiceState.ACTIVE, s.getTxid());
+  return HAServiceState.ACTIVE;
 case STANDBY:
-  return new NNHAStatusHeartbeat(HAServiceState.STANDBY, s.getTxid());
+  return HAServiceState.STANDBY;
 default:
-      throw new IllegalArgumentException("Unexpected NNHAStatusHeartbeat.State:"
-          + s.getState());
+      throw new IllegalArgumentException("Unexpected HAServiceStateProto:"
+          + s);
 }
   }
 
+  public static HAServiceStateProto convert(HAServiceState s) {
+if (s == null) {
+  return null;
+}
+switch (s) {
+case INITIALIZING:
+  return HAServiceStateProto.INITIALIZING;
+case ACTIVE:
+  return HAServiceStateProto.ACTIVE;
+case STANDBY:
+  return HAServiceStateProto.STANDBY;
+default:
+  throw new IllegalArgumentException("Unexpected HAServiceState:"
+   

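The PBHelper hunk above shows the wire-compatibility pattern behind this change: required fields are always populated on the builder, while the new state field is set only when a value exists, so peers that predate the field still parse the message. Below is a minimal plain-Java sketch of that guarded-builder shape; NamespaceRecord and its Builder are illustrative stand-ins, not the real Hadoop or protobuf types.

import java.util.Optional;

// Sketch: populate an optional field only when a value is present.
final class NamespaceRecord {
  final String blockPoolId;            // always present
  final Optional<String> haState;      // absent when sent by an old peer

  private NamespaceRecord(String blockPoolId, String haState) {
    this.blockPoolId = blockPoolId;
    this.haState = Optional.ofNullable(haState);
  }

  static final class Builder {
    private String blockPoolId;
    private String haState;            // stays null unless explicitly set

    Builder setBlockPoolId(String id) { this.blockPoolId = id; return this; }
    Builder setHaState(String s) { this.haState = s; return this; }
    NamespaceRecord build() { return new NamespaceRecord(blockPoolId, haState); }
  }

  static NamespaceRecord convert(String blockPoolId, String state) {
    Builder builder = new Builder().setBlockPoolId(blockPoolId);
    if (state != null) {
      builder.setHaState(state);       // same guard as the PBHelper change
    }
    return builder.build();
  }
}

Absent-by-default plus a null guard on the writer side is what makes the field optional end to end; the null-safe convert() in the hunk handles the reader's direction.
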
[1/6] hadoop git commit: HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

2016-12-05 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5085 f885160f4 -> 8c4680852


HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c51bfd29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c51bfd29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c51bfd29

Branch: refs/heads/YARN-5085
Commit: c51bfd29cd1e6ec619742f2c47ebfc8bbfb231b6
Parents: f885160
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 08:44:40 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 08:44:40 2016 -0800

--
 .../src/main/native/fuse-dfs/fuse_dfs_wrapper.sh   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c51bfd29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
index c52c5f9..d5bfd09 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
@@ -43,7 +43,7 @@ done < <(find "$HADOOP_HOME/hadoop-client" -name "*.jar" 
-print0)
 while IFS= read -r -d '' file
 do
   export CLASSPATH=$CLASSPATH:$file
-done < <(find "$HADOOP_HOME/hhadoop-hdfs-project" -name "*.jar" -print0)
+done < <(find "$HADOOP_HOME/hadoop-hdfs-project" -name "*.jar" -print0)
 
 export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
 export PATH=$FUSEDFS_PATH:$PATH





[5/6] hadoop git commit: YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by Akira Ajisaka & Wangda Tan

2016-12-05 Thread asuresh
YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by 
 Akira Ajisaka & Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43ebff2e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43ebff2e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43ebff2e

Branch: refs/heads/YARN-5085
Commit: 43ebff2e354142bddcb42755766a965ae8a503a6
Parents: 1b5ccea
Author: Jian He 
Authored: Mon Dec 5 11:39:34 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 11:39:34 2016 -0800

--
 .../GetClusterNodeLabelsResponse.java   | 50 
 .../yarn/client/api/impl/YarnClientImpl.java|  2 +-
 .../pb/GetClusterNodeLabelsResponsePBImpl.java  | 41 ++--
 .../yarn/security/ContainerTokenIdentifier.java | 25 ++
 .../state/InvalidStateTransitionException.java  | 22 ++---
 .../state/InvalidStateTransitonException.java   | 19 ++--
 .../resourcemanager/TestClientRMService.java|  4 +-
 7 files changed, 125 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43ebff2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
index cf6e683..cb2ccfb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
@@ -18,7 +18,9 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.ArrayList;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -28,18 +30,48 @@ import org.apache.hadoop.yarn.util.Records;
 @Public
 @Evolving
 public abstract class GetClusterNodeLabelsResponse {
+  /**
+   * Creates a new instance.
+   *
+   * @param labels Node labels
+   * @return response
+   * @deprecated Use {@link #newInstance(List)} instead.
+   */
+  @Deprecated
+  public static GetClusterNodeLabelsResponse newInstance(Set<String> labels) {
+    List<NodeLabel> list = new ArrayList<>();
+    for (String label : labels) {
+      list.add(NodeLabel.newInstance(label));
+    }
+    return newInstance(list);
+  }
+
   public static GetClusterNodeLabelsResponse newInstance(
       List<NodeLabel> labels) {
-    GetClusterNodeLabelsResponse request =
+    GetClusterNodeLabelsResponse response =
         Records.newRecord(GetClusterNodeLabelsResponse.class);
-    request.setNodeLabels(labels);
-    return request;
+    response.setNodeLabelList(labels);
+    return response;
   }
 
-  @Public
-  @Evolving
-  public abstract void setNodeLabels(List<NodeLabel> labels);
+  public abstract void setNodeLabelList(List<NodeLabel> labels);
+
+  public abstract List<NodeLabel> getNodeLabelList();
+
+  /**
+   * Set node labels to the response.
+   *
+   * @param labels Node labels
+   * @deprecated Use {@link #setNodeLabelList(List)} instead.
+   */
+  @Deprecated
+  public abstract void setNodeLabels(Set<String> labels);
 
-  @Public
-  @Evolving
-  public abstract List<NodeLabel> getNodeLabels();
+  /**
+   * Get node labels of the response.
+   *
+   * @return Node labels
+   * @deprecated Use {@link #getNodeLabelList()} instead.
+   */
+  @Deprecated
+  public abstract Set<String> getNodeLabels();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43ebff2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index a0f9678..50f1b490a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ -899,7 +899,7 @@ public class YarnClientImpl extends YarnClient {
   @Override
   public List<NodeLabel> getClusterNodeLabels()
       throws YarnException, IOException {
 

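The GetClusterNodeLabelsResponse change above restores the 2.8.0 API surface: the old Set<String>-based methods survive as @Deprecated adapters over the new List<NodeLabel> representation, so callers compiled against the old release keep working. A self-contained sketch of that deprecation-bridge pattern, with Label and LabelResponse as illustrative stand-ins for NodeLabel and the response class:

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch: keep the old Set-based API as deprecated adapters over the new
// List-based representation.
final class Label {
  final String name;
  Label(String name) { this.name = name; }
}

final class LabelResponse {
  private List<Label> labels;

  /** New API: the list of typed labels is the canonical state. */
  static LabelResponse newInstance(List<Label> labels) {
    LabelResponse response = new LabelResponse();
    response.labels = labels;
    return response;
  }

  /** Old API, kept for source/binary compatibility with older callers. */
  @Deprecated
  static LabelResponse newInstance(Set<String> names) {
    List<Label> list = new ArrayList<>();
    for (String name : names) {
      list.add(new Label(name));   // adapt each plain string to the new type
    }
    return newInstance(list);      // delegate to the new entry point
  }

  /** Old accessor, derived on the fly from the canonical list. */
  @Deprecated
  Set<String> getNodeLabels() {
    Set<String> out = new LinkedHashSet<>();
    for (Label label : labels) {
      out.add(label.name);
    }
    return out;
  }
}

Old callers keep using newInstance(Set) and getNodeLabels(); new code targets the List methods, and the two views stay consistent because the Set accessors are derived from the list state.
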
hadoop git commit: HDFS-11094. Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter. Contributed by Eric Badger

2016-12-05 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/trunk 43ebff2e3 -> 8c4680852


HDFS-11094. Send back HAState along with NamespaceInfo during a versionRequest 
as an optional parameter. Contributed by Eric Badger


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8c468085
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8c468085
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8c468085

Branch: refs/heads/trunk
Commit: 8c4680852b20ad0e65e77dd123c9ba5bb6f2fa39
Parents: 43ebff2
Author: Mingliang Liu 
Authored: Mon Dec 5 11:34:13 2016 -0800
Committer: Mingliang Liu 
Committed: Mon Dec 5 11:48:58 2016 -0800

--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java | 76 +---
 .../hdfs/server/datanode/BPOfferService.java| 10 ++-
 .../hdfs/server/datanode/BPServiceActor.java|  4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  8 ++-
 .../hdfs/server/protocol/NamespaceInfo.java | 26 +++
 .../hadoop-hdfs/src/main/proto/HdfsServer.proto |  2 +
 .../server/datanode/TestBPOfferService.java | 31 
 .../hdfs/server/namenode/TestFSNamesystem.java  | 21 ++
 8 files changed, 148 insertions(+), 30 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c468085/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 78371f5..1e6d882 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -26,7 +26,7 @@ import com.google.protobuf.ByteString;
 
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
-import org.apache.hadoop.ha.proto.HAServiceProtocolProtos;
+import org.apache.hadoop.ha.proto.HAServiceProtocolProtos.HAServiceStateProto;
 import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
@@ -338,7 +338,8 @@ public class PBHelper {
 StorageInfoProto storage = info.getStorageInfo();
 return new NamespaceInfo(storage.getNamespceID(), storage.getClusterID(),
 info.getBlockPoolID(), storage.getCTime(), info.getBuildVersion(),
-info.getSoftwareVersion(), info.getCapabilities());
+info.getSoftwareVersion(), info.getCapabilities(),
+convert(info.getState()));
   }
 
   public static NamenodeCommand convert(NamenodeCommandProto cmd) {
@@ -744,43 +745,68 @@ public class PBHelper {
   }
   
   public static NamespaceInfoProto convert(NamespaceInfo info) {
-return NamespaceInfoProto.newBuilder()
-.setBlockPoolID(info.getBlockPoolID())
+NamespaceInfoProto.Builder builder = NamespaceInfoProto.newBuilder();
+builder.setBlockPoolID(info.getBlockPoolID())
 .setBuildVersion(info.getBuildVersion())
 .setUnused(0)
 .setStorageInfo(PBHelper.convert((StorageInfo)info))
 .setSoftwareVersion(info.getSoftwareVersion())
-.setCapabilities(info.getCapabilities())
-.build();
+.setCapabilities(info.getCapabilities());
+HAServiceState state = info.getState();
+if(state != null) {
+  builder.setState(convert(info.getState()));
+}
+return builder.build();
   }
 
-  public static NNHAStatusHeartbeat convert(NNHAStatusHeartbeatProto s) {
-if (s == null) return null;
-switch (s.getState()) {
+  public static HAServiceState convert(HAServiceStateProto s) {
+if (s == null) {
+  return null;
+}
+switch (s) {
+case INITIALIZING:
+  return HAServiceState.INITIALIZING;
 case ACTIVE:
-  return new NNHAStatusHeartbeat(HAServiceState.ACTIVE, s.getTxid());
+  return HAServiceState.ACTIVE;
 case STANDBY:
-  return new NNHAStatusHeartbeat(HAServiceState.STANDBY, s.getTxid());
+  return HAServiceState.STANDBY;
 default:
-      throw new IllegalArgumentException("Unexpected NNHAStatusHeartbeat.State:"
-          + s.getState());
+      throw new IllegalArgumentException("Unexpected HAServiceStateProto:"
+          + s);
 }
   }
 
+  public static HAServiceStateProto convert(HAServiceState s) {
+if (s == null) {
+  return null;
+}
+switch (s) {
+case INITIALIZING:
+  return HAServiceStateProto.INITIALIZING;
+case ACTIVE:
+  return HAServiceStateProto.ACTIVE;
+case STANDBY:
+  return HAServiceStateProto.STANDBY;
+

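The two convert() overloads above bridge the wire enum and the domain enum null-safely in both directions. A compilable sketch of the shape, with Wire and Domain standing in for HAServiceStateProto and HAServiceState:

// Sketch: null-safe, exhaustive mapping between a wire enum and a domain
// enum, mirroring the convert() overloads above.
enum Wire { INITIALIZING, ACTIVE, STANDBY }
enum Domain { INITIALIZING, ACTIVE, STANDBY }

final class StateConverter {
  static Domain fromWire(Wire s) {
    if (s == null) {
      return null;                     // the optional field was absent
    }
    switch (s) {
    case INITIALIZING:
      return Domain.INITIALIZING;
    case ACTIVE:
      return Domain.ACTIVE;
    case STANDBY:
      return Domain.STANDBY;
    default:
      // fail fast if the peer sends a constant this build does not know
      throw new IllegalArgumentException("Unexpected state: " + s);
    }
  }
}

Mapping constant by constant, rather than by ordinal or name, keeps the two enums free to evolve independently, which is why PBHelper spells out each case.
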
hadoop git commit: YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by Akira Ajisaka & Wangda Tan

2016-12-05 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 49f9e7cf7 -> 802a1fb2f


YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by 
 Akira Ajisaka & Wangda Tan

(cherry picked from commit 43ebff2e354142bddcb42755766a965ae8a503a6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/802a1fb2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/802a1fb2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/802a1fb2

Branch: refs/heads/branch-2.8
Commit: 802a1fb2fc8b49f39b37026e30e386f2cab3f2de
Parents: 49f9e7c
Author: Jian He 
Authored: Mon Dec 5 11:39:34 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 11:40:46 2016 -0800

--
 .../GetClusterNodeLabelsResponse.java   | 50 
 .../yarn/client/api/impl/YarnClientImpl.java|  2 +-
 .../pb/GetClusterNodeLabelsResponsePBImpl.java  | 41 ++--
 .../yarn/security/ContainerTokenIdentifier.java | 25 ++
 .../state/InvalidStateTransitionException.java  | 22 ++---
 .../state/InvalidStateTransitonException.java   | 19 ++--
 .../resourcemanager/TestClientRMService.java|  4 +-
 7 files changed, 125 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/802a1fb2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
index cf6e683..cb2ccfb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
@@ -18,7 +18,9 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.ArrayList;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -28,18 +30,48 @@ import org.apache.hadoop.yarn.util.Records;
 @Public
 @Evolving
 public abstract class GetClusterNodeLabelsResponse {
+  /**
+   * Creates a new instance.
+   *
+   * @param labels Node labels
+   * @return response
+   * @deprecated Use {@link #newInstance(List)} instead.
+   */
+  @Deprecated
+  public static GetClusterNodeLabelsResponse newInstance(Set<String> labels) {
+    List<NodeLabel> list = new ArrayList<>();
+    for (String label : labels) {
+      list.add(NodeLabel.newInstance(label));
+    }
+    return newInstance(list);
+  }
+
   public static GetClusterNodeLabelsResponse newInstance(
       List<NodeLabel> labels) {
-    GetClusterNodeLabelsResponse request =
+    GetClusterNodeLabelsResponse response =
         Records.newRecord(GetClusterNodeLabelsResponse.class);
-    request.setNodeLabels(labels);
-    return request;
+    response.setNodeLabelList(labels);
+    return response;
   }
 
-  @Public
-  @Evolving
-  public abstract void setNodeLabels(List<NodeLabel> labels);
+  public abstract void setNodeLabelList(List<NodeLabel> labels);
+
+  public abstract List<NodeLabel> getNodeLabelList();
+
+  /**
+   * Set node labels to the response.
+   *
+   * @param labels Node labels
+   * @deprecated Use {@link #setNodeLabelList(List)} instead.
+   */
+  @Deprecated
+  public abstract void setNodeLabels(Set<String> labels);
 
-  @Public
-  @Evolving
-  public abstract List<NodeLabel> getNodeLabels();
+  /**
+   * Get node labels of the response.
+   *
+   * @return Node labels
+   * @deprecated Use {@link #getNodeLabelList()} instead.
+   */
+  @Deprecated
+  public abstract Set<String> getNodeLabels();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/802a1fb2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index 5628109..78fe84f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ 

hadoop git commit: YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by Akira Ajisaka & Wangda Tan

2016-12-05 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7e58eec62 -> 0f6fbfc0d


YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by 
 Akira Ajisaka & Wangda Tan

(cherry picked from commit 43ebff2e354142bddcb42755766a965ae8a503a6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0f6fbfc0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0f6fbfc0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0f6fbfc0

Branch: refs/heads/branch-2
Commit: 0f6fbfc0db4199821f16f0d7cf6ed6b9e750d58d
Parents: 7e58eec
Author: Jian He 
Authored: Mon Dec 5 11:39:34 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 11:40:26 2016 -0800

--
 .../GetClusterNodeLabelsResponse.java   | 50 
 .../yarn/client/api/impl/YarnClientImpl.java|  2 +-
 .../pb/GetClusterNodeLabelsResponsePBImpl.java  | 41 ++--
 .../yarn/security/ContainerTokenIdentifier.java | 25 ++
 .../state/InvalidStateTransitionException.java  | 22 ++---
 .../state/InvalidStateTransitonException.java   | 19 ++--
 .../resourcemanager/TestClientRMService.java|  4 +-
 7 files changed, 125 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f6fbfc0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
index cf6e683..cb2ccfb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
@@ -18,7 +18,9 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.ArrayList;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -28,18 +30,48 @@ import org.apache.hadoop.yarn.util.Records;
 @Public
 @Evolving
 public abstract class GetClusterNodeLabelsResponse {
+  /**
+   * Creates a new instance.
+   *
+   * @param labels Node labels
+   * @return response
+   * @deprecated Use {@link #newInstance(List)} instead.
+   */
+  @Deprecated
+  public static GetClusterNodeLabelsResponse newInstance(Set<String> labels) {
+    List<NodeLabel> list = new ArrayList<>();
+    for (String label : labels) {
+      list.add(NodeLabel.newInstance(label));
+    }
+    return newInstance(list);
+  }
+
   public static GetClusterNodeLabelsResponse newInstance(
       List<NodeLabel> labels) {
-    GetClusterNodeLabelsResponse request =
+    GetClusterNodeLabelsResponse response =
         Records.newRecord(GetClusterNodeLabelsResponse.class);
-    request.setNodeLabels(labels);
-    return request;
+    response.setNodeLabelList(labels);
+    return response;
   }
 
-  @Public
-  @Evolving
-  public abstract void setNodeLabels(List<NodeLabel> labels);
+  public abstract void setNodeLabelList(List<NodeLabel> labels);
+
+  public abstract List<NodeLabel> getNodeLabelList();
+
+  /**
+   * Set node labels to the response.
+   *
+   * @param labels Node labels
+   * @deprecated Use {@link #setNodeLabelList(List)} instead.
+   */
+  @Deprecated
+  public abstract void setNodeLabels(Set<String> labels);
 
-  @Public
-  @Evolving
-  public abstract List<NodeLabel> getNodeLabels();
+  /**
+   * Get node labels of the response.
+   *
+   * @return Node labels
+   * @deprecated Use {@link #getNodeLabelList()} instead.
+   */
+  @Deprecated
+  public abstract Set<String> getNodeLabels();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0f6fbfc0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index a0f9678..50f1b490a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ 

hadoop git commit: YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by Akira Ajisaka & Wangda Tan

2016-12-05 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1b5cceaff -> 43ebff2e3


YARN-5559. Analyse 2.8.0/3.0.0 jdiff reports and fix any issues. Contributed by 
 Akira Ajisaka & Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43ebff2e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43ebff2e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43ebff2e

Branch: refs/heads/trunk
Commit: 43ebff2e354142bddcb42755766a965ae8a503a6
Parents: 1b5ccea
Author: Jian He 
Authored: Mon Dec 5 11:39:34 2016 -0800
Committer: Jian He 
Committed: Mon Dec 5 11:39:34 2016 -0800

--
 .../GetClusterNodeLabelsResponse.java   | 50 
 .../yarn/client/api/impl/YarnClientImpl.java|  2 +-
 .../pb/GetClusterNodeLabelsResponsePBImpl.java  | 41 ++--
 .../yarn/security/ContainerTokenIdentifier.java | 25 ++
 .../state/InvalidStateTransitionException.java  | 22 ++---
 .../state/InvalidStateTransitonException.java   | 19 ++--
 .../resourcemanager/TestClientRMService.java|  4 +-
 7 files changed, 125 insertions(+), 38 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43ebff2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
index cf6e683..cb2ccfb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodeLabelsResponse.java
@@ -18,7 +18,9 @@
 
 package org.apache.hadoop.yarn.api.protocolrecords;
 
+import java.util.ArrayList;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
@@ -28,18 +30,48 @@ import org.apache.hadoop.yarn.util.Records;
 @Public
 @Evolving
 public abstract class GetClusterNodeLabelsResponse {
+  /**
+   * Creates a new instance.
+   *
+   * @param labels Node labels
+   * @return response
+   * @deprecated Use {@link #newInstance(List)} instead.
+   */
+  @Deprecated
+  public static GetClusterNodeLabelsResponse newInstance(Set<String> labels) {
+    List<NodeLabel> list = new ArrayList<>();
+    for (String label : labels) {
+      list.add(NodeLabel.newInstance(label));
+    }
+    return newInstance(list);
+  }
+
   public static GetClusterNodeLabelsResponse newInstance(
       List<NodeLabel> labels) {
-    GetClusterNodeLabelsResponse request =
+    GetClusterNodeLabelsResponse response =
         Records.newRecord(GetClusterNodeLabelsResponse.class);
-    request.setNodeLabels(labels);
-    return request;
+    response.setNodeLabelList(labels);
+    return response;
   }
 
-  @Public
-  @Evolving
-  public abstract void setNodeLabels(List<NodeLabel> labels);
+  public abstract void setNodeLabelList(List<NodeLabel> labels);
+
+  public abstract List<NodeLabel> getNodeLabelList();
+
+  /**
+   * Set node labels to the response.
+   *
+   * @param labels Node labels
+   * @deprecated Use {@link #setNodeLabelList(List)} instead.
+   */
+  @Deprecated
+  public abstract void setNodeLabels(Set<String> labels);
 
-  @Public
-  @Evolving
-  public abstract List<NodeLabel> getNodeLabels();
+  /**
+   * Get node labels of the response.
+   *
+   * @return Node labels
+   * @deprecated Use {@link #getNodeLabelList()} instead.
+   */
+  @Deprecated
+  public abstract Set<String> getNodeLabels();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43ebff2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
index a0f9678..50f1b490a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java
@@ -899,7 +899,7 @@ public class YarnClientImpl extends YarnClient {
   

hadoop git commit: Revert "HDFS-11201. Spelling errors in the logging, help, assertions and exception messages. Contributed by Grant Sohn."

2016-12-05 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk b9522e86a -> 1b5cceaff


Revert "HDFS-11201. Spelling errors in the logging, help, assertions and 
exception messages. Contributed by Grant Sohn."

This reverts commit b9522e86a55564c2ccb5ca3f1ca871965cbe74de.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1b5cceaf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1b5cceaf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1b5cceaf

Branch: refs/heads/trunk
Commit: 1b5cceaffbdde50a87ede81552dc380832db8e79
Parents: b9522e8
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 10:54:43 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 10:54:43 2016 -0800

--
 .../src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java | 4 ++--
 .../main/java/org/apache/hadoop/lib/server/ServerException.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java   | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/DFSUtil.java| 2 +-
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java  | 2 +-
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java  | 2 +-
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java | 2 +-
 .../hadoop/hdfs/server/diskbalancer/command/QueryCommand.java| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java  | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java   | 4 ++--
 .../server/namenode/web/resources/NamenodeWebHdfsMethods.java| 2 +-
 13 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index aabd6fd..5783f90 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1052,7 +1052,7 @@ public class DFSInputStream extends FSInputStream
 reader.getNetworkDistance(), nread);
 if (nread != len) {
   throw new IOException("truncated return from reader.read(): " +
-  "expected " + len + ", got " + nread);
+  "excpected " + len + ", got " + nread);
 }
 DFSClientFaultInjector.get().readFromDatanodeDelay();
 return;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
index db064e4..51ad08f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
@@ -57,11 +57,11 @@ public class LongBitFormat implements Serializable {
   public long combine(long value, long record) {
 if (value < MIN) {
   throw new IllegalArgumentException(
-  "Illegal value: " + NAME + " = " + value + " < MIN = " + MIN);
+  "Illagal value: " + NAME + " = " + value + " < MIN = " + MIN);
 }
 if (value > MAX) {
   throw new IllegalArgumentException(
-  "Illegal value: " + NAME + " = " + value + " > MAX = " + MAX);
+  "Illagal value: " + NAME + " = " + value + " > MAX = " + MAX);
 }
 return (record & ~MASK) | (value << OFFSET);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1b5cceaf/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
index fdca64e..e3759ce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
+++ 

hadoop git commit: HDFS-11201. Spelling errors in the logging, help, assertions and exception messages. Contributed by Grant Sohn.

2016-12-05 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk 291df5c7f -> b9522e86a


HDFS-11201. Spelling errors in the logging, help, assertions and exception 
messages. Contributed by Grant Sohn.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b9522e86
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b9522e86
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b9522e86

Branch: refs/heads/trunk
Commit: b9522e86a55564c2ccb5ca3f1ca871965cbe74de
Parents: 291df5c
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 09:37:12 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 10:48:25 2016 -0800

--
 .../src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java | 4 ++--
 .../main/java/org/apache/hadoop/lib/server/ServerException.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/nfs/nfs3/OpenFileCtxCache.java   | 2 +-
 .../src/main/java/org/apache/hadoop/hdfs/DFSUtil.java| 2 +-
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java  | 2 +-
 .../hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java  | 2 +-
 .../hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java | 2 +-
 .../hadoop/hdfs/server/diskbalancer/command/QueryCommand.java| 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java  | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java  | 2 +-
 .../java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java   | 4 ++--
 .../server/namenode/web/resources/NamenodeWebHdfsMethods.java| 2 +-
 13 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 5783f90..aabd6fd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1052,7 +1052,7 @@ public class DFSInputStream extends FSInputStream
 reader.getNetworkDistance(), nread);
 if (nread != len) {
   throw new IOException("truncated return from reader.read(): " +
-  "excpected " + len + ", got " + nread);
+  "expected " + len + ", got " + nread);
 }
 DFSClientFaultInjector.get().readFromDatanodeDelay();
 return;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
index 51ad08f..db064e4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/LongBitFormat.java
@@ -57,11 +57,11 @@ public class LongBitFormat implements Serializable {
   public long combine(long value, long record) {
 if (value < MIN) {
   throw new IllegalArgumentException(
-  "Illagal value: " + NAME + " = " + value + " < MIN = " + MIN);
+  "Illegal value: " + NAME + " = " + value + " < MIN = " + MIN);
 }
 if (value > MAX) {
   throw new IllegalArgumentException(
-  "Illagal value: " + NAME + " = " + value + " > MAX = " + MAX);
+  "Illegal value: " + NAME + " = " + value + " > MAX = " + MAX);
 }
 return (record & ~MASK) | (value << OFFSET);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b9522e86/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
index e3759ce..fdca64e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/ServerException.java

[18/29] hadoop git commit: YARN-5901. Fix race condition in TestGetGroups beforeclass setup() (Contributed by Haibo Chen via Daniel Templeton)

2016-12-05 Thread xgong
YARN-5901. Fix race condition in TestGetGroups beforeclass setup() (Contributed 
by Haibo Chen via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d77dc72
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d77dc72
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d77dc72

Branch: refs/heads/YARN-5734
Commit: 2d77dc727d9b5e56009bbc36643d85500efcbca5
Parents: 19f373a
Author: Daniel Templeton 
Authored: Thu Dec 1 15:57:39 2016 -0800
Committer: Daniel Templeton 
Committed: Thu Dec 1 15:57:39 2016 -0800

--
 .../hadoop/yarn/client/TestGetGroups.java   | 36 +---
 1 file changed, 24 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d77dc72/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestGetGroups.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestGetGroups.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestGetGroups.java
index e947ece..da0258c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestGetGroups.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/TestGetGroups.java
@@ -20,16 +20,21 @@ package org.apache.hadoop.yarn.client;
 
 import java.io.IOException;
 import java.io.PrintStream;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.service.Service;
 import org.apache.hadoop.service.Service.STATE;
+import org.apache.hadoop.service.ServiceStateChangeListener;
 import org.apache.hadoop.tools.GetGroupsTestBase;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.junit.AfterClass;
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 
@@ -42,30 +47,37 @@ public class TestGetGroups extends GetGroupsTestBase {
   private static Configuration conf;
   
   @BeforeClass
-  public static void setUpResourceManager() throws IOException,
-      InterruptedException {
+  public static void setUpResourceManager() throws InterruptedException {
 conf = new YarnConfiguration();
 resourceManager = new ResourceManager() {
   @Override
   protected void doSecureLogin() throws IOException {
   };
 };
+
+// a reliable way to wait for resource manager to start
+CountDownLatch rmStartedSignal = new CountDownLatch(1);
+ServiceStateChangeListener rmStateChangeListener =
+new ServiceStateChangeListener() {
+  @Override
+  public void stateChanged(Service service) {
+if (service.getServiceState() == STATE.STARTED) {
+  rmStartedSignal.countDown();
+}
+  }
+};
+resourceManager.registerServiceListener(rmStateChangeListener);
+
 resourceManager.init(conf);
 new Thread() {
   public void run() {
 resourceManager.start();
   };
 }.start();
-int waitCount = 0;
-while (resourceManager.getServiceState() == STATE.INITED
-&& waitCount++ < 10) {
-  LOG.info("Waiting for RM to start...");
-  Thread.sleep(1000);
-}
-if (resourceManager.getServiceState() != STATE.STARTED) {
-  throw new IOException(
-  "ResourceManager failed to start. Final state is "
-  + resourceManager.getServiceState());
-}
+
+    boolean rmStarted = rmStartedSignal.await(60000L, TimeUnit.MILLISECONDS);
+Assert.assertTrue("ResourceManager failed to start up.", rmStarted);
+
 LOG.info("ResourceManager RMAdmin address: " +
 conf.get(YarnConfiguration.RM_ADMIN_ADDRESS));
   }




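The test fix above replaces a sleep-poll loop with a CountDownLatch that a service state listener counts down once the ResourceManager reaches STARTED, then blocks with a bounded timeout. A minimal runnable sketch of the pattern, where AsyncService and Listener are stand-ins for the Hadoop Service API:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: latch-based wait for an asynchronous service start.
final class StartupWaitSketch {
  interface Listener { void stateChanged(String newState); }

  static final class AsyncService {
    private Listener listener;
    void register(Listener l) { this.listener = l; }
    void start() {                      // startup completes on another thread
      new Thread(() -> listener.stateChanged("STARTED")).start();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    CountDownLatch started = new CountDownLatch(1);
    AsyncService service = new AsyncService();
    service.register(state -> {
      if ("STARTED".equals(state)) {
        started.countDown();            // signal readiness exactly once
      }
    });
    service.start();
    // Bounded wait; no race window between a state check and a sleep.
    if (!started.await(60_000L, TimeUnit.MILLISECONDS)) {
      throw new AssertionError("service failed to start up");
    }
  }
}

Registering the listener before init/start is what removes the race: the latch records the transition even if it happens before the test thread begins waiting.
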

[22/29] hadoop git commit: HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed by Weiwei Yang

2016-12-05 Thread xgong
HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. Contributed 
by Weiwei Yang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7ff34f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7ff34f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7ff34f8

Branch: refs/heads/YARN-5734
Commit: c7ff34f8dcca3a2024230c5383abd9299daa1b20
Parents: 0cfd7ad
Author: Mingliang Liu 
Authored: Fri Dec 2 11:10:09 2016 -0800
Committer: Mingliang Liu 
Committed: Fri Dec 2 11:10:13 2016 -0800

--
 .../apache/hadoop/hdfs/web/JsonUtilClient.java  | 32 
 .../hadoop/hdfs/web/WebHdfsFileSystem.java  | 13 +++--
 .../hadoop/hdfs/web/resources/GetOpParam.java   | 12 -
 .../web/resources/NamenodeWebHdfsMethods.java   | 17 +++
 .../org/apache/hadoop/hdfs/web/JsonUtil.java| 30 
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 51 
 6 files changed, 151 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7ff34f8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
index a75f4f1..12899f4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
@@ -22,6 +22,7 @@ import com.fasterxml.jackson.databind.ObjectReader;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
+import org.apache.hadoop.fs.BlockLocation;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.ContentSummary.Builder;
 import org.apache.hadoop.fs.FileChecksum;
@@ -588,4 +589,35 @@ class JsonUtilClient {
 lastLocatedBlock, isLastBlockComplete, null, null);
   }
 
+  /** Convert a Json map to BlockLocation. **/
+  static BlockLocation toBlockLocation(Map<?, ?> m)
+      throws IOException {
+long length = ((Number) m.get("length")).longValue();
+long offset = ((Number) m.get("offset")).longValue();
+boolean corrupt = Boolean.
+getBoolean(m.get("corrupt").toString());
+String[] storageIds = toStringArray(getList(m, "storageIds"));
+String[] cachedHosts = toStringArray(getList(m, "cachedHosts"));
+String[] hosts = toStringArray(getList(m, "hosts"));
+String[] names = toStringArray(getList(m, "names"));
+String[] topologyPaths = toStringArray(getList(m, "topologyPaths"));
+StorageType[] storageTypes = toStorageTypeArray(
+getList(m, "storageTypes"));
+return new BlockLocation(names, hosts, cachedHosts,
+topologyPaths, storageIds, storageTypes,
+offset, length, corrupt);
+  }
+
+  static String[] toStringArray(List<?> list) {
+if (list == null) {
+  return null;
+} else {
+  final String[] array = new String[list.size()];
+  int i = 0;
+  for (Object object : list) {
+array[i++] = object.toString();
+  }
+  return array;
+}
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7ff34f8/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 23804b7..e82e9f6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -1610,13 +1610,20 @@ public class WebHdfsFileSystem extends FileSystem
 statistics.incrementReadOps(1);
 storageStatistics.incrementOpCounter(OpType.GET_FILE_BLOCK_LOCATIONS);
 
-final HttpOpParam.Op op = GetOpParam.Op.GET_BLOCK_LOCATIONS;
+final HttpOpParam.Op op = GetOpParam.Op.GETFILEBLOCKLOCATIONS;
     return new FsPathResponseRunner<BlockLocation[]>(op, p,
 new OffsetParam(offset), new LengthParam(length)) {
   @Override
+  @SuppressWarnings("unchecked")
   BlockLocation[] decodeResponse(Map<?, ?> json) throws IOException {
-return DFSUtilClient.locatedBlocks2Locations(
-JsonUtilClient.toLocatedBlocks(json));
+   

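JsonUtilClient.toBlockLocation() above decodes a parsed JSON map field by field: numeric values arrive as arbitrary Number subtypes and are widened explicitly, and list-valued fields are copied into arrays. A minimal sketch with an illustrative Location type; note it parses the boolean from the map value with Boolean.parseBoolean, whereas Boolean.getBoolean (as in the hunk) looks up a JVM system property of that name.

import java.util.List;
import java.util.Map;

// Sketch: field-by-field decoding of a parsed JSON map.
final class Location {
  final long offset;
  final long length;
  final boolean corrupt;
  final String[] hosts;

  Location(long offset, long length, boolean corrupt, String[] hosts) {
    this.offset = offset;
    this.length = length;
    this.corrupt = corrupt;
    this.hosts = hosts;
  }

  static Location fromJsonMap(Map<String, Object> m) {
    // JSON parsers may hand back Integer, Long or Double; widen explicitly.
    long length = ((Number) m.get("length")).longValue();
    long offset = ((Number) m.get("offset")).longValue();
    boolean corrupt = Boolean.parseBoolean(String.valueOf(m.get("corrupt")));
    @SuppressWarnings("unchecked")
    List<Object> hostList = (List<Object>) m.get("hosts");
    String[] hosts = new String[hostList.size()];
    for (int i = 0; i < hostList.size(); i++) {
      hosts[i] = hostList.get(i).toString();
    }
    return new Location(offset, length, corrupt, hosts);
  }
}
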
[14/29] hadoop git commit: HADOOP-13840. Implement getUsed() for ViewFileSystem. Contributed by Manoj Govindassamy.

2016-12-05 Thread xgong
HADOOP-13840. Implement getUsed() for ViewFileSystem. Contributed by Manoj 
Govindassamy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1f7613be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1f7613be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1f7613be

Branch: refs/heads/YARN-5734
Commit: 1f7613be958bbdb735fd2b49e3f0b48e2c8b7c13
Parents: 7226a71
Author: Andrew Wang 
Authored: Wed Nov 30 17:55:12 2016 -0800
Committer: Andrew Wang 
Committed: Wed Nov 30 17:55:12 2016 -0800

--
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java | 18 
 .../fs/viewfs/ViewFileSystemBaseTest.java   | 29 
 2 files changed, 47 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7613be/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index ed1bda2..8be666c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -859,6 +859,24 @@ public class ViewFileSystem extends FileSystem {
   }
 
   /**
+   * Return the total size of all files under "/", if {@link
+   * Constants#CONFIG_VIEWFS_LINK_MERGE_SLASH} is supported and is a valid
+   * mount point. Else, throw NotInMountpointException.
+   *
+   * @throws IOException
+   */
+  @Override
+  public long getUsed() throws IOException {
+    InodeTree.ResolveResult<FileSystem> res = fsState.resolve(
+        getUriPath(InodeTree.SlashPath), true);
+if (res.isInternalDir()) {
+  throw new NotInMountpointException(InodeTree.SlashPath, "getUsed");
+} else {
+  return res.targetFileSystem.getUsed();
+}
+  }
+
+  /**
* An instance of this class represents an internal dir of the viewFs
* that is internal dir of the mount table.
* It is a read only mount tables and create, mkdir or delete operations

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f7613be/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
index 06f9868..9a0bf02 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
@@ -1108,4 +1108,33 @@ abstract public class ViewFileSystemBaseTest {
   }
 });
   }
+
+  @Test
+  public void testUsed() throws IOException {
+try {
+  fsView.getUsed();
+  fail("ViewFileSystem getUsed() should fail for slash root path when the" 
+
+  " slash root mount point is not configured.");
+} catch (NotInMountpointException e) {
+  // expected exception.
+}
+long usedSpaceByPathViaViewFs = fsView.getUsed(new Path("/user"));
+long usedSpaceByPathViaTargetFs =
+fsTarget.getUsed(new Path(targetTestRoot, "user"));
+assertEquals("Space used not matching between ViewFileSystem and " +
+"the mounted FileSystem!",
+usedSpaceByPathViaTargetFs, usedSpaceByPathViaViewFs);
+
+Path mountDataRootPath = new Path("/data");
+String fsTargetFileName = "debug.log";
+Path fsTargetFilePath = new Path(targetTestRoot, "data/debug.log");
+Path mountDataFilePath = new Path(mountDataRootPath, fsTargetFileName);
+fileSystemTestHelper.createFile(fsTarget, fsTargetFilePath);
+
+usedSpaceByPathViaViewFs = fsView.getUsed(mountDataFilePath);
+usedSpaceByPathViaTargetFs = fsTarget.getUsed(fsTargetFilePath);
+assertEquals("Space used not matching between ViewFileSystem and " +
+"the mounted FileSystem!",
+usedSpaceByPathViaTargetFs, usedSpaceByPathViaViewFs);
+  }
 }




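ViewFileSystem.getUsed() above follows the usual ViewFileSystem shape: resolve the path against the mount table, fail with NotInMountpointException for internal directories, and otherwise delegate to the resolved target filesystem. A minimal sketch of that resolve-then-delegate structure; MountTable and Target are illustrative stand-ins, not the real InodeTree API.

import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

// Sketch: resolve a path against a mount table, reject internal (unmounted)
// directories, and delegate to the resolved target.
final class MountTable {
  interface Target {
    long getUsed() throws IOException;
  }

  private final Map<String, Target> mounts = new TreeMap<>();

  void addMount(String path, Target target) {
    mounts.put(path, target);
  }

  long getUsed(String path) throws IOException {
    Target target = mounts.get(path);
    if (target == null) {
      // "/" resolves to an internal dir unless a slash mount is configured,
      // matching the NotInMountpointException branch above.
      throw new IOException(path + " is not in a mount point: getUsed");
    }
    return target.getUsed();           // forward to the mounted filesystem
  }
}
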

[19/29] hadoop git commit: MAPREDUCE-6787. Allow job_conf.xml to be downloadable on the job overview page in JHS (haibochen via rkanter)

2016-12-05 Thread xgong
MAPREDUCE-6787. Allow job_conf.xml to be downloadable on the job overview page 
in JHS (haibochen via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c87b3a44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c87b3a44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c87b3a44

Branch: refs/heads/YARN-5734
Commit: c87b3a448a00df97149a4e93a8c39d9ad0268bdb
Parents: 2d77dc7
Author: Robert Kanter 
Authored: Thu Dec 1 17:29:16 2016 -0800
Committer: Robert Kanter 
Committed: Thu Dec 1 17:29:38 2016 -0800

--
 .../mapreduce/v2/app/webapp/AppController.java  | 34 
 .../mapreduce/v2/app/webapp/ConfBlock.java  |  2 +-
 .../v2/app/webapp/TestAppController.java| 14 
 .../hadoop/mapreduce/v2/hs/webapp/HsWebApp.java |  2 ++
 .../org/apache/hadoop/yarn/webapp/Router.java   | 23 ++---
 .../org/apache/hadoop/yarn/webapp/WebApp.java   | 13 
 6 files changed, 83 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c87b3a44/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
index 305ec7e..e30e1b9 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
@@ -324,6 +324,40 @@ public class AppController extends Controller implements 
AMParams {
   }
 
   /**
+   * Handle requests to download the job configuration.
+   */
+  public void downloadConf() {
+try {
+  requireJob();
+} catch (Exception e) {
+  renderText(e.getMessage());
+  return;
+}
+writeJobConf();
+  }
+
+  private void writeJobConf() {
+String jobId = $(JOB_ID);
+assert(!jobId.isEmpty());
+
+JobId jobID = MRApps.toJobID($(JOB_ID));
+Job job = app.context.getJob(jobID);
+assert(job != null);
+
+try {
+  Configuration jobConf = job.loadConfFile();
+  response().setContentType("text/xml");
+  response().setHeader("Content-Disposition",
+  "attachment; filename=" + jobId + ".xml");
+  jobConf.writeXml(writer());
+} catch (IOException e) {
+  LOG.error("Error reading/writing job" +
+  " conf file for job: " + jobId, e);
+  renderText(e.getMessage());
+}
+  }
+
+  /**
* Render a BAD_REQUEST error.
* @param s the error message to include.
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c87b3a44/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
index 4cb79bf..532c2bd 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/ConfBlock.java
@@ -70,7 +70,7 @@ public class ConfBlock extends HtmlBlock {
 try {
   ConfInfo info = new ConfInfo(job);
 
-  html.div().h3(confPath.toString())._();
+  html.div().a("/jobhistory/downloadconf/" + jid, confPath.toString());
   TBODY tbody = html.
 // Tasks table
   table("#conf").

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c87b3a44/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAppController.java
 

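The downloadConf() handler above turns the job configuration page into a file download by setting the Content-Disposition header before streaming XML. A minimal servlet-style sketch; the confXml parameter is a stand-in for the real code's Configuration.writeXml(writer()) call.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;

// Sketch: serve XML as a named file download.
final class ConfDownloadSketch {
  static void downloadConf(String jobId, String confXml,
      HttpServletResponse response) throws IOException {
    response.setContentType("text/xml");
    // Content-Disposition makes the browser save the body as jobId.xml
    // rather than rendering it inline.
    response.setHeader("Content-Disposition",
        "attachment; filename=" + jobId + ".xml");
    PrintWriter out = response.getWriter();
    out.write(confXml);
    out.flush();
  }
}
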
[20/29] hadoop git commit: YARN-5915. ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write. Contributed by Atul Sikaria

YARN-5915. ATS 1.5 FileSystemTimelineWriter causes flush() to be called after 
every event write. Contributed by Atul Sikaria


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f304ccae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f304ccae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f304ccae

Branch: refs/heads/YARN-5734
Commit: f304ccae3c2e0849b0b0b24c4bfe7a3a1ec2bb94
Parents: c87b3a4
Author: Jason Lowe 
Authored: Fri Dec 2 16:54:15 2016 +
Committer: Jason Lowe 
Committed: Fri Dec 2 16:54:15 2016 +

--
 .../hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f304ccae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
index 54b4912..fc3385b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java
@@ -63,6 +63,7 @@ import com.fasterxml.jackson.core.JsonFactory;
 import com.fasterxml.jackson.core.JsonGenerator;
 import com.fasterxml.jackson.core.util.MinimalPrettyPrinter;
 import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.SerializationFeature;
 import com.fasterxml.jackson.databind.type.TypeFactory;
 import com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector;
 import com.sun.jersey.api.client.Client;
@@ -276,6 +277,7 @@ public class FileSystemTimelineWriter extends 
TimelineWriter{
 mapper.setAnnotationIntrospector(
 new JaxbAnnotationIntrospector(TypeFactory.defaultInstance()));
 mapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
+mapper.configure(SerializationFeature.FLUSH_AFTER_WRITE_VALUE, false);
 return mapper;
   }
 
@@ -356,6 +358,7 @@ public class FileSystemTimelineWriter extends 
TimelineWriter{
 
 public void flush() throws IOException {
   if (stream != null) {
+jsonGenerator.flush();
 stream.hflush();
   }
 }
@@ -368,8 +371,6 @@ public class FileSystemTimelineWriter extends 
TimelineWriter{
   this.stream = createLogFileStream(fs, logPath);
   this.jsonGenerator = new JsonFactory().createGenerator(stream);
   this.jsonGenerator.setPrettyPrinter(new MinimalPrettyPrinter("\n"));
-  this.jsonGenerator.configure(
-  JsonGenerator.Feature.FLUSH_PASSED_TO_STREAM, false);
   this.lastModifiedTime = Time.monotonicNow();
 }
 




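The fix above moves flushing out of Jackson and into the writer: with FLUSH_AFTER_WRITE_VALUE disabled, each writeValue() buffers instead of forcing a flush (and, behind the HDFS stream, an hflush) per event, and flush() is invoked explicitly when the writer decides. A minimal sketch against a local Writer standing in for the HDFS output stream:

import java.io.IOException;
import java.io.StringWriter;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

// Sketch: disable Jackson's per-value flush; flush once, explicitly.
final class BatchedJsonWriter {
  public static void main(String[] args) throws IOException {
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(SerializationFeature.FLUSH_AFTER_WRITE_VALUE, false);

    StringWriter out = new StringWriter();
    for (int i = 0; i < 3; i++) {
      mapper.writeValue(out, new int[] {i});  // buffered; no flush per event
    }
    out.flush();   // one explicit flush, mirroring jsonGenerator.flush()
    System.out.println(out);
  }
}
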

[17/29] hadoop git commit: HDFS-11132. Allow AccessControlException in contract tests when getFileStatus on subdirectory of existing files. Contributed by Vishwajeet Dusane

HDFS-11132. Allow AccessControlException in contract tests when getFileStatus 
on subdirectory of existing files. Contributed by Vishwajeet Dusane


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/19f373a4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/19f373a4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/19f373a4

Branch: refs/heads/YARN-5734
Commit: 19f373a46b2abb7a575f7884a9c7443b8ed67cd3
Parents: 96c5749
Author: Mingliang Liu 
Authored: Thu Dec 1 12:54:03 2016 -0800
Committer: Mingliang Liu 
Committed: Thu Dec 1 12:54:28 2016 -0800

--
 .../fs/FileContextMainOperationsBaseTest.java   | 21 
 .../hadoop/fs/FileSystemContractBaseTest.java   | 17 ++--
 2 files changed, 32 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/19f373a4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
index 5f9151a..2b3ab2a 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextMainOperationsBaseTest.java
@@ -29,6 +29,7 @@ import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.fs.Options.CreateOpts;
 import org.apache.hadoop.fs.Options.Rename;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Assert;
@@ -251,8 +252,14 @@ public abstract class FileContextMainOperationsBaseTest  {
 } catch (IOException e) {
   // expected
 }
-Assert.assertFalse(exists(fc, testSubDir));
-
+
+try {
+  Assert.assertFalse(exists(fc, testSubDir));
+} catch (AccessControlException e) {
+  // Expected : HDFS-11132 Checks on paths under file may be rejected by
+  // file missing execute permission.
+}
+
 Path testDeepSubDir = getTestRootPath(fc, "test/hadoop/file/deep/sub/dir");
 try {
   fc.mkdir(testDeepSubDir, FsPermission.getDefault(), true);
@@ -260,8 +267,14 @@ public abstract class FileContextMainOperationsBaseTest  {
 } catch (IOException e) {
   // expected
 }
-Assert.assertFalse(exists(fc, testDeepSubDir));
-
+
+try {
+  Assert.assertFalse(exists(fc, testDeepSubDir));
+} catch (AccessControlException e) {
+  // Expected : HDFS-11132 Checks on paths under file may be rejected by
+  // file missing execute permission.
+}
+
   }
   
   @Test

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19f373a4/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
index bbd7336..6247959 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
@@ -28,6 +28,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.util.StringUtils;
 
 /**
@@ -158,7 +159,13 @@ public abstract class FileSystemContractBaseTest extends 
TestCase {
 } catch (IOException e) {
   // expected
 }
-assertFalse(fs.exists(testSubDir));
+
+try {
+  assertFalse(fs.exists(testSubDir));
+} catch (AccessControlException e) {
+  // Expected : HDFS-11132 Checks on paths under file may be rejected by
+  // file missing execute permission.
+}
 
 Path testDeepSubDir = path("/test/hadoop/file/deep/sub/dir");
 try {
@@ -167,7 +174,13 @@ public abstract class FileSystemContractBaseTest extends 
TestCase {
 } catch (IOException e) {
   // expected
 }
-assertFalse(fs.exists(testDeepSubDir));
+
+try {
+  assertFalse(fs.exists(testDeepSubDir));
+} catch (AccessControlException e) {
+  // Expected : HDFS-11132 Checks on paths under file may be rejected by
+  // file missing execute permission.
+}
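
The tolerated-exception pattern above can be read as: on stores that enforce POSIX-style execute permission, a status probe beneath an existing file may be rejected outright rather than reported as "not found". A rough standalone analogue using java.nio instead of the Hadoop FileSystem API (the helper name and paths are illustrative):

import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.BasicFileAttributes;

// Hypothetical helper, not part of Hadoop: a contract test should accept
// either "missing" or "denied" when probing under an existing file.
public class ProbeUnderFile {
  static boolean existsLenient(Path p) throws IOException {
    try {
      Files.readAttributes(p, BasicFileAttributes.class);
      return true;
    } catch (NoSuchFileException e) {
      return false;                // the usual "does not exist" answer
    } catch (AccessDeniedException e) {
      // Some stores reject probes on a subpath of a file outright;
      // treat that the same as "does not exist" for contract purposes.
      return false;
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(existsLenient(Paths.get("no-such-dir/child")));
  }
}
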
[05/29] hadoop git commit: HDFS-10994. Support an XOR policy XOR-2-1-64k in HDFS. Contributed by Sammi Chen

HDFS-10994. Support an XOR policy XOR-2-1-64k in HDFS. Contributed by Sammi Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51e6c1cc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51e6c1cc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51e6c1cc

Branch: refs/heads/YARN-5734
Commit: 51e6c1cc3f66f9908d2e816e7291ac34bee43f52
Parents: cfd8076
Author: Kai Zheng 
Authored: Wed Nov 30 15:52:56 2016 +0800
Committer: Kai Zheng 
Committed: Wed Nov 30 15:52:56 2016 +0800

--
 .../io/erasurecode/ErasureCodeConstants.java|  3 ++
 .../hadoop/hdfs/protocol/HdfsConstants.java |  1 +
 .../namenode/ErasureCodingPolicyManager.java| 23 +++--
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  8 +++-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 28 +--
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 50 +---
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 27 ---
 .../TestDFSStripedOutputStreamWithFailure.java  | 37 +++
 .../hdfs/TestDFSXORStripedInputStream.java  | 33 +
 .../hdfs/TestDFSXORStripedOutputStream.java | 35 ++
 ...estDFSXORStripedOutputStreamWithFailure.java | 36 ++
 11 files changed, 240 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51e6c1cc/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
index 8d6ff85..ffa0bce 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
@@ -38,4 +38,7 @@ public final class ErasureCodeConstants {
 
   public static final ECSchema RS_6_3_LEGACY_SCHEMA = new ECSchema(
   RS_LEGACY_CODEC_NAME, 6, 3);
+
+  public static final ECSchema XOR_2_1_SCHEMA = new ECSchema(
+  XOR_CODEC_NAME, 2, 1);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51e6c1cc/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index acbc8f6..b55b4df 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -147,6 +147,7 @@ public final class HdfsConstants {
   public static final byte RS_6_3_POLICY_ID = 0;
   public static final byte RS_3_2_POLICY_ID = 1;
   public static final byte RS_6_3_LEGACY_POLICY_ID = 2;
+  public static final byte XOR_2_1_POLICY_ID = 3;
 
   /* Hidden constructor */
   protected HdfsConstants() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51e6c1cc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
index c4bc8de..8a85d23 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
@@ -36,7 +36,7 @@ import java.util.TreeMap;
 public final class ErasureCodingPolicyManager {
 
   /**
-   * TODO: HDFS-8095
+   * TODO: HDFS-8095.
*/
   private static final int DEFAULT_CELLSIZE = 64 * 1024;
   private static final ErasureCodingPolicy SYS_POLICY1 =
@@ -48,10 +48,14 @@ public final class ErasureCodingPolicyManager {
   private static final ErasureCodingPolicy SYS_POLICY3 =
   new ErasureCodingPolicy(ErasureCodeConstants.RS_6_3_LEGACY_SCHEMA,
   DEFAULT_CELLSIZE, HdfsConstants.RS_6_3_LEGACY_POLICY_ID);
+  private static final ErasureCodingPolicy SYS_POLICY4 =
+  new ErasureCodingPolicy(ErasureCodeConstants.XOR_2_1_SCHEMA,
+  DEFAULT_CELLSIZE, HdfsConstants.XOR_2_1_POLICY_ID);

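For intuition, XOR-2-1-64k stripes two 64k data cells with one parity cell that is their bytewise XOR, so any single lost cell is recoverable from the other two. A minimal sketch of the encode/rebuild arithmetic (not the Hadoop raw coder):

import java.util.Arrays;

// Minimal sketch of the XOR(2,1) arithmetic behind XOR-2-1-64k: the parity
// cell is the bytewise XOR of the two data cells, so any one lost cell can
// be rebuilt from the other two.
public class Xor21 {
  static byte[] xor(byte[] a, byte[] b) {
    byte[] out = new byte[a.length];
    for (int i = 0; i < a.length; i++) {
      out[i] = (byte) (a[i] ^ b[i]);
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = "cell-0 data".getBytes();
    byte[] d1 = "cell-1 data".getBytes();
    byte[] parity = xor(d0, d1);              // encode

    byte[] rebuilt = xor(parity, d1);         // decode: d0 was lost
    System.out.println(Arrays.equals(d0, rebuilt));  // true
  }
}
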
[28/29] hadoop git commit: HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c51bfd29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c51bfd29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c51bfd29

Branch: refs/heads/YARN-5734
Commit: c51bfd29cd1e6ec619742f2c47ebfc8bbfb231b6
Parents: f885160
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 08:44:40 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 08:44:40 2016 -0800

--
 .../src/main/native/fuse-dfs/fuse_dfs_wrapper.sh   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c51bfd29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
index c52c5f9..d5bfd09 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
@@ -43,7 +43,7 @@ done < <(find "$HADOOP_HOME/hadoop-client" -name "*.jar" 
-print0)
 while IFS= read -r -d '' file
 do
   export CLASSPATH=$CLASSPATH:$file
-done < <(find "$HADOOP_HOME/hhadoop-hdfs-project" -name "*.jar" -print0)
+done < <(find "$HADOOP_HOME/hadoop-hdfs-project" -name "*.jar" -print0)
 
 export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
 export PATH=$FUSEDFS_PATH:$PATH





[29/29] hadoop git commit: HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.

HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by 
John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/291df5c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/291df5c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/291df5c7

Branch: refs/heads/YARN-5734
Commit: 291df5c7fb713d5442ee29eb3f272127afb05a3c
Parents: c51bfd2
Author: Xiao Chen 
Authored: Mon Dec 5 09:34:39 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 09:35:17 2016 -0800

--
 .../apache/hadoop/crypto/key/KeyProviderCryptoExtension.java  | 5 +++--
 .../org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java| 7 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 1ecd9f6..0543222 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -427,8 +427,9 @@ public class KeyProviderCryptoExtension extends
 
   @Override
   public void close() throws IOException {
-if (getKeyProvider() != null) {
-  getKeyProvider().close();
+KeyProvider provider = getKeyProvider();
+if (provider != null && provider != this) {
+  provider.close();
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index cd773dd..40ae19f 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -40,9 +40,9 @@ import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
 
 import java.io.File;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URL;
-import java.util.List;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
@@ -215,6 +215,11 @@ public class KMSWebApp implements ServletContextListener {
 
   @Override
   public void contextDestroyed(ServletContextEvent sce) {
+try {
+  keyProviderCryptoExtension.close();
+} catch (IOException ioe) {
+  LOG.error("Error closing KeyProviderCryptoExtension", ioe);
+}
 kmsAudit.shutdown();
 kmsAcls.stopReloader();
 jmxReporter.stop();

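The guard above matters because a KeyProviderCryptoExtension built without a distinct backing provider can hand back itself from getKeyProvider(); closing that result unconditionally would recurse. A minimal sketch of the delegation guard (illustrative names, not the Hadoop classes):

import java.io.Closeable;
import java.io.IOException;

// Sketch of the close-delegation guard: a wrapper whose accessor can
// return itself must not close that result, or close() recurses forever.
public class CloseGuardDemo {
  static class Wrapper implements Closeable {
    private final Closeable delegate;

    Wrapper(Closeable delegate) {
      // With no distinct backing resource, the wrapper stands in for it.
      this.delegate = (delegate != null) ? delegate : this;
    }

    @Override
    public void close() throws IOException {
      Closeable d = delegate;
      if (d != null && d != this) {   // the guard from the patch
        d.close();
      }
    }
  }

  public static void main(String[] args) throws IOException {
    new Wrapper(null).close();   // safe: would recurse without the guard
    System.out.println("closed without recursion");
  }
}
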




[15/29] hadoop git commit: HDFS-11180. Intermittent deadlock in NameNode when failover happens.

HDFS-11180. Intermittent deadlock in NameNode when failover happens.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0fa4923
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0fa4923
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0fa4923

Branch: refs/heads/YARN-5734
Commit: e0fa49234fd37aca88e1caa95bac77bca192bae4
Parents: 1f7613b
Author: Akira Ajisaka 
Authored: Thu Dec 1 23:08:59 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Dec 1 23:08:59 2016 +0900

--
 .../dev-support/findbugsExcludeFile.xml | 27 
 .../hadoop/hdfs/server/namenode/FSEditLog.java  | 72 +---
 .../hadoop/hdfs/server/namenode/FSImage.java| 15 +++-
 .../hdfs/server/namenode/FSNamesystem.java  | 27 ++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  2 +-
 .../server/namenode/ha/StandbyCheckpointer.java |  4 +-
 .../server/namenode/TestFSNamesystemMBean.java  | 24 +++
 7 files changed, 148 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0fa4923/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml 
b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
index 426fb72..e6e4057 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
@@ -109,6 +109,33 @@
 
 
 
+
+
+  
+  
+  
+
+
+
+  
+  
+  
+
+
+
+  
+  
+  
+
  

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0fa4923/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index ef9eb68..c9ee32b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -155,14 +155,16 @@ public class FSEditLog implements LogsPurgeable {
   private EditLogOutputStream editLogStream = null;
 
   // a monotonically increasing counter that represents transactionIds.
-  private long txid = 0;
+  // All of the threads which update/increment txid are synchronized,
+  // so make txid volatile instead of AtomicLong.
+  private volatile long txid = 0;
 
   // stores the last synced transactionId.
   private long synctxid = 0;
 
   // the first txid of the log that's currently open for writing.
   // If this value is N, we are currently writing to edits_inprogress_N
-  private long curSegmentTxId = HdfsServerConstants.INVALID_TXID;
+  private volatile long curSegmentTxId = HdfsServerConstants.INVALID_TXID;
 
   // the time of printing the statistics to the log file.
   private long lastPrintTime;
@@ -338,7 +340,18 @@ public class FSEditLog implements LogsPurgeable {
 return state == State.IN_SEGMENT ||
   state == State.BETWEEN_LOG_SEGMENTS;
   }
-  
+
+  /**
+   * Return true if the log is currently open in write mode.
+   * This method is not synchronized and must be used only for metrics.
+   * @return true if the log is currently open in write mode, regardless
+   * of whether it actually has an open segment.
+   */
+  boolean isOpenForWriteWithoutLock() {
+return state == State.IN_SEGMENT ||
+state == State.BETWEEN_LOG_SEGMENTS;
+  }
+
   /**
* @return true if the log is open in write mode and has a segment open
* ready to take edits.
@@ -348,6 +361,16 @@ public class FSEditLog implements LogsPurgeable {
   }
 
   /**
+   * Return true if the state is IN_SEGMENT.
+   * This method is not synchronized and must be used only for metrics.
+   * @return true if the log is open in write mode and has a segment open
+   * ready to take edits.
+   */
+  boolean isSegmentOpenWithoutLock() {
+return state == State.IN_SEGMENT;
+  }
+
+  /**
* @return true if the log is open in read mode.
*/
   public synchronized boolean isOpenForRead() {
@@ -522,7 +545,16 @@ public class FSEditLog implements LogsPurgeable {
   public synchronized long getLastWrittenTxId() {
 return txid;
   }
-  
+
+  /**
+   * Return the transaction ID of the last transaction written to the log.
+   * This method is not synchronized and must be used only for metrics.
+   * @return The transaction ID of the last transaction written to the log.
+   */
+  long getLastWrittenTxIdWithoutLock() {
+    return txid;
+  }
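
The deadlock fix boils down to: writers keep mutating txid under the monitor, while metrics threads read it through unsynchronized accessors, which is safe once the field is volatile. A self-contained sketch of that pattern (illustrative, not the FSEditLog code):

// Illustrative sketch: writers stay synchronized with each other, while a
// metrics thread reads a volatile snapshot without ever taking the lock,
// so a metrics poll cannot deadlock against a long-held monitor.
public class TxLog {
  // All writers are synchronized, so a volatile field (rather than an
  // AtomicLong) is enough for visibility to unsynchronized readers.
  private volatile long txid = 0;

  public synchronized long nextTxId() {
    return ++txid;
  }

  /** For metrics only: may race with a writer, never blocks. */
  long getLastWrittenTxIdWithoutLock() {
    return txid;
  }

  public static void main(String[] args) throws InterruptedException {
    TxLog log = new TxLog();
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) {
        log.nextTxId();
      }
    });
    writer.start();
    while (writer.isAlive()) {
      System.out.println("txid so far: " + log.getLastWrittenTxIdWithoutLock());
      Thread.sleep(50);
    }
    writer.join();
    System.out.println("final: " + log.getLastWrittenTxIdWithoutLock());
  }
}
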
[21/29] hadoop git commit: MAPREDUCE-6815. Fix flaky TestKill.testKillTask(). Contributed by Haibo Chen

MAPREDUCE-6815. Fix flaky TestKill.testKillTask(). Contributed by Haibo Chen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0cfd7ad2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0cfd7ad2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0cfd7ad2

Branch: refs/heads/YARN-5734
Commit: 0cfd7ad21f4457513ed3416e5d77f3123bfe9da0
Parents: f304cca
Author: Jason Lowe 
Authored: Fri Dec 2 17:22:11 2016 +
Committer: Jason Lowe 
Committed: Fri Dec 2 17:22:11 2016 +

--
 .../java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java | 1 +
 .../src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0cfd7ad2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
index 34d9f0e..8a6fa30 100755
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
@@ -259,6 +259,7 @@ public abstract class TaskImpl implements Task, 
EventHandler {
 // d. TA processes TA_KILL event and sends T_ATTEMPT_KILLED to the task.
 .addTransition(TaskStateInternal.KILLED, TaskStateInternal.KILLED,
 EnumSet.of(TaskEventType.T_KILL,
+   TaskEventType.T_SCHEDULE,
TaskEventType.T_ATTEMPT_KILLED,
TaskEventType.T_ADD_SPEC_ATTEMPT))
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0cfd7ad2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java
index 0714647..f681cf8 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestKill.java
@@ -105,7 +105,7 @@ public class TestKill {
 Job job = app.submit(new Configuration());
 
//wait and validate for Job to become RUNNING
-app.waitForState(job, JobState.RUNNING);
+app.waitForInternalState((JobImpl) job, JobStateInternal.RUNNING);
 Map tasks = job.getTasks();
 Assert.assertEquals("No of tasks is not correct", 2, 
 tasks.size());

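The one-line fix adds T_SCHEDULE to the events a task in the terminal KILLED state silently absorbs, since a kill can race ahead of scheduling. A minimal sketch of that ignore-stale-events idea (not the YARN state machine; names are illustrative):

import java.util.EnumSet;
import java.util.Set;

// Minimal sketch: a terminal state enumerates the stale events it absorbs
// instead of treating a late T_SCHEDULE as an illegal transition.
public class TaskStateSketch {
  enum State { SCHEDULED, RUNNING, KILLED }
  enum Event { T_SCHEDULE, T_KILL, T_ATTEMPT_KILLED }

  private State state = State.RUNNING;
  private static final Set<Event> IGNORED_WHEN_KILLED =
      EnumSet.of(Event.T_KILL, Event.T_ATTEMPT_KILLED, Event.T_SCHEDULE);

  void handle(Event e) {
    if (state == State.KILLED) {
      if (IGNORED_WHEN_KILLED.contains(e)) {
        return;                     // stale event: stay KILLED quietly
      }
      throw new IllegalStateException("Can't handle " + e + " in KILLED");
    }
    if (e == Event.T_KILL) {
      state = State.KILLED;
    }
  }

  public static void main(String[] args) {
    TaskStateSketch task = new TaskStateSketch();
    task.handle(Event.T_KILL);      // the kill wins the race...
    task.handle(Event.T_SCHEDULE);  // ...and the late schedule is absorbed
    System.out.println("state = " + task.state);  // KILLED, no exception
  }
}
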




[26/29] hadoop git commit: HADOOP-13257. Improve Azure Data Lake contract tests. Contributed by Vishwajeet Dusane

HADOOP-13257. Improve Azure Data Lake contract tests. Contributed by Vishwajeet 
Dusane


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4113ec5f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4113ec5f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4113ec5f

Branch: refs/heads/YARN-5734
Commit: 4113ec5fa5ca049ebaba039b1faf3911c6a34f7b
Parents: 51211a7
Author: Mingliang Liu 
Authored: Fri Dec 2 15:54:57 2016 -0800
Committer: Mingliang Liu 
Committed: Fri Dec 2 15:54:57 2016 -0800

--
 .../org/apache/hadoop/fs/adl/AdlFileSystem.java |  24 +-
 .../org/apache/hadoop/fs/adl/TestAdlRead.java   |   6 +-
 .../apache/hadoop/fs/adl/TestListStatus.java|   6 +-
 .../fs/adl/live/TestAdlContractAppendLive.java  |  11 +-
 .../fs/adl/live/TestAdlContractConcatLive.java  |  23 +-
 .../fs/adl/live/TestAdlContractCreateLive.java  |  19 +-
 .../fs/adl/live/TestAdlContractDeleteLive.java  |  11 +-
 .../live/TestAdlContractGetFileStatusLive.java  |  36 ++
 .../fs/adl/live/TestAdlContractMkdirLive.java   |  25 +-
 .../fs/adl/live/TestAdlContractOpenLive.java|  11 +-
 .../fs/adl/live/TestAdlContractRenameLive.java  |  30 +-
 .../fs/adl/live/TestAdlContractRootDirLive.java |  19 +-
 .../fs/adl/live/TestAdlContractSeekLive.java|  11 +-
 .../live/TestAdlDifferentSizeWritesLive.java|  69 ++--
 .../live/TestAdlFileContextCreateMkdirLive.java |  67 
 .../TestAdlFileContextMainOperationsLive.java   |  99 ++
 .../adl/live/TestAdlFileSystemContractLive.java |  57 +---
 .../live/TestAdlInternalCreateNonRecursive.java | 134 
 .../fs/adl/live/TestAdlPermissionLive.java  | 116 +++
 .../adl/live/TestAdlSupportedCharsetInPath.java | 334 +++
 .../apache/hadoop/fs/adl/live/TestMetadata.java | 111 ++
 21 files changed, 995 insertions(+), 224 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4113ec5f/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
index 9083afc..bd43c52 100644
--- 
a/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
@@ -346,7 +346,6 @@ public class AdlFileSystem extends FileSystem {
* @see #setPermission(Path, FsPermission)
* @deprecated API only for 0.20-append
*/
-  @Deprecated
   @Override
   public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,
   EnumSet flags, int bufferSize, short replication,
@@ -471,6 +470,10 @@ public class AdlFileSystem extends FileSystem {
   @Override
   public boolean rename(final Path src, final Path dst) throws IOException {
 statistics.incrementWriteOps(1);
+if (toRelativeFilePath(src).equals("/")) {
+  return false;
+}
+
 return adlClient.rename(toRelativeFilePath(src), toRelativeFilePath(dst));
   }
 
@@ -522,9 +525,24 @@ public class AdlFileSystem extends FileSystem {
   public boolean delete(final Path path, final boolean recursive)
   throws IOException {
 statistics.incrementWriteOps(1);
+String relativePath = toRelativeFilePath(path);
+// Delete on root directory not supported.
+if (relativePath.equals("/")) {
+  // This is an important check after the recent commits
+  // HADOOP-12977 and HADOOP-13716, which validate deletes on root:
+  // 1. if root is empty and non recursive delete then return false.
+  // 2. if root is non empty and non recursive delete then throw exception.
+  if (!recursive
+  && adlClient.enumerateDirectory(toRelativeFilePath(path), 1).size()
+  > 0) {
+throw new IOException("Delete on root is not supported.");
+  }
+  return false;
+}
+
 return recursive ?
-adlClient.deleteRecursive(toRelativeFilePath(path)) :
-adlClient.delete(toRelativeFilePath(path));
+adlClient.deleteRecursive(relativePath) :
+adlClient.delete(relativePath);
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4113ec5f/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestAdlRead.java
--
diff --git 
a/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestAdlRead.java
 
b/hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/TestAdlRead.java
index 734256a..172663c 100644
--- 

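The delete() guard shown above encodes the root-directory semantics from HADOOP-12977/HADOOP-13716: a non-recursive delete of a non-empty root fails, and every other delete of root returns false. A standalone sketch of that decision (listingSize stands in for the store's enumerateDirectory() call; all names are illustrative):

import java.io.IOException;

// Standalone sketch of the root-delete decision, not the ADL connector.
public class RootDeleteGuard {
  static boolean delete(String relativePath, boolean recursive,
      int listingSize) throws IOException {
    if (relativePath.equals("/")) {
      // Non-recursive delete of a non-empty root is an error; any other
      // delete of root reports "nothing deleted" rather than succeeding.
      if (!recursive && listingSize > 0) {
        throw new IOException("Delete on root is not supported.");
      }
      return false;
    }
    return true;  // a real store call would happen here
  }

  public static void main(String[] args) throws IOException {
    System.out.println(delete("/", true, 3));       // false, never an error
    System.out.println(delete("/data", false, 0));  // normal paths proceed
    try {
      delete("/", false, 3);                        // non-empty root, no -r
    } catch (IOException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}
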
[24/29] hadoop git commit: YARN-5929. Missing scheduling policy in the FS queue metric. (Contributed by Yufei Gu via Daniel Templeton)

YARN-5929. Missing scheduling policy in the FS queue metric. (Contributed by 
Yufei Gu via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5bd18c49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5bd18c49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5bd18c49

Branch: refs/heads/YARN-5734
Commit: 5bd18c49bd5075fa20d24363dceea7828e3fa266
Parents: 2ff84a0
Author: Daniel Templeton 
Authored: Fri Dec 2 13:35:09 2016 -0800
Committer: Daniel Templeton 
Committed: Fri Dec 2 13:55:42 2016 -0800

--
 .../scheduler/fair/FSQueueMetrics.java  | 32 +++--
 .../scheduler/fair/TestFSQueueMetrics.java  | 69 
 2 files changed, 97 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bd18c49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
index a970815..ca375f2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueueMetrics.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.MetricsSystem;
 import org.apache.hadoop.metrics2.annotation.Metric;
@@ -169,6 +170,12 @@ public class FSQueueMetrics extends QueueMetrics {
 amResourceUsageVCores.set(resource.getVirtualCores());
   }
 
+  /**
+   * Get the scheduling policy.
+   *
+   * @return the scheduling policy
+   */
+  @Metric("Scheduling policy")
   public String getSchedulingPolicy() {
 return schedulingPolicy;
   }
@@ -181,21 +188,38 @@ public class FSQueueMetrics extends QueueMetrics {
   static FSQueueMetrics forQueue(String queueName, Queue parent,
   boolean enableUserMetrics, Configuration conf) {
 MetricsSystem ms = DefaultMetricsSystem.instance();
+return forQueue(ms, queueName, parent, enableUserMetrics, conf);
+  }
+
+  /**
+   * Get the FS queue metric for the given queue. Create one and register it to
+   * metrics system if there isn't one for the queue.
+   *
+   * @param ms the metric system
+   * @param queueName queue name
+   * @param parent parent queue
+   * @param enableUserMetrics  if user metrics is needed
+   * @param conf configuration
+   * @return a FSQueueMetrics object
+   */
+  @VisibleForTesting
+  public synchronized
+  static FSQueueMetrics forQueue(MetricsSystem ms, String queueName,
+  Queue parent, boolean enableUserMetrics, Configuration conf) {
 QueueMetrics metrics = queueMetrics.get(queueName);
 if (metrics == null) {
   metrics = new FSQueueMetrics(ms, queueName, parent, enableUserMetrics, 
conf)
   .tag(QUEUE_INFO, queueName);
-  
+
   // Register with the MetricsSystems
   if (ms != null) {
 metrics = ms.register(
-sourceName(queueName).toString(), 
-"Metrics for queue: " + queueName, metrics);
+sourceName(queueName).toString(),
+"Metrics for queue: " + queueName, metrics);
   }
   queueMetrics.put(queueName, metrics);
 }
 
 return (FSQueueMetrics)metrics;
   }
-
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bd18c49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSQueueMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSQueueMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFSQueueMetrics.java
new file mode 100644
index 

[04/29] hadoop git commit: Revert due to an error "HDFS-10994. Support an XOR policy XOR-2-1-64k in HDFS. Contributed by Sammi Chen"

Revert due to an error "HDFS-10994. Support an XOR policy XOR-2-1-64k in HDFS. 
Contributed by Sammi Chen"

This reverts commit 5614f847b2ef2a5b70bd9a06edc4eba06174c6.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cfd8076f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cfd8076f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cfd8076f

Branch: refs/heads/YARN-5734
Commit: cfd8076f81930c3ffea8ec2ef42926217b83ab1a
Parents: aeecfa2
Author: Kai Zheng 
Authored: Wed Nov 30 15:44:52 2016 +0800
Committer: Kai Zheng 
Committed: Wed Nov 30 15:44:52 2016 +0800

--
 .../io/erasurecode/ErasureCodeConstants.java|   3 -
 .../hadoop/hdfs/protocol/HdfsConstants.java |   1 -
 .../namenode/ErasureCodingPolicyManager.java|  23 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   8 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |  28 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  50 +--
 .../hadoop/hdfs/TestDFSStripedOutputStream.java |  27 +-
 .../TestDFSStripedOutputStreamWithFailure.java  |  37 +-
 .../hdfs/TestDFSXORStripedInputStream.java  |  33 --
 .../hdfs/TestDFSXORStripedOutputStream.java |  35 --
 ...estDFSXORStripedOutputStreamWithFailure.java |  36 --
 ...tyPreemptionPolicyForReservedContainers.java | 430 +++
 12 files changed, 471 insertions(+), 240 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cfd8076f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
index ffa0bce..8d6ff85 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCodeConstants.java
@@ -38,7 +38,4 @@ public final class ErasureCodeConstants {
 
   public static final ECSchema RS_6_3_LEGACY_SCHEMA = new ECSchema(
   RS_LEGACY_CODEC_NAME, 6, 3);
-
-  public static final ECSchema XOR_2_1_SCHEMA = new ECSchema(
-  XOR_CODEC_NAME, 2, 1);
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cfd8076f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index b55b4df..acbc8f6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -147,7 +147,6 @@ public final class HdfsConstants {
   public static final byte RS_6_3_POLICY_ID = 0;
   public static final byte RS_3_2_POLICY_ID = 1;
   public static final byte RS_6_3_LEGACY_POLICY_ID = 2;
-  public static final byte XOR_2_1_POLICY_ID = 3;
 
   /* Hidden constructor */
   protected HdfsConstants() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cfd8076f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
index 8a85d23..c4bc8de 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
@@ -36,7 +36,7 @@ import java.util.TreeMap;
 public final class ErasureCodingPolicyManager {
 
   /**
-   * TODO: HDFS-8095.
+   * TODO: HDFS-8095
*/
   private static final int DEFAULT_CELLSIZE = 64 * 1024;
   private static final ErasureCodingPolicy SYS_POLICY1 =
@@ -48,14 +48,10 @@ public final class ErasureCodingPolicyManager {
   private static final ErasureCodingPolicy SYS_POLICY3 =
   new ErasureCodingPolicy(ErasureCodeConstants.RS_6_3_LEGACY_SCHEMA,
   DEFAULT_CELLSIZE, HdfsConstants.RS_6_3_LEGACY_POLICY_ID);
-  private static final ErasureCodingPolicy SYS_POLICY4 =
-  new ErasureCodingPolicy(ErasureCodeConstants.XOR_2_1_SCHEMA,
-  DEFAULT_CELLSIZE, HdfsConstants.XOR_2_1_POLICY_ID);

[27/29] hadoop git commit: YARN-5746. The state of the parentQueue and its childQueues should be synchronized. Contributed by Xuan Gong

YARN-5746. The state of the parentQueue and its childQueues should be 
synchronized. Contributed by Xuan Gong


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f885160f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f885160f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f885160f

Branch: refs/heads/YARN-5734
Commit: f885160f4ac56a0999e3b051eb7bccce928c1c33
Parents: 4113ec5
Author: Jian He 
Authored: Fri Dec 2 16:17:31 2016 -0800
Committer: Jian He 
Committed: Fri Dec 2 16:17:31 2016 -0800

--
 .../scheduler/capacity/AbstractCSQueue.java | 26 +-
 .../CapacitySchedulerConfiguration.java | 22 -
 .../scheduler/capacity/TestQueueState.java  | 96 
 3 files changed, 139 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f885160f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index 3daabaf..dd2f0d9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -291,7 +291,8 @@ public abstract class AbstractCSQueue implements CSQueue {
 
   authorizer = YarnAuthorizationProvider.getInstance(csContext.getConf());
 
-  this.state = csContext.getConfiguration().getState(getQueuePath());
+  initializeQueueState();
+
   this.acls = csContext.getConfiguration().getAcls(getQueuePath());
 
   // Update metrics
@@ -330,6 +331,29 @@ public abstract class AbstractCSQueue implements CSQueue {
 }
   }
 
+  private void initializeQueueState() {
+// inherit from parent if state not set, only do this when we are not root
+if (parent != null) {
+  QueueState configuredState = csContext.getConfiguration()
+  .getConfiguredState(getQueuePath());
+  QueueState parentState = parent.getState();
+  if (configuredState == null) {
+this.state = parentState;
+  } else if (configuredState == QueueState.RUNNING
+  && parentState == QueueState.STOPPED) {
+throw new IllegalArgumentException(
+"The parent queue:" + parent.getQueueName() + " state is STOPPED, "
++ "child queue:" + queueName + " state cannot be RUNNING.");
+  } else {
+this.state = configuredState;
+  }
+} else {
+  // if this is the root queue, get the state from the configuration.
+  // if the state is not set, use RUNNING as default state.
+  this.state = csContext.getConfiguration().getState(getQueuePath());
+}
+  }
+
   protected QueueInfo getQueueInfo() {
 // Deliberately doesn't use lock here, because this method will be invoked
 // from schedulerApplicationAttempt, to avoid deadlock, sacrifice

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f885160f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index f8335a8..bfaeba4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ 

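initializeQueueState() above reduces to a small rule: inherit the parent's state when none is configured, and reject a RUNNING child under a STOPPED parent. A standalone sketch of that rule (illustrative, not the CapacityScheduler types):

// Standalone sketch of the YARN-5746 rule, not the CapacityScheduler code.
public class QueueStateRule {
  enum QueueState { RUNNING, STOPPED }

  static QueueState resolve(QueueState configured, QueueState parent) {
    if (configured == null) {
      return parent;                              // inherit from parent
    }
    if (configured == QueueState.RUNNING && parent == QueueState.STOPPED) {
      throw new IllegalArgumentException(
          "parent is STOPPED, child cannot be RUNNING");
    }
    return configured;
  }

  public static void main(String[] args) {
    System.out.println(resolve(null, QueueState.STOPPED));           // STOPPED
    System.out.println(resolve(QueueState.STOPPED, QueueState.RUNNING));
    try {
      resolve(QueueState.RUNNING, QueueState.STOPPED);
    } catch (IllegalArgumentException expected) {
      System.out.println("rejected: " + expected.getMessage());
    }
  }
}
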
[16/29] hadoop git commit: HDFS-8674. Improve performance of postponed block scans. Contributed by Daryn Sharp.

HDFS-8674. Improve performance of postponed block scans. Contributed by Daryn 
Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/96c57492
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/96c57492
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/96c57492

Branch: refs/heads/YARN-5734
Commit: 96c574927a600d15fab919df1fdc9e07887af6c5
Parents: e0fa492
Author: Kihwal Lee 
Authored: Thu Dec 1 12:11:27 2016 -0600
Committer: Kihwal Lee 
Committed: Thu Dec 1 12:11:27 2016 -0600

--
 .../server/blockmanagement/BlockManager.java| 79 ++--
 1 file changed, 24 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/96c57492/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 1b744e7..e60703b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -30,6 +30,7 @@ import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
+import java.util.LinkedHashSet;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
@@ -43,8 +44,6 @@ import java.util.concurrent.ExecutionException;
 import java.util.concurrent.FutureTask;
 import java.util.concurrent.ThreadLocalRandom;
 import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicLong;
-
 import javax.management.ObjectName;
 
 import org.apache.hadoop.HadoopIllegalArgumentException;
@@ -101,7 +100,6 @@ import 
org.apache.hadoop.hdfs.server.protocol.KeyUpdateCommand;
 import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
 import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
 import org.apache.hadoop.hdfs.util.FoldedTreeSet;
-import org.apache.hadoop.hdfs.util.LightWeightHashSet;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.hdfs.server.namenode.CacheManager;
 
@@ -184,7 +182,6 @@ public class BlockManager implements BlockStatsMXBean {
   /** flag indicating whether replication queues have been initialized */
   private boolean initializedReplQueues;
 
-  private final AtomicLong postponedMisreplicatedBlocksCount = new 
AtomicLong(0L);
   private final long startupDelayBlockDeletionInMs;
   private final BlockReportLeaseManager blockReportLeaseManager;
   private ObjectName mxBeanName;
@@ -219,7 +216,7 @@ public class BlockManager implements BlockStatsMXBean {
   }
   /** Used by metrics */
   public long getPostponedMisreplicatedBlocksCount() {
-return postponedMisreplicatedBlocksCount.get();
+return postponedMisreplicatedBlocks.size();
   }
   /** Used by metrics */
   public int getPendingDataNodeMessageCount() {
@@ -275,8 +272,10 @@ public class BlockManager implements BlockStatsMXBean {
* notified of all block deletions that might have been pending
* when the failover happened.
*/
-  private final LightWeightHashSet postponedMisreplicatedBlocks =
-  new LightWeightHashSet<>();
+  private final Set postponedMisreplicatedBlocks =
+  new LinkedHashSet();
+  private final int blocksPerPostpondedRescan;
+  private final ArrayList rescannedMisreplicatedBlocks;
 
   /**
* Maps a StorageID to the set of blocks that are "extra" for this
@@ -378,7 +377,10 @@ public class BlockManager implements BlockStatsMXBean {
 datanodeManager = new DatanodeManager(this, namesystem, conf);
 heartbeatManager = datanodeManager.getHeartbeatManager();
 this.blockIdManager = new BlockIdManager(this);
-
+blocksPerPostpondedRescan = (int)Math.min(Integer.MAX_VALUE,
+datanodeManager.getBlocksPerPostponedMisreplicatedBlocksRescan());
+rescannedMisreplicatedBlocks =
+new ArrayList(blocksPerPostpondedRescan);
 startupDelayBlockDeletionInMs = conf.getLong(
 DFSConfigKeys.DFS_NAMENODE_STARTUP_DELAY_BLOCK_DELETION_SEC_KEY,
 DFSConfigKeys.DFS_NAMENODE_STARTUP_DELAY_BLOCK_DELETION_SEC_DEFAULT) * 
1000L;
@@ -1613,9 +1615,7 @@ public class BlockManager implements BlockStatsMXBean {
 
 
   private void postponeBlock(Block blk) {
-if (postponedMisreplicatedBlocks.add(blk)) {
-  postponedMisreplicatedBlocksCount.incrementAndGet();
-}
+postponedMisreplicatedBlocks.add(blk);
   }
   
   
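The switch to a plain LinkedHashSet lets size() serve the metric directly, with no separate AtomicLong to keep in sync, and gives an insertion order for bounded rescans: drain a batch from the front, re-append whatever must be revisited. A minimal sketch of that scan loop (block IDs and the health check are stand-ins):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch of the bounded-rescan pattern: pull an insertion-ordered
// batch off the front of the set, then re-append entries that still need
// attention so they rotate to the back of the queue.
public class PostponedScanSketch {
  public static void main(String[] args) {
    Set<Long> postponed = new LinkedHashSet<>();
    for (long b = 1; b <= 10; b++) {
      postponed.add(b);
    }
    int blocksPerRescan = 4;
    List<Long> rescanned = new ArrayList<>(blocksPerRescan);

    Iterator<Long> it = postponed.iterator();
    while (it.hasNext() && rescanned.size() < blocksPerRescan) {
      Long block = it.next();
      it.remove();                 // drain from the front of the set
      boolean stillMisreplicated = (block % 2 == 0);  // stand-in check
      if (stillMisreplicated) {
        rescanned.add(block);      // keep for re-queueing
      }
    }
    postponed.addAll(rescanned);   // unresolved blocks go to the back

    // size() doubles as the metric; nothing extra to keep in sync.
    System.out.println("postponed now: " + postponed);
    System.out.println("metric size  : " + postponed.size());
  }
}
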

[25/29] hadoop git commit: HADOOP-13855. Fix a couple of the s3a statistic names to be consistent with the rest. Contributed by Steve Loughran

HADOOP-13855. Fix a couple of the s3a statistic names to be consistent with the 
rest. Contributed by Steve Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51211a7d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51211a7d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51211a7d

Branch: refs/heads/YARN-5734
Commit: 51211a7d7aa342b93951fe61da3f624f0652e101
Parents: 5bd18c4
Author: Mingliang Liu 
Authored: Fri Dec 2 13:48:15 2016 -0800
Committer: Mingliang Liu 
Committed: Fri Dec 2 14:01:42 2016 -0800

--
 .../src/main/java/org/apache/hadoop/fs/s3a/Statistic.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51211a7d/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
index 36ec50b..789c6d7 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
@@ -92,12 +92,12 @@ public enum Statistic {
   "Count of times the TCP stream was aborted"),
   STREAM_BACKWARD_SEEK_OPERATIONS("stream_backward_seek_operations",
   "Number of executed seek operations which went backwards in a stream"),
-  STREAM_CLOSED("streamClosed", "Count of times the TCP stream was closed"),
+  STREAM_CLOSED("stream_closed", "Count of times the TCP stream was closed"),
   STREAM_CLOSE_OPERATIONS("stream_close_operations",
   "Total count of times an attempt to close a data stream was made"),
   STREAM_FORWARD_SEEK_OPERATIONS("stream_forward_seek_operations",
   "Number of executed seek operations which went forward in a stream"),
-  STREAM_OPENED("streamOpened",
+  STREAM_OPENED("stream_opened",
   "Total count of times an input stream to object store was opened"),
   STREAM_READ_EXCEPTIONS("stream_read_exceptions",
   "Number of seek operations invoked on input streams"),

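
Each Statistic constant pairs the externally visible snake_case symbol with a human-readable description, so a naming fix like streamClosed -> stream_closed is a one-line, type-safe change. A minimal sketch of that enum shape (abbreviated, not the full s3a enum):

// Abbreviated sketch of the Statistic enum shape: the symbol is what
// metrics systems see; the constant name is only the Java-side handle.
public class StatisticSketch {
  enum Statistic {
    STREAM_CLOSED("stream_closed", "Count of times the TCP stream was closed"),
    STREAM_OPENED("stream_opened",
        "Total count of times an input stream to object store was opened");

    private final String symbol;
    private final String description;

    Statistic(String symbol, String description) {
      this.symbol = symbol;
      this.description = description;
    }

    String getSymbol() { return symbol; }
  }

  public static void main(String[] args) {
    for (Statistic s : Statistic.values()) {
      System.out.println(s.getSymbol() + ": " + s.description);
    }
  }
}
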




[06/29] hadoop git commit: HDFS-8678. Bring back the feature to view chunks of files in the HDFS file browser. Contributed by Ivo Udelsmann.

HDFS-8678. Bring back the feature to view chunks of files in the HDFS file 
browser. Contributed by Ivo Udelsmann.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/625df87c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/625df87c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/625df87c

Branch: refs/heads/YARN-5734
Commit: 625df87c7b8ec2787e743d845fadde5e73479dc1
Parents: 51e6c1c
Author: Ravi Prakash 
Authored: Wed Nov 30 09:11:19 2016 -0800
Committer: Ravi Prakash 
Committed: Wed Nov 30 09:12:15 2016 -0800

--
 .../src/main/webapps/hdfs/explorer.html | 13 +--
 .../src/main/webapps/hdfs/explorer.js   | 37 +---
 2 files changed, 43 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/625df87c/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
index ad8c374..3700a5e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
@@ -57,8 +57,17 @@
File information
  
  
-   Download
-
+   
+  
+Download
+  
+  
+Head the 
file (first 32K)
+  
+  
+Tail the 
file (last 32K)
+ 
+   


  

http://git-wip-us.apache.org/repos/asf/hadoop/blob/625df87c/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
index 1739db2..3e276a9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js
@@ -192,13 +192,40 @@
   var download_url = '/webhdfs/v1' + abs_path + '?op=OPEN';
 
   $('#file-info-download').attr('href', download_url);
-  $('#file-info-preview').click(function() {
+
+  var processPreview = function(url) {
+url += "&noredirect=true";
+$.ajax({
+  type: 'GET',
+  url: url,
+  processData: false,
+  crossDomain: true
+}).done(function(data) {
+  url = data.Location;
+  $.ajax({
+type: 'GET',
+url: url,
+processData: false,
+crossDomain: true
+  }).complete(function(data) {
+$('#file-info-preview-body').val(data.responseText);
+$('#file-info-tail').show();
+  }).error(function(jqXHR, textStatus, errorThrown) {
+show_err_msg("Couldn't preview the file. " + errorThrown);
+  });
+}).error(function(jqXHR, textStatus, errorThrown) {
+  show_err_msg("Couldn't find datanode to read file from. " + 
errorThrown);
+});
+  }
+
+  $('#file-info-preview-tail').click(function() {
 var offset = d.fileLength - TAIL_CHUNK_SIZE;
var url = offset > 0 ? download_url + '&offset=' + offset : download_url;
-$.get(url, function(t) {
-  $('#file-info-preview-body').val(t);
-  $('#file-info-tail').show();
-}, "text").error(network_error_handler(url));
+processPreview(url);
+  });
+  $('#file-info-preview-head').click(function() {
+var url = d.fileLength > TAIL_CHUNK_SIZE ? download_url + '&length=' + TAIL_CHUNK_SIZE : download_url;
+processPreview(url);
   });
 
   if (d.fileLength > 0) {





[09/29] hadoop git commit: HADOOP-13790. Make qbt script executable. Contributed by Andrew Wang.

HADOOP-13790. Make qbt script executable. Contributed by Andrew Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/be5a7570
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/be5a7570
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/be5a7570

Branch: refs/heads/YARN-5734
Commit: be5a757096246d5c4ef73da9d233adda67bd3d69
Parents: 7c84871
Author: Akira Ajisaka 
Authored: Thu Dec 1 03:52:04 2016 +0900
Committer: Akira Ajisaka 
Committed: Thu Dec 1 03:52:44 2016 +0900

--
 dev-support/bin/qbt | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/be5a7570/dev-support/bin/qbt
--
diff --git a/dev-support/bin/qbt b/dev-support/bin/qbt
old mode 100644
new mode 100755





[02/29] hadoop git commit: HDFS-11149. Support for parallel checking of FsVolumes.

HDFS-11149. Support for parallel checking of FsVolumes.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eaaa3295
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eaaa3295
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eaaa3295

Branch: refs/heads/YARN-5734
Commit: eaaa32950cbae42a74e28e3db3f0cdb1ff158119
Parents: 8f6e143
Author: Arpit Agarwal 
Authored: Tue Nov 29 20:31:02 2016 -0800
Committer: Arpit Agarwal 
Committed: Tue Nov 29 20:31:02 2016 -0800

--
 .../datanode/checker/DatasetVolumeChecker.java  | 442 +++
 .../server/datanode/fsdataset/FsDatasetSpi.java |   7 +
 .../server/datanode/fsdataset/FsVolumeSpi.java  |  12 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java   |  15 +-
 .../src/main/resources/hdfs-default.xml |  10 +-
 .../server/datanode/SimulatedFSDataset.java |   7 +
 .../server/datanode/TestDirectoryScanner.java   |   7 +
 .../checker/TestDatasetVolumeChecker.java   | 261 +++
 .../TestDatasetVolumeCheckerFailures.java   | 193 
 .../datanode/extdataset/ExternalVolumeImpl.java |   7 +
 10 files changed, 953 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eaaa3295/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
new file mode 100644
index 000..8a57812
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/DatasetVolumeChecker.java
@@ -0,0 +1,442 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode.checker;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Sets;
+import com.google.common.util.concurrent.FutureCallback;
+import com.google.common.util.concurrent.Futures;
+import com.google.common.util.concurrent.ListenableFuture;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.VolumeCheckContext;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.DiskChecker.DiskErrorException;
+import org.apache.hadoop.util.Timer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+import java.nio.channels.ClosedChannelException;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DISK_CHECK_MIN_GAP_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DISK_CHECK_TIMEOUT_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY;
+
+/**
+ * A class that encapsulates running disk checks against each volume of an
+ * {@link FsDatasetSpi} and allows retrieving a list of failed volumes.
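
The remaining ~430 lines of this new file are cut off by the digest. As
orientation for the Guava types imported above, here is a small,
self-contained sketch of the asynchronous check-with-callback pattern they
enable; the diskProbe callable, the thread name, and the ten-second wait are
illustrative stand-ins, not the committed DatasetVolumeChecker code.

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import com.google.common.util.concurrent.ThreadFactoryBuilder;

import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class VolumeCheckSketch {
  public static void main(String[] args) throws InterruptedException {
    // Stand-in for a real volume check (FsVolumeSpi#check in the patch).
    Callable<Boolean> diskProbe = () -> true;

    ListeningExecutorService executor = MoreExecutors.listeningDecorator(
        Executors.newSingleThreadExecutor(new ThreadFactoryBuilder()
            .setDaemon(true).setNameFormat("volume-checker-%d").build()));

    final CountDownLatch done = new CountDownLatch(1);
    ListenableFuture<Boolean> check = executor.submit(diskProbe);

    Futures.addCallback(check, new FutureCallback<Boolean>() {
      @Override public void onSuccess(Boolean healthy) { done.countDown(); }
      @Override public void onFailure(Throwable t) { done.countDown(); }
    }, MoreExecutors.directExecutor());

    // Bounded wait, in the spirit of dfs.datanode.disk.check.timeout.
    done.await(10, TimeUnit.SECONDS);
    executor.shutdownNow();
  }
}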

[10/29] hadoop git commit: YARN-5942. "Overridden" is misspelled as "overriden" in FairScheduler.md (Contributed by Heather Sutherland via Daniel Templeton)

YARN-5942. "Overridden" is misspelled as "overriden" in FairScheduler.md
(Contributed by Heather Sutherland via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4fca94fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4fca94fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4fca94fb

Branch: refs/heads/YARN-5734
Commit: 4fca94fbdad16e845e670758939aabb7a97154d9
Parents: be5a757
Author: Daniel Templeton 
Authored: Wed Nov 30 11:22:21 2016 -0800
Committer: Daniel Templeton 
Committed: Wed Nov 30 11:23:51 2016 -0800

--
 .../hadoop-yarn-site/src/site/markdown/FairScheduler.md  | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4fca94fb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
index ecbb309..ae4c3ab 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
@@ -129,13 +129,13 @@ The allocation file must be in XML format. The format 
contains five types of elements:
 
 * **A defaultFairSharePreemptionThreshold element**: which sets the fair share 
preemption threshold for the root queue; overridden by 
fairSharePreemptionThreshold element in root queue.
 
-* **A queueMaxAppsDefault element**: which sets the default running app limit 
for queues; overriden by maxRunningApps element in each queue.
+* **A queueMaxAppsDefault element**: which sets the default running app limit 
for queues; overridden by maxRunningApps element in each queue.
 
-* **A queueMaxResourcesDefault element**: which sets the default max resource 
limit for queue; overriden by maxResources element in each queue.
+* **A queueMaxResourcesDefault element**: which sets the default max resource 
limit for queue; overridden by maxResources element in each queue.
 
-* **A queueMaxAMShareDefault element**: which sets the default AM resource 
limit for queue; overriden by maxAMShare element in each queue.
+* **A queueMaxAMShareDefault element**: which sets the default AM resource 
limit for queue; overridden by maxAMShare element in each queue.
 
-* **A defaultQueueSchedulingPolicy element**: which sets the default 
scheduling policy for queues; overriden by the schedulingPolicy element in each 
queue if specified. Defaults to "fair".
+* **A defaultQueueSchedulingPolicy element**: which sets the default 
scheduling policy for queues; overridden by the schedulingPolicy element in 
each queue if specified. Defaults to "fair".
 
 * **A queuePlacementPolicy element**: which contains a list of rule elements 
that tell the scheduler how to place incoming apps into queues. Rules are 
applied in the order that they are listed. Rules may take arguments. All rules 
accept the "create" argument, which indicates whether the rule can create a new 
queue. "Create" defaults to true; if set to false and the rule would place the 
app in a queue that is not configured in the allocations file, we continue on 
to the next rule. The last rule must be one that can never issue a continue. 
Valid rules are:
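
The rule list is cut off by the digest. For orientation, a minimal
allocation-file fragment showing a queuePlacementPolicy element of the kind
described above; the queue names and rule choices are illustrative, though
the rule names themselves are standard FairScheduler rules:

<queuePlacementPolicy>
  <rule name="specified" />
  <rule name="user" create="false" />
  <rule name="default" queue="root.misc" />
</queuePlacementPolicy>

Here "specified" honors an explicitly requested queue, "user" maps each user
to a per-user queue without creating new ones, and the terminal "default"
rule can never issue a continue, which satisfies the last-rule requirement
above.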
 





[08/29] hadoop git commit: MAPREDUCE-6810. Fix hadoop-mapreduce-client-nativetask compilation with GCC-6.2.1. Contributed by Ravi Prakash.

MAPREDUCE-6810. Fix hadoop-mapreduce-client-nativetask compilation with 
GCC-6.2.1. Contributed by Ravi Prakash. (GCC 6 defaults to the C++14 dialect, 
in which a string literal immediately followed by an identifier is parsed as 
a user-defined-literal suffix, so the LOG format macro needs spaces around 
_fmt_.)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7c848719
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7c848719
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7c848719

Branch: refs/heads/YARN-5734
Commit: 7c848719de778929258f1f9e2778e56f267c90ed
Parents: b3befc0
Author: Ravi Prakash 
Authored: Wed Nov 30 10:47:41 2016 -0800
Committer: Ravi Prakash 
Committed: Wed Nov 30 10:47:41 2016 -0800

--
 .../src/main/native/src/lib/Log.h  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c848719/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h
index a0c17f3..a84b055 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/src/lib/Log.h
@@ -32,7 +32,7 @@ extern FILE * LOG_DEVICE;
 #define LOG(_fmt_, args...)   if (LOG_DEVICE) { \
 time_t log_timer; struct tm log_tm; \
 time(&log_timer); localtime_r(&log_timer, &log_tm); \
-fprintf(LOG_DEVICE, "%02d/%02d/%02d %02d:%02d:%02d INFO "_fmt_"\n", \
+fprintf(LOG_DEVICE, "%02d/%02d/%02d %02d:%02d:%02d INFO " _fmt_ "\n", \
 log_tm.tm_year%100, log_tm.tm_mon+1, log_tm.tm_mday, \
 log_tm.tm_hour, log_tm.tm_min, log_tm.tm_sec, \
 ##args);}





[12/29] hadoop git commit: YARN-5761. Separate QueueManager from Scheduler. (Xuan Gong via gtcarrera9)

YARN-5761. Separate QueueManager from Scheduler. (Xuan Gong via gtcarrera9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/69fb70c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/69fb70c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/69fb70c3

Branch: refs/heads/YARN-5734
Commit: 69fb70c31aa277f7fb14b05c0185ddc5cd90793d
Parents: 3fd844b
Author: Li Lu 
Authored: Wed Nov 30 13:38:42 2016 -0800
Committer: Li Lu 
Committed: Wed Nov 30 13:38:42 2016 -0800

--
 .../scheduler/SchedulerQueueManager.java|  75 
 .../scheduler/capacity/CapacityScheduler.java   | 294 +++
 .../capacity/CapacitySchedulerQueueManager.java | 361 +++
 .../capacity/TestApplicationLimits.java |  35 +-
 .../TestApplicationLimitsByPartition.java   |   7 +-
 .../scheduler/capacity/TestChildQueueOrder.java |   9 +-
 .../scheduler/capacity/TestLeafQueue.java   |   9 +-
 .../scheduler/capacity/TestParentQueue.java |  39 +-
 .../scheduler/capacity/TestReservations.java|   8 +-
 .../scheduler/capacity/TestUtils.java   |   2 +-
 10 files changed, 536 insertions(+), 303 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/69fb70c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerQueueManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerQueueManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerQueueManager.java
new file mode 100644
index 000..92b989a
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerQueueManager.java
@@ -0,0 +1,75 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler;
+
+import java.io.IOException;
+import java.util.Map;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
+
+/**
+ *
+ * Context of the Queues in Scheduler.
+ *
+ */
+@Private
+@Unstable
+public interface SchedulerQueueManager<T extends Queue,
+    E extends ReservationSchedulerConfiguration> {
+
+  /**
+   * Get the root queue.
+   * @return root queue
+   */
+  T getRootQueue();
+
+  /**
+   * Get all the queues.
+   * @return a map containing all the queues, keyed by queue name
+   */
+  Map<String, T> getQueues();
+
+  /**
+   * Remove the queue from the existing queues.
+   * @param queueName the queue name
+   */
+  void removeQueue(String queueName);
+
+  /**
+   * Add a new queue to the existing queues.
+   * @param queueName the queue name
+   * @param queue the queue object
+   */
+  void addQueue(String queueName, T queue);
+
+  /**
+   * Get a queue matching the specified queue name.
+   * @param queueName the queue name
+   * @return a queue object
+   */
+  T getQueue(String queueName);
+
+  /**
+   * Reinitialize the queues.
+   * @param newConf the configuration
+   * @throws IOException if it fails to re-initialize the queues
+   */
+  void reinitializeQueues(E newConf) throws IOException;
+}
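
Because the digest strips generic type parameters, the interface reads more
easily next to a concrete binding. Below is a self-contained sketch with
stand-in Queue/Conf classes; the patch itself appears to bind these
parameters to CSQueue and CapacitySchedulerConfiguration via the new
CapacitySchedulerQueueManager.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Stand-ins for the real YARN types.
class Queue {}
class Conf {}

interface QueueManagerSketch<T extends Queue, E extends Conf> {
  T getRootQueue();
  Map<String, T> getQueues();
  void removeQueue(String queueName);
  void addQueue(String queueName, T queue);
  T getQueue(String queueName);
  void reinitializeQueues(E newConf) throws IOException;
}

class MapQueueManager<T extends Queue, E extends Conf>
    implements QueueManagerSketch<T, E> {
  private final Map<String, T> queues = new HashMap<>();

  @Override public T getRootQueue() { return queues.get("root"); }
  @Override public Map<String, T> getQueues() { return queues; }
  @Override public void removeQueue(String queueName) {
    queues.remove(queueName);
  }
  @Override public void addQueue(String queueName, T queue) {
    queues.put(queueName, queue);
  }
  @Override public T getQueue(String queueName) {
    return queues.get(queueName);
  }
  @Override public void reinitializeQueues(E newConf) throws IOException {
    // A real implementation rebuilds the queue hierarchy from newConf;
    // this sketch only validates the argument.
    if (newConf == null) {
      throw new IOException("null scheduler configuration");
    }
  }
}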

http://git-wip-us.apache.org/repos/asf/hadoop/blob/69fb70c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 

[07/29] hadoop git commit: YARN-4997. Update fair scheduler to use pluggable auth provider (Contributed by Tao Jie via Daniel Templeton)

YARN-4997. Update fair scheduler to use pluggable auth provider (Contributed by 
Tao Jie via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b3befc02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b3befc02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b3befc02

Branch: refs/heads/YARN-5734
Commit: b3befc021b0e2d63d1a3710ea450797d1129f1f5
Parents: 625df87
Author: Daniel Templeton 
Authored: Wed Nov 30 09:50:33 2016 -0800
Committer: Daniel Templeton 
Committed: Wed Nov 30 09:50:33 2016 -0800

--
 .../security/YarnAuthorizationProvider.java | 15 +
 .../scheduler/fair/AllocationConfiguration.java | 38 +--
 .../fair/AllocationFileLoaderService.java   | 68 +---
 .../resourcemanager/scheduler/fair/FSQueue.java | 22 +--
 .../scheduler/fair/FairScheduler.java   | 45 +++--
 .../scheduler/fair/TestFairScheduler.java   |  4 +-
 6 files changed, 149 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3befc02/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
index 4b43ea1..9ae4bd7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/security/YarnAuthorizationProvider.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 
+import com.google.common.annotations.VisibleForTesting;
 import java.util.List;
 
 /**
@@ -61,6 +62,20 @@ public abstract class YarnAuthorizationProvider {
   }
 
   /**
+   * Destroy the {@link YarnAuthorizationProvider} instance.
+   * This method is called only from tests.
+   */
+  @VisibleForTesting
+  public static void destroy() {
+    synchronized (YarnAuthorizationProvider.class) {
+      if (authorizer != null) {
+        LOG.debug(authorizer.getClass().getName() + " is destroyed.");
+        authorizer = null;
+      }
+    }
+  }
+
+  /**
   * Initialize the provider. Invoked on daemon startup. DefaultYarnAuthorizer
   * is initialized based on configurations.
   */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3befc02/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
index c771887..7bd2616 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java
@@ -17,6 +17,7 @@
 */
 package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
 
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
@@ -25,13 +26,14 @@ import java.util.Set;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.ReservationACL;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.security.AccessType;
 import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.resource.ResourceWeights;
+import 

[13/29] hadoop git commit: HDFS-5517. Lower the default maximum number of blocks per file. Contributed by Aaron T. Myers and Andrew Wang.

HDFS-5517. Lower the default maximum number of blocks per file. Contributed by 
Aaron T. Myers and Andrew Wang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7226a71b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7226a71b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7226a71b

Branch: refs/heads/YARN-5734
Commit: 7226a71b1f684f562bd88ee121f1dd7aa8b73816
Parents: 69fb70c
Author: Andrew Wang 
Authored: Wed Nov 30 15:58:31 2016 -0800
Committer: Andrew Wang 
Committed: Wed Nov 30 15:58:31 2016 -0800

--
 .../main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java  |  2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml  |  2 +-
 .../hdfs/server/datanode/TestDirectoryScanner.java   | 11 +--
 .../server/namenode/metrics/TestNameNodeMetrics.java |  2 +-
 4 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7226a71b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d7d3c9d..df21857 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -399,7 +399,7 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_NAMENODE_MIN_BLOCK_SIZE_KEY = "dfs.namenode.fs-limits.min-block-size";
   public static final long    DFS_NAMENODE_MIN_BLOCK_SIZE_DEFAULT = 1024*1024;
   public static final String  DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY = "dfs.namenode.fs-limits.max-blocks-per-file";
-  public static final long    DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT = 1024*1024;
+  public static final long    DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT = 10*1000;
   public static final String  DFS_NAMENODE_MAX_XATTRS_PER_INODE_KEY = "dfs.namenode.fs-limits.max-xattrs-per-inode";
   public static final int     DFS_NAMENODE_MAX_XATTRS_PER_INODE_DEFAULT = 32;
   public static final String  DFS_NAMENODE_MAX_XATTR_SIZE_KEY = "dfs.namenode.fs-limits.max-xattr-size";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7226a71b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 671c98c..086f667 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -372,7 +372,7 @@
 
 
 <property>
   <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
-  <value>1048576</value>
+  <value>10000</value>
   <description>Maximum number of blocks per file, enforced by the Namenode on
       write. This prevents the creation of extremely large files which can
       degrade performance.</description>
 </property>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7226a71b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
index f08b579..d7c8383 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
@@ -590,8 +590,15 @@ public class TestDirectoryScanner {
   100);
   DataNode dataNode = cluster.getDataNodes().get(0);
 
-      createFile(GenericTestUtils.getMethodName(),
-          BLOCK_LENGTH * blocks, false);
+      final int maxBlocksPerFile = (int) DFSConfigKeys
+          .DFS_NAMENODE_MAX_BLOCKS_PER_FILE_DEFAULT;
+      int numBlocksToCreate = blocks;
+      while (numBlocksToCreate > 0) {
+        final int toCreate = Math.min(maxBlocksPerFile, numBlocksToCreate);
+        createFile(GenericTestUtils.getMethodName() + numBlocksToCreate,
+            BLOCK_LENGTH * toCreate, false);
+        numBlocksToCreate -= toCreate;
+      }
 
   float ratio = 0.0f;
   int retries = maxRetries;
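
The rest of this email is truncated. Since
dfs.namenode.fs-limits.max-blocks-per-file is enforced by the NameNode,
deployments that legitimately need larger files override it in the
NameNode's hdfs-site.xml; in test setups built from a Configuration object
(for example MiniDFSCluster), the same override can be expressed in Java.
A minimal sketch, with 100000 as an arbitrary illustrative value:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class RaiseBlockCap {
  public static Configuration withHigherCap() {
    Configuration conf = new Configuration();
    // Raise the per-file block cap above the new 10*1000 default.
    conf.setLong(DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY, 100000L);
    return conf;
  }
}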


[23/29] hadoop git commit: HADOOP-13857. S3AUtils.translateException to map (wrapped) InterruptedExceptions to InterruptedIOEs. Contributed by Steve Loughran

HADOOP-13857. S3AUtils.translateException to map (wrapped) 
InterruptedExceptions to InterruptedIOEs. Contributed by Steve Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ff84a00
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ff84a00
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ff84a00

Branch: refs/heads/YARN-5734
Commit: 2ff84a00405e977b1fd791cfb974244580dd5ae8
Parents: c7ff34f
Author: Mingliang Liu 
Authored: Fri Dec 2 13:36:04 2016 -0800
Committer: Mingliang Liu 
Committed: Fri Dec 2 13:36:04 2016 -0800

--
 .../java/org/apache/hadoop/fs/s3a/S3AUtils.java | 23 
 .../fs/s3a/TestS3AExceptionTranslation.java | 38 
 2 files changed, 61 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ff84a00/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
index 49f8862..dedbfd4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java
@@ -40,6 +40,7 @@ import org.slf4j.Logger;
 import java.io.EOFException;
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.io.InterruptedIOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.Method;
 import java.lang.reflect.Modifier;
@@ -113,6 +114,10 @@ public final class S3AUtils {
         path != null ? (" on " + path) : "",
         exception);
     if (!(exception instanceof AmazonServiceException)) {
+      if (containsInterruptedException(exception)) {
+        return (IOException)new InterruptedIOException(message)
+            .initCause(exception);
+      }
       return new AWSClientIOException(message, exception);
     } else {
 
@@ -195,6 +200,24 @@ public final class S3AUtils {
   }
 
   /**
+   * Recurse down the exception loop looking for any inner details about
+   * an interrupted exception.
+   * @param thrown exception thrown
+   * @return true if down the execution chain the operation was an interrupt
+   */
+  static boolean containsInterruptedException(Throwable thrown) {
+    if (thrown == null) {
+      return false;
+    }
+    if (thrown instanceof InterruptedException ||
+        thrown instanceof InterruptedIOException) {
+      return true;
+    }
+    // tail recurse
+    return containsInterruptedException(thrown.getCause());
+  }
+
+  /**
* Get low level details of an amazon exception for logging; multi-line.
* @param e exception
* @return string details
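
For callers, the practical effect of this mapping is that interrupt-driven
shutdown becomes distinguishable from genuine I/O failure. A small
self-contained sketch follows; the exists() wrapper is hypothetical, not
part of the patch.

import java.io.IOException;
import java.io.InterruptedIOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class InterruptAwareCaller {
  /** Returns false instead of failing when the thread was interrupted. */
  static boolean exists(FileSystem fs, Path path) throws IOException {
    try {
      return fs.exists(path);
    } catch (InterruptedIOException e) {
      // After this patch a wrapped InterruptedException surfaces here.
      Thread.currentThread().interrupt(); // restore the interrupt flag
      return false;
    }
  }
}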

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ff84a00/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
index a7dafa0..e548ac2 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AExceptionTranslation.java
@@ -25,9 +25,12 @@ import static org.junit.Assert.*;
 
 import java.io.EOFException;
 import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InterruptedIOException;
 import java.nio.file.AccessDeniedException;
 import java.util.Collections;
 import java.util.Map;
+import java.util.concurrent.ExecutionException;
 
 import com.amazonaws.AmazonClientException;
 import com.amazonaws.AmazonServiceException;
@@ -124,4 +127,39 @@ public class TestS3AExceptionTranslation {
 return verifyExceptionClass(clazz,
 translateException("test", "/", exception));
   }
+
+  private void assertContainsInterrupted(boolean expected, Throwable thrown)
+      throws Throwable {
+    if (containsInterruptedException(thrown) != expected) {
+      throw thrown;
+    }
+  }
+
+  @Test
+  public void testInterruptExceptionDetecting() throws Throwable {
+    InterruptedException interrupted = new InterruptedException("irq");
+    assertContainsInterrupted(true, interrupted);
+    IOException ioe = new IOException("ioe");
+    assertContainsInterrupted(false, ioe);
+    assertContainsInterrupted(true, ioe.initCause(interrupted));
+    assertContainsInterrupted(true,
+        new InterruptedIOException("ioirq"));
+  }
+
+  @Test(expected = 

[03/29] hadoop git commit: HADOOP-10930. Refactor: Wrap Datanode IO related operations. Contributed by Xiaoyu Yao.

HADOOP-10930. Refactor: Wrap Datanode IO related operations. Contributed by 
Xiaoyu Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aeecfa24
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aeecfa24
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aeecfa24

Branch: refs/heads/YARN-5734
Commit: aeecfa24f4fb6af289920cbf8830c394e66bd78e
Parents: eaaa329
Author: Xiaoyu Yao 
Authored: Tue Nov 29 20:52:36 2016 -0800
Committer: Arpit Agarwal 
Committed: Tue Nov 29 20:52:36 2016 -0800

--
 .../hdfs/server/datanode/BlockReceiver.java |  66 +++
 .../hdfs/server/datanode/BlockSender.java   | 105 ---
 .../hadoop/hdfs/server/datanode/DNConf.java |   4 +
 .../hdfs/server/datanode/DataStorage.java   |   5 +
 .../hdfs/server/datanode/LocalReplica.java  | 179 +--
 .../server/datanode/LocalReplicaInPipeline.java |  30 ++--
 .../hdfs/server/datanode/ReplicaInPipeline.java |   4 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java |   3 +-
 .../datanode/fsdataset/ReplicaInputStreams.java | 102 ++-
 .../fsdataset/ReplicaOutputStreams.java | 107 ++-
 .../datanode/fsdataset/impl/BlockPoolSlice.java |  32 ++--
 .../impl/FsDatasetAsyncDiskService.java |   7 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |   5 +-
 .../datanode/fsdataset/impl/FsVolumeImpl.java   |   5 +-
 .../org/apache/hadoop/hdfs/TestFileAppend.java  |   2 +-
 .../server/datanode/SimulatedFSDataset.java |  13 +-
 .../hdfs/server/datanode/TestBlockRecovery.java |   2 +-
 .../server/datanode/TestSimulatedFSDataset.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |   4 +-
 .../extdataset/ExternalReplicaInPipeline.java   |   6 +-
 20 files changed, 445 insertions(+), 238 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aeecfa24/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
index 39419c1..f372072 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
@@ -24,10 +24,7 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.EOFException;
-import java.io.FileDescriptor;
-import java.io.FileOutputStream;
 import java.io.IOException;
-import java.io.OutputStream;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
 import java.nio.ByteBuffer;
@@ -53,7 +50,6 @@ import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.util.DataTransferThrottler;
 import org.apache.hadoop.io.IOUtils;
-import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.util.Daemon;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.hadoop.util.StringUtils;
@@ -88,8 +84,6 @@ class BlockReceiver implements Closeable {
* the DataNode needs to recalculate checksums before writing.
*/
   private final boolean needsChecksumTranslation;
-  private OutputStream out = null; // to block file at local disk
-  private FileDescriptor outFd;
   private DataOutputStream checksumOut = null; // to crc file at local disk
   private final int bytesPerChecksum;
   private final int checksumSize;
@@ -250,7 +244,8 @@ class BlockReceiver implements Closeable {
   
   final boolean isCreate = isDatanode || isTransfer 
   || stage == BlockConstructionStage.PIPELINE_SETUP_CREATE;
-  streams = replicaInfo.createStreams(isCreate, requestedChecksum);
+  streams = replicaInfo.createStreams(isCreate, requestedChecksum,
+  datanodeSlowLogThresholdMs);
   assert streams != null : "null streams!";
 
   // read checksum meta information
@@ -260,13 +255,6 @@ class BlockReceiver implements Closeable {
   this.bytesPerChecksum = diskChecksum.getBytesPerChecksum();
   this.checksumSize = diskChecksum.getChecksumSize();
 
-  this.out = streams.getDataOut();
-  if (out instanceof FileOutputStream) {
-this.outFd = ((FileOutputStream)out).getFD();
-  } else {
-LOG.warn("Could not get file descriptor for outputstream of class " +
-out.getClass());
-  }
   this.checksumOut = new DataOutputStream(new BufferedOutputStream(
   

[01/29] hadoop git commit: MAPREDUCE-6565. Configuration to use host name in delegation token service is not read from job.xml during MapReduce job execution. Contributed by Li Lu.

Repository: hadoop
Updated Branches:
  refs/heads/YARN-5734 6d8b4f6c2 -> 291df5c7f


MAPREDUCE-6565. Configuration to use host name in delegation token service is 
not read from job.xml during MapReduce job execution. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8f6e1439
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8f6e1439
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8f6e1439

Branch: refs/heads/YARN-5734
Commit: 8f6e14399a3e77e1bdcc5034f7601e9f62163dea
Parents: 6d8b4f6
Author: Junping Du 
Authored: Tue Nov 29 15:51:27 2016 -0800
Committer: Junping Du 
Committed: Tue Nov 29 15:51:27 2016 -0800

--
 .../src/main/java/org/apache/hadoop/mapred/YarnChild.java | 2 ++
 .../main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f6e1439/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
index 164f19d..97642a5 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java
@@ -78,6 +78,8 @@ class YarnChild {
     // Initing with our JobConf allows us to avoid loading confs twice
     Limits.init(job);
     UserGroupInformation.setConfiguration(job);
+    // MAPREDUCE-6565: need to set configuration for SecurityUtil.
+    SecurityUtil.setConfiguration(job);
 
     String host = args[0];
     int port = Integer.parseInt(args[1]);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f6e1439/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
index 4a8a90e..b383a02 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
@@ -123,6 +123,7 @@ import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils;
 import org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.security.Credentials;
+import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.service.AbstractService;
@@ -1690,6 +1691,8 @@ public class MRAppMaster extends CompositeService {
       final JobConf conf, String jobUserName) throws IOException,
       InterruptedException {
     UserGroupInformation.setConfiguration(conf);
+    // MAPREDUCE-6565: need to set configuration for SecurityUtil.
+    SecurityUtil.setConfiguration(conf);
     // Security framework already loaded the tokens into current UGI, just use
     // them
     Credentials credentials =





[11/29] hadoop git commit: HADOOP-13830. Intermittent failure of ITestS3NContractRootDir#testRecursiveRootListing: "Can not create a Path from an empty string". Contributed by Steve Loughran

 HADOOP-13830. Intermittent failure of 
ITestS3NContractRootDir#testRecursiveRootListing: "Can not create a Path from 
an empty string". Contributed by Steve Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3fd844b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3fd844b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3fd844b9

Branch: refs/heads/YARN-5734
Commit: 3fd844b99fdfae6be6e5e261f371d175aad14229
Parents: 4fca94f
Author: Mingliang Liu 
Authored: Wed Nov 30 13:01:02 2016 -0800
Committer: Mingliang Liu 
Committed: Wed Nov 30 13:01:19 2016 -0800

--
 .../org/apache/hadoop/fs/s3native/NativeS3FileSystem.java | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3fd844b9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
index f741298..1a45db3 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java
@@ -587,7 +587,12 @@ public class NativeS3FileSystem extends FileSystem {
       for (String commonPrefix : listing.getCommonPrefixes()) {
         Path subpath = keyToPath(commonPrefix);
         String relativePath = pathUri.relativize(subpath.toUri()).getPath();
-        status.add(newDirectory(new Path(absolutePath, relativePath)));
+        // sometimes the common prefix includes the base dir (HADOOP-13830).
+        // avoid that problem by detecting it and keeping it out
+        // of the list
+        if (!relativePath.isEmpty()) {
+          status.add(newDirectory(new Path(absolutePath, relativePath)));
+        }
       }
   priorLastKey = listing.getPriorLastKey();
 } while (priorLastKey != null);





hadoop git commit: HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.

Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 590c7f0d7 -> 49f9e7cf7


HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by 
John Zhuge.

(cherry picked from commit 291df5c7fb713d5442ee29eb3f272127afb05a3c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/49f9e7cf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/49f9e7cf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/49f9e7cf

Branch: refs/heads/branch-2.8
Commit: 49f9e7cf719b202811264d1c68f1816382a3bc22
Parents: 590c7f0
Author: Xiao Chen 
Authored: Mon Dec 5 09:34:39 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 09:35:54 2016 -0800

--
 .../apache/hadoop/crypto/key/KeyProviderCryptoExtension.java  | 5 +++--
 .../org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java| 7 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/49f9e7cf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 73c9885..b32366b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -412,8 +412,9 @@ public class KeyProviderCryptoExtension extends
 
   @Override
   public void close() throws IOException {
-    if (getKeyProvider() != null) {
-      getKeyProvider().close();
+    KeyProvider provider = getKeyProvider();
+    if (provider != null && provider != this) {
+      provider.close();
     }
   }
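
The new provider != this check avoids the corner case where the extension's
underlying provider is the extension object itself, in which case the old
code would call close() on itself until the stack overflowed. A
self-contained sketch of the hazard and the guard (SelfClosing is
illustrative, not the Hadoop type):

import java.io.Closeable;
import java.io.IOException;

class SelfClosing implements Closeable {
  // In the hazardous configuration the delegate is this very object.
  Closeable getDelegate() { return this; }

  @Override
  public void close() throws IOException {
    Closeable delegate = getDelegate();
    // Without "delegate != this" the call below would recurse until
    // StackOverflowError; with it, close() becomes a safe no-op here.
    if (delegate != null && delegate != this) {
      delegate.close();
    }
  }
}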
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/49f9e7cf/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index 7cb6c37..b990999 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -40,9 +40,9 @@ import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
 
 import java.io.File;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URL;
-import java.util.List;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
@@ -218,6 +218,11 @@ public class KMSWebApp implements ServletContextListener {
 
   @Override
   public void contextDestroyed(ServletContextEvent sce) {
+    try {
+      keyProviderCryptoExtension.close();
+    } catch (IOException ioe) {
+      LOG.error("Error closing KeyProviderCryptoExtension", ioe);
+    }
 kmsAudit.shutdown();
 kmsAcls.stopReloader();
 jmxReporter.stop();





hadoop git commit: HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.

Repository: hadoop
Updated Branches:
  refs/heads/trunk c51bfd29c -> 291df5c7f


HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by 
John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/291df5c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/291df5c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/291df5c7

Branch: refs/heads/trunk
Commit: 291df5c7fb713d5442ee29eb3f272127afb05a3c
Parents: c51bfd2
Author: Xiao Chen 
Authored: Mon Dec 5 09:34:39 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 09:35:17 2016 -0800

--
 .../apache/hadoop/crypto/key/KeyProviderCryptoExtension.java  | 5 +++--
 .../org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java| 7 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 1ecd9f6..0543222 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -427,8 +427,9 @@ public class KeyProviderCryptoExtension extends
 
   @Override
   public void close() throws IOException {
-    if (getKeyProvider() != null) {
-      getKeyProvider().close();
+    KeyProvider provider = getKeyProvider();
+    if (provider != null && provider != this) {
+      provider.close();
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/291df5c7/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index cd773dd..40ae19f 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -40,9 +40,9 @@ import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
 
 import java.io.File;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URL;
-import java.util.List;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
@@ -215,6 +215,11 @@ public class KMSWebApp implements ServletContextListener {
 
   @Override
   public void contextDestroyed(ServletContextEvent sce) {
+    try {
+      keyProviderCryptoExtension.close();
+    } catch (IOException ioe) {
+      LOG.error("Error closing KeyProviderCryptoExtension", ioe);
+    }
 kmsAudit.shutdown();
 kmsAcls.stopReloader();
 jmxReporter.stop();





hadoop git commit: HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by John Zhuge.

Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b36af9b76 -> 7e58eec62


HADOOP-13847. KMSWebApp should close KeyProviderCryptoExtension. Contributed by 
John Zhuge.

(cherry picked from commit 291df5c7fb713d5442ee29eb3f272127afb05a3c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7e58eec6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7e58eec6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7e58eec6

Branch: refs/heads/branch-2
Commit: 7e58eec620248bfc5a36032f3cc25dd8ddeddc8a
Parents: b36af9b
Author: Xiao Chen 
Authored: Mon Dec 5 09:34:39 2016 -0800
Committer: Xiao Chen 
Committed: Mon Dec 5 09:35:51 2016 -0800

--
 .../apache/hadoop/crypto/key/KeyProviderCryptoExtension.java  | 5 +++--
 .../org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java| 7 ++-
 2 files changed, 9 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e58eec6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
index 9b60ff6..680a367 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
@@ -427,8 +427,9 @@ public class KeyProviderCryptoExtension extends
 
   @Override
   public void close() throws IOException {
-    if (getKeyProvider() != null) {
-      getKeyProvider().close();
+    KeyProvider provider = getKeyProvider();
+    if (provider != null && provider != this) {
+      provider.close();
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e58eec6/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
index 763f207..5772036 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java
@@ -40,9 +40,9 @@ import javax.servlet.ServletContextEvent;
 import javax.servlet.ServletContextListener;
 
 import java.io.File;
+import java.io.IOException;
 import java.net.URI;
 import java.net.URL;
-import java.util.List;
 
 @InterfaceAudience.Private
 public class KMSWebApp implements ServletContextListener {
@@ -215,6 +215,11 @@ public class KMSWebApp implements ServletContextListener {
 
   @Override
   public void contextDestroyed(ServletContextEvent sce) {
+    try {
+      keyProviderCryptoExtension.close();
+    } catch (IOException ioe) {
+      LOG.error("Error closing KeyProviderCryptoExtension", ioe);
+    }
 kmsAudit.shutdown();
 kmsAcls.stopReloader();
 jmxReporter.stop();





hadoop git commit: HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 a87abf9b6 -> 590c7f0d7


HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

(cherry picked from commit b36af9b76c37bbad0b33a16e39e69fd86dc0faee)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/590c7f0d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/590c7f0d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/590c7f0d

Branch: refs/heads/branch-2.8
Commit: 590c7f0d78f0130efabbcf960e37677316639c0f
Parents: a87abf9
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 08:46:57 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 08:56:24 2016 -0800

--
 .../src/main/native/fuse-dfs/fuse_dfs_wrapper.sh   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/590c7f0d/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
index 26dfd19..8c4b860 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
@@ -43,7 +43,7 @@ done < <(find "$HADOOP_PREFIX/hadoop-client" -name "*.jar" 
-print0)
 while IFS= read -r -d '' file
 do
   export CLASSPATH=$CLASSPATH:$file
-done < <(find "$HADOOP_PREFIX/hhadoop-hdfs-project" -name "*.jar" -print0)
+done < <(find "$HADOOP_PREFIX/hadoop-hdfs-project" -name "*.jar" -print0)
 
 export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
 export PATH=$FUSEDFS_PATH:$PATH





hadoop git commit: HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

Repository: hadoop
Updated Branches:
  refs/heads/trunk f885160f4 -> c51bfd29c


HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c51bfd29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c51bfd29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c51bfd29

Branch: refs/heads/trunk
Commit: c51bfd29cd1e6ec619742f2c47ebfc8bbfb231b6
Parents: f885160
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 08:44:40 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 08:44:40 2016 -0800

--
 .../src/main/native/fuse-dfs/fuse_dfs_wrapper.sh   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c51bfd29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
index c52c5f9..d5bfd09 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
@@ -43,7 +43,7 @@ done < <(find "$HADOOP_HOME/hadoop-client" -name "*.jar" 
-print0)
 while IFS= read -r -d '' file
 do
   export CLASSPATH=$CLASSPATH:$file
-done < <(find "$HADOOP_HOME/hhadoop-hdfs-project" -name "*.jar" -print0)
+done < <(find "$HADOOP_HOME/hadoop-hdfs-project" -name "*.jar" -print0)
 
 export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
 export PATH=$FUSEDFS_PATH:$PATH





hadoop git commit: HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.

Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d58fca010 -> b36af9b76


HDFS-11181. Fuse wrapper has a typo. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b36af9b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b36af9b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b36af9b7

Branch: refs/heads/branch-2
Commit: b36af9b76c37bbad0b33a16e39e69fd86dc0faee
Parents: d58fca0
Author: Wei-Chiu Chuang 
Authored: Mon Dec 5 08:46:57 2016 -0800
Committer: Wei-Chiu Chuang 
Committed: Mon Dec 5 08:46:57 2016 -0800

--
 .../src/main/native/fuse-dfs/fuse_dfs_wrapper.sh   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b36af9b7/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
index 26dfd19..8c4b860 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
@@ -43,7 +43,7 @@ done < <(find "$HADOOP_PREFIX/hadoop-client" -name "*.jar" 
-print0)
 while IFS= read -r -d '' file
 do
   export CLASSPATH=$CLASSPATH:$file
-done < <(find "$HADOOP_PREFIX/hhadoop-hdfs-project" -name "*.jar" -print0)
+done < <(find "$HADOOP_PREFIX/hadoop-hdfs-project" -name "*.jar" -print0)
 
 export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
 export PATH=$FUSEDFS_PATH:$PATH

