hadoop git commit: YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. Contributed by Zhankun Tang.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 c3a0f07db -> ddb349ceb


YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. 
Contributed by Zhankun Tang.

(cherry picked from commit 5d6554c722f08f79bce904e021243605ee75bae3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ddb349ce
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ddb349ce
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ddb349ce

Branch: refs/heads/branch-3.0
Commit: ddb349ceb39f3fde3ff5c7ba45ddd4fdb7f89298
Parents: c3a0f07
Author: Weiwei Yang 
Authored: Tue Nov 6 14:50:09 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 14:54:33 2018 +0800

--
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddb349ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index a1d854b..cb6c53e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1387,8 +1387,8 @@ public class CapacityScheduler extends
 .add(node.getUnallocatedResource(), 
node.getTotalKillableResources()),
 minimumAllocation) <= 0) {
   if (LOG.isDebugEnabled()) {
-LOG.debug("This node or this node partition doesn't have available or"
-+ "killable resource");
+LOG.debug("This node or node partition doesn't have available or" +
+" preemptible resource");
   }
   return null;
 }
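
A note for readers skimming the diff above: the rewrite is not purely cosmetic. In the old code neither string literal carried a space at the join point, so the logged message ran the words together ("...available orkillable resource"); the new literals include the space and swap "killable" for the clearer "preemptible". A minimal, self-contained illustration of the concatenation pitfall:

  public class ConcatDemo {
    public static void main(String[] args) {
      // Old form: no space on either side of the join, so the words run together.
      String before = "This node or this node partition doesn't have available or"
          + "killable resource";
      // New form: the leading space in the second literal fixes the join.
      String after = "This node or node partition doesn't have available or" +
          " preemptible resource";
      System.out.println(before); // ...available orkillable resource
      System.out.println(after);  // ...available or preemptible resource
    }
  }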





hadoop git commit: YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. Contributed by Zhankun Tang.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 71999f446 -> 631b31110


YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. 
Contributed by Zhankun Tang.

(cherry picked from commit 5d6554c722f08f79bce904e021243605ee75bae3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/631b3111
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/631b3111
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/631b3111

Branch: refs/heads/branch-3.1
Commit: 631b31110c17cb64a145cc26bb30dc240ded6cab
Parents: 71999f4
Author: Weiwei Yang 
Authored: Tue Nov 6 14:50:09 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 14:53:28 2018 +0800

--
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/631b3111/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index c3bc74f..e0f99bd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1539,8 +1539,8 @@ public class CapacityScheduler extends
 .add(node.getUnallocatedResource(), 
node.getTotalKillableResources()),
 minimumAllocation) <= 0) {
   if (LOG.isDebugEnabled()) {
-LOG.debug("This node or this node partition doesn't have available or"
-+ "killable resource");
+LOG.debug("This node or node partition doesn't have available or" +
+" preemptible resource");
   }
   return null;
 }





hadoop git commit: YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. Contributed by Zhankun Tang.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 b474239e0 -> f00125e2d


YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. 
Contributed by Zhankun Tang.

(cherry picked from commit 5d6554c722f08f79bce904e021243605ee75bae3)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f00125e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f00125e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f00125e2

Branch: refs/heads/branch-3.2
Commit: f00125e2d912b48db4c2cec1cdafdca3a9c59598
Parents: b474239
Author: Weiwei Yang 
Authored: Tue Nov 6 14:50:09 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 14:52:10 2018 +0800

--
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f00125e2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index fddd361..4befae7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1615,8 +1615,8 @@ public class CapacityScheduler extends
 .add(node.getUnallocatedResource(), 
node.getTotalKillableResources()),
 minimumAllocation) <= 0) {
   if (LOG.isDebugEnabled()) {
-LOG.debug("This node or this node partition doesn't have available or"
-+ "killable resource");
+LOG.debug("This node or node partition doesn't have available or" +
+" preemptible resource");
   }
   return null;
 }





hadoop git commit: YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. Contributed by Zhankun Tang.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/trunk ffc9c50e0 -> 5d6554c72


YARN-8970. Improve the debug message in CS#allocateContainerOnSingleNode. 
Contributed by Zhankun Tang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5d6554c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5d6554c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5d6554c7

Branch: refs/heads/trunk
Commit: 5d6554c722f08f79bce904e021243605ee75bae3
Parents: ffc9c50
Author: Weiwei Yang 
Authored: Tue Nov 6 14:50:09 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 14:50:09 2018 +0800

--
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d6554c7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index c5ad2ce..5d7f1ba 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1615,8 +1615,8 @@ public class CapacityScheduler extends
 .add(node.getUnallocatedResource(), 
node.getTotalKillableResources()),
 minimumAllocation) <= 0) {
   if (LOG.isDebugEnabled()) {
-LOG.debug("This node or this node partition doesn't have available or"
-+ "killable resource");
+LOG.debug("This node or node partition doesn't have available or" +
+" preemptible resource");
   }
   return null;
 }





hadoop git commit: YARN-8858. CapacityScheduler should respect maximum node resource when per-queue maximum-allocation is being used. Contributed by Wangda Tan and WeiWei Yang.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 3d76d4785 -> 8b2363afe


YARN-8858. CapacityScheduler should respect maximum node resource when 
per-queue maximum-allocation is being used. Contributed by Wangda Tan and 
WeiWei Yang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8b2363af
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8b2363af
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8b2363af

Branch: refs/heads/branch-2.8
Commit: 8b2363afed4e06512f133c56d9430dd18b815823
Parents: 3d76d47
Author: Akira Ajisaka 
Authored: Tue Nov 6 13:58:30 2018 +0900
Committer: Akira Ajisaka 
Committed: Tue Nov 6 13:58:30 2018 +0900

--
 .../scheduler/AbstractYarnScheduler.java| 10 
 .../scheduler/capacity/CapacityScheduler.java   | 12 +++-
 .../capacity/TestContainerAllocation.java   | 58 
 3 files changed, 79 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b2363af/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 46e732e..2f690b8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -230,6 +230,16 @@ public abstract class AbstractYarnScheduler
 return getMaximumResourceCapability();
   }
 
+  @VisibleForTesting
+  public void setForceConfiguredMaxAllocation(boolean flag) {
+maxAllocWriteLock.lock();
+try {
+  useConfiguredMaximumAllocationOnly = flag;
+} finally {
+  maxAllocWriteLock.unlock();
+}
+  }
+
   protected void initMaximumResourceCapability(Resource maximumAllocation) {
 maxAllocWriteLock.lock();
 try {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b2363af/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index cc9f93c..7acde84 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -2056,7 +2056,17 @@ public class CapacityScheduler extends
   LOG.error("queue " + queueName + " is not an leaf queue");
   return getMaximumResourceCapability();
 }
-return ((LeafQueue)queue).getMaximumAllocation();
+
+// queue.getMaxAllocation returns *configured* maximum allocation.
+// getMaximumResourceCapability() returns maximum allocation considers
+// per-node maximum resources. So return (component-wise) min of the two.
+
+Resource queueMaxAllocation = ((LeafQueue)queue).getMaximumAllocation();
+Resource clusterMaxAllocationConsiderNodeMax =
+getMaximumResourceCapability();
+
+return Resources.componentwiseMin(queueMaxAllocation,
+clusterMaxAllocationConsiderNodeMax);
   }
 
   private String handleMoveToPlanQueue(String targetQueueName) {
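
The substance of the fix is the component-wise minimum above: the effective maximum allocation for a queue is capped per dimension by both the queue's configured maximum and the cluster-wide maximum that tracks registered node sizes. A hedged sketch of that operation, with Res as a simplified stand-in for YARN's Resource and hypothetical values (real code should keep using Resources.componentwiseMin as the patch does):

  // Res is illustrative only, not org.apache.hadoop.yarn.api.records.Resource.
  final class Res {
    final long memoryMb;
    final int vcores;
    Res(long memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }

    // Take the smaller value in each dimension, so the result never exceeds
    // either the per-queue limit or the largest node in the cluster.
    static Res componentwiseMin(Res a, Res b) {
      return new Res(Math.min(a.memoryMb, b.memoryMb), Math.min(a.vcores, b.vcores));
    }

    public static void main(String[] args) {
      Res queueMax = new Res(8192, 16); // hypothetical per-queue maximum-allocation
      Res nodeMax = new Res(4096, 32);  // hypothetical largest registered node
      Res effective = componentwiseMin(queueMax, nodeMax);
      System.out.println(effective.memoryMb + "MB, " + effective.vcores + " vcores");
      // prints: 4096MB, 16 vcores
    }
  }

Taking the min per dimension (rather than comparing whole Resource objects) matters because one limit can be tighter on memory while the other is tighter on vcores.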

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b2363af/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestContainerAllocation.java
--
diff 

hadoop git commit: YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting. Contributed by Wanqiang Ji.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 f1ac158c4 -> 90074c136


YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting. Contributed by Wanqiang Ji.

(cherry picked from commit c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90074c13
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90074c13
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90074c13

Branch: refs/heads/branch-2.9
Commit: 90074c1369ebdd72e71f5b320af060d3f042f07d
Parents: f1ac158
Author: Weiwei Yang 
Authored: Tue Nov 6 13:14:57 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 13:47:40 2018 +0800

--
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java| 2 +-
 .../server/resourcemanager/scheduler/fair/FSPreemptionThread.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90074c13/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 4f51e4e..0530d99 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -228,7 +228,7 @@ public abstract class AbstractYarnScheduler
   }
 
   @VisibleForTesting
-  public ClusterNodeTracker getNodeTracker() {
+  public ClusterNodeTracker<N> getNodeTracker() {
 return nodeTracker;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/90074c13/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index 18b4ba5..f0567a5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
@@ -195,7 +195,7 @@ class FSPreemptionThread extends Thread {
 
  private void trackPreemptionsAgainstNode(List<RMContainer> containers,
FSAppAttempt app) {
-FSSchedulerNode node = (FSSchedulerNode) scheduler.getNodeTracker()
+FSSchedulerNode node = scheduler.getNodeTracker()
 .getNode(containers.get(0).getNodeId());
 node.addContainersForPreemption(containers, app);
   }
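
The pattern behind this change: a raw ClusterNodeTracker return type forces a cast like the one removed above, while parameterizing the return by the scheduler's node type lets the compiler supply it. A self-contained sketch of the same idea, using illustrative names (Tracker, Node, FSNode), not the actual YARN classes:

  import java.util.HashMap;
  import java.util.Map;

  // Illustrative stand-ins, not the YARN classes.
  class Node { }
  class FSNode extends Node { }

  // Parameterizing the tracker by its node type N means getNode already
  // returns the concrete type, so call sites need no cast.
  class Tracker<N extends Node> {
    private final Map<String, N> nodes = new HashMap<>();
    void add(String id, N node) { nodes.put(id, node); }
    N getNode(String id) { return nodes.get(id); }
  }

  public class GenericsDemo {
    public static void main(String[] args) {
      Tracker<FSNode> tracker = new Tracker<>();
      tracker.add("n1", new FSNode());
      // A raw Tracker would have required: FSNode n = (FSNode) tracker.getNode("n1");
      FSNode n = tracker.getNode("n1");
      System.out.println(n != null); // true
    }
  }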





hadoop git commit: YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting. Contributed by Wanqiang Ji.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 4b47daa64 -> c3a0f07db


YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting. Contributed by Wanqiang Ji.

(cherry picked from commit c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c3a0f07d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c3a0f07d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c3a0f07d

Branch: refs/heads/branch-3.0
Commit: c3a0f07db0bf3b2926118d6331c6a99267a1b101
Parents: 4b47daa
Author: Weiwei Yang 
Authored: Tue Nov 6 13:14:57 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 13:43:24 2018 +0800

--
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java| 2 +-
 .../server/resourcemanager/scheduler/fair/FSPreemptionThread.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3a0f07d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 7847573..50e7b10 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -230,7 +230,7 @@ public abstract class AbstractYarnScheduler
   }
 
   @VisibleForTesting
-  public ClusterNodeTracker getNodeTracker() {
+  public ClusterNodeTracker<N> getNodeTracker() {
 return nodeTracker;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3a0f07d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index 47e580d..cbadf03 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
@@ -208,7 +208,7 @@ class FSPreemptionThread extends Thread {
 
  private void trackPreemptionsAgainstNode(List<RMContainer> containers,
FSAppAttempt app) {
-FSSchedulerNode node = (FSSchedulerNode) scheduler.getNodeTracker()
+FSSchedulerNode node = scheduler.getNodeTracker()
 .getNode(containers.get(0).getNodeId());
 node.addContainersForPreemption(containers, app);
   }





hadoop git commit: YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting. Contributed by Wanqiang Ji.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 9bf4f3d61 -> 71999f446


YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting. Contributed by Wanqiang Ji.

(cherry picked from commit c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71999f44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71999f44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71999f44

Branch: refs/heads/branch-3.1
Commit: 71999f4464ef2e7538d71d5cf7be67fbb796e7fd
Parents: 9bf4f3d
Author: Weiwei Yang 
Authored: Tue Nov 6 13:14:57 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 13:23:42 2018 +0800

--
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java| 2 +-
 .../server/resourcemanager/scheduler/fair/FSPreemptionThread.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71999f44/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index d2e81a5..4708e2a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -237,7 +237,7 @@ public abstract class AbstractYarnScheduler
   }
 
   @VisibleForTesting
-  public ClusterNodeTracker getNodeTracker() {
+  public ClusterNodeTracker<N> getNodeTracker() {
 return nodeTracker;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/71999f44/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index c32565f..6ed90f8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
@@ -229,7 +229,7 @@ class FSPreemptionThread extends Thread {
 
  private void trackPreemptionsAgainstNode(List<RMContainer> containers,
FSAppAttempt app) {
-FSSchedulerNode node = (FSSchedulerNode) scheduler.getNodeTracker()
+FSSchedulerNode node = scheduler.getNodeTracker()
 .getNode(containers.get(0).getNodeId());
 node.addContainersForPreemption(containers, app);
   }





hadoop git commit: HDFS-14053. Provide ability for NN to re-replicate based on topology changes. Contributed by Hrishikesh Gadre.

2018-11-05 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk c7fcca0d7 -> ffc9c50e0


HDFS-14053. Provide ability for NN to re-replicate based on topology changes. 
Contributed by Hrishikesh Gadre.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffc9c50e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffc9c50e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffc9c50e

Branch: refs/heads/trunk
Commit: ffc9c50e074aeca804674c6e1e6b0f1eb629e230
Parents: c7fcca0
Author: Xiao Chen 
Authored: Mon Nov 5 21:36:43 2018 -0800
Committer: Xiao Chen 
Committed: Mon Nov 5 21:38:39 2018 -0800

--
 .../server/blockmanagement/BlockManager.java| 38 +++
 .../hdfs/server/namenode/NamenodeFsck.java  | 33 +
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 10 ++-
 .../src/site/markdown/HDFSCommands.md   |  3 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 21 ++
 .../TestBlocksWithNotEnoughRacks.java   | 72 
 6 files changed, 173 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffc9c50e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index a5fb0b1..36bbeb1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -3535,6 +3535,44 @@ public class BlockManager implements BlockStatsMXBean {
   }
 
   /**
+   * Schedule replication work for a specified list of mis-replicated
+   * blocks and return total number of blocks scheduled for replication.
+   *
+   * @param blocks A list of blocks for which replication work needs to
+   *  be scheduled.
+   * @return Total number of blocks for which replication work is scheduled.
+   **/
+  public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
+int processed = 0;
+Iterator<BlockInfo> iter = blocks.iterator();
+
+try {
+  while (isPopulatingReplQueues() && namesystem.isRunning()
+  && !Thread.currentThread().isInterrupted()
+  && iter.hasNext()) {
+int limit = processed + numBlocksPerIteration;
+namesystem.writeLockInterruptibly();
+try {
+  while (iter.hasNext() && processed < limit) {
+BlockInfo blk = iter.next();
+MisReplicationResult r = processMisReplicatedBlock(blk);
+processed++;
+LOG.debug("BLOCK* processMisReplicatedBlocks: " +
+"Re-scanned block {}, result is {}", blk, r);
+  }
+} finally {
+  namesystem.writeUnlock();
+}
+  }
+} catch (InterruptedException ex) {
+  LOG.info("Caught InterruptedException while scheduling replication work" 
+
+  " for mis-replicated blocks");
+  Thread.currentThread().interrupt();
+}
+
+return processed;
+  }
+
+  /**
* Process a single possibly misreplicated block. This adds it to the
* appropriate queues if necessary, and returns a result code indicating
* what happened with it.
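
The shape of processMisReplicatedBlocks above is a lock-chunking loop: take the namesystem write lock, handle at most numBlocksPerIteration blocks, release, and repeat, bailing out cleanly on interruption. A minimal sketch of that pattern, assuming a plain ReentrantLock and a batch constant as stand-ins for the real namesystem lock and numBlocksPerIteration:

  import java.util.Iterator;
  import java.util.List;
  import java.util.concurrent.locks.ReentrantLock;

  public class ChunkedWork {
    private static final ReentrantLock LOCK = new ReentrantLock(); // stand-in lock
    private static final int BATCH = 1000; // stand-in for numBlocksPerIteration

    static int processAll(List<String> items) {
      int processed = 0;
      Iterator<String> iter = items.iterator();
      try {
        while (!Thread.currentThread().isInterrupted() && iter.hasNext()) {
          int limit = processed + BATCH;
          LOCK.lockInterruptibly();        // like namesystem.writeLockInterruptibly()
          try {
            while (iter.hasNext() && processed < limit) {
              iter.next();                 // handle one item while holding the lock
              processed++;                 // count it so the batch limit advances
            }
          } finally {
            LOCK.unlock();                 // release between batches so others can run
          }
        }
      } catch (InterruptedException ex) {
        Thread.currentThread().interrupt(); // restore the interrupt flag and stop
      }
      return processed;
    }

    public static void main(String[] args) {
      System.out.println(processAll(List.of("a", "b", "c"))); // prints 3
    }
  }

Re-acquiring the lock per batch keeps the NameNode responsive while a long list of mis-replicated blocks is rescanned.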

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffc9c50e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
index 56607f0..f54b407 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
@@ -25,6 +25,7 @@ import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.Socket;
 import java.util.ArrayList;
+import java.util.LinkedList;
 import java.util.Arrays;
 import java.util.Collection;
 import java.util.Date;
@@ -173,6 +174,14 @@ public class NamenodeFsck implements 
DataEncryptionKeyFactory {
*/
   private boolean doDelete = false;
 
+  /**
+   * True if the user specified the -replicate option.
+   *
+   * When this option is in effect, we will initiate replication work to make
+   * mis-replicated blocks confirm the block placement policy.
+   */
+  private boolean doReplicate = false;

hadoop git commit: YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting. Contributed by Wanqiang Ji.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 378f189c4 -> b474239e0


YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting. Contributed by Wanqiang Ji.

(cherry picked from commit c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b474239e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b474239e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b474239e

Branch: refs/heads/branch-3.2
Commit: b474239e0b7ef58be574e6e8be5dfd1ff6f0a09b
Parents: 378f189
Author: Weiwei Yang 
Authored: Tue Nov 6 13:14:57 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 13:17:19 2018 +0800

--
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java| 2 +-
 .../server/resourcemanager/scheduler/fair/FSPreemptionThread.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b474239e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 0acfca7..7acedf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -237,7 +237,7 @@ public abstract class AbstractYarnScheduler
   }
 
   @VisibleForTesting
-  public ClusterNodeTracker getNodeTracker() {
+  public ClusterNodeTracker<N> getNodeTracker() {
 return nodeTracker;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b474239e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index c32565f..6ed90f8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
@@ -229,7 +229,7 @@ class FSPreemptionThread extends Thread {
 
  private void trackPreemptionsAgainstNode(List<RMContainer> containers,
FSAppAttempt app) {
-FSSchedulerNode node = (FSSchedulerNode) scheduler.getNodeTracker()
+FSSchedulerNode node = scheduler.getNodeTracker()
 .getNode(containers.get(0).getNodeId());
 node.addContainersForPreemption(containers, app);
   }





hadoop git commit: YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to avoid type casting. Contributed by Wanqiang Ji.

2018-11-05 Thread wwei
Repository: hadoop
Updated Branches:
  refs/heads/trunk f3296501e -> c7fcca0d7


YARN-8969. AbstractYarnScheduler#getNodeTracker should return generic type to 
avoid type casting. Contributed by Wanqiang Ji.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7fcca0d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7fcca0d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7fcca0d

Branch: refs/heads/trunk
Commit: c7fcca0d7ec9e31d43ef3040ecd576ec808f1f8b
Parents: f329650
Author: Weiwei Yang 
Authored: Tue Nov 6 13:14:57 2018 +0800
Committer: Weiwei Yang 
Committed: Tue Nov 6 13:14:57 2018 +0800

--
 .../server/resourcemanager/scheduler/AbstractYarnScheduler.java| 2 +-
 .../server/resourcemanager/scheduler/fair/FSPreemptionThread.java  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7fcca0d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 0acfca7..7acedf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -237,7 +237,7 @@ public abstract class AbstractYarnScheduler
   }
 
   @VisibleForTesting
-  public ClusterNodeTracker getNodeTracker() {
+  public ClusterNodeTracker<N> getNodeTracker() {
 return nodeTracker;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7fcca0d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
index c32565f..6ed90f8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSPreemptionThread.java
@@ -229,7 +229,7 @@ class FSPreemptionThread extends Thread {
 
  private void trackPreemptionsAgainstNode(List<RMContainer> containers,
FSAppAttempt app) {
-FSSchedulerNode node = (FSSchedulerNode) scheduler.getNodeTracker()
+FSSchedulerNode node = scheduler.getNodeTracker()
 .getNode(containers.get(0).getNodeId());
 node.addContainersForPreemption(containers, app);
   }





[5/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

(cherry picked from commit f3296501e09fa7f1e81548dfcefa56f20fe337ca)
(cherry picked from commit 9bf4f3d61403b06f3d6a092dacab382c7c131e19)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/98075d92
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/98075d92
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/98075d92

Branch: refs/heads/branch-2
Commit: 98075d9224ed59aab4a0d86303a3496850428c7f
Parents: 939827d
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:56:37 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/98075d92/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 67bb2bb..b2075cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanup(LOG, br);
   }
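
The two added lines above matter because Long.parseLong throws NumberFormatException, which is unchecked and would otherwise propagate past callers that only guard against IOException; wrapping it puts a corrupted seen_txid on the same recovery path as any other unreadable file. A small runnable sketch of the same conversion:

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.StringReader;

  public class ParseLongDemo {
    // Parse the first line of a reader as a long, converting the unchecked
    // NumberFormatException into the IOException callers already handle.
    static long readLong(BufferedReader br) throws IOException {
      try {
        return Long.parseLong(br.readLine());
      } catch (NumberFormatException e) {
        throw new IOException(e); // corrupted contents surface as an I/O failure
      }
    }

    public static void main(String[] args) throws IOException {
      System.out.println(readLong(new BufferedReader(new StringReader("42"))));
      try {
        readLong(new BufferedReader(new StringReader("corrupt!")));
      } catch (IOException expected) {
        System.out.println("corrupt file surfaced as IOException");
      }
    }
  }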

http://git-wip-us.apache.org/repos/asf/hadoop/blob/98075d92/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 5050998..1f201f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,8 +28,13 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -37,6 +42,8 @@ import java.util.concurrent.Future;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -687,6 +694,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List<String> nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());
+
+  // Check seen_txid is same in both dirs

[3/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

(cherry picked from commit f3296501e09fa7f1e81548dfcefa56f20fe337ca)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9bf4f3d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9bf4f3d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9bf4f3d6

Branch: refs/heads/branch-3.1
Commit: 9bf4f3d61403b06f3d6a092dacab382c7c131e19
Parents: a1321d0
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:52:43 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bf4f3d6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 67bb2bb..b2075cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanup(LOG, br);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bf4f3d6/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 5602954..e89d0e7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,8 +28,13 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -37,6 +42,8 @@ import java.util.concurrent.Future;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -737,6 +744,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List<String> nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());
+
+  // Check seen_txid is same in both dirs
+  File txidFile1 = new File(new URI(nameDirs.get(1) + "/current/seen_txid"));

[4/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

(cherry picked from commit f3296501e09fa7f1e81548dfcefa56f20fe337ca)
(cherry picked from commit 9bf4f3d61403b06f3d6a092dacab382c7c131e19)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b47daa6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b47daa6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b47daa6

Branch: refs/heads/branch-3.0
Commit: 4b47daa6409f705e8f5b002e12b6f021e44fd662
Parents: aee665f
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:53:28 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b47daa6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 67bb2bb..b2075cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanup(LOG, br);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b47daa6/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 0c86ef4..faa348c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,8 +28,13 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -37,6 +42,8 @@ import java.util.concurrent.Future;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -737,6 +744,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List<String> nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());
+
+  // Check seen_txid is same in both dirs

[1/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 939827daa -> 98075d922
  refs/heads/branch-2.9 1f3899fa3 -> f1ac158c4
  refs/heads/branch-3.0 aee665f2d -> 4b47daa64
  refs/heads/branch-3.1 a1321d020 -> 9bf4f3d61
  refs/heads/branch-3.2 6e1fad299 -> 378f189c4
  refs/heads/trunk f3f5e7ad0 -> f3296501e


HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3296501
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3296501
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3296501

Branch: refs/heads/trunk
Commit: f3296501e09fa7f1e81548dfcefa56f20fe337ca
Parents: f3f5e7a
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:48:37 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3296501/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 777dd87..a94d7ed 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanupWithLogger(LOG, br);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3296501/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 8fa8701..6688ef2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,13 +28,20 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -737,6 +744,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());

[2/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

(cherry picked from commit f3296501e09fa7f1e81548dfcefa56f20fe337ca)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/378f189c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/378f189c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/378f189c

Branch: refs/heads/branch-3.2
Commit: 378f189c4fa3d2928df80a91ac57ee762b9cf30c
Parents: 6e1fad2
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:49:12 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/378f189c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 777dd87..a94d7ed 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanupWithLogger(LOG, br);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/378f189c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 8fa8701..6688ef2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,13 +28,20 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -737,6 +744,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());
+
+  // Check seen_txid is same in both dirs
+  File txidFile1 = new File(new URI(nameDirs.get(1) +
+  "/current/seen_txid"));
+  assertTrue(FileUtils.contentEquals(txidFile0, txidFile1));

[6/6] hadoop git commit: HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

2018-11-05 Thread inigoiri
HDFS-14043. Tolerate corrupted seen_txid file. Contributed by Lukas Majercak.

(cherry picked from commit f3296501e09fa7f1e81548dfcefa56f20fe337ca)
(cherry picked from commit 9bf4f3d61403b06f3d6a092dacab382c7c131e19)
(cherry picked from commit 98075d9224ed59aab4a0d86303a3496850428c7f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f1ac158c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f1ac158c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f1ac158c

Branch: refs/heads/branch-2.9
Commit: f1ac158c4d42e6cdbe303f6ad62d106ca6fef6f3
Parents: 1f3899f
Author: Inigo Goiri 
Authored: Mon Nov 5 16:48:37 2018 -0800
Committer: Inigo Goiri 
Committed: Mon Nov 5 16:57:23 2018 -0800

--
 .../hadoop/hdfs/util/PersistentLongFile.java|  2 +
 .../hdfs/server/namenode/TestSaveNamespace.java | 56 
 2 files changed, 58 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1ac158c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
index 67bb2bb..b2075cf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/PersistentLongFile.java
@@ -98,6 +98,8 @@ public class PersistentLongFile {
 val = Long.parseLong(br.readLine());
 br.close();
 br = null;
+  } catch (NumberFormatException e) {
+throw new IOException(e);
   } finally {
 IOUtils.cleanup(LOG, br);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f1ac158c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
index 5050998..1f201f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
@@ -28,8 +28,13 @@ import static org.mockito.Mockito.doThrow;
 import static org.mockito.Mockito.spy;
 
 import java.io.File;
+import java.io.FileWriter;
 import java.io.IOException;
 import java.io.OutputStream;
+import java.io.PrintWriter;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -37,6 +42,8 @@ import java.util.concurrent.Future;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -687,6 +694,55 @@ public class TestSaveNamespace {
 }
   }
 
+  @Test(timeout=30000)
+  public void testTxFaultTolerance() throws Exception {
+String baseDir = MiniDFSCluster.getBaseDirectory();
+List nameDirs = new ArrayList<>();
+nameDirs.add(fileAsURI(new File(baseDir, "name1")).toString());
+nameDirs.add(fileAsURI(new File(baseDir, "name2")).toString());
+
+Configuration conf = new HdfsConfiguration();
+String nameDirsStr = StringUtils.join(",", nameDirs);
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirsStr);
+conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirsStr);
+
+NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
+DFSTestUtil.formatNameNode(conf);
+FSNamesystem fsn = FSNamesystem.loadFromDisk(conf);
+try {
+  // We have a BEGIN_LOG_SEGMENT txn to start
+  assertEquals(1, fsn.getEditLog().getLastWrittenTxId());
+
+  doAnEdit(fsn, 1);
+
+  assertEquals(2, fsn.getEditLog().getLastWrittenTxId());
+
+  // Shut down
+  fsn.close();
+
+  // Corrupt one of the seen_txid files
+  File txidFile0 = new File(new URI(nameDirs.get(0) +
+  "/current/seen_txid"));
+  FileWriter fw = new FileWriter(txidFile0, false);
+  try (PrintWriter pw = new PrintWriter(fw)) {
+pw.print("corrupt!");
+  }
+
+  // Restart
+  fsn = FSNamesystem.loadFromDisk(conf);
+  assertEquals(4, fsn.getEditLog().getLastWrittenTxId());

[27/50] [abbrv] hadoop git commit: HDDS-771. ChunkGroupOutputStream stream entries need to be properly updated on closed container exception. Contributed by Lokesh Jain.

2018-11-05 Thread aengineer
HDDS-771. ChunkGroupOutputStream stream entries need to be properly updated on 
closed container exception. Contributed by Lokesh Jain.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0ac3081
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0ac3081
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0ac3081

Branch: refs/heads/HDDS-4
Commit: e0ac3081e95bc70b13c36a2cad1565ecc35dec52
Parents: 2e8ac14
Author: Shashikant Banerjee 
Authored: Thu Nov 1 15:43:48 2018 +0530
Committer: Shashikant Banerjee 
Committed: Thu Nov 1 15:43:48 2018 +0530

--
 .../ozone/client/io/ChunkGroupOutputStream.java |  6 +-
 .../rpc/TestCloseContainerHandlingByClient.java | 60 
 2 files changed, 65 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0ac3081/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
--
diff --git 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
index 78d69c1..3fe5d93 100644
--- 
a/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
+++ 
b/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
@@ -413,6 +413,11 @@ public class ChunkGroupOutputStream extends OutputStream {
   return;
 }
 
+// update currentStreamIndex in case of closed container exception. The
+// current stream entry cannot be used for further writes because
+// container is closed.
+currentStreamIndex += 1;
+
 // In case where not a single chunk of data has been written to the 
Datanode
 // yet. This block does not yet exist on the datanode but cached on the
 // outputStream buffer. No need to call GetCommittedBlockLength here
@@ -429,7 +434,6 @@ public class ChunkGroupOutputStream extends OutputStream {
   // allocate new block and write this data in the datanode. The cached
   // data in the buffer does not exceed chunkSize.
   Preconditions.checkState(buffer.position() < chunkSize);
-  currentStreamIndex += 1;
   // readjust the byteOffset value to the length actually been written.
   byteOffset -= buffer.position();
   handleWrite(buffer.array(), 0, buffer.position());
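
The invariant behind moving the increment: once the current stream entry's
container is reported closed, that entry can never take another write, whether
or not any chunk of its data reached the datanode, so the index must advance
unconditionally; only the replay of cached data is conditional. A rough sketch
of that control flow (hypothetical names, not the actual
ChunkGroupOutputStream internals):

import java.io.IOException;
import java.nio.ByteBuffer;

final class RolloverSketch {
  private int currentStreamIndex;

  /** Invoked when the current entry's container is reported closed. */
  void onClosedContainer(ByteBuffer cached) throws IOException {
    // Advance first: the closed entry is unusable in every case.
    currentStreamIndex += 1;
    if (cached.position() > 0) {
      // Replay buffered data against a freshly allocated entry.
      writeToNewEntry(cached.array(), 0, cached.position());
    }
  }

  private void writeToNewEntry(byte[] b, int off, int len)
      throws IOException {
    // Placeholder: allocate a new block and re-dispatch the bytes.
  }
}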

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0ac3081/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
--
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
index d06a0bc..c6ee872 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
@@ -34,6 +34,8 @@ import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.OzoneConsts;
 import org.apache.hadoop.ozone.client.ObjectStore;
 import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneVolume;
 import org.apache.hadoop.ozone.client.OzoneClientFactory;
 import org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream;
 import org.apache.hadoop.ozone.client.io.OzoneInputStream;
@@ -287,6 +289,64 @@ public class TestCloseContainerHandlingByClient {
 validateData(keyName, dataString.concat(dataString2).getBytes());
   }
 
+  @Test
+  public void testMultiBlockWrites3() throws Exception {
+
+String keyName = "standalone5";
+int keyLen = 4 * blockSize;
+OzoneOutputStream key =
+createKey(keyName, ReplicationType.RATIS, keyLen);
+ChunkGroupOutputStream groupOutputStream =
+(ChunkGroupOutputStream) key.getOutputStream();
+// With the initial size provided, it should have preallocated 4 blocks
+Assert.assertEquals(4, groupOutputStream.getStreamEntries().size());
+// write data 3 blocks and one more chunk
+byte[] writtenData = fixedLengthString(keyString, keyLen).getBytes();
+byte[] data = Arrays.copyOfRange(writtenData, 0, 3 * blockSize + 
chunkSize);
+Assert.assertEquals(data.length, 3 * blockSize + chunkSize);
+key.write(data);
+
+Assert.assertTrue(key.getOutputStream() instanceof ChunkGroupOutputStream);
+//get the name of a valid container

[37/50] [abbrv] hadoop git commit: YARN-8893. [AMRMProxy] Fix thread leak in AMRMClientRelayer and UAM client. Contributed by Botong Huang.

2018-11-05 Thread aengineer
YARN-8893. [AMRMProxy] Fix thread leak in AMRMClientRelayer and UAM client. 
Contributed by Botong Huang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/989715ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/989715ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/989715ec

Branch: refs/heads/HDDS-4
Commit: 989715ec5066c6ac7868e25ad9234dc64723e61e
Parents: aed836e
Author: Giovanni Matteo Fumarola 
Authored: Fri Nov 2 15:30:08 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Fri Nov 2 15:30:08 2018 -0700

--
 .../hadoop/yarn/server/AMRMClientRelayer.java   | 55 +---
 .../yarn/server/uam/UnmanagedAMPoolManager.java | 28 ++
 .../server/uam/UnmanagedApplicationManager.java | 28 +++---
 .../yarn/server/MockResourceManagerFacade.java  |  5 +-
 .../yarn/server/TestAMRMClientRelayer.java  | 10 ++--
 .../metrics/TestAMRMClientRelayerMetrics.java   |  6 ---
 .../uam/TestUnmanagedApplicationManager.java| 27 +-
 .../amrmproxy/FederationInterceptor.java| 18 +++
 .../amrmproxy/TestAMRMProxyService.java |  1 -
 9 files changed, 102 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/989715ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
index dc66868..ac43b12 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
@@ -27,9 +27,8 @@ import java.util.Map;
 import java.util.Set;
 import java.util.TreeSet;
 
-import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.ipc.RPC;
-import org.apache.hadoop.service.AbstractService;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
@@ -47,8 +46,6 @@ import org.apache.hadoop.yarn.api.records.SchedulingRequest;
 import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
 import org.apache.hadoop.yarn.api.records.UpdatedContainer;
 import org.apache.hadoop.yarn.client.AMRMClientUtils;
-import org.apache.hadoop.yarn.client.ClientRMProxy;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import 
org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
 import 
org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
 import org.apache.hadoop.yarn.exceptions.YarnException;
@@ -66,8 +63,7 @@ import com.google.common.annotations.VisibleForTesting;
  * pending requests similar to AMRMClient, and handles RM re-sync automatically
  * without propagating the re-sync exception back to AMRMClient.
  */
-public class AMRMClientRelayer extends AbstractService
-implements ApplicationMasterProtocol {
+public class AMRMClientRelayer implements ApplicationMasterProtocol {
   private static final Logger LOG =
   LoggerFactory.getLogger(AMRMClientRelayer.class);
 
@@ -136,51 +132,16 @@ public class AMRMClientRelayer extends AbstractService
 
   private AMRMClientRelayerMetrics metrics;
 
-  public AMRMClientRelayer() {
-super(AMRMClientRelayer.class.getName());
+  public AMRMClientRelayer(ApplicationMasterProtocol rmClient,
+  ApplicationId appId, String rmId) {
 this.resetResponseId = -1;
 this.metrics = AMRMClientRelayerMetrics.getInstance();
-this.rmClient = null;
-this.appId = null;
 this.rmId = "";
-  }
-
-  public AMRMClientRelayer(ApplicationMasterProtocol rmClient,
-  ApplicationId appId, String rmId) {
-this();
 this.rmClient = rmClient;
 this.appId = appId;
 this.rmId = rmId;
   }
 
-  @Override
-  protected void serviceInit(Configuration conf) throws Exception {
-super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStart() throws Exception {
-final YarnConfiguration conf = new YarnConfiguration(getConfig());
-try {
-  if (this.rmClient == null) {
-this.rmClient =
-ClientRMProxy.createRMProxy(conf, ApplicationMasterProtocol.class);
-  }
-} catch (IOException e) {
-
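
The leak this fixes came from lifecycle plumbing: every relayer built as an
AbstractService could create its own RM proxy (and the IPC client threads
behind it) in serviceStart(), and nothing reliably stopped them. Dropping the
service base class makes the owner inject the proxy and own its lifetime. A
rough sketch of the resulting ownership shape, assuming an explicit close()
style teardown (hypothetical class name, not the full AMRMClientRelayer):

import java.io.Closeable;
import java.io.IOException;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;

class RelayerSketch implements Closeable {
  private final ApplicationMasterProtocol rmClient;

  RelayerSketch(ApplicationMasterProtocol rmClient) {
    // Injected rather than created per instance inside serviceStart(),
    // which is what multiplied IPC client threads per UAM.
    this.rmClient = rmClient;
  }

  @Override
  public void close() throws IOException {
    // Stopping the proxy releases its client threads deterministically.
    RPC.stopProxy(rmClient);
  }
}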

[50/50] [abbrv] hadoop git commit: Merge branch 'trunk' into HDDS-4

2018-11-05 Thread aengineer
Merge branch 'trunk' into HDDS-4

Conflicts:

hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7119be30
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7119be30
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7119be30

Branch: refs/heads/HDDS-4
Commit: 7119be30bd570e65c97ff88e8aa84705c79f227b
Parents: 2115256 f3f5e7a
Author: Anu Engineer 
Authored: Mon Nov 5 12:37:32 2018 -0800
Committer: Anu Engineer 
Committed: Mon Nov 5 12:37:32 2018 -0800

--
 LICENSE.txt |5 +-
 NOTICE.txt  |8 +-
 dev-support/bin/dist-layout-stitching   |1 +
 dev-support/docker/Dockerfile   |1 +
 .../assemblies/hadoop-registry-dist.xml |   41 +
 .../hadoop-client-minicluster/pom.xml   |2 +-
 .../hadoop-common/src/main/bin/hadoop   |6 +
 .../hadoop-common/src/main/conf/hadoop-env.sh   |   13 +
 .../fs/CommonConfigurationKeysPublic.java   |   31 +-
 .../io/compress/zstd/ZStandardDecompressor.java |4 +-
 .../main/java/org/apache/hadoop/ipc/Server.java |  114 +-
 .../org/apache/hadoop/security/Credentials.java |   15 +
 .../hadoop/security/SaslPropertiesResolver.java |4 +-
 .../hadoop/security/UserGroupInformation.java   |  192 +-
 .../hadoop/security/token/DtFileOperations.java |   28 +-
 .../hadoop/security/token/DtUtilShell.java  |   37 +-
 .../io/compress/zstd/ZStandardCompressor.c  |   11 +-
 .../io/compress/zstd/ZStandardDecompressor.c|1 +
 .../src/main/resources/core-default.xml |8 +
 .../src/site/markdown/CommandsManual.md |1 +
 .../src/site/markdown/CredentialProviderAPI.md  |  130 +-
 .../site/markdown/registry/hadoop-registry.md   | 1018 ++
 .../src/site/markdown/registry/index.md |   31 +
 .../markdown/registry/registry-configuration.md |  397 
 .../src/site/markdown/registry/registry-dns.md  |  224 +++
 .../site/markdown/registry/registry-security.md |  120 ++
 .../using-the-hadoop-service-registry.md|  273 +++
 .../hadoop/crypto/key/TestKeyProvider.java  |   32 +-
 .../TestZStandardCompressorDecompressor.java|   10 +-
 .../java/org/apache/hadoop/ipc/TestIPC.java |   53 +-
 .../apache/hadoop/security/TestCredentials.java |   57 +-
 .../hadoop/security/TestUGILoginFromKeytab.java |   56 +
 .../security/TestUserGroupInformation.java  |2 +-
 .../hadoop/security/ssl/KeyStoreTestUtil.java   |  105 +
 .../hadoop/security/token/TestDtUtilShell.java  |   44 +
 .../dev-support/findbugs-exclude.xml|   33 +
 hadoop-common-project/hadoop-registry/pom.xml   |  309 +++
 .../apache/hadoop/registry/cli/RegistryCli.java |  497 +
 .../hadoop/registry/client/api/BindFlags.java   |   41 +
 .../registry/client/api/DNSOperations.java  |   60 +
 .../client/api/DNSOperationsFactory.java|   78 +
 .../registry/client/api/RegistryConstants.java  |  388 
 .../registry/client/api/RegistryOperations.java |  182 ++
 .../client/api/RegistryOperationsFactory.java   |  160 ++
 .../registry/client/api/package-info.java   |   35 +
 .../registry/client/binding/JsonSerDeser.java   |  117 ++
 .../client/binding/RegistryPathUtils.java   |  238 +++
 .../client/binding/RegistryTypeUtils.java   |  291 +++
 .../registry/client/binding/RegistryUtils.java  |  399 
 .../registry/client/binding/package-info.java   |   22 +
 .../AuthenticationFailedException.java  |   39 +
 .../exceptions/InvalidPathnameException.java|   40 +
 .../exceptions/InvalidRecordException.java  |   41 +
 .../NoChildrenForEphemeralsException.java   |   48 +
 .../exceptions/NoPathPermissionsException.java  |   45 +
 .../client/exceptions/NoRecordException.java|   45 +
 .../client/exceptions/RegistryIOException.java  |   58 +
 .../client/exceptions/package-info.java |   33 +
 .../impl/FSRegistryOperationsService.java   |  248 +++
 .../client/impl/RegistryOperationsClient.java   |   55 +
 .../registry/client/impl/package-info.java  |   26 +
 .../client/impl/zk/BindingInformation.java  |   41 +
 .../registry/client/impl/zk/CuratorService.java |  896 +
 .../registry/client/impl/zk/ListenerHandle.java |   25 +
 .../registry/client/impl/zk/PathListener.java   |   30 +
 .../client/impl/zk/RegistryBindingSource.java   |   36 +
 .../impl/zk/RegistryInternalConstants.java  |   81 +
 .../impl/zk/RegistryOperationsService.java  |  165 ++
 .../client/impl/zk/RegistrySecurity.java| 1143 +++
 .../registry/client/impl/zk/ZKPathDumper.java   |  133 ++
 .../client/impl/zk/ZookeeperConfigOptions.java  |  118 ++
 .../registry/client/impl/zk/package-info.java   |   39 +
 .../registry/client/types/AddressTypes.java |  

[42/50] [abbrv] hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aengineer
HADOOP-15900. Update JSch versions in LICENSE.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d43cc5db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d43cc5db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d43cc5db

Branch: refs/heads/HDDS-4
Commit: d43cc5db0ff80958ca873df1dc2fa00054e37175
Parents: a5e21cd
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:51:26 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d43cc5db/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 1a97528..81eb32f 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1836,7 +1836,7 @@ any resulting litigation.
 
 The binary distribution of this product bundles these dependencies under the
 following license:
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8





[29/50] [abbrv] hadoop git commit: YARN-7225. Add queue and partition info to RM audit log. Contributed by Eric Payne

2018-11-05 Thread aengineer
YARN-7225. Add queue and partition info to RM audit log. Contributed by Eric 
Payne


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ab611d4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ab611d4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ab611d4

Branch: refs/heads/HDDS-4
Commit: 2ab611d48b7669b31bd2c9b918f47251da77d0f6
Parents: d174b91
Author: Jonathan Hung 
Authored: Thu Nov 1 14:22:00 2018 -0700
Committer: Jonathan Hung 
Committed: Thu Nov 1 14:22:00 2018 -0700

--
 .../server/resourcemanager/ClientRMService.java | 12 ++-
 .../server/resourcemanager/RMAuditLogger.java   | 81 +---
 .../scheduler/common/fica/FiCaSchedulerApp.java | 19 -
 .../scheduler/fair/FSAppAttempt.java|  5 +-
 .../scheduler/fifo/FifoAppAttempt.java  | 10 ++-
 .../resourcemanager/TestRMAuditLogger.java  | 23 +-
 6 files changed, 129 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ab611d4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index 8f8f43e..2da000c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -580,7 +580,8 @@ public class ClientRMService extends AbstractService 
implements
   LOG.warn("Unable to get the current user.", ie);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   ie.getMessage(), "ClientRMService",
-  "Exception in submitting application", applicationId, callerContext);
+  "Exception in submitting application", applicationId, callerContext,
+  submissionContext.getQueue());
   throw RPCUtil.getRemoteException(ie);
 }
 
@@ -603,7 +604,8 @@ public class ClientRMService extends AbstractService 
implements
 ". Flow run should be a long integer", e);
 RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
 e.getMessage(), "ClientRMService",
-"Exception in submitting application", applicationId);
+"Exception in submitting application", applicationId,
+submissionContext.getQueue());
 throw RPCUtil.getRemoteException(e);
   }
 }
@@ -662,12 +664,14 @@ public class ClientRMService extends AbstractService 
implements
   LOG.info("Application with id " + applicationId.getId() + 
   " submitted by user " + user);
   RMAuditLogger.logSuccess(user, AuditConstants.SUBMIT_APP_REQUEST,
-  "ClientRMService", applicationId, callerContext);
+  "ClientRMService", applicationId, callerContext,
+  submissionContext.getQueue());
 } catch (YarnException e) {
   LOG.info("Exception in submitting " + applicationId, e);
   RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST,
   e.getMessage(), "ClientRMService",
-  "Exception in submitting application", applicationId, callerContext);
+  "Exception in submitting application", applicationId, callerContext,
+  submissionContext.getQueue());
   throw e;
 }
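
All three call sites above now pass submissionContext.getQueue() through, so
the queue lands in the audit line for both the success and failure paths. A
toy illustration of the tab-separated key=value format such audit loggers
emit; the key names here are illustrative, not the exact RMAuditLogger
constants:

public final class AuditLineDemo {
  static String success(String user, String operation, String target,
      String appId, String queue) {
    return "USER=" + user + "\tOPERATION=" + operation + "\tTARGET=" + target
        + "\tRESULT=SUCCESS" + "\tAPPID=" + appId + "\tQUEUENAME=" + queue;
  }

  public static void main(String[] args) {
    System.out.println(success("alice", "Submit Application Request",
        "ClientRMService", "application_0001", "default"));
  }
}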
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ab611d4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
index ab10895..292aa8b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
+++ 

[40/50] [abbrv] hadoop git commit: HDFS-14049. TestHttpFSServerWebServer fails on Windows because of missing winutils.exe. Contributed by Inigo Goiri.

2018-11-05 Thread aengineer
HDFS-14049. TestHttpFSServerWebServer fails on Windows because of missing 
winutils.exe. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e3df75e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e3df75e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e3df75e

Branch: refs/heads/HDDS-4
Commit: 4e3df75eb72adbab18a1d6476f228a0b504238fa
Parents: cb8d679
Author: Yiqun Lin 
Authored: Sun Nov 4 09:15:53 2018 +0800
Committer: Yiqun Lin 
Committed: Sun Nov 4 09:15:53 2018 +0800

--
 .../hadoop/fs/http/server/TestHttpFSServerWebServer.java | 11 +++
 1 file changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e3df75e/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
index 5250543..97d41d3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.test.HadoopUsersConfTestHelper;
+import org.apache.hadoop.util.Shell;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
@@ -55,6 +56,16 @@ public class TestHttpFSServerWebServer {
 confDir.mkdirs();
 logsDir.mkdirs();
 tempDir.mkdirs();
+
+if (Shell.WINDOWS) {
+  File binDir = new File(homeDir, "bin");
+  binDir.mkdirs();
+  File winutils = Shell.getWinUtilsFile();
+  if (winutils.exists()) {
+FileUtils.copyFileToDirectory(winutils, binDir);
+  }
+}
+
 System.setProperty("hadoop.home.dir", homeDir.getAbsolutePath());
 System.setProperty("hadoop.log.dir", logsDir.getAbsolutePath());
 System.setProperty("httpfs.home.dir", homeDir.getAbsolutePath());





[24/50] [abbrv] hadoop git commit: HDDS-777. Fix missing jenkins issue in s3gateway module. Contributed by Bharat Viswanadham.

2018-11-05 Thread aengineer
HDDS-777. Fix missing jenkins issue in s3gateway module. Contributed by Bharat 
Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5eb237e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5eb237e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5eb237e

Branch: refs/heads/HDDS-4
Commit: c5eb237e3e951e27565d40a47b6f55e7eb399f5c
Parents: 6668c19
Author: Bharat Viswanadham 
Authored: Wed Oct 31 19:11:12 2018 -0700
Committer: Bharat Viswanadham 
Committed: Wed Oct 31 19:11:12 2018 -0700

--
 hadoop-ozone/s3gateway/pom.xml  |  2 +-
 .../apache/hadoop/ozone/s3/util/S3Consts.java   | 18 ++
 .../apache/hadoop/ozone/s3/util/S3utils.java| 20 +++-
 3 files changed, 38 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5eb237e/hadoop-ozone/s3gateway/pom.xml
--
diff --git a/hadoop-ozone/s3gateway/pom.xml b/hadoop-ozone/s3gateway/pom.xml
index 52eee5d..5bc7c62 100644
--- a/hadoop-ozone/s3gateway/pom.xml
+++ b/hadoop-ozone/s3gateway/pom.xml
@@ -22,7 +22,7 @@
 0.4.0-SNAPSHOT
   
   hadoop-ozone-s3gateway
-  Apache Hadoop Ozone S3 Gatway
+  Apache Hadoop Ozone S3 Gateway
   jar
   0.4.0-SNAPSHOT
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5eb237e/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
--
diff --git 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
index 70d8a96..6ece7df 100644
--- 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
+++ 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3Consts.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
 package org.apache.hadoop.ozone.s3.util;
 
 import org.apache.hadoop.classification.InterfaceAudience;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5eb237e/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
--
diff --git 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
index 8af0927..652ba7f 100644
--- 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
+++ 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/util/S3utils.java
@@ -1,3 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
 package org.apache.hadoop.ozone.s3.util;
 
 import org.apache.commons.codec.DecoderException;
@@ -52,7 +70,7 @@ public final class S3utils {
 byte[] actualKeyBytes = Hex.decodeHex(hex);
 String digestActualKey = DigestUtils.sha256Hex(actualKeyBytes);
 if (digest.equals(digestActualKey)) {
-  return new String(actualKeyBytes);
+  return new String(actualKeyBytes, StandardCharsets.UTF_8);
 } else {
   OS3Exception ex = 
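
Besides the missing license headers, the one functional change here pins the
charset when turning the decoded continuation-token bytes back into a String.
A small standard-library-only illustration of why that matters:

import java.nio.charset.StandardCharsets;

public final class CharsetDemo {
  public static void main(String[] args) {
    byte[] keyBytes = "dir/ключ".getBytes(StandardCharsets.UTF_8);

    // new String(bytes) uses the platform default charset, which varies
    // across JVMs; pinning UTF-8 makes decoding deterministic everywhere.
    String key = new String(keyBytes, StandardCharsets.UTF_8);
    System.out.println(key);
  }
}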

[28/50] [abbrv] hadoop git commit: HADOOP-15895. [JDK9+] Add missing javax.annotation-api dependency to hadoop-yarn-csi.

2018-11-05 Thread aengineer
HADOOP-15895. [JDK9+] Add missing javax.annotation-api dependency to 
hadoop-yarn-csi.

Contributed by Takanobu Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d174b916
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d174b916
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d174b916

Branch: refs/heads/HDDS-4
Commit: d174b916352ffe701058f3bdae433c3f24eb37c2
Parents: e0ac308
Author: Steve Loughran 
Authored: Thu Nov 1 10:11:21 2018 +
Committer: Steve Loughran 
Committed: Thu Nov 1 12:56:20 2018 +

--
 hadoop-project/pom.xml  | 5 +
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml | 5 +
 2 files changed, 10 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d174b916/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 2247109..937f21f 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1474,6 +1474,11 @@
 javax.activation-api
 1.2.0
   
+  
+javax.annotation
+javax.annotation-api
+1.3.2
+  
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d174b916/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
index b58b0fe..41f5098 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/pom.xml
@@ -83,6 +83,11 @@
 test-jar
 test
 
+
+javax.annotation
+javax.annotation-api
+compile
+
 
 
 





[38/50] [abbrv] hadoop git commit: YARN-8905. [Router] Add JvmMetricsInfo and pause monitor. Contributed by Bilwa S T.

2018-11-05 Thread aengineer
YARN-8905. [Router] Add JvmMetricsInfo and pause monitor. Contributed by Bilwa 
S T.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f84a278b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f84a278b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f84a278b

Branch: refs/heads/HDDS-4
Commit: f84a278baad6cb871ea4196257f23a938826ca23
Parents: 989715e
Author: bibinchundatt 
Authored: Sat Nov 3 20:35:31 2018 +0530
Committer: bibinchundatt 
Committed: Sat Nov 3 20:35:31 2018 +0530

--
 .../hadoop/yarn/server/router/Router.java   |  8 +
 .../hadoop/yarn/server/router/TestRouter.java   | 38 
 2 files changed, 46 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f84a278b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java
index 76050d0..b55c5d5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java
@@ -24,7 +24,9 @@ import java.util.concurrent.atomic.AtomicBoolean;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.service.CompositeService;
+import org.apache.hadoop.util.JvmPauseMonitor;
 import org.apache.hadoop.util.ShutdownHookManager;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
@@ -64,6 +66,7 @@ public class Router extends CompositeService {
   private static CompositeServiceShutdownHook routerShutdownHook;
   private Configuration conf;
   private AtomicBoolean isStopping = new AtomicBoolean(false);
+  private JvmPauseMonitor pauseMonitor;
   private RouterClientRMService clientRMProxyService;
   private RouterRMAdminService rmAdminProxyService;
   private WebApp webApp;
@@ -100,6 +103,11 @@ public class Router extends CompositeService {
 WebAppUtils.getRouterWebAppURLWithoutScheme(this.conf));
 // Metrics
 DefaultMetricsSystem.initialize(METRICS_NAME);
+JvmMetrics jm = JvmMetrics.initSingleton("Router", null);
+pauseMonitor = new JvmPauseMonitor();
+addService(pauseMonitor);
+jm.setPauseMonitor(pauseMonitor);
+
 super.serviceInit(conf);
   }
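
This follows the usual Hadoop daemon pattern: initialize the metrics system,
register JVM metrics, then attach a pause monitor whose start/stop is owned by
the composite service. A condensed sketch of the same wiring (hypothetical
service name, not the Router itself):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.source.JvmMetrics;
import org.apache.hadoop.service.CompositeService;
import org.apache.hadoop.util.JvmPauseMonitor;

class MetricsEnabledService extends CompositeService {
  MetricsEnabledService() {
    super(MetricsEnabledService.class.getName());
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    DefaultMetricsSystem.initialize("MetricsEnabledService");
    JvmMetrics jm = JvmMetrics.initSingleton("MetricsEnabledService", null);
    // addService ties the monitor's lifecycle to the parent, so GC pauses
    // are tracked for exactly as long as the daemon runs.
    JvmPauseMonitor pauseMonitor = new JvmPauseMonitor();
    addService(pauseMonitor);
    jm.setPauseMonitor(pauseMonitor);
    super.serviceInit(conf);
  }
}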
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f84a278b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java
new file mode 100644
index 000..bf0c688
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/TestRouter.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.router;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.junit.Test;
+
+/**
+ * Tests {@link Router}.
+ */

[32/50] [abbrv] hadoop git commit: HDDS-751. Replace usage of Guava Optional with Java Optional.

2018-11-05 Thread aengineer
HDDS-751. Replace usage of Guava Optional with Java Optional.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d16d5f73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d16d5f73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d16d5f73

Branch: refs/heads/HDDS-4
Commit: d16d5f730e9d139d3e026805f21ac2c9b0bbb98b
Parents: 8fe85af
Author: Yiqun Lin 
Authored: Fri Nov 2 10:50:32 2018 +0800
Committer: Yiqun Lin 
Committed: Fri Nov 2 10:50:32 2018 +0800

--
 .../java/org/apache/hadoop/hdds/HddsUtils.java  | 29 +---
 .../apache/hadoop/hdds/scm/HddsServerUtil.java  | 17 ++--
 .../hadoop/hdds/server/BaseHttpServer.java  |  9 --
 .../java/org/apache/hadoop/ozone/OmUtils.java   |  8 +++---
 4 files changed, 32 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d16d5f73/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 89edfdd..18637af 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.hdds;
 
-import com.google.common.base.Optional;
 import com.google.common.base.Strings;
 import com.google.common.net.HostAndPort;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -45,6 +44,7 @@ import java.util.Calendar;
 import java.util.Collection;
 import java.util.HashSet;
 import java.util.Map;
+import java.util.Optional;
 import java.util.TimeZone;
 
 import static org.apache.hadoop.hdfs.DFSConfigKeys
@@ -114,7 +114,7 @@ public final class HddsUtils {
 ScmConfigKeys.OZONE_SCM_CLIENT_ADDRESS_KEY);
 
 return NetUtils.createSocketAddr(host.get() + ":" + port
-.or(ScmConfigKeys.OZONE_SCM_CLIENT_PORT_DEFAULT));
+.orElse(ScmConfigKeys.OZONE_SCM_CLIENT_PORT_DEFAULT));
   }
 
   /**
@@ -162,7 +162,7 @@ public final class HddsUtils {
 ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_ADDRESS_KEY);
 
 return NetUtils.createSocketAddr(host.get() + ":" + port
-.or(ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_PORT_DEFAULT));
+.orElse(ScmConfigKeys.OZONE_SCM_BLOCK_CLIENT_PORT_DEFAULT));
   }
 
   /**
@@ -186,7 +186,7 @@ public final class HddsUtils {
 return hostName;
   }
 }
-return Optional.absent();
+return Optional.empty();
   }
 
   /**
@@ -196,7 +196,7 @@ public final class HddsUtils {
*/
   public static Optional getHostName(String value) {
 if ((value == null) || value.isEmpty()) {
-  return Optional.absent();
+  return Optional.empty();
 }
 return Optional.of(HostAndPort.fromString(value).getHostText());
   }
@@ -208,11 +208,11 @@ public final class HddsUtils {
*/
   public static Optional getHostPort(String value) {
 if ((value == null) || value.isEmpty()) {
-  return Optional.absent();
+  return Optional.empty();
 }
 int port = HostAndPort.fromString(value).getPortOrDefault(NO_PORT);
 if (port == NO_PORT) {
-  return Optional.absent();
+  return Optional.empty();
 } else {
   return Optional.of(port);
 }
@@ -239,7 +239,7 @@ public final class HddsUtils {
 return hostPort;
   }
 }
-return Optional.absent();
+return Optional.empty();
   }
 
   /**
@@ -261,20 +261,17 @@ public final class HddsUtils {
   + " Null or empty address list found.");
 }
 
-final com.google.common.base.Optional
-defaultPort =  com.google.common.base.Optional.of(ScmConfigKeys
-.OZONE_SCM_DEFAULT_PORT);
+final Optional defaultPort = Optional
+.of(ScmConfigKeys.OZONE_SCM_DEFAULT_PORT);
 for (String address : names) {
-  com.google.common.base.Optional hostname =
-  getHostName(address);
+  Optional hostname = getHostName(address);
   if (!hostname.isPresent()) {
 throw new IllegalArgumentException("Invalid hostname for SCM: "
 + hostname);
   }
-  com.google.common.base.Optional port =
-  getHostPort(address);
+  Optional port = getHostPort(address);
   InetSocketAddress addr = NetUtils.createSocketAddr(hostname.get(),
-  port.or(defaultPort.get()));
+  port.orElse(defaultPort.get()));
   addresses.add(addr);
 }
 return addresses;
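
The migration is almost mechanical: Guava's Optional.absent() becomes
java.util.Optional.empty(), and or(value) becomes orElse(value), while
isPresent()/get() keep their meaning. A runnable sketch against
java.util.Optional only:

import java.util.Optional;

public final class OptionalMigrationDemo {
  public static void main(String[] args) {
    // Guava Optional.absent() maps to java.util.Optional.empty().
    Optional<Integer> port = Optional.empty();

    // Guava port.or(defaultValue) maps to port.orElse(defaultValue).
    int effective = port.orElse(9876);

    // isPresent()/get() behave identically in both APIs.
    System.out.println("effective port = " + effective);
  }
}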

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d16d5f73/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/hdds/scm/HddsServerUtil.java
--
diff --git 

[35/50] [abbrv] hadoop git commit: YARN-8954. Reservations list field in ReservationListInfo is not accessible. Contributed by Oleksandr Shevchenko.

2018-11-05 Thread aengineer
YARN-8954. Reservations list field in ReservationListInfo is not accessible. 
Contributed by Oleksandr Shevchenko.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/babc946d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/babc946d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/babc946d

Branch: refs/heads/HDDS-4
Commit: babc946d4017e9c385d19a8e6f7f1ecd5080d619
Parents: 44e37b4
Author: Giovanni Matteo Fumarola 
Authored: Fri Nov 2 11:10:08 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Fri Nov 2 11:10:08 2018 -0700

--
 .../server/resourcemanager/webapp/dao/ReservationListInfo.java   | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/babc946d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationListInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationListInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationListInfo.java
index 25df67a..e0e65e0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationListInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ReservationListInfo.java
@@ -50,4 +50,8 @@ public class ReservationListInfo {
   includeResourceAllocations));
 }
   }
+
+  public List getReservations() {
+return reservations;
+  }
 }
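
The getter matters because clients that unmarshal this DAO from the REST
response otherwise have no way to read the deserialized list. A minimal sketch
of the shape; the element name and String payload are illustrative, the real
class wraps ReservationInfo objects:

import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "reservations")
@XmlAccessorType(XmlAccessType.FIELD)
public class ReservationListSketch {
  private List<String> reservations = new ArrayList<>();

  // JAXB fills the field directly (FIELD access), but callers still need
  // a public accessor to read the result after unmarshalling.
  public List<String> getReservations() {
    return reservations;
  }
}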





[47/50] [abbrv] hadoop git commit: HDDS-794. addendum patch to fix compilation failure. Contributed by Shashikant Banerjee.

2018-11-05 Thread aengineer
HDDS-794. addendum patch to fix compilation failure. Contributed by Shashikant 
Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50f40e05
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50f40e05
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50f40e05

Branch: refs/heads/HDDS-4
Commit: 50f40e0536f38517aa33e8859f299bcf19f2f319
Parents: 5ddefdd
Author: Shashikant Banerjee 
Authored: Tue Nov 6 00:20:57 2018 +0530
Committer: Shashikant Banerjee 
Committed: Tue Nov 6 00:20:57 2018 +0530

--
 .../apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50f40e05/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
index 8f9d589..dc44dc5 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
@@ -139,7 +139,7 @@ public final class ChunkUtils {
   }
 }
 log.debug("Write Chunk completed for chunkFile: {}, size {}", chunkFile,
-data.length);
+bufferSize);
   }
 
   /**





[45/50] [abbrv] hadoop git commit: HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in Datanode. Contributed by Mukul Kumar Singh.

2018-11-05 Thread aengineer
HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in 
Datanode. Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/942693bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/942693bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/942693bd

Branch: refs/heads/HDDS-4
Commit: 942693bddd5fba51b85a5f677e3496a41817cff3
Parents: c8ca174
Author: Shashikant Banerjee 
Authored: Mon Nov 5 23:43:22 2018 +0530
Committer: Shashikant Banerjee 
Committed: Mon Nov 5 23:43:22 2018 +0530

--
 .../container/keyvalue/KeyValueHandler.java | 11 +++---
 .../container/keyvalue/helpers/ChunkUtils.java  | 28 ---
 .../keyvalue/impl/ChunkManagerImpl.java |  2 +-
 .../keyvalue/interfaces/ChunkManager.java   |  3 +-
 .../keyvalue/TestChunkManagerImpl.java  | 37 ++--
 .../common/impl/TestContainerPersistence.java   | 28 ++-
 6 files changed, 62 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/942693bd/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
index 4cb23ed..1271d99 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ozone.container.keyvalue;
 
 import java.io.FileInputStream;
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -76,7 +77,7 @@ import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.protobuf.ByteString;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import static org.apache.hadoop.hdds.HddsConfigKeys
 .HDDS_DATANODE_VOLUME_CHOOSING_POLICY;
 import static 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.*;
@@ -652,10 +653,10 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(chunkInfoProto);
   Preconditions.checkNotNull(chunkInfo);
 
-  byte[] data = null;
+  ByteBuffer data = null;
   if (request.getWriteChunk().getStage() == Stage.WRITE_DATA ||
   request.getWriteChunk().getStage() == Stage.COMBINED) {
-data = request.getWriteChunk().getData().toByteArray();
+data = request.getWriteChunk().getData().asReadOnlyByteBuffer();
   }
 
   chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
@@ -713,7 +714,7 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(
   putSmallFileReq.getChunkInfo());
   Preconditions.checkNotNull(chunkInfo);
-  byte[] data = putSmallFileReq.getData().toByteArray();
+  ByteBuffer data = putSmallFileReq.getData().asReadOnlyByteBuffer();
   // chunks will be committed as a part of handling putSmallFile
   // here. There is no need to maintain this info in openContainerBlockMap.
   chunkManager.writeChunk(
@@ -724,7 +725,7 @@ public class KeyValueHandler extends Handler {
   blockData.setChunks(chunks);
   // TODO: add bcsId as a part of putSmallFile transaction
   blockManager.putBlock(kvContainer, blockData);
-  metrics.incContainerBytesStats(Type.PutSmallFile, data.length);
+  metrics.incContainerBytesStats(Type.PutSmallFile, data.capacity());
 
 } catch (StorageContainerException ex) {
   return ContainerUtils.logAndReturnError(LOG, ex, request);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/942693bd/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
--
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
index 20598d9..718f5de 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
+++ 

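For readers skimming the patch, a minimal self-contained sketch of the copy it avoids. Everything below (class name, payload size) is illustrative and not part of the patch; it assumes the Ratis-shaded protobuf jar on the classpath:

```java
import java.nio.ByteBuffer;

import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;

public final class ByteStringCopyDemo {
  public static void main(String[] args) {
    // A 4 MB chunk payload, as a protobuf ByteString.
    ByteString payload = ByteString.copyFrom(new byte[4 * 1024 * 1024]);

    // Old path: toByteArray() allocates and fills a fresh 4 MB array
    // on every write-chunk request.
    byte[] copied = payload.toByteArray();

    // New path: asReadOnlyByteBuffer() wraps the existing bytes; no copy,
    // and the read-only view keeps the ByteString immutable.
    ByteBuffer view = payload.asReadOnlyByteBuffer();

    System.out.println(copied.length + " bytes copied vs. "
        + view.capacity() + " bytes wrapped");
  }
}
```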
[30/50] [abbrv] hadoop git commit: HDFS-14008. NN should log snapshotdiff report. Contributed by Pranay Singh.

2018-11-05 Thread aengineer
HDFS-14008. NN should log snapshotdiff report. Contributed by Pranay Singh.

Signed-off-by: Wei-Chiu Chuang 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d98b881e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d98b881e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d98b881e

Branch: refs/heads/HDDS-4
Commit: d98b881e9ab826eb7b70485d0de2a41ab7345334
Parents: 2ab611d
Author: Pranay Singh 
Authored: Thu Nov 1 17:25:11 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Nov 1 17:26:00 2018 -0700

--
 .../hdfs/protocol/SnapshotDiffReport.java   | 65 
 .../hdfs/server/namenode/FSNamesystem.java  | 20 ++
 .../snapshot/DirectorySnapshottableFeature.java |  5 ++
 .../namenode/snapshot/SnapshotDiffInfo.java | 50 ++-
 4 files changed, 139 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d98b881e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
index 8ee4ec7..7bc95c9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
@@ -170,14 +170,75 @@ public class SnapshotDiffReport {
   /** end point of the diff */
   private final String toSnapshot;
 
+
   /** list of diff */
  private final List<DiffReportEntry> diffList;
 
+  /**
+   * Records the stats related to Snapshot diff operation.
+   */
+  public static class DiffStats {
+// Total dirs processed
+private long totalDirsProcessed;
+
+// Total dirs compared
+private long totalDirsCompared;
+
+// Total files processed
+private long totalFilesProcessed;
+
+// Total files compared
+private long totalFilesCompared;
+
+// Total children listing time
+private final long totalChildrenListingTime;
+
+public DiffStats(long totalDirsProcessed, long totalDirsCompared,
+  long totalFilesProcessed, long totalFilesCompared,
+  long totalChildrenListingTime) {
+  this.totalDirsProcessed = totalDirsProcessed;
+  this.totalDirsCompared = totalDirsCompared;
+  this.totalFilesProcessed = totalFilesProcessed;
+  this.totalFilesCompared = totalFilesCompared;
+  this.totalChildrenListingTime = totalChildrenListingTime;
+}
+
+public long getTotalDirsProcessed() {
+  return this.totalDirsProcessed;
+}
+
+public long getTotalDirsCompared() {
+  return this.totalDirsCompared;
+}
+
+public long getTotalFilesProcessed() {
+  return this.totalFilesProcessed;
+}
+
+public long getTotalFilesCompared() {
+  return this.totalFilesCompared;
+}
+
+public long getTotalChildrenListingTime() {
+  return totalChildrenListingTime;
+}
+  }
+
+  /* Stats associated with the SnapshotDiff Report. */
+  private final DiffStats diffStats;
+
   public SnapshotDiffReport(String snapshotRoot, String fromSnapshot,
   String toSnapshot, List<DiffReportEntry> entryList) {
+this(snapshotRoot, fromSnapshot, toSnapshot, new DiffStats(0, 0, 0, 0, 0),
+entryList);
+  }
+
+  public SnapshotDiffReport(String snapshotRoot, String fromSnapshot,
+  String toSnapshot, DiffStats dStat, List<DiffReportEntry> entryList) {
 this.snapshotRoot = snapshotRoot;
 this.fromSnapshot = fromSnapshot;
 this.toSnapshot = toSnapshot;
+this.diffStats = dStat;
 this.diffList = entryList != null ? entryList : Collections
 .<DiffReportEntry> emptyList();
   }
@@ -197,6 +258,10 @@ public class SnapshotDiffReport {
 return toSnapshot;
   }
 
+  public DiffStats getStats() {
+return this.diffStats;
+  }
+
   /** @return {@link #diffList} */
   public List<DiffReportEntry> getDiffList() {
 return diffList;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d98b881e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d1904fa..8b1cdf6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 

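A hedged sketch of how a caller might surface the new DiffStats when logging a diff report. The counter values and the log format below are invented for illustration; only the constructor and getters come from the patch:

```java
import java.util.Collections;

import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffStats;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class DiffStatsLoggingDemo {
  private static final Logger LOG =
      LoggerFactory.getLogger(DiffStatsLoggingDemo.class);

  public static void main(String[] args) {
    // Invented counters: 12/10 dirs and 340/320 files, 57 ms listing time.
    DiffStats stats = new DiffStats(12, 10, 340, 320, 57);
    SnapshotDiffReport report = new SnapshotDiffReport(
        "/data", "s1", "s2", stats, Collections.emptyList());

    // Log the stats the way a NameNode-side caller could.
    LOG.info("snapshotDiffReport /data (s1 -> s2): dirsProcessed={}, "
            + "filesProcessed={}, childrenListingTimeMs={}",
        report.getStats().getTotalDirsProcessed(),
        report.getStats().getTotalFilesProcessed(),
        report.getStats().getTotalChildrenListingTime());
  }
}
```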
[21/50] [abbrv] hadoop git commit: HDDS-773. Loading ozone s3 bucket browser could be failed. Contributed by Elek Marton.

2018-11-05 Thread aengineer
HDDS-773. Loading ozone s3 bucket browser could be failed. Contributed by Elek Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/478b2cba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/478b2cba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/478b2cba

Branch: refs/heads/HDDS-4
Commit: 478b2cba0de5aadf655ac0b5a607760d46cc2a1e
Parents: b519f3f
Author: Bharat Viswanadham 
Authored: Wed Oct 31 07:54:23 2018 -0700
Committer: Bharat Viswanadham 
Committed: Wed Oct 31 07:54:23 2018 -0700

--
 hadoop-ozone/dist/src/main/smoketest/s3/README.md  |  2 +-
 hadoop-ozone/s3gateway/pom.xml |  6 ++
 .../hadoop/ozone/s3/endpoint/BucketEndpoint.java   | 13 -
 3 files changed, 15 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/478b2cba/hadoop-ozone/dist/src/main/smoketest/s3/README.md
--
diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/README.md b/hadoop-ozone/dist/src/main/smoketest/s3/README.md
index 884ba2e..70ccda7 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/README.md
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/README.md
@@ -23,5 +23,5 @@ You need to
   3. Set bucket/endpointurl during the robot test execution
 
 ```
-robot -v bucket:ozonetest -v OZONE_S3_SET_CREDENTIALS:false -v ENDPOINT_URL:https://s3.us-east-2.amazonaws.com smoketest/s3
+robot -v bucket:ozonetest -v OZONE_TEST:false -v OZONE_S3_SET_CREDENTIALS:false -v ENDPOINT_URL:https://s3.us-east-2.amazonaws.com smoketest/s3
 ```

http://git-wip-us.apache.org/repos/asf/hadoop/blob/478b2cba/hadoop-ozone/s3gateway/pom.xml
--
diff --git a/hadoop-ozone/s3gateway/pom.xml b/hadoop-ozone/s3gateway/pom.xml
index 06012cf..52eee5d 100644
--- a/hadoop-ozone/s3gateway/pom.xml
+++ b/hadoop-ozone/s3gateway/pom.xml
@@ -174,5 +174,11 @@
       <version>2.15.0</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>com.google.code.findbugs</groupId>
+      <artifactId>findbugs</artifactId>
+      <version>3.0.1</version>
+      <scope>provided</scope>
+    </dependency>
   </dependencies>
 </project>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/478b2cba/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
--
diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
index 04e2348..bfbbb33 100644
--- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
+++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.ozone.s3.endpoint.MultiDeleteResponse.Error;
 import org.apache.hadoop.ozone.s3.exception.OS3Exception;
 import org.apache.hadoop.ozone.s3.exception.S3ErrorTable;
 
+import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.ozone.s3.util.S3utils;
 import org.apache.http.HttpStatus;
@@ -70,6 +71,7 @@ public class BucketEndpoint extends EndpointBase {
* for more details.
*/
   @GET
+  @SuppressFBWarnings
   public Response list(
   @PathParam("bucket") String bucketName,
   @QueryParam("delimiter") String delimiter,
@@ -83,12 +85,12 @@ public class BucketEndpoint extends EndpointBase {
   @Context HttpHeaders hh) throws OS3Exception, IOException {
 
 if (browser != null) {
-  try (InputStream browserPage = getClass()
-  .getResourceAsStream("/browser.html")) {
-return Response.ok(browserPage,
+  InputStream browserPage = getClass()
+  .getResourceAsStream("/browser.html");
+  return Response.ok(browserPage,
 MediaType.TEXT_HTML_TYPE)
 .build();
-  }
+
 }
 
 if (prefix == null) {
@@ -295,7 +297,8 @@ public class BucketEndpoint extends EndpointBase {
 keyMetadata.setSize(next.getDataSize());
 keyMetadata.setETag("" + next.getModificationTime());
 keyMetadata.setStorageClass("STANDARD");
-keyMetadata.setLastModified(Instant.ofEpochMilli(next.getModificationTime()));
+keyMetadata.setLastModified(Instant.ofEpochMilli(
+next.getModificationTime()));
 response.addKey(keyMetadata);
   }
 }
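A minimal JAX-RS sketch (not from the patch) of the bug being fixed here: with try-with-resources, the entity stream is closed before the JAX-RS runtime serializes the response, so the browser page fails to load. Class and path names below are made up:

```java
import java.io.InputStream;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical resource, not the real BucketEndpoint.
@Path("/demo")
public class BrowserPageResource {

  @GET
  public Response page() {
    // Deliberately NOT try-with-resources: the JAX-RS runtime reads and
    // closes the entity stream while writing the response, after this
    // method has already returned. Closing it here (as the old code did)
    // hands the runtime a dead stream.
    InputStream browserPage =
        getClass().getResourceAsStream("/browser.html");
    return Response.ok(browserPage, MediaType.TEXT_HTML_TYPE).build();
  }
}
```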


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[09/50] [abbrv] hadoop git commit: YARN-8854. Upgrade jquery datatable version references to v1.10.19. Contributed by Akhil PB.

2018-11-05 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36012b6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js
new file mode 100644
index 000..3a79ccc
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/js/jquery.dataTables.min.js
@@ -0,0 +1,184 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+/*!
+ DataTables 1.10.18
+ ©2008-2018 SpryMedia Ltd - datatables.net/license
+*/
+[The remaining lines of this 184-line hunk are the minified DataTables 1.10.18 JavaScript source; the mailing-list archive has garbled them beyond recovery, so they are elided here. See the file at the commit URL above for the actual content.]

[10/50] [abbrv] hadoop git commit: YARN-8854. Upgrade jquery datatable version references to v1.10.19. Contributed by Akhil PB.

2018-11-05 Thread aengineer
YARN-8854. Upgrade jquery datatable version references to v1.10.19. Contributed by Akhil PB.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d36012b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d36012b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d36012b6

Branch: refs/heads/HDDS-4
Commit: d36012b69f01c9ddfd2e95545d1f5e1fbc1c3236
Parents: 62d98ca
Author: Sunil G 
Authored: Tue Oct 30 22:56:13 2018 +0530
Committer: Sunil G 
Committed: Tue Oct 30 22:56:46 2018 +0530

--
 LICENSE.txt |   2 +-
 .../hadoop-yarn/hadoop-yarn-common/pom.xml  |  10 +-
 .../hadoop/yarn/webapp/view/JQueryUI.java   |   6 +-
 .../static/dt-1.10.18/css/custom_datatable.css  |  68 +++
 .../webapps/static/dt-1.10.18/css/demo_page.css | 108 
 .../static/dt-1.10.18/css/demo_table.css| 544 +++
 .../static/dt-1.10.18/css/jquery.dataTables.css | 466 
 .../webapps/static/dt-1.10.18/css/jui-dt.css| 352 
 .../static/dt-1.10.18/images/Sorting icons.psd  | Bin 0 -> 27490 bytes
 .../static/dt-1.10.18/images/back_disabled.jpg  | Bin 0 -> 612 bytes
 .../static/dt-1.10.18/images/back_enabled.jpg   | Bin 0 -> 807 bytes
 .../static/dt-1.10.18/images/favicon.ico| Bin 0 -> 894 bytes
 .../dt-1.10.18/images/forward_disabled.jpg  | Bin 0 -> 635 bytes
 .../dt-1.10.18/images/forward_enabled.jpg   | Bin 0 -> 852 bytes
 .../static/dt-1.10.18/images/sort_asc.png   | Bin 0 -> 263 bytes
 .../dt-1.10.18/images/sort_asc_disabled.png | Bin 0 -> 252 bytes
 .../static/dt-1.10.18/images/sort_both.png  | Bin 0 -> 282 bytes
 .../static/dt-1.10.18/images/sort_desc.png  | Bin 0 -> 260 bytes
 .../dt-1.10.18/images/sort_desc_disabled.png| Bin 0 -> 251 bytes
 .../dt-1.10.18/js/jquery.dataTables.min.js  | 184 +++
 .../webapps/static/dt-1.10.7/css/demo_page.css  | 110 
 .../webapps/static/dt-1.10.7/css/demo_table.css | 538 --
 .../webapps/static/dt-1.10.7/css/jui-dt.css | 322 ---
 .../static/dt-1.10.7/images/Sorting icons.psd   | Bin 27490 -> 0 bytes
 .../static/dt-1.10.7/images/back_disabled.jpg   | Bin 612 -> 0 bytes
 .../static/dt-1.10.7/images/back_enabled.jpg| Bin 807 -> 0 bytes
 .../webapps/static/dt-1.10.7/images/favicon.ico | Bin 894 -> 0 bytes
 .../dt-1.10.7/images/forward_disabled.jpg   | Bin 635 -> 0 bytes
 .../static/dt-1.10.7/images/forward_enabled.jpg | Bin 852 -> 0 bytes
 .../static/dt-1.10.7/images/sort_asc.png| Bin 263 -> 0 bytes
 .../dt-1.10.7/images/sort_asc_disabled.png  | Bin 252 -> 0 bytes
 .../static/dt-1.10.7/images/sort_both.png   | Bin 282 -> 0 bytes
 .../static/dt-1.10.7/images/sort_desc.png   | Bin 260 -> 0 bytes
 .../dt-1.10.7/images/sort_desc_disabled.png | Bin 251 -> 0 bytes
 .../dt-1.10.7/js/jquery.dataTables.min.js   | 160 --
 35 files changed, 1733 insertions(+), 1137 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36012b6/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 94c9065..1a97528 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -553,7 +553,7 @@ For:
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js
 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css
 
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js
-hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/
+hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/
 

 Copyright (C) 2008-2016, SpryMedia Ltd.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36012b6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 133003a..641a5f0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -237,10 +237,12 @@
             <exclude>src/main/resources/webapps/test/.keep</exclude>
             <exclude>src/main/resources/webapps/proxy/.keep</exclude>
             <exclude>src/main/resources/webapps/node/.keep</exclude>
-            <exclude>src/main/resources/webapps/static/dt-1.10.7/css/jui-dt.css</exclude>
-            <exclude>src/main/resources/webapps/static/dt-1.10.7/css/demo_table.css</exclude>
-            <exclude>src/main/resources/webapps/static/dt-1.10.7/images/Sorting icons.psd</exclude>
-            <exclude>src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js</exclude>
+

[39/50] [abbrv] hadoop git commit: HADOOP-15687. Credentials class should allow access to aliases.

2018-11-05 Thread aengineer
HADOOP-15687. Credentials class should allow access to aliases.

Author: Lars Francke


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb8d679c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb8d679c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb8d679c

Branch: refs/heads/HDDS-4
Commit: cb8d679c95642842efacc5d38ccf2a61b043c689
Parents: f84a278
Author: Lars Francke 
Authored: Sat Nov 3 16:21:29 2018 +
Committer: Steve Loughran 
Committed: Sat Nov 3 16:21:29 2018 +

--
 .../org/apache/hadoop/security/Credentials.java | 15 ++
 .../apache/hadoop/security/TestCredentials.java | 57 ++--
 2 files changed, 44 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb8d679c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
index 4fafa4a..4b0d889 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
@@ -31,6 +31,7 @@ import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -142,6 +143,13 @@ public class Credentials implements Writable {
   }
 
   /**
+   * Returns an unmodifiable version of the full map of aliases to Tokens.
+   */
+  public Map<Text, Token<? extends TokenIdentifier>> getTokenMap() {
+return Collections.unmodifiableMap(tokenMap);
+  }
+
+  /**
* @return number of Tokens in the in-memory map
*/
   public int numberOfTokens() {
@@ -192,6 +200,13 @@ public class Credentials implements Writable {
   }
 
   /**
+   * Returns an unmodifiable version of the full map of aliases to secret keys.
+   */
+  public Map<Text, byte[]> getSecretKeyMap() {
+return Collections.unmodifiableMap(secretKeysMap);
+  }
+
+  /**
   * Convenience method for reading a token storage file and loading its Tokens.
* @param filename
* @param conf

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb8d679c/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCredentials.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCredentials.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCredentials.java
index 1245c07..02ba153 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCredentials.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestCredentials.java
@@ -39,8 +39,6 @@ import java.util.Collection;
 import javax.crypto.KeyGenerator;
 
 import org.apache.hadoop.io.Text;
-import org.apache.hadoop.io.WritableComparator;
-import org.apache.hadoop.security.Credentials;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.test.GenericTestUtils;
@@ -74,6 +72,9 @@ public class TestCredentials {
 Token token2 = new Token();
 Text service1 = new Text("service1");
 Text service2 = new Text("service2");
+Text alias1 = new Text("sometoken1");
+Text alias2 = new Text("sometoken2");
+
 Collection<Text> services = new ArrayList<Text>();
 
 services.add(service1);
@@ -81,8 +82,8 @@ public class TestCredentials {
 
 token1.setService(service1);
 token2.setService(service2);
-ts.addToken(new Text("sometoken1"), token1);
-ts.addToken(new Text("sometoken2"), token2);
+ts.addToken(alias1, token1);
+ts.addToken(alias2, token2);
 
 // create keys and put it in
 final KeyGenerator kg = KeyGenerator.getInstance(DEFAULT_HMAC_ALGORITHM);
@@ -109,32 +110,32 @@ public class TestCredentials {
 dis.close();
 
 // get the tokens and compare the services
-Collection<Token<? extends TokenIdentifier>> list = ts.getAllTokens();
-assertEquals("getAllTokens should return collection of size 2",
-list.size(), 2);
-boolean foundFirst = false;
-boolean foundSecond = false;
-for (Token<? extends TokenIdentifier> token : list) {
-  if (token.getService().equals(service1)) {
-foundFirst = true;
-  }
-  if (token.getService().equals(service2)) {
-foundSecond = true;
-  }
-}
-assertTrue("Tokens for services service1 and service2 must be present",
-foundFirst 

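A hypothetical caller sketch for the new accessors: enumerate token aliases directly instead of round-tripping through getAllTokens(). The class name and printed format are illustrative only:

```java
import java.util.Map;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public final class ListTokenAliases {
  public static void main(String[] args) {
    Credentials creds = new Credentials();
    // ... tokens added elsewhere via creds.addToken(alias, token) ...
    for (Map.Entry<Text, Token<? extends TokenIdentifier>> e
        : creds.getTokenMap().entrySet()) {
      System.out.println(e.getKey() + " -> " + e.getValue().getKind());
    }
    // Both maps are unmodifiable views; this would throw
    // UnsupportedOperationException:
    // creds.getTokenMap().clear();
  }
}
```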
[33/50] [abbrv] hadoop git commit: HDDS-788. Change title page of bucket browser in S3gateway. Contributed by Bharat Viswanadham.

2018-11-05 Thread aengineer
HDDS-788. Change title page of bucket browser in S3gateway. Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6d9c18cf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6d9c18cf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6d9c18cf

Branch: refs/heads/HDDS-4
Commit: 6d9c18cfa9a458423b832a59166a15d098281ccd
Parents: d16d5f7
Author: Bharat Viswanadham 
Authored: Fri Nov 2 08:00:18 2018 -0700
Committer: Bharat Viswanadham 
Committed: Fri Nov 2 08:00:18 2018 -0700

--
 .../s3gateway/src/main/resources/browser.html   |   4 ++--
 .../resources/webapps/s3gateway/WEB-INF/web.xml |   4 
 .../main/resources/webapps/static/images/ozone.ico  | Bin 0 -> 1150 bytes
 3 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d9c18cf/hadoop-ozone/s3gateway/src/main/resources/browser.html
--
diff --git a/hadoop-ozone/s3gateway/src/main/resources/browser.html b/hadoop-ozone/s3gateway/src/main/resources/browser.html
index dc05a00..a1f2338 100644
--- a/hadoop-ozone/s3gateway/src/main/resources/browser.html
+++ b/hadoop-ozone/s3gateway/src/main/resources/browser.html
@@ -19,10 +19,10 @@ permissions and limitations under the License.
 
 
 
-<title>AWS S3 Explorer</title>
+<title>Ozone S3 Explorer</title>
 
 
-<link rel="icon" href="https://aws.amazon.com/favicon.ico">
+<link rel="icon" href="/static/images/ozone.ico">
 <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
 http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d9c18cf/hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml
--
diff --git a/hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml b/hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml
index a3552f0..36aad1c 100644
--- a/hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml
+++ b/hadoop-ozone/s3gateway/src/main/resources/webapps/s3gateway/WEB-INF/web.xml
@@ -21,6 +21,10 @@
       <param-name>javax.ws.rs.Application</param-name>
       <param-value>org.apache.hadoop.ozone.s3.GatewayApplication</param-value>
     </init-param>
+    <init-param>
+      <param-name>jersey.config.servlet.filter.staticContentRegex</param-name>
+      <param-value>/static/images/*.ico</param-value>
+    </init-param>
     <load-on-startup>1</load-on-startup>
   </servlet>
   <servlet-mapping>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6d9c18cf/hadoop-ozone/s3gateway/src/main/resources/webapps/static/images/ozone.ico
--
diff --git 
a/hadoop-ozone/s3gateway/src/main/resources/webapps/static/images/ozone.ico 
b/hadoop-ozone/s3gateway/src/main/resources/webapps/static/images/ozone.ico
new file mode 100755
index 000..72886ea
Binary files /dev/null and 
b/hadoop-ozone/s3gateway/src/main/resources/webapps/static/images/ozone.ico 
differ


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[31/50] [abbrv] hadoop git commit: HDFS-13996. Make HttpFS' ACLs RegEx configurable. Contributed by Siyao Meng.

2018-11-05 Thread aengineer
HDFS-13996. Make HttpFS' ACLs RegEx configurable. Contributed by Siyao Meng.

Signed-off-by: Wei-Chiu Chuang 


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8fe85af6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8fe85af6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8fe85af6

Branch: refs/heads/HDDS-4
Commit: 8fe85af63b37f2f61269e8719e5b6287f30bb0b3
Parents: d98b881
Author: Siyao Meng 
Authored: Thu Nov 1 17:36:18 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Nov 1 17:36:50 2018 -0700

--
 .../http/server/HttpFSParametersProvider.java   | 10 ++-
 .../fs/http/client/BaseTestHttpFSWith.java  | 60 ++
 .../hadoop/fs/http/server/TestHttpFSServer.java | 65 +++-
 .../org/apache/hadoop/test/TestHdfsHelper.java  | 24 +++-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  4 ++
 5 files changed, 156 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8fe85af6/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
index 754ae2b..b13e6d2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
@@ -22,6 +22,8 @@ import org.apache.hadoop.fs.XAttrCodec;
 import org.apache.hadoop.fs.XAttrSetFlag;
 import org.apache.hadoop.fs.http.client.HttpFSFileSystem;
 import org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.lib.service.FileSystemAccess;
 import org.apache.hadoop.lib.wsrs.BooleanParam;
 import org.apache.hadoop.lib.wsrs.EnumParam;
 import org.apache.hadoop.lib.wsrs.EnumSetParam;
@@ -37,8 +39,6 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.regex.Pattern;
 
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT;
-
 /**
  * HttpFS ParametersProvider.
  */
@@ -430,7 +430,11 @@ public class HttpFSParametersProvider extends ParametersProvider {
  */
 public AclPermissionParam() {
   super(NAME, HttpFSFileSystem.ACLSPEC_DEFAULT,
-  Pattern.compile(DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT));
+Pattern.compile(HttpFSServerWebApp.get()
+  .get(FileSystemAccess.class)
+  .getFileSystemConfiguration()
+  .get(HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY,
+HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_DEFAULT)));
 }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8fe85af6/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
index a23ca7a..4514739 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
@@ -38,6 +38,8 @@ import org.apache.hadoop.hdfs.AppendTestUtil;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshotException;
@@ -117,6 +119,14 @@ public abstract class BaseTestHttpFSWith extends HFSTestCase {
 conf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, fsDefaultName);
 conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
 conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_XATTRS_ENABLED_KEY, true);
+// For BaseTestHttpFSWith#testFileAclsCustomizedUserAndGroupNames
+conf.set(HdfsClientConfigKeys.DFS_WEBHDFS_USER_PATTERN_KEY,
+"^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");
+conf.set(HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY,
+

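The hunk above is cut off by the archive. As a hedged illustration of the new knob, a standalone sketch that overrides the ACL permission pattern through the client config key used by the patch (the pattern value is only an example, borrowed from the test setup):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public final class CustomAclPatternDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Example only: permit dots and dashes in user/group names of ACL specs.
    conf.set(HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY,
        "^(default:)?(user|group|mask|other):"
            + "[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?"
            + "(,(default:)?(user|group|mask|other):"
            + "[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?)*$");
    System.out.println(conf.get(
        HdfsClientConfigKeys.DFS_WEBHDFS_ACL_PERMISSION_PATTERN_KEY));
  }
}
```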
[23/50] [abbrv] hadoop git commit: HDDS-677. Create documentation for s3 gateway to the docs. Contributed by Elek Marton.

2018-11-05 Thread aengineer
HDDS-677. Create documentation for s3 gateway to the docs. Contributed by Elek Marton.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6668c19d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6668c19d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6668c19d

Branch: refs/heads/HDDS-4
Commit: 6668c19dafde530c43ccacb23de11455ae1813b5
Parents: 08bb036
Author: Bharat Viswanadham 
Authored: Wed Oct 31 11:48:23 2018 -0700
Committer: Bharat Viswanadham 
Committed: Wed Oct 31 11:48:23 2018 -0700

--
 hadoop-ozone/docs/content/S3.md | 130 +++
 1 file changed, 130 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6668c19d/hadoop-ozone/docs/content/S3.md
--
diff --git a/hadoop-ozone/docs/content/S3.md b/hadoop-ozone/docs/content/S3.md
new file mode 100644
index 000..cfefaf4
--- /dev/null
+++ b/hadoop-ozone/docs/content/S3.md
@@ -0,0 +1,130 @@
+---
+title: S3
+menu:
+   main:
+  parent: Client
+  weight: 1
+---
+
+
+
+Ozone provides an S3 compatible REST interface to use the object store data with any S3 compatible tools.
+
+## Getting started
+
+S3 Gateway is a separate component which provides the S3 compatible REST interface. It should be started in addition to the regular Ozone components.
+
+You can start a docker based cluster, including the S3 gateway, from the release package.
+
+Go to the `compose/ozones3` directory, and start the server:
+
+```bash
+docker-compose up -d
+```
+
+You can access the S3 gateway at `http://localhost:9878`
+
+## URL Schema
+
+Ozone S3 gateway supports both virtual-host-style S3 bucket addresses (e.g. http://bucketname.host:9878) and path-style addresses (e.g. http://host:9878/bucketname).
+
+By default it uses path-style addressing. To use virtual-host-style URLs, set your main domain name in your `ozone-site.xml`:
+
+```xml
+<property>
+   <name>ozone.s3g.domain.name</name>
+   <value>s3g.internal</value>
+</property>
+```
+
+## Bucket browser
+
+Buckets can be browsed from the browser by adding `?browser=true` to the bucket URL.
+
+For example, the content of 'testbucket' can be checked from the browser using the URL http://localhost:9878/testbucket?browser=true
+
+
+## Implemented REST endpoints
+
+Operations on S3Gateway service:
+
+Endpoint| Status  |
+---|---|
+GET service | implemented |
+
+Operations on Bucket:
+
+Endpoint| Status  | Notes
+---|---|---
+GET Bucket (List Objects) Version 2 | implemented |
+HEAD Bucket | implemented |
+DELETE Bucket   | implemented |
+PUT Bucket (Create bucket)  | implemented |
+Delete Multiple Objects (POST)  | implemented |
+
+Operation on Objects:
+
+Endpoint| Status  | Notes
+---|---|---
+PUT Object  | implemented |
+GET Object  | implemented | Range headers are not supported
+Multipart Upload | not implemented |
+DELETE Object   | implemented |
+HEAD Object | implemented |
+
+
+## Security
+
+Security is not yet implemented; you can *use* any AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
+
+Note: Ozone has a notion of 'volumes' which is missing from the S3 REST endpoint. Under the hood, S3 bucket names are mapped to Ozone 'volume/bucket' locations (depending on the given authentication information).
+
+To show the storage location of an S3 bucket, use the `ozone sh bucket path <bucketname>` command.
+
+```
+aws s3api --endpoint-url http://localhost:9878 create-bucket --bucket=bucket1
+
+ozone sh bucket path bucket1
+Volume name for S3Bucket is : s3thisisakey
+Ozone FileSystem Uri is : o3fs://bucket1.s3thisisakey
+```
+
+## Clients
+
+### AWS Cli
+
+The `aws` CLI can be used by specifying the custom REST endpoint.
+
+```
+aws s3api --endpoint http://localhost:9878 create-bucket --bucket buckettest
+```
+
+Or
+
+```
+aws s3 ls --endpoint http://localhost:9878 s3://buckettest
+```
+
+### S3 Fuse driver (goofys)
+
+Goofys is an S3 FUSE driver. It can be used to mount any Ozone bucket as a POSIX file system:
+
+
+```
+goofys --endpoint http://localhost:9878 bucket1 /mount/bucket1
+```

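To complement the CLI examples in the documentation above, a hedged sketch of pointing the AWS SDK for Java (v1) at the gateway; the endpoint and the "any key pair works" behavior follow the doc, while the bucket name and region are made up:

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public final class OzoneS3GatewayDemo {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://localhost:9878", "us-east-1"))
        // Security is not implemented yet, so any key pair is accepted.
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("anyAccessKey", "anySecretKey")))
        // The gateway defaults to path-style addressing.
        .withPathStyleAccessEnabled(true)
        .build();

    s3.createBucket("buckettest");
    s3.listBuckets().forEach(b -> System.out.println(b.getName()));
  }
}
```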

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[03/50] [abbrv] hadoop git commit: YARN-8871. Document ATSv2 integrated LogWebService. Contributed by Suma Shivaprasad.

2018-11-05 Thread aengineer
YARN-8871. Document ATSv2 integrated LogWebService. Contributed by Suma Shivaprasad.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a283da21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a283da21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a283da21

Branch: refs/heads/HDDS-4
Commit: a283da21670c812a391d4c9ee98ebef22fc93868
Parents: 4ec4ec6
Author: Rohith Sharma K S 
Authored: Tue Oct 30 11:34:20 2018 +0530
Committer: Rohith Sharma K S 
Committed: Tue Oct 30 11:34:20 2018 +0530

--
 .../hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md | 5 +
 1 file changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a283da21/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
index 04948ce..2314f30 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
@@ -1570,3 +1570,8 @@ With this API, you can query set of available entity types for a given app id. I
 1. If any problem occurs in parsing request, HTTP 400 (Bad Request) is returned.
 1. If flow context information cannot be retrieved or entity for the given entity id cannot be found, HTTP 404 (Not Found) is returned.
 1. For non-recoverable errors while retrieving data, HTTP 500 (Internal Server Error) is returned.
+
+## Aggregated Log Serving for Historical Apps
+
+TimelineService v.2 supports serving aggregated logs of historical apps. To enable this, configure "yarn.log.server.web-service.url" to "${yarn.timeline-service.hostname}:8188/ws/v2/applicationlog" in `yarn-site.xml`.

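A hedged sketch of the equivalent programmatic setting; the doc above shows the yarn-site.xml form, and the hostname below is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;

public final class LogWebServiceConfDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Placeholder hostname; in practice this is the timeline reader host.
    conf.set("yarn.log.server.web-service.url",
        "http://timeline-reader.example.com:8188/ws/v2/applicationlog");
    System.out.println(conf.get("yarn.log.server.web-service.url"));
  }
}
```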

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[46/50] [abbrv] hadoop git commit: HDDS-794. Add configs to set StateMachineData write timeout in ContainerStateMachine. Contributed by Shashikant Banerjee.

2018-11-05 Thread aengineer
HDDS-794. Add configs to set StateMachineData write timeout in ContainerStateMachine. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ddefdd5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ddefdd5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ddefdd5

Branch: refs/heads/HDDS-4
Commit: 5ddefdd50751ed316f2eb9046f294bbdcdfb2428
Parents: 942693b
Author: Arpit Agarwal 
Authored: Mon Nov 5 10:10:10 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 10:41:28 2018 -0800

--
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java |  6 ++
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java  |  9 +
 .../common/src/main/resources/ozone-default.xml   |  7 +++
 .../server/ratis/ContainerStateMachine.java   | 18 --
 .../server/ratis/XceiverServerRatis.java  | 14 ++
 .../container/keyvalue/helpers/ChunkUtils.java|  2 ++
 .../container/keyvalue/impl/ChunkManagerImpl.java |  3 ++-
 7 files changed, 56 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 56692af..38eec61 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -79,6 +79,12 @@ public final class ScmConfigKeys {
   "dfs.container.ratis.segment.preallocated.size";
   public static final int
   DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT = 128 * 1024 * 1024;
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  "dfs.container.ratis.statemachinedata.sync.timeout";
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  TimeDuration.valueOf(10, TimeUnit.SECONDS);
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =
   "dfs.ratis.client.request.timeout.duration";
   public static final TimeDuration

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 3b4f017..54b1cf8 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -229,6 +229,15 @@ public final class OzoneConfigKeys {
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_KEY;
   public static final int DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT;
+
+  // config settings to enable stateMachineData write timeout
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT;
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT;
+
   public static final int DFS_CONTAINER_CHUNK_MAX_SIZE
   = ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE;
   public static final String DFS_CONTAINER_RATIS_DATANODE_STORAGE_DIR =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index eb68662..5ff60eb 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -53,6 +53,13 @@
     </description>
   </property>
   <property>
+    <name>dfs.container.ratis.statemachinedata.sync.timeout</name>
+    <value>10s</value>
+    <tag>OZONE, DEBUG, CONTAINER, RATIS</tag>
+    <description>Timeout for StateMachine data writes by Ratis.
+    </description>
+  </property>
+  <property>
     <name>dfs.container.ratis.datanode.storage.dir</name>
     <value/>
     <tag>OZONE, CONTAINER, STORAGE, MANAGEMENT, RATIS</tag>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java

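A hedged sketch of tuning the new timeout programmatically. The key and its 10 s default come from the patch; the 5 s value and class name are arbitrary:

```java
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public final class SyncTimeoutDemo {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Tighten the StateMachine-data write timeout from 10 s to 5 s.
    conf.setTimeDuration(
        "dfs.container.ratis.statemachinedata.sync.timeout",
        5, TimeUnit.SECONDS);
    System.out.println(conf.getTimeDuration(
        "dfs.container.ratis.statemachinedata.sync.timeout",
        10, TimeUnit.SECONDS) + " seconds");
  }
}
```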
[12/50] [abbrv] hadoop git commit: HADOOP-15886. Fix findbugs warnings in RegistryDNS.java.

2018-11-05 Thread aengineer
HADOOP-15886. Fix findbugs warnings in RegistryDNS.java.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f747f5b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f747f5b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f747f5b0

Branch: refs/heads/HDDS-4
Commit: f747f5b06cb0da59c7c20b9f0e46d3eec9622eed
Parents: 277a3d8
Author: Akira Ajisaka 
Authored: Tue Oct 30 11:43:36 2018 +0900
Committer: Akira Ajisaka 
Committed: Wed Oct 31 10:01:31 2018 +0900

--
 .../dev-support/findbugs-exclude.xml| 33 
 hadoop-common-project/hadoop-registry/pom.xml   | 10 ++
 .../dev-support/findbugs-exclude.xml| 16 --
 3 files changed, 43 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f747f5b0/hadoop-common-project/hadoop-registry/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-common-project/hadoop-registry/dev-support/findbugs-exclude.xml b/hadoop-common-project/hadoop-registry/dev-support/findbugs-exclude.xml
new file mode 100644
index 000..dc7b139
--- /dev/null
+++ b/hadoop-common-project/hadoop-registry/dev-support/findbugs-exclude.xml
@@ -0,0 +1,33 @@
+
+
+  
+
+
+
+  
+  
+
+
+
+  
+  
+
+
+
+  
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f747f5b0/hadoop-common-project/hadoop-registry/pom.xml
--
diff --git a/hadoop-common-project/hadoop-registry/pom.xml b/hadoop-common-project/hadoop-registry/pom.xml
index ef9f3ef..7ca1c9e 100644
--- a/hadoop-common-project/hadoop-registry/pom.xml
+++ b/hadoop-common-project/hadoop-registry/pom.xml
@@ -163,6 +163,16 @@
 
 
   
+org.codehaus.mojo
+findbugs-maven-plugin
+
+  true
+  true
+  
${project.basedir}/dev-support/findbugs-exclude.xml
+  Max
+
+  
+  
 org.apache.rat
 apache-rat-plugin
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f747f5b0/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 216c3bd..dd42129 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -639,22 +639,6 @@
 
   
 
-  
-
-
-
-  
-  
-
-
-
-  
-  
-
-
-
-  
-
   
   
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[13/50] [abbrv] hadoop git commit: HDDS-762. Fix unit test failure for TestContainerSQLCli & TestSCMMetrics. Contributed by Mukul Kumar Singh.

2018-11-05 Thread aengineer
HDDS-762. Fix unit test failure for TestContainerSQLCli & TestSCMMetrics. Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e33b61f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e33b61f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e33b61f3

Branch: refs/heads/HDDS-4
Commit: e33b61f3351c09b00717f6eef32ff7d24345d06e
Parents: f747f5b
Author: Anu Engineer 
Authored: Tue Oct 30 19:16:52 2018 -0700
Committer: Anu Engineer 
Committed: Tue Oct 30 19:16:52 2018 -0700

--
 hadoop-hdds/pom.xml   |  2 +-
 .../common/transport/server/ratis/TestCSMMetrics.java | 10 +++---
 hadoop-ozone/pom.xml  |  2 +-
 .../org/apache/hadoop/ozone/scm/TestContainerSQLCli.java  |  3 ++-
 4 files changed, 11 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e33b61f3/hadoop-hdds/pom.xml
--
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index f960e90..090a537 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -45,7 +45,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 0.4.0-SNAPSHOT
 
 
-    <ratis.version>0.3.0-2272086-SNAPSHOT</ratis.version>
+    <ratis.version>0.3.0-1d2ebee-SNAPSHOT</ratis.version>
 
 1.60
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e33b61f3/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
--
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
index a5a9641..67db7ff 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 
 import static org.apache.ratis.rpc.SupportedRpcType.GRPC;
+import org.apache.ratis.protocol.RaftGroupId;
 import org.apache.ratis.util.CheckedBiConsumer;
 
 import java.util.function.BiConsumer;
@@ -104,7 +105,8 @@ public class TestCSMMetrics {
   client.connect();
 
   // Before Read Chunk/Write Chunk
-  MetricsRecordBuilder metric = getMetrics(CSMMetrics.SOURCE_NAME);
+  MetricsRecordBuilder metric = getMetrics(CSMMetrics.SOURCE_NAME +
+  RaftGroupId.valueOf(pipeline.getId().getId()).toString());
   assertCounter("NumWriteStateMachineOps", 0L, metric);
   assertCounter("NumReadStateMachineOps", 0L, metric);
   assertCounter("NumApplyTransactionOps", 0L, metric);
@@ -120,7 +122,8 @@ public class TestCSMMetrics {
   Assert.assertEquals(ContainerProtos.Result.SUCCESS,
   response.getResult());
 
-  metric = getMetrics(CSMMetrics.SOURCE_NAME);
+  metric = getMetrics(CSMMetrics.SOURCE_NAME +
+  RaftGroupId.valueOf(pipeline.getId().getId()).toString());
   assertCounter("NumWriteStateMachineOps", 1L, metric);
   assertCounter("NumApplyTransactionOps", 1L, metric);
 
@@ -132,7 +135,8 @@ public class TestCSMMetrics {
   Assert.assertEquals(ContainerProtos.Result.SUCCESS,
   response.getResult());
 
-  metric = getMetrics(CSMMetrics.SOURCE_NAME);
+  metric = getMetrics(CSMMetrics.SOURCE_NAME +
+  RaftGroupId.valueOf(pipeline.getId().getId()).toString());
   assertCounter("NumReadStateMachineOps", 1L, metric);
   assertCounter("NumApplyTransactionOps", 1L, metric);
 } finally {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e33b61f3/hadoop-ozone/pom.xml
--
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index 2fcffab..33af31b 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -33,7 +33,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
 3.2.1-SNAPSHOT
 0.4.0-SNAPSHOT
 0.4.0-SNAPSHOT
-    <ratis.version>0.3.0-2272086-SNAPSHOT</ratis.version>
+    <ratis.version>0.3.0-1d2ebee-SNAPSHOT</ratis.version>
 1.60
 Badlands
 ${ozone.version}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e33b61f3/hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
--
diff --git a/hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java 

[16/50] [abbrv] hadoop git commit: HDFS-13942. [JDK10] Fix javadoc errors in hadoop-hdfs module. Contributed by Dinesh Chitlangia.

2018-11-05 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fac9f91b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java
index 5e708be..a195bf1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/Quota.java
@@ -47,7 +47,7 @@ public enum Quota {
 
   /**
* Is quota violated?
-   * The quota is violated if quota is set and usage > quota. 
+   * The quota is violated if quota is set and usage &gt; quota.
*/
   public static boolean isViolated(final long quota, final long usage) {
 return quota >= 0 && usage > quota;
@@ -55,7 +55,8 @@ public enum Quota {
 
   /**
* Is quota violated?
-   * The quota is violated if quota is set, delta > 0 and usage + delta > quota.
+   * The quota is violated if quota is set, delta &gt; 0 and
+   * usage + delta &gt; quota.
*/
   static boolean isViolated(final long quota, final long usage,
   final long delta) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fac9f91b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index a8acccd..2e13df5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -319,7 +319,7 @@ public class ReencryptionHandler implements Runnable {
   /**
* Main loop. It takes at most 1 zone per scan, and executes until the zone
* is completed.
-   * {@see #reencryptEncryptionZoneInt(Long)}.
+   * {@link #reencryptEncryptionZone(long)}.
*/
   @Override
   public void run() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fac9f91b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
index e1bf027..b6f4f64 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
@@ -31,7 +31,7 @@ import com.google.common.base.Preconditions;
 import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.SECURITY_XATTR_UNREADABLE_BY_SUPERUSER;
 
 /**
- * There are four types of extended attributes <XAttr> defined by the
+ * There are four types of extended attributes XAttr defined by the
  * following namespaces:
  * 
  * USER - extended user attributes: these can be assigned to files and
@@ -56,7 +56,7 @@ import static org.apache.hadoop.hdfs.server.common.HdfsServerConstants.SECURITY_
  *   is called on a file or directory in the /.reserved/raw HDFS directory
  *   hierarchy. These attributes can only be accessed by the user who have
  *   read access.
- * 
+ * 
  */
 @InterfaceAudience.Private
 public class XAttrPermissionFilter {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fac9f91b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java
index 1dab69c..d856f6d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrStorage.java
@@ -33,7 +33,7 @@ public class XAttrStorage {
 
   /**
* Reads the extended attribute of an inode by name with prefix.
-   * 
+   * 
*
* @param inode INode to read
* @param snapshotId the snapshotId of the requested path
@@ -48,11 +48,11 @@ public class XAttrStorage {
 
   /**
* Reads the existing extended attributes of an inode.
-   * 
+   * 

[43/50] [abbrv] hadoop git commit: HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.

2018-11-05 Thread aengineer
HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15df2e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15df2e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15df2e7a

Branch: refs/heads/HDDS-4
Commit: 15df2e7a7547e12e884b624d9f17ad2799d9ccf9
Parents: d43cc5d
Author: Yiqun Lin 
Authored: Mon Nov 5 17:31:06 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Nov 5 17:31:06 2018 +0800

--
 .../java/org/apache/hadoop/hdds/server/BaseHttpServer.java| 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15df2e7a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
--
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
index 2726fc3..5e7d7b8 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
@@ -115,13 +115,10 @@ public abstract class BaseHttpServer {
 final Optional<Integer> addressPort =
 getPortNumberFromConfigKeys(conf, addressKey);
 
-final Optional<String> addresHost =
+final Optional<String> addressHost =
 getHostNameFromConfigKeys(conf, addressKey);
 
-String hostName = bindHost.orElse(addresHost.get());
-if (hostName == null || hostName.isEmpty()) {
-  hostName = bindHostDefault;
-}
+String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
 
 return NetUtils.createSocketAddr(
 hostName + ":" + addressPort.orElse(bindPortdefault));

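A standalone sketch (values invented) of why the Optional chain above is safer than the old addresHost.get() call:

```java
import java.util.Optional;

public final class BindHostDemo {
  public static void main(String[] args) {
    Optional<String> bindHost = Optional.empty();     // no explicit bind host
    Optional<String> addressHost = Optional.empty();  // address key has no host
    String bindHostDefault = "0.0.0.0";

    // The old code called addressHost.get() first, which throws
    // NoSuchElementException when the address key carries no host part.
    // The orElse cascade can never hand back null or throw.
    String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
    System.out.println(hostName);  // -> 0.0.0.0
  }
}
```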

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[02/50] [abbrv] hadoop git commit: YARN-8950. Fix compilation issue due to dependency convergence error for hbase.profile=2.0.

2018-11-05 Thread aengineer
YARN-8950. Fix compilation issue due to dependency convergence error for hbase.profile=2.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ec4ec69
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ec4ec69
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ec4ec69

Branch: refs/heads/HDDS-4
Commit: 4ec4ec69711180d642c5b56cd3d3dbdf44d3c61f
Parents: db7e636
Author: Rohith Sharma K S 
Authored: Tue Oct 30 11:29:58 2018 +0530
Committer: Rohith Sharma K S 
Committed: Tue Oct 30 11:30:08 2018 +0530

--
 .../hadoop-yarn-server-timelineservice-hbase-client/pom.xml  | 8 
 .../pom.xml  | 8 
 2 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ec4ec69/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
index 86b2158..4225519 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client/pom.xml
@@ -160,6 +160,14 @@
           <groupId>org.mortbay.jetty</groupId>
           <artifactId>jetty-sslengine</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-security</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-http</artifactId>
+        </exclusion>
       </exclusions>
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ec4ec69/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
index 4fde40c..984cac9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-server/hadoop-yarn-server-timelineservice-hbase-server-2/pom.xml
@@ -147,6 +147,14 @@
           <groupId>org.mortbay.jetty</groupId>
           <artifactId>jetty-sslengine</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-security</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-http</artifactId>
+        </exclusion>
       </exclusions>
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[49/50] [abbrv] hadoop git commit: Merge branch 'trunk' into HDDS-4

2018-11-05 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7119be30/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
--
diff --cc 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
index 904e597,000..bf36699
mode 100644,00..100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
@@@ -1,304 -1,0 +1,306 @@@
 +/**
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.hadoop.ozone;
 +
- import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
- import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SECURITY_ENABLED_KEY;
- 
- import java.io.File;
- import java.io.IOException;
- import java.nio.file.Path;
- import java.nio.file.Paths;
- import java.util.Properties;
- import java.util.UUID;
- import java.util.concurrent.Callable;
 +import org.apache.hadoop.classification.InterfaceAudience;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 +import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 +import org.apache.hadoop.hdds.scm.ScmInfo;
 +import org.apache.hadoop.hdds.scm.server.SCMStorage;
 +import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
 +import org.apache.hadoop.minikdc.MiniKdc;
 +import org.apache.hadoop.ozone.om.OMConfigKeys;
 +import org.apache.hadoop.ozone.om.OMStorage;
 +import org.apache.hadoop.ozone.om.OzoneManager;
 +import org.apache.hadoop.security.KerberosAuthException;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.apache.hadoop.security.authentication.client.AuthenticationException;
- import org.apache.hadoop.security.authentication.util.KerberosUtil;
 +import org.apache.hadoop.test.GenericTestUtils;
 +import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
 +import org.apache.hadoop.test.LambdaTestUtils;
 +import org.junit.After;
 +import org.junit.Assert;
 +import org.junit.Before;
 +import org.junit.Rule;
 +import org.junit.Test;
 +import org.junit.rules.Timeout;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
++import java.io.File;
++import java.io.IOException;
++import java.net.InetAddress;
++import java.nio.file.Path;
++import java.nio.file.Paths;
++import java.util.Properties;
++import java.util.UUID;
++import java.util.concurrent.Callable;
++
++import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
++import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
++import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_SECURITY_ENABLED_KEY;
++
 +/**
 + * Test class for a security-enabled Ozone cluster.
 + */
 +@InterfaceAudience.Private
 +public final class TestSecureOzoneCluster {
 +
 +  private Logger LOGGER = LoggerFactory
 +  .getLogger(TestSecureOzoneCluster.class);
 +
 +  @Rule
 +  public Timeout timeout = new Timeout(8);
 +
 +  private MiniKdc miniKdc;
 +  private OzoneConfiguration conf;
 +  private File workDir;
 +  private static Properties securityProperties;
 +  private File scmKeytab;
 +  private File spnegoKeytab;
 +  private File omKeyTab;
 +  private String curUser;
 +  private StorageContainerManager scm;
 +  private OzoneManager om;
 +
 +  private static String clusterId;
 +  private static String scmId;
 +  private static String omId;
 +
 +  @Before
 +  public void init() {
 +try {
 +  conf = new OzoneConfiguration();
 +  startMiniKdc();
 +  setSecureConfig(conf);
 +  createCredentialsInKDC(conf, miniKdc);
 +} catch (IOException e) {
 +  LOGGER.error("Failed to initialize TestSecureOzoneCluster", e);
 +} catch (Exception e) {
 +  LOGGER.error("Failed to initialize TestSecureOzoneCluster", e);
 +}
 +  }
 +
 +  @After
 +  public void stop() {
 +try {
 +  stopMiniKdc();
 +  if (scm != null) {
 +scm.stop();
 +  }
 +  if (om != null) {
 +om.stop();
 +  }
 +} 

[22/50] [abbrv] hadoop git commit: HDDS-759. Create config settings for SCM and OM DB directories. Contributed by Arpit Agarwal.

2018-11-05 Thread aengineer
HDDS-759. Create config settings for SCM and OM DB directories. Contributed by 
Arpit Agarwal.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08bb0362
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08bb0362
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08bb0362

Branch: refs/heads/HDDS-4
Commit: 08bb0362e0c57f562e2f2e366cba725649d1d9c8
Parents: 478b2cb
Author: Arpit Agarwal 
Authored: Wed Oct 31 11:23:15 2018 -0700
Committer: Arpit Agarwal 
Committed: Wed Oct 31 11:23:15 2018 -0700

--
 .../org/apache/hadoop/hdds/HddsConfigKeys.java  |  7 ++
 .../java/org/apache/hadoop/hdds/HddsUtils.java  |  2 +-
 .../apache/hadoop/hdds/scm/ScmConfigKeys.java   |  5 ++
 .../apache/hadoop/ozone/OzoneConfigKeys.java|  3 -
 .../common/src/main/resources/ozone-default.xml | 45 +++---
 .../apache/hadoop/hdds/scm/HddsServerUtil.java  | 16 ++--
 .../ozone/container/common/SCMTestUtils.java|  3 +-
 .../common/TestDatanodeStateMachine.java|  3 +-
 .../container/ozoneimpl/TestOzoneContainer.java |  4 +-
 .../apache/hadoop/hdds/server/ServerUtils.java  | 49 +--
 .../hdds/scm/block/DeletedBlockLogImpl.java | 10 +--
 .../hdds/scm/container/SCMContainerManager.java |  4 +-
 .../hdds/scm/pipeline/SCMPipelineManager.java   |  6 +-
 .../hadoop/hdds/scm/server/SCMStorage.java  |  4 +-
 .../hadoop/hdds/scm/TestHddsServerUtils.java| 50 +++
 .../hadoop/hdds/scm/block/TestBlockManager.java |  4 +-
 .../hdds/scm/block/TestDeletedBlockLog.java |  4 +-
 .../TestCloseContainerEventHandler.java |  4 +-
 .../container/TestContainerReportHandler.java   |  4 +-
 .../scm/container/TestSCMContainerManager.java  |  4 +-
 .../hdds/scm/node/TestContainerPlacement.java   |  4 +-
 .../hdds/scm/node/TestDeadNodeHandler.java  |  4 +-
 .../hadoop/hdds/scm/node/TestNodeManager.java   |  4 +-
 .../ozone/container/common/TestEndPoint.java|  3 +-
 .../java/org/apache/hadoop/ozone/OmUtils.java   | 42 +
 .../apache/hadoop/ozone/om/OMConfigKeys.java|  3 +
 .../org/apache/hadoop/ozone/TestOmUtils.java| 91 
 .../scm/pipeline/TestSCMPipelineManager.java|  4 +-
 .../hadoop/ozone/MiniOzoneClusterImpl.java  |  5 +-
 .../hadoop/ozone/TestMiniOzoneCluster.java  |  7 +-
 .../ozone/TestStorageContainerManager.java  |  9 +-
 .../hadoop/ozone/om/TestOzoneManager.java   |  3 +-
 .../org/apache/hadoop/ozone/om/OMStorage.java   |  5 +-
 .../hadoop/ozone/om/OmMetadataManagerImpl.java  |  5 +-
 .../apache/hadoop/ozone/om/TestOmSQLCli.java|  3 +-
 .../hadoop/ozone/scm/TestContainerSQLCli.java   |  3 +-
 36 files changed, 343 insertions(+), 83 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08bb0362/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index 210b075..abacafe 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -97,4 +97,11 @@ public final class HddsConfigKeys {
   "hdds.lock.max.concurrency";
   public static final int HDDS_LOCK_MAX_CONCURRENCY_DEFAULT = 100;
 
+  // This configuration setting is used as a fallback location by all
+  // Ozone/HDDS services for their metadata. It is useful as a single
+  // config point for test/PoC clusters.
+  //
+  // In any real cluster where performance matters, the SCM, OM and DN
+  // metadata locations must be configured explicitly.
+  public static final String OZONE_METADATA_DIRS = "ozone.metadata.dirs";
 }

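A sketch of the fallback lookup this key is meant to back (illustrative only; the real helpers live in ServerUtils/HddsServerUtil from the file list above, and the service-specific key here is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.HddsConfigKeys;

// The service-specific location wins; ozone.metadata.dirs is only the
// single-point fallback intended for test/PoC clusters.
static String resolveDbDir(Configuration conf, String serviceSpecificKey) {
  String dir = conf.get(serviceSpecificKey);   // e.g. the SCM or OM DB dir key
  if (dir == null || dir.isEmpty()) {
    dir = conf.get(HddsConfigKeys.OZONE_METADATA_DIRS);
  }
  return dir;
}
```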
http://git-wip-us.apache.org/repos/asf/hadoop/blob/08bb0362/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index 09fc75b..89edfdd 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -305,7 +305,7 @@ public final class HddsUtils {
   public static String getDatanodeIdFilePath(Configuration conf) {
 String dataNodeIDPath = conf.get(ScmConfigKeys.OZONE_SCM_DATANODE_ID);
 if (dataNodeIDPath == null) {
-  String metaPath = conf.get(OzoneConfigKeys.OZONE_METADATA_DIRS);
+  String metaPath = conf.get(HddsConfigKeys.OZONE_METADATA_DIRS);
   if (Strings.isNullOrEmpty(metaPath)) {
 // this 

[26/50] [abbrv] hadoop git commit: HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.

2018-11-05 Thread aengineer
HDDS-786. Fix the findbugs for SCMClientProtocolServer#getContainerWithPipeline.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e8ac14d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e8ac14d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e8ac14d

Branch: refs/heads/HDDS-4
Commit: 2e8ac14dcb57a0fe07b2119c26535c3541665b70
Parents: b13c567
Author: Yiqun Lin 
Authored: Thu Nov 1 14:10:17 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Nov 1 14:10:17 2018 +0800

--
 .../org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java  | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e8ac14d/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
--
diff --git 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
index e92200a..58cb871 100644
--- 
a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
+++ 
b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
@@ -189,7 +189,6 @@ public class SCMClientProtocolServer implements
 }
   }
 }
-String remoteUser = getRpcRemoteUsername();
 getScm().checkAdminAccess(null);
 return scm.getContainerManager()
 .getContainerWithPipeline(ContainerID.valueof(containerID));


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[41/50] [abbrv] hadoop git commit: HADOOP-15899. Update AWS Java SDK versions in NOTICE.txt.

2018-11-05 Thread aengineer
HADOOP-15899. Update AWS Java SDK versions in NOTICE.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a5e21cd6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a5e21cd6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a5e21cd6

Branch: refs/heads/HDDS-4
Commit: a5e21cd6a450990d00eb7515d579b51d2a0bff82
Parents: 4e3df75
Author: Akira Ajisaka 
Authored: Mon Nov 5 15:28:21 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 15:28:21 2018 +0900

--
 NOTICE.txt | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a5e21cd6/NOTICE.txt
--
diff --git a/NOTICE.txt b/NOTICE.txt
index 00fa375..d6e5488 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -8,10 +8,10 @@ following notices:
 * Copyright 2011 FuseSource Corp. http://fusesource.com
 
 The binary distribution of this product bundles binaries of
-AWS SDK for Java - Bundle 1.11.134,
-AWS Java SDK for AWS KMS 1.11.134,
-AWS Java SDK for Amazon S3 1.11.134,
-AWS Java SDK for AWS STS 1.11.134,
+AWS SDK for Java - Bundle 1.11.375,
+AWS Java SDK for AWS KMS 1.11.375,
+AWS Java SDK for Amazon S3 1.11.375,
+AWS Java SDK for AWS STS 1.11.375,
 JMES Path Query library 1.0,
 which has the following notices:
  * This software includes third party software subject to the following


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[48/50] [abbrv] hadoop git commit: HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith Jalaparti.

2018-11-05 Thread aengineer
HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith 
Jalaparti.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3f5e7ad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3f5e7ad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3f5e7ad

Branch: refs/heads/HDDS-4
Commit: f3f5e7ad005a88afad6fa09602073eaa450e21ed
Parents: 50f40e0
Author: Giovanni Matteo Fumarola 
Authored: Mon Nov 5 11:02:31 2018 -0800
Committer: Giovanni Matteo Fumarola 
Committed: Mon Nov 5 11:02:31 2018 -0800

--
 .../hdfs/server/blockmanagement/BlockManager.java  | 13 -
 .../server/blockmanagement/DatanodeDescriptor.java |  4 ++--
 .../hdfs/server/blockmanagement/HeartbeatManager.java  |  2 +-
 .../hdfs/server/datanode/TestDataNodeLifeline.java |  5 +
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index d74b523..a5fb0b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2447,7 +2447,7 @@ public class BlockManager implements BlockStatsMXBean {
 return providedStorageMap.getCapacity();
   }
 
-  public void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
+  void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
   long cacheCapacity, long cacheUsed, int xceiverCount, int failedVolumes,
   VolumeFailureSummary volumeFailureSummary) {
 
@@ -2458,6 +2458,17 @@ public class BlockManager implements BlockStatsMXBean {
 failedVolumes, volumeFailureSummary);
   }
 
+  void updateHeartbeatState(DatanodeDescriptor node,
+  StorageReport[] reports, long cacheCapacity, long cacheUsed,
+  int xceiverCount, int failedVolumes,
+  VolumeFailureSummary volumeFailureSummary) {
+for (StorageReport report: reports) {
+  providedStorageMap.updateStorage(node, report.getStorage());
+}
+node.updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
+failedVolumes, volumeFailureSummary);
+  }
+
   /**
* StatefulBlockInfo is used to build the "toUC" list, which is a list of
* updates to the information about under-construction blocks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 12b5c33..6aa2376 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -373,7 +373,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* Updates stats from datanode heartbeat.
*/
-  public void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
@@ -384,7 +384,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* process datanode heartbeat or stats initialization.
*/
-  public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateStorageStats(reports, cacheCapacity, cacheUsed, xceiverCount,

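The visibility changes above funnel heartbeat processing through BlockManager, whose new updateHeartbeatState() re-registers PROVIDED storages from each report before any per-storage state is read. A hedged sketch of the calling pattern (the wiring is illustrative):

```java
// In a heartbeat/lifeline handler: route through BlockManager instead of
// calling node.updateHeartbeatState(...) directly, so providedStorageMap
// sees every report and a missing PROVIDED storage no longer causes an NPE.
blockManager.updateHeartbeatState(node, reports, cacheCapacity, cacheUsed,
    xceiverCount, failedVolumes, volumeFailureSummary);
```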
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
--
diff --git 

[25/50] [abbrv] hadoop git commit: HDDS-697. update and validate the BCSID for PutSmallFile/GetSmallFile command. Contributed by Shashikant Banerjee.

2018-11-05 Thread aengineer
HDDS-697. update and validate the BCSID for PutSmallFile/GetSmallFile command. 
Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b13c5674
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b13c5674
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b13c5674

Branch: refs/heads/HDDS-4
Commit: b13c56742a6fc0f6cb1ddd63e1afd51eb216e052
Parents: c5eb237
Author: Shashikant Banerjee 
Authored: Thu Nov 1 10:21:25 2018 +0530
Committer: Shashikant Banerjee 
Committed: Thu Nov 1 10:21:39 2018 +0530

--
 .../scm/storage/ContainerProtocolCalls.java |  8 ++-
 .../main/proto/DatanodeContainerProtocol.proto  |  2 +-
 .../server/ratis/ContainerStateMachine.java | 24 ++---
 .../container/keyvalue/KeyValueHandler.java |  2 -
 .../container/keyvalue/helpers/BlockUtils.java  |  2 +-
 .../keyvalue/helpers/SmallFileUtils.java|  7 +++
 .../rpc/TestContainerStateMachineFailures.java  | 21 +---
 .../ozone/scm/TestContainerSmallFile.java   | 51 
 8 files changed, 85 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b13c5674/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
index 150b1d6..c1d90a5 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
@@ -59,6 +59,8 @@ import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Type;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
 .WriteChunkRequestProto;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.KeyValue;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.
+PutSmallFileResponseProto;
 import org.apache.hadoop.hdds.client.BlockID;
 
 import java.io.IOException;
@@ -231,10 +233,11 @@ public final class ContainerProtocolCalls  {
* @param blockID - ID of the block
* @param data - Data to be written into the container.
* @param traceID - Trace ID for logging purpose.
+   * @return container protocol writeSmallFile response
* @throws IOException
*/
-  public static void writeSmallFile(XceiverClientSpi client,
-  BlockID blockID, byte[] data, String traceID)
+  public static PutSmallFileResponseProto writeSmallFile(
+  XceiverClientSpi client, BlockID blockID, byte[] data, String traceID)
   throws IOException {
 
 BlockData containerBlockData =
@@ -268,6 +271,7 @@ public final class ContainerProtocolCalls  {
 .build();
 ContainerCommandResponseProto response = client.sendCommand(request);
 validateContainerResponse(response);
+return response.getPutSmallFile();
   }
 
   /**

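Callers can now read the committed block length, and through it the updated BCSID, straight from the put-small-file response. A hedged usage sketch (assuming the standard protobuf getters generated from the message below):

```java
// client, blockID, data and traceID as in the surrounding code.
ContainerProtos.PutSmallFileResponseProto putResponse =
    ContainerProtocolCalls.writeSmallFile(client, blockID, data, traceID);
// committedBlockLength carries the length and the BlockID with the
// block commit sequence id the datanode recorded.
long committedLength =
    putResponse.getCommittedBlockLength().getBlockLength();
```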
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b13c5674/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
--
diff --git a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto 
b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
index 1700e23..df26f24 100644
--- a/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
+++ b/hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
@@ -413,7 +413,7 @@ message PutSmallFileRequestProto {
 
 
 message PutSmallFileResponseProto {
-
+  required GetCommittedBlockLengthResponseProto committedBlockLength = 1;
 }
 
 message GetSmallFileRequestProto {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b13c5674/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
index ac0833b..d5762bc 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
@@ -481,10 +481,11 @@ public class ContainerStateMachine extends 

[17/50] [abbrv] hadoop git commit: HDFS-13942. [JDK10] Fix javadoc errors in hadoop-hdfs module. Contributed by Dinesh Chitlangia.

2018-11-05 Thread aengineer
HDFS-13942. [JDK10] Fix javadoc errors in hadoop-hdfs module. Contributed by 
Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fac9f91b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fac9f91b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fac9f91b

Branch: refs/heads/HDDS-4
Commit: fac9f91b2944cee641049fffcafa6b65e0cf68f2
Parents: e4f22b0
Author: Akira Ajisaka 
Authored: Wed Oct 31 14:43:58 2018 +0900
Committer: Akira Ajisaka 
Committed: Wed Oct 31 14:43:58 2018 +0900

--
 .../java/org/apache/hadoop/hdfs/DFSUtil.java| 12 ++--
 .../hadoop/hdfs/protocol/BlockListAsLongs.java  |  2 +-
 .../QJournalProtocolServerSideTranslatorPB.java |  2 +-
 .../token/block/BlockTokenSecretManager.java|  2 +-
 .../hadoop/hdfs/server/balancer/Balancer.java   | 15 ++---
 .../server/blockmanagement/BlockManager.java| 26 +
 .../blockmanagement/BlockPlacementPolicy.java   |  1 -
 .../CombinedHostFileManager.java|  6 +-
 .../blockmanagement/CorruptReplicasMap.java |  2 +-
 .../blockmanagement/DatanodeAdminManager.java   |  8 +--
 .../server/blockmanagement/HostFileManager.java |  7 +--
 .../hdfs/server/blockmanagement/HostSet.java|  8 +--
 .../server/blockmanagement/SlowPeerTracker.java |  5 +-
 .../server/datanode/BlockPoolSliceStorage.java  | 60 
 .../server/datanode/BlockRecoveryWorker.java| 15 +++--
 .../hdfs/server/datanode/BlockScanner.java  |  6 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   | 10 ++--
 .../hdfs/server/datanode/DataStorage.java   |  4 +-
 .../hdfs/server/datanode/DirectoryScanner.java  |  1 -
 .../hdfs/server/datanode/FileIoProvider.java|  3 -
 .../hdfs/server/datanode/VolumeScanner.java |  4 +-
 .../server/datanode/checker/AbstractFuture.java | 13 ++---
 .../server/datanode/fsdataset/FsDatasetSpi.java | 12 ++--
 .../server/datanode/fsdataset/FsVolumeSpi.java  | 13 +++--
 .../datanode/metrics/OutlierDetector.java   |  3 +-
 .../diskbalancer/DiskBalancerException.java |  1 -
 .../datamodel/DiskBalancerCluster.java  | 11 ++--
 .../datamodel/DiskBalancerDataNode.java | 10 ++--
 .../diskbalancer/planner/GreedyPlanner.java |  2 +-
 .../hadoop/hdfs/server/namenode/AclStorage.java | 18 +++---
 .../server/namenode/EncryptionZoneManager.java  | 42 +-
 .../hdfs/server/namenode/FSDirectory.java   |  8 +--
 .../hdfs/server/namenode/FSNamesystem.java  | 24 ++--
 .../hadoop/hdfs/server/namenode/INode.java  |  4 +-
 .../hdfs/server/namenode/INodeReference.java|  6 +-
 .../hdfs/server/namenode/INodesInPath.java  |  4 +-
 .../hdfs/server/namenode/JournalManager.java|  2 +-
 .../hdfs/server/namenode/LeaseManager.java  |  2 +-
 .../server/namenode/MetaRecoveryContext.java|  2 +-
 .../hadoop/hdfs/server/namenode/NameNode.java   |  6 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |  9 ++-
 .../hadoop/hdfs/server/namenode/Quota.java  |  5 +-
 .../server/namenode/ReencryptionHandler.java|  2 +-
 .../server/namenode/XAttrPermissionFilter.java  |  4 +-
 .../hdfs/server/namenode/XAttrStorage.java  |  8 +--
 .../snapshot/AbstractINodeDiffList.java |  8 +--
 .../namenode/snapshot/DiffListBySkipList.java   |  9 +--
 .../sps/BlockStorageMovementNeeded.java |  5 +-
 .../namenode/sps/DatanodeCacheManager.java  |  2 +-
 .../sps/StoragePolicySatisfyManager.java| 14 +++--
 .../startupprogress/StartupProgressView.java|  4 +-
 .../server/namenode/top/metrics/TopMetrics.java | 17 --
 .../namenode/top/window/RollingWindow.java  | 18 +++---
 .../top/window/RollingWindowManager.java|  2 +-
 .../protocol/BlockStorageMovementCommand.java   | 11 ++--
 .../hdfs/server/protocol/DatanodeProtocol.java  |  2 +-
 .../hdfs/server/protocol/NamenodeProtocol.java  |  5 +-
 .../sps/ExternalSPSBlockMoveTaskHandler.java|  2 +
 .../org/apache/hadoop/hdfs/tools/DFSck.java | 13 +++--
 .../offlineEditsViewer/OfflineEditsViewer.java  |  4 +-
 .../offlineEditsViewer/OfflineEditsVisitor.java |  2 +-
 .../StatisticsEditsVisitor.java |  4 +-
 .../NameDistributionVisitor.java|  4 +-
 .../java/org/apache/hadoop/hdfs/util/Diff.java  | 16 +++---
 .../org/apache/hadoop/hdfs/util/XMLUtils.java   |  4 +-
 65 files changed, 310 insertions(+), 246 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fac9f91b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 

[36/50] [abbrv] hadoop git commit: YARN-8897. LoadBasedRouterPolicy throws NPE in case of sub cluster unavailability. Contributed by Bilwa S T.

2018-11-05 Thread aengineer
YARN-8897. LoadBasedRouterPolicy throws NPE in case of sub cluster 
unavailability. Contributed by Bilwa S T.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aed836ef
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aed836ef
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aed836ef

Branch: refs/heads/HDDS-4
Commit: aed836efbff775d95899d05ff947f1048df8cf19
Parents: babc946
Author: Giovanni Matteo Fumarola 
Authored: Fri Nov 2 11:27:11 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Fri Nov 2 11:27:11 2018 -0700

--
 .../policies/router/LoadBasedRouterPolicy.java  |  6 +++-
 .../router/TestLoadBasedRouterPolicy.java   | 31 
 2 files changed, 36 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aed836ef/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/LoadBasedRouterPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/LoadBasedRouterPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/LoadBasedRouterPolicy.java
index 06e445b..fa5eb4b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/LoadBasedRouterPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/router/LoadBasedRouterPolicy.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyInitializationContext;
 import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils;
 import org.apache.hadoop.yarn.server.federation.policies.dao.WeightedPolicyInfo;
+import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyException;
 import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyInitializationException;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
 import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
@@ -95,7 +96,10 @@ public class LoadBasedRouterPolicy extends AbstractRouterPolicy {
 }
   }
 }
-
+if (chosen == null) {
+  throw new FederationPolicyException(
+  "Zero Active Subcluster with weight 1.");
+}
 return chosen.toId();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aed836ef/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestLoadBasedRouterPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestLoadBasedRouterPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestLoadBasedRouterPolicy.java
index dc8f99b..58f1b99 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestLoadBasedRouterPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/policies/router/TestLoadBasedRouterPolicy.java
@@ -17,6 +17,8 @@
 
 package org.apache.hadoop.yarn.server.federation.policies.router;
 
+import static org.junit.Assert.fail;
+
 import java.util.HashMap;
 import java.util.Map;
 
@@ -103,4 +105,33 @@ public class TestLoadBasedRouterPolicy extends BaseRouterPoliciesTest {
 Assert.assertEquals("sc05", chosen.getId());
   }
 
+  @Test
+  public void testIfNoSubclustersWithWeightOne() {
+setPolicy(new LoadBasedRouterPolicy());
+setPolicyInfo(new WeightedPolicyInfo());
+Map<SubClusterIdInfo, Float> routerWeights = new HashMap<>();
+Map<SubClusterIdInfo, Float> amrmWeights = new HashMap<>();
+// update subcluster with weight 0
+SubClusterIdInfo sc = new SubClusterIdInfo(String.format("sc%02d", 0));
+SubClusterInfo federationSubClusterInfo = SubClusterInfo.newInstance(
+sc.toId(), null, null, null, null, -1, SubClusterState.SC_RUNNING, -1,
+generateClusterMetricsInfo(0));
+

[14/50] [abbrv] hadoop git commit: HDDS-754. VolumeInfo#getScmUsed throws NPE. Contributed by Hanisha Koneru.

2018-11-05 Thread aengineer
HDDS-754. VolumeInfo#getScmUsed throws NPE.
Contributed by Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/773f0d15
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/773f0d15
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/773f0d15

Branch: refs/heads/HDDS-4
Commit: 773f0d1519715e3ddf77c139998cc12d7447da66
Parents: e33b61f
Author: Anu Engineer 
Authored: Tue Oct 30 19:17:57 2018 -0700
Committer: Anu Engineer 
Committed: Tue Oct 30 19:17:57 2018 -0700

--
 .../container/common/volume/VolumeInfo.java  | 19 +--
 .../ozone/container/common/volume/VolumeSet.java | 11 +++
 .../container/common/volume/TestHddsVolume.java  |  9 ++---
 .../container/common/volume/TestVolumeSet.java   |  4 +++-
 .../hdds/scm/pipeline/TestNodeFailure.java   |  3 ++-
 .../apache/hadoop/ozone/MiniOzoneCluster.java|  8 
 .../hadoop/ozone/MiniOzoneClusterImpl.java   | 16 +++-
 7 files changed, 58 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/773f0d15/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java
index 62fca63..0de9f18 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeInfo.java
@@ -95,15 +95,30 @@ public class VolumeInfo {
 this.usage = new VolumeUsage(root, b.conf);
   }
 
-  public long getCapacity() {
-return configuredCapacity < 0 ? usage.getCapacity() : configuredCapacity;
+  public long getCapacity() throws IOException {
+if (configuredCapacity < 0) {
+  if (usage == null) {
+throw new IOException("Volume Usage thread is not running. This error" 
+
+" is usually seen during DataNode shutdown.");
+  }
+  return usage.getCapacity();
+}
+return configuredCapacity;
   }
 
   public long getAvailable() throws IOException {
+if (usage == null) {
+  throw new IOException("Volume Usage thread is not running. This error " +
+  "is usually seen during DataNode shutdown.");
+}
 return usage.getAvailable();
   }
 
   public long getScmUsed() throws IOException {
+if (usage == null) {
+  throw new IOException("Volume Usage thread is not running. This error " +
+  "is usually seen during DataNode shutdown.");
+}
 return usage.getScmUsed();
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/773f0d15/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
index 5b6b823..d30dd89 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
@@ -372,18 +372,21 @@ public class VolumeSet {
   for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
 hddsVolume = entry.getValue();
 VolumeInfo volumeInfo = hddsVolume.getVolumeInfo();
-long scmUsed = 0;
-long remaining = 0;
+long scmUsed;
+long remaining;
+long capacity;
 failed = false;
 try {
   scmUsed = volumeInfo.getScmUsed();
   remaining = volumeInfo.getAvailable();
+  capacity = volumeInfo.getCapacity();
 } catch (IOException ex) {
   LOG.warn("Failed to get scmUsed and remaining for container " +
-  "storage location {}", volumeInfo.getRootDir());
+  "storage location {}", volumeInfo.getRootDir(), ex);
   // reset scmUsed and remaining if df/du failed.
   scmUsed = 0;
   remaining = 0;
+  capacity = 0;
   failed = true;
 }
 
@@ -392,7 +395,7 @@ public class VolumeSet {
 builder.setStorageLocation(volumeInfo.getRootDir())
 .setId(hddsVolume.getStorageID())
 .setFailed(failed)
-.setCapacity(hddsVolume.getCapacity())
+.setCapacity(capacity)
 

[05/50] [abbrv] hadoop git commit: HDDS-749. Restructure BlockId class in Ozone. Contributed by Shashikant Banerjee.

2018-11-05 Thread aengineer
HDDS-749. Restructure BlockId class in Ozone. Contributed by Shashikant 
Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7757331d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7757331d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7757331d

Branch: refs/heads/HDDS-4
Commit: 7757331dbc043694891a5242ac161adece9e8d6a
Parents: 486b9a4
Author: Shashikant Banerjee 
Authored: Tue Oct 30 14:15:27 2018 +0530
Committer: Shashikant Banerjee 
Committed: Tue Oct 30 14:15:27 2018 +0530

--
 .../hdds/scm/storage/ChunkOutputStream.java | 17 ++--
 .../org/apache/hadoop/hdds/client/BlockID.java  | 85 ++--
 .../hadoop/hdds/client/ContainerBlockID.java| 79 ++
 .../common/helpers/AllocatedBlock.java  | 21 ++---
 ...kLocationProtocolClientSideTranslatorPB.java |  5 +-
 .../scm/storage/ContainerProtocolCalls.java |  7 +-
 .../apache/hadoop/ozone/common/BlockGroup.java  |  3 +-
 .../container/common/helpers/BlockData.java |  8 +-
 ...kLocationProtocolServerSideTranslatorPB.java |  2 +-
 .../main/proto/DatanodeContainerProtocol.proto  |  4 +-
 .../main/proto/ScmBlockLocationProtocol.proto   |  2 +-
 hadoop-hdds/common/src/main/proto/hdds.proto|  7 +-
 .../container/keyvalue/KeyValueHandler.java |  5 +-
 .../container/keyvalue/helpers/BlockUtils.java  |  2 -
 .../keyvalue/impl/BlockManagerImpl.java |  6 +-
 .../keyvalue/interfaces/BlockManager.java   |  3 +-
 .../keyvalue/TestBlockManagerImpl.java  |  6 +-
 .../hadoop/hdds/scm/block/BlockManagerImpl.java |  3 +-
 .../ozone/client/io/ChunkGroupInputStream.java  |  3 +-
 .../ozone/client/io/ChunkGroupOutputStream.java | 21 ++---
 .../ozone/om/helpers/OmKeyLocationInfo.java | 19 +
 .../src/main/proto/OzoneManagerProtocol.proto   |  1 -
 .../container/TestContainerReplication.java |  2 +-
 .../common/impl/TestCloseContainerHandler.java  |  2 +-
 .../common/impl/TestContainerPersistence.java   | 14 ++--
 .../TestGetCommittedBlockLengthAndPutKey.java   | 16 ++--
 .../hadoop/ozone/web/client/TestKeys.java   |  2 +-
 .../apache/hadoop/ozone/om/KeyManagerImpl.java  |  7 +-
 .../ozone/om/ScmBlockLocationTestIngClient.java |  3 +-
 29 files changed, 227 insertions(+), 128 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7757331d/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
index 4547163..4e881c4 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkOutputStream.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.hdds.scm.storage;
 
 
+import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
 import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import org.apache.commons.codec.digest.DigestUtils;
@@ -57,7 +58,7 @@ import static org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls
  */
 public class ChunkOutputStream extends OutputStream {
 
-  private final BlockID blockID;
+  private BlockID blockID;
   private final String key;
   private final String traceID;
   private final BlockData.Builder containerBlockData;
@@ -67,7 +68,6 @@ public class ChunkOutputStream extends OutputStream {
   private final String streamId;
   private int chunkIndex;
   private int chunkSize;
-  private long blockCommitSequenceId;
 
   /**
* Creates a new ChunkOutputStream.
@@ -96,15 +96,14 @@ public class ChunkOutputStream extends OutputStream {
 this.buffer = ByteBuffer.allocate(chunkSize);
 this.streamId = UUID.randomUUID().toString();
 this.chunkIndex = 0;
-blockCommitSequenceId = 0;
   }
 
   public ByteBuffer getBuffer() {
 return buffer;
   }
 
-  public long getBlockCommitSequenceId() {
-return blockCommitSequenceId;
+  public BlockID getBlockID() {
+return blockID;
   }
 
   @Override
@@ -165,8 +164,12 @@ public class ChunkOutputStream extends OutputStream {
   try {
 ContainerProtos.PutBlockResponseProto responseProto =
 putBlock(xceiverClient, containerBlockData.build(), traceID);
-blockCommitSequenceId =
-responseProto.getCommittedBlockLength().getBlockCommitSequenceId();
+BlockID responseBlockID = BlockID.getFromProtobuf(
+responseProto.getCommittedBlockLength().getBlockID());
+

[34/50] [abbrv] hadoop git commit: HADOOP-15885. Add base64 (urlString) support to DTUtil. Contributed by Inigo Goiri.

2018-11-05 Thread aengineer
HADOOP-15885. Add base64 (urlString) support to DTUtil. Contributed by Inigo 
Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/44e37b4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/44e37b4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/44e37b4f

Branch: refs/heads/HDDS-4
Commit: 44e37b4fd9f441becf536368a89436afcd6dede8
Parents: 6d9c18c
Author: Giovanni Matteo Fumarola 
Authored: Fri Nov 2 10:54:12 2018 -0700
Committer: Giovanni Matteo Fumarola 
Committed: Fri Nov 2 10:54:12 2018 -0700

--
 .../hadoop/security/token/DtFileOperations.java | 28 -
 .../hadoop/security/token/DtUtilShell.java  | 37 +++-
 .../src/site/markdown/CommandsManual.md |  1 +
 .../hadoop/security/token/TestDtUtilShell.java  | 44 
 4 files changed, 107 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e37b4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
index f154f2d..5f74f8b 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtFileOperations.java
@@ -89,7 +89,7 @@ public final class DtFileOperations {
 
   /** Add the service prefix for a local filesystem. */
   private static Path fileToPath(File f) {
-return new Path("file:" + f.getAbsolutePath());
+return new Path(f.toURI().toString());
   }
 
   /** Write out a Credentials object as a local file.
@@ -294,4 +294,30 @@ public final class DtFileOperations {
 }
 doFormattedWrite(tokenFile, fileFormat, creds, conf);
   }
+
+  /** Import a token from a base64 encoding into the local filesystem.
+   * @param tokenFile A local File object.
+   * @param fileFormat A string equal to FORMAT_PB or FORMAT_JAVA, for output.
+   * @param alias overwrite Service field of fetched token with this text.
+   * @param base64 urlString Encoding of the token to import.
+   * @param conf Configuration object passed along.
+   * @throws IOException Error to import the token into the file.
+   */
+  public static void importTokenFile(File tokenFile, String fileFormat,
+  Text alias, String base64, Configuration conf)
+  throws IOException {
+
+Credentials creds = tokenFile.exists() ?
+Credentials.readTokenStorageFile(tokenFile, conf) : new Credentials();
+
+Token token = new Token<>();
+token.decodeFromUrlString(base64);
+if (alias != null) {
+  token.setService(alias);
+}
+creds.addToken(token.getService(), token);
+LOG.info("Add token with service {}", token.getService());
+
+doFormattedWrite(tokenFile, fileFormat, creds, conf);
+  }
 }

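importTokenFile is a thin wrapper over the existing URL-safe base64 codec on Token. A small sketch of the round trip it relies on (variable names are illustrative):

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

// encodeToUrlString()/decodeFromUrlString() are the symmetric codec pair;
// the import path above uses the decode side and then persists the token
// via doFormattedWrite(), optionally rewriting the service field.
Token<?> token = new Token<>();
token.decodeFromUrlString(base64String);  // base64String from "dtutil import"
token.setService(new Text("my-alias"));   // optional -alias override
```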
http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e37b4f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
index 88db34f..bc2d1b6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/DtUtilShell.java
@@ -55,6 +55,7 @@ public class DtUtilShell extends CommandShell {
   private static final String CANCEL = "cancel";
   private static final String REMOVE = "remove";
   private static final String RENEW = "renew";
+  private static final String IMPORT = "import";
   private static final String RENEWER = "-renewer";
   private static final String SERVICE = "-service";
   private static final String ALIAS = "-alias";
@@ -138,6 +139,8 @@ public class DtUtilShell extends CommandShell {
   setSubCommand(new Remove(false));
 } else if (command.equals(RENEW)) {
   setSubCommand(new Renew());
+} else if (command.equals(IMPORT)) {
+  setSubCommand(new Import(args[++i]));
 }
   } else if (args[i].equals(ALIAS)) {
 alias = new Text(args[++i]);
@@ -176,11 +179,11 @@ public class DtUtilShell extends CommandShell {
   @Override
   public String getCommandUsage() {
 return String.format(
-"%n%s%n   %s%n   %s%n   %s%n   %s%n   %s%n   %s%n 

[44/50] [abbrv] hadoop git commit: HDDS-797. If DN is started before SCM, it does not register. Contributed by Hanisha Koneru.

2018-11-05 Thread aengineer
HDDS-797. If DN is started before SCM, it does not register. Contributed by 
Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c8ca1747
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c8ca1747
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c8ca1747

Branch: refs/heads/HDDS-4
Commit: c8ca1747c08d905cdefaa5566dd58d770a6b71bd
Parents: 15df2e7
Author: Arpit Agarwal 
Authored: Mon Nov 5 09:40:00 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 09:40:00 2018 -0800

--
 .../states/endpoint/VersionEndpointTask.java| 79 +++-
 .../hadoop/ozone/TestMiniOzoneCluster.java  | 52 -
 2 files changed, 94 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8ca1747/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
index 79fa174..2d00da8 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
@@ -64,50 +64,57 @@ public class VersionEndpointTask implements
   public EndpointStateMachine.EndPointStates call() throws Exception {
 rpcEndPoint.lock();
 try{
-  SCMVersionResponseProto versionResponse =
-  rpcEndPoint.getEndPoint().getVersion(null);
-  VersionResponse response = VersionResponse.getFromProtobuf(
-  versionResponse);
-  rpcEndPoint.setVersion(response);
+  if (rpcEndPoint.getState().equals(
+  EndpointStateMachine.EndPointStates.GETVERSION)) {
+SCMVersionResponseProto versionResponse =
+rpcEndPoint.getEndPoint().getVersion(null);
+VersionResponse response = VersionResponse.getFromProtobuf(
+versionResponse);
+rpcEndPoint.setVersion(response);
 
-  String scmId = response.getValue(OzoneConsts.SCM_ID);
-  String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
+String scmId = response.getValue(OzoneConsts.SCM_ID);
+String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
 
-  // Check volumes
-  VolumeSet volumeSet = ozoneContainer.getVolumeSet();
-  volumeSet.writeLock();
-  try {
-Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
+// Check volumes
+VolumeSet volumeSet = ozoneContainer.getVolumeSet();
+volumeSet.writeLock();
+try {
+  Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
 
-Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
-"null");
-Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
-"cannot be null");
+  Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
+  "null");
+  Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
+  "cannot be null");
 
-// If version file does not exist create version file and also set scmId
-for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
-  HddsVolume hddsVolume = entry.getValue();
-  boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
-  clusterId, LOG);
-  if (!result) {
-volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+  // If version file does not exist create version file and also set scmId
+
+  for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
+HddsVolume hddsVolume = entry.getValue();
+boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
+clusterId, LOG);
+if (!result) {
+  volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+}
   }
+  if (volumeSet.getVolumesList().size() == 0) {
+// All volumes are in inconsistent state
+throw new DiskOutOfSpaceException("All configured Volumes are in " +
+"Inconsistent State");
+  }
+} finally {
+  volumeSet.writeUnlock();
 }
-if (volumeSet.getVolumesList().size() == 0) {
-  // All volumes are in inconsistent state
-  throw new DiskOutOfSpaceException("All configured Volumes are in " +
-  "Inconsistent State");
-}
-  } finally {
-   

[11/50] [abbrv] hadoop git commit: YARN-6729. Clarify documentation on how to enable cgroup support. Contributed by Zhankun Tang

2018-11-05 Thread aengineer
YARN-6729. Clarify documentation on how to enable cgroup support. Contributed 
by Zhankun Tang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/277a3d8d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/277a3d8d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/277a3d8d

Branch: refs/heads/HDDS-4
Commit: 277a3d8d9fe1127c75452d083ff7859c603e686d
Parents: d36012b
Author: Shane Kumpf 
Authored: Tue Oct 30 11:36:55 2018 -0600
Committer: Shane Kumpf 
Committed: Tue Oct 30 11:36:55 2018 -0600

--
 .../hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/277a3d8d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
index 4a83dce..7a48f6d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
@@ -29,13 +29,13 @@ The following settings are related to setting up CGroups. These need to be set i
 |Configuration Name | Description |
 |: |: |
 | `yarn.nodemanager.container-executor.class` | This should be set to 
"org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor". CGroups is 
a Linux kernel feature and is exposed via the LinuxContainerExecutor. |
-| `yarn.nodemanager.linux-container-executor.resources-handler.class` | This 
should be set to 
"org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". 
Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish 
to use CGroups, the resource-handler-class must be set to 
CGroupsLCEResourceHandler. |
+| `yarn.nodemanager.linux-container-executor.resources-handler.class` | This 
should be set to 
"org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". 
Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish 
to use CGroups, the resource-handler-class must be set to 
CGroupsLCEResourceHandler. DefaultLCEResourcesHandler won't work. |
 | `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups 
hierarchy under which to place YARN processes (cannot contain commas). If 
yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if 
cgroups have been pre-configured) and the YARN user has write access to the 
parent directory, then the directory will be created. If the directory already 
exists, the administrator has to give YARN write permissions to it recursively. 
|
 | `yarn.nodemanager.linux-container-executor.cgroups.mount` | Whether the LCE 
should attempt to mount cgroups if not found - can be true or false. |
 | `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Optional. 
Where CGroups are located. LCE will try to mount them here, if 
`yarn.nodemanager.linux-container-executor.cgroups.mount` is true. LCE will try 
to use CGroups from this location, if 
`yarn.nodemanager.linux-container-executor.cgroups.mount` is false. If 
specified, this path and its subdirectories (CGroup hierarchies) must exist and 
they should be readable and writable by YARN before the NodeManager is 
launched. See CGroups mount options below for details. |
 | `yarn.nodemanager.linux-container-executor.group` | The Unix group of the 
NodeManager. It should match the setting in "container-executor.cfg". This 
configuration is required for validating the secure access of the 
container-executor binary. |
 
-The following settings are related to limiting resource usage of YARN 
containers:
+Once CGroups are enabled, the following settings related to limiting resource 
usage of YARN containers take effect:
 
 |Configuration Name | Description |
 |: |: |
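
For readers wiring this up by hand, a minimal sketch of the properties from the
table above, expressed through Hadoop's Configuration API. The class name and
the concrete values (hierarchy path, mount flag, Unix group) are illustrative
assumptions, not defaults shipped with YARN.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch only: sets the CGroups-related NodeManager properties discussed above.
public final class CgroupsNmConfigSketch {
  public static Configuration cgroupsConfig() {
    Configuration conf = new Configuration();
    conf.set("yarn.nodemanager.container-executor.class",
        "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
    conf.set("yarn.nodemanager.linux-container-executor.resources-handler.class",
        "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler");
    // Assumed hierarchy name; must not contain commas.
    conf.set("yarn.nodemanager.linux-container-executor.cgroups.hierarchy",
        "/hadoop-yarn");
    // Assume cgroups are pre-configured, so the LCE does not try to mount them.
    conf.setBoolean("yarn.nodemanager.linux-container-executor.cgroups.mount",
        false);
    // Must match the group in container-executor.cfg (assumed here).
    conf.set("yarn.nodemanager.linux-container-executor.group", "hadoop");
    return conf;
  }
}
```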


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[19/50] [abbrv] hadoop git commit: HDDS-712. Use x-amz-storage-class to specify replication type and replication factor. Contributed by Bharat Viswanadham.

2018-11-05 Thread aengineer
HDDS-712. Use x-amz-storage-class to specify replication type and replication 
factor. Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecac351a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecac351a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecac351a

Branch: refs/heads/HDDS-4
Commit: ecac351aac1702194c56743ced5a66242643f28c
Parents: 9c438ab
Author: Márton Elek 
Authored: Wed Oct 31 11:08:43 2018 +0100
Committer: Márton Elek 
Committed: Wed Oct 31 13:28:59 2018 +0100

--
 .../dist/src/main/smoketest/s3/awss3.robot  |  4 +-
 .../dist/src/main/smoketest/s3/objectcopy.robot | 18 ++---
 .../src/main/smoketest/s3/objectdelete.robot|  6 +-
 .../main/smoketest/s3/objectmultidelete.robot   |  6 +-
 .../src/main/smoketest/s3/objectputget.robot|  2 +-
 .../ozone/s3/endpoint/ObjectEndpoint.java   | 68 ++-
 .../apache/hadoop/ozone/s3/util/S3Consts.java   | 19 ++
 .../hadoop/ozone/s3/util/S3StorageType.java | 55 
 .../hadoop/ozone/s3/util/package-info.java  | 22 +++
 .../hadoop/ozone/s3/endpoint/TestPutObject.java | 69 +++-
 10 files changed, 205 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecac351a/hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
--
diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot 
b/hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
index 79db688..c1ec9f0 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/awss3.robot
@@ -29,9 +29,9 @@ ${BUCKET}    generated
 
 File upload and directory list
     Execute    date > /tmp/testfile
-    ${result} =    Execute AWSS3Cli    cp /tmp/testfile s3://${BUCKET}
+    ${result} =    Execute AWSS3Cli    cp --storage-class REDUCED_REDUNDANCY /tmp/testfile s3://${BUCKET}
     Should contain    ${result}    upload
-    ${result} =    Execute AWSS3Cli    cp /tmp/testfile s3://${BUCKET}/dir1/dir2/file
+    ${result} =    Execute AWSS3Cli    cp --storage-class REDUCED_REDUNDANCY /tmp/testfile s3://${BUCKET}/dir1/dir2/file
     Should contain    ${result}    upload
     ${result} =    Execute AWSS3Cli    ls s3://${BUCKET}
     Should contain    ${result}    testfile

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecac351a/hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
--
diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot 
b/hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
index 2daa861..e702d9b 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/objectcopy.robot
@@ -39,28 +39,28 @@ Create Dest Bucket
 Copy Object Happy Scenario
     Run Keyword if    '${DESTBUCKET}' == 'generated1'    Create Dest Bucket
     Execute    date > /tmp/copyfile
-    ${result} =    Execute AWSS3ApiCli    put-object --bucket ${BUCKET} --key copyobject/f1 --body /tmp/copyfile
+    ${result} =    Execute AWSS3ApiCli    put-object --storage-class REDUCED_REDUNDANCY --bucket ${BUCKET} --key copyobject/f1 --body /tmp/copyfile
     ${result} =    Execute AWSS3ApiCli    list-objects --bucket ${BUCKET} --prefix copyobject/
     Should contain    ${result}    f1
 
-    ${result} =    Execute AWSS3ApiCli    copy-object --bucket ${DESTBUCKET} --key copyobject/f1 --copy-source ${BUCKET}/copyobject/f1
+    ${result} =    Execute AWSS3ApiCli    copy-object --storage-class REDUCED_REDUNDANCY --bucket ${DESTBUCKET} --key copyobject/f1 --copy-source ${BUCKET}/copyobject/f1
     ${result} =    Execute AWSS3ApiCli    list-objects --bucket ${DESTBUCKET} --prefix copyobject/
     Should contain    ${result}    f1
     #copying again will not throw error
-    ${result} =    Execute AWSS3ApiCli    copy-object --bucket ${DESTBUCKET} --key copyobject/f1 --copy-source ${BUCKET}/copyobject/f1
+    ${result} =    Execute AWSS3ApiCli    copy-object --storage-class REDUCED_REDUNDANCY --bucket ${DESTBUCKET} --key copyobject/f1 --copy-source ${BUCKET}/copyobject/f1
     ${result} =    Execute AWSS3ApiCli    list-objects --bucket ${DESTBUCKET} --prefix copyobject/
     Should contain    ${result}    f1
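
The diffstat above introduces an S3StorageType class that translates the
x-amz-storage-class header into Ozone replication settings. The exact constants
are not visible in this truncated message, so the pairing below (STANDARD for
replicated Ratis pipelines, REDUCED_REDUNDANCY for single-replica stand-alone)
is an assumption based on the tests above, not a quote of the patch.

```java
import org.apache.hadoop.hdds.client.ReplicationFactor;
import org.apache.hadoop.hdds.client.ReplicationType;

// Illustrative sketch of an S3 storage class to Ozone replication mapping.
public enum S3StorageTypeSketch {
  STANDARD(ReplicationType.RATIS, ReplicationFactor.THREE),
  REDUCED_REDUNDANCY(ReplicationType.STAND_ALONE, ReplicationFactor.ONE);

  private final ReplicationType type;
  private final ReplicationFactor factor;

  S3StorageTypeSketch(ReplicationType type, ReplicationFactor factor) {
    this.type = type;
    this.factor = factor;
  }

  public ReplicationType getType() { return type; }
  public ReplicationFactor getFactor() { return factor; }
}
```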
 

[08/50] [abbrv] hadoop git commit: YARN-8854. Upgrade jquery datatable version references to v1.10.19. Contributed by Akhil PB.

2018-11-05 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36012b6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/css/demo_table.css
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/css/demo_table.css
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/css/demo_table.css
deleted file mode 100644
index 37b9203..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/css/demo_table.css
+++ /dev/null
@@ -1,538 +0,0 @@
-/*
- *  File: demo_table.css
- *  CVS:  $Id$
- *  Description:  CSS descriptions for DataTables demo pages
- *  Author:   Allan Jardine
- *  Created:  Tue May 12 06:47:22 BST 2009
- *  Modified: $Date$ by $Author$
- *  Language: CSS
- *  Project:  DataTables
- *
- *  Copyright 2009 Allan Jardine. All Rights Reserved.
- *
- * ***
- * DESCRIPTION
- *
- * The styles given here are suitable for the demos that are used with the 
standard DataTables
- * distribution (see www.datatables.net). You will most likely wish to modify 
these styles to
- * meet the layout requirements of your site.
- *
- * Common issues:
- *   'full_numbers' pagination - I use an extra selector on the body tag to 
ensure that there is
- * no conflict between the two pagination types. If you want to use 
full_numbers pagination
- * ensure that you either have "example_alt_pagination" as a body class 
name, or better yet,
- * modify that selector.
- *   Note that the path used for Images is relative. All images are by default 
located in
- * ../images/ - relative to this CSS file.
- */
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables features
- */
-
-.dataTables_wrapper {
-   position: relative;
-   min-height: 302px;
-   clear: both;
-   _height: 302px;
-   zoom: 1; /* Feeling sorry for IE */
-}
-
-.dataTables_processing {
-   position: absolute;
-   top: 50%;
-   left: 50%;
-   width: 250px;
-   height: 30px;
-   margin-left: -125px;
-   margin-top: -15px;
-   padding: 14px 0 2px 0;
-   border: 1px solid #ddd;
-   text-align: center;
-   color: #999;
-   font-size: 14px;
-   background-color: white;
-}
-
-.dataTables_length {
-   width: 40%;
-   float: left;
-}
-
-.dataTables_filter {
-   width: 50%;
-   float: right;
-   text-align: right;
-}
-
-.dataTables_info {
-   width: 60%;
-   float: left;
-}
-
-.dataTables_paginate {
-   width: 44px;
-   * width: 50px;
-   float: right;
-   text-align: right;
-}
-
-/* Pagination nested */
-.paginate_disabled_previous, .paginate_enabled_previous, 
.paginate_disabled_next, .paginate_enabled_next {
-   height: 19px;
-   width: 19px;
-   margin-left: 3px;
-   float: left;
-}
-
-.paginate_disabled_previous {
-   background-image: url('../images/back_disabled.jpg');
-}
-
-.paginate_enabled_previous {
-   background-image: url('../images/back_enabled.jpg');
-}
-
-.paginate_disabled_next {
-   background-image: url('../images/forward_disabled.jpg');
-}
-
-.paginate_enabled_next {
-   background-image: url('../images/forward_enabled.jpg');
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables display
- */
-table.display {
-   margin: 0 auto;
-   clear: both;
-   width: 100%;
-
-   /* Note Firefox 3.5 and before have a bug with border-collapse
-* ( https://bugzilla.mozilla.org/show_bug.cgi?id=155955 )
-* border-spacing: 0; is one possible option. Conditional-css.com is
-* useful for this kind of thing
-*
-* Further note IE 6/7 has problems when calculating widths with border 
width.
-* It subtracts one px relative to the other browsers from the first 
column, and
-* adds one to the end...
-*
-* If you want that effect I'd suggest setting a border-top/left on 
th/td's and
-* then filling in the gaps with other borders.
-*/
-}
-
-table.display thead th {
-   padding: 3px 18px 3px 10px;
-   border-bottom: 1px solid black;
-   font-weight: bold;
-   cursor: pointer;
-   * cursor: hand;
-}
-
-table.display tfoot th {
-   padding: 3px 18px 3px 10px;
-   border-top: 1px solid black;
-   font-weight: bold;
-}
-
-table.display tr.heading2 td {
-   border-bottom: 1px solid #aaa;
-}
-
-table.display td {
-   padding: 3px 10px;
-}
-
-table.display td.center {
-   text-align: center;
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables sorting
- */
-
-.sorting_asc {
-   background: 

[07/50] [abbrv] hadoop git commit: YARN-8854. Upgrade jquery datatable version references to v1.10.19. Contributed by Akhil PB.

2018-11-05 Thread aengineer
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36012b6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js
deleted file mode 100644
index 85dd817..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.7/js/jquery.dataTables.min.js
+++ /dev/null
@@ -1,160 +0,0 @@
-/*! DataTables 1.10.7
- * ©2008-2015 SpryMedia Ltd - datatables.net/license
- */
-[minified DataTables 1.10.7 source elided: the rest of this hunk removes the
-160-line jquery.dataTables.min.js bundle, which is mangled beyond recovery in
-this archive]

[04/50] [abbrv] hadoop git commit: YARN-7754. [Atsv2] Update document for running v1 and v2 TS. Contributed by Suma Shivaprasad.

2018-11-05 Thread aengineer
YARN-7754. [Atsv2] Update document for running v1 and v2 TS. Contributed by 
Suma Shivaprasad.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/486b9a4a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/486b9a4a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/486b9a4a

Branch: refs/heads/HDDS-4
Commit: 486b9a4a75f413aa542338b0d866c3b490381d93
Parents: a283da2
Author: Rohith Sharma K S 
Authored: Tue Oct 30 11:35:01 2018 +0530
Committer: Rohith Sharma K S 
Committed: Tue Oct 30 11:35:01 2018 +0530

--
 .../src/site/markdown/TimelineServiceV2.md  | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/486b9a4a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
index 2314f30..86faf6c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
@@ -333,6 +333,18 @@ that it can write data to the Apache HBase cluster you are 
using, or set
 
 ```
 
+To configure both Timeline Service 1.5 and v.2, add the following property
+
+ ```
+ <property>
+   <name>yarn.timeline-service.versions</name>
+   <value>1.5f,2.0f</value>
+ </property>
+```
+
+If the above is not configured, then it defaults to the version set in 
`yarn.timeline-service.version`
+
+
  Running Timeline Service v.2
 Restart the resource manager as well as the node managers to pick up the new 
configuration. The
 collectors start within the resource manager and the node managers in an 
embedded manner.
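
A hedged sketch of the fallback described above: prefer
`yarn.timeline-service.versions` and fall back to `yarn.timeline-service.version`
when it is unset. Only the two property names come from the doc; the helper
class and the "1.0f" last-resort default are illustrative.

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only: resolve the effective timeline versions setting.
public final class TimelineVersionsSketch {
  static String effectiveVersions(Configuration conf) {
    return conf.get("yarn.timeline-service.versions",
        conf.get("yarn.timeline-service.version", "1.0f"));
  }
}
```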


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[15/50] [abbrv] hadoop git commit: HDDS-755. ContainerInfo and ContainerReplica protobuf changes. Contributed by Nanda kumar.

2018-11-05 Thread aengineer
HDDS-755. ContainerInfo and ContainerReplica protobuf changes.
Contributed by Nanda kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4f22b08
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4f22b08
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4f22b08

Branch: refs/heads/HDDS-4
Commit: e4f22b08e0d1074c315680ba20d8666be21a25db
Parents: 773f0d1
Author: Nanda kumar 
Authored: Wed Oct 31 10:29:35 2018 +0530
Committer: Nanda kumar 
Committed: Wed Oct 31 10:29:35 2018 +0530

--
 .../scm/client/ContainerOperationClient.java|  6 +--
 .../hadoop/hdds/scm/client/ScmClient.java   |  6 +--
 .../hdds/scm/container/ContainerInfo.java   |  8 +--
 ...rLocationProtocolClientSideTranslatorPB.java |  2 +-
 .../main/proto/DatanodeContainerProtocol.proto  | 27 +-
 .../StorageContainerLocationProtocol.proto  |  4 +-
 hadoop-hdds/common/src/main/proto/hdds.proto|  4 +-
 .../container/common/impl/ContainerData.java| 22 
 .../common/impl/ContainerDataYaml.java  |  6 +--
 .../container/common/impl/HddsDispatcher.java   | 13 ++---
 .../container/common/interfaces/Container.java  |  9 ++--
 .../container/keyvalue/KeyValueContainer.java   | 31 ++-
 .../keyvalue/KeyValueContainerData.java |  9 ++--
 .../container/keyvalue/KeyValueHandler.java | 16 +++---
 .../StorageContainerDatanodeProtocol.proto  | 57 +++-
 .../ozone/container/common/ScmTestMock.java | 14 ++---
 .../common/TestKeyValueContainerData.java   |  6 +--
 .../common/impl/TestContainerDataYaml.java  |  8 +--
 .../container/common/impl/TestContainerSet.java | 16 +++---
 .../keyvalue/TestKeyValueContainer.java | 15 +++---
 .../container/keyvalue/TestKeyValueHandler.java |  2 +-
 .../scm/container/ContainerReportHandler.java   |  2 +-
 .../hdds/scm/container/SCMContainerManager.java | 23 
 .../apache/hadoop/hdds/scm/HddsTestUtils.java   |  2 +-
 .../org/apache/hadoop/hdds/scm/TestUtils.java   | 20 +++
 .../container/TestContainerReportHandler.java   |  6 ++-
 .../scm/container/TestSCMContainerManager.java  | 22 
 .../hdds/scm/cli/container/InfoSubcommand.java  |  8 ++-
 .../rpc/TestCloseContainerHandlingByClient.java |  2 +-
 .../rpc/TestContainerStateMachineFailures.java  |  2 +-
 .../common/impl/TestContainerPersistence.java   |  4 +-
 .../commandhandler/TestBlockDeletion.java   | 10 ++--
 .../org/apache/hadoop/ozone/scm/cli/SQLCLI.java |  2 +-
 33 files changed, 199 insertions(+), 185 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4f22b08/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
--
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
index 25a71df..8c96164 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
@@ -27,7 +27,7 @@ import org.apache.hadoop.hdds.scm.protocolPB
 .StorageContainerLocationProtocolClientSideTranslatorPB;
 import org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
-.ContainerData;
+.ContainerDataProto;
 import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
 .ReadContainerResponseProto;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
@@ -309,7 +309,7 @@ public class ContainerOperationClient implements ScmClient {
* @throws IOException
*/
   @Override
-  public ContainerData readContainer(long containerID,
+  public ContainerDataProto readContainer(long containerID,
   Pipeline pipeline) throws IOException {
 XceiverClientSpi client = null;
 try {
@@ -337,7 +337,7 @@ public class ContainerOperationClient implements ScmClient {
* @throws IOException
*/
   @Override
-  public ContainerData readContainer(long containerID) throws IOException {
+  public ContainerDataProto readContainer(long containerID) throws IOException 
{
 ContainerWithPipeline info = getContainerWithPipeline(containerID);
 return readContainer(containerID, info.getPipeline());
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4f22b08/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/client/ScmClient.java
 

[20/50] [abbrv] hadoop git commit: HDDS-659. Implement pagination in GET bucket (object list) endpoint. Contributed by Bharat Viswanadham.

2018-11-05 Thread aengineer
HDDS-659. Implement pagination in GET bucket (object list) endpoint. 
Contributed by Bharat Viswanadham.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b519f3f2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b519f3f2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b519f3f2

Branch: refs/heads/HDDS-4
Commit: b519f3f2a0ae960391ce7bff59f1fdd21a22e030
Parents: ecac351
Author: Márton Elek 
Authored: Wed Oct 31 12:21:38 2018 +0100
Committer: Márton Elek 
Committed: Wed Oct 31 13:29:01 2018 +0100

--
 .../ozone/s3/endpoint/BucketEndpoint.java   | 109 ++---
 .../ozone/s3/endpoint/ListObjectResponse.java   |  22 ++
 .../apache/hadoop/ozone/s3/util/S3Consts.java   |   1 +
 .../apache/hadoop/ozone/s3/util/S3utils.java|  73 ++
 .../hadoop/ozone/client/OzoneBucketStub.java|   7 +-
 .../hadoop/ozone/s3/endpoint/TestBucketGet.java | 227 ++-
 6 files changed, 400 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b519f3f2/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
--
diff --git 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
index 8f554ed..04e2348 100644
--- 
a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
+++ 
b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/BucketEndpoint.java
@@ -36,7 +36,6 @@ import java.io.IOException;
 import java.io.InputStream;
 import java.time.Instant;
 import java.util.Iterator;
-import javax.ws.rs.core.Response.ResponseBuilder;
 
 import org.apache.hadoop.ozone.client.OzoneBucket;
 import org.apache.hadoop.ozone.client.OzoneKey;
@@ -48,10 +47,13 @@ import org.apache.hadoop.ozone.s3.exception.OS3Exception;
 import org.apache.hadoop.ozone.s3.exception.S3ErrorTable;
 
 import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.ozone.s3.util.S3utils;
 import org.apache.http.HttpStatus;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.ozone.s3.util.S3Consts.ENCODING_TYPE;
+
 /**
  * Bucket level rest endpoints.
  */
@@ -76,6 +78,8 @@ public class BucketEndpoint extends EndpointBase {
   @DefaultValue("1000") @QueryParam("max-keys") int maxKeys,
   @QueryParam("prefix") String prefix,
   @QueryParam("browser") String browser,
+  @QueryParam("continuation-token") String continueToken,
+  @QueryParam("start-after") String startAfter,
   @Context HttpHeaders hh) throws OS3Exception, IOException {
 
 if (browser != null) {
@@ -87,60 +91,91 @@ public class BucketEndpoint extends EndpointBase {
   }
 }
 
-if (delimiter == null) {
-  delimiter = "/";
-}
 if (prefix == null) {
   prefix = "";
 }
 
 OzoneBucket bucket = getBucket(bucketName);
 
-Iterator<? extends OzoneKey> ozoneKeyIterator = bucket.listKeys(prefix);
+Iterator<? extends OzoneKey> ozoneKeyIterator;
+
+String decodedToken = S3utils.decodeContinueToken(continueToken);
+
+if (startAfter != null && continueToken != null) {
+  // If continuation token and start after both are provided, then we
+  // ignore start After
+  ozoneKeyIterator = bucket.listKeys(prefix, decodedToken);
+} else if (startAfter != null && continueToken == null) {
+  ozoneKeyIterator = bucket.listKeys(prefix, startAfter);
+} else if (startAfter == null && continueToken != null){
+  ozoneKeyIterator = bucket.listKeys(prefix, decodedToken);
+} else {
+  ozoneKeyIterator = bucket.listKeys(prefix);
+}
+
 
 ListObjectResponse response = new ListObjectResponse();
 response.setDelimiter(delimiter);
 response.setName(bucketName);
 response.setPrefix(prefix);
 response.setMarker("");
-response.setMaxKeys(1000);
-response.setEncodingType("url");
+response.setMaxKeys(maxKeys);
+response.setEncodingType(ENCODING_TYPE);
 response.setTruncated(false);
+response.setContinueToken(continueToken);
 
 String prevDir = null;
+String lastKey = null;
+int count = 0;
 while (ozoneKeyIterator.hasNext()) {
   OzoneKey next = ozoneKeyIterator.next();
   String relativeKeyName = next.getName().substring(prefix.length());
 
-  int depth =
-  StringUtils.countMatches(relativeKeyName, delimiter);
+  int depth = StringUtils.countMatches(relativeKeyName, delimiter);
+  if (delimiter != null) {
+if (depth > 0) {
+  // means key has multiple delimiters in its value.
+  // ex: dir/dir1/dir2, where delimiter is "/" and prefix is dir/
+   
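
The truncated hunk routes paging through S3utils.decodeContinueToken, whose
actual encoding is not visible here. As a purely illustrative stand-in, an
opaque continuation token could be as simple as a URL-safe Base64 wrapper
around the last key returned; everything below is an assumption, not the patch.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of an opaque continuation-token scheme for paged listings.
public final class ContinueTokenSketch {
  static String encode(String lastKey) {
    return Base64.getUrlEncoder().withoutPadding()
        .encodeToString(lastKey.getBytes(StandardCharsets.UTF_8));
  }

  static String decode(String token) {
    return token == null ? null
        : new String(Base64.getUrlDecoder().decode(token),
            StandardCharsets.UTF_8);
  }
}
```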

[18/50] [abbrv] hadoop git commit: HDFS-14033. [libhdfs++] Disable libhdfs++ build on systems that do not support thread_local. Contributed by Anatoli Shein.

2018-11-05 Thread aengineer
HDFS-14033. [libhdfs++] Disable libhdfs++ build on systems that do not support 
thread_local. Contributed by Anatoli Shein.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c438abe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c438abe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c438abe

Branch: refs/heads/HDDS-4
Commit: 9c438abe52d4ee0b25345a4b7ec1697dd66f85e9
Parents: fac9f91
Author: Sunil G 
Authored: Wed Oct 31 12:32:49 2018 +0530
Committer: Sunil G 
Committed: Wed Oct 31 12:32:49 2018 +0530

--
 .../src/CMakeLists.txt  | 22 +++-
 .../src/main/native/libhdfspp/CMakeLists.txt|  4 ++--
 2 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c438abe/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
index 1813ec1..026be9f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/CMakeLists.txt
@@ -138,7 +138,27 @@ endif()
 
 add_subdirectory(main/native/libhdfs)
 add_subdirectory(main/native/libhdfs-tests)
-add_subdirectory(main/native/libhdfspp)
+
+# Temporary fix to disable Libhdfs++ build on older systems that do not 
support thread_local
+include(CheckCXXSourceCompiles)
+unset (THREAD_LOCAL_SUPPORTED CACHE)
+set (CMAKE_REQUIRED_DEFINITIONS "-std=c++11")
+set (CMAKE_REQUIRED_LIBRARIES ${CMAKE_THREAD_LIBS_INIT})
+check_cxx_source_compiles(
+"#include <thread>
+int main(void) {
+  thread_local int s;
+  return 0;
+}"
+THREAD_LOCAL_SUPPORTED)
+if (THREAD_LOCAL_SUPPORTED)
+add_subdirectory(main/native/libhdfspp)
+else()
+message(WARNING
+"WARNING: Libhdfs++ library was not built because the required feature 
thread_local storage \
+is not supported by your compiler. Known compilers that support this 
feature: GCC 4.8+, Visual Studio 2015+, \
+Clang (community version 3.3+), Clang (version for Xcode 8+ and iOS 9+).")
+endif (THREAD_LOCAL_SUPPORTED)
 
 if(REQUIRE_LIBWEBHDFS)
 add_subdirectory(contrib/libwebhdfs)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c438abe/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
index 63fa80d..411320a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/CMakeLists.txt
@@ -63,8 +63,8 @@ check_cxx_source_compiles(
 if (NOT THREAD_LOCAL_SUPPORTED)
   message(FATAL_ERROR
   "FATAL ERROR: The required feature thread_local storage is not supported by 
your compiler. \
-  Known compilers that support this feature: GCC, Visual Studio, Clang 
(community version), \
-  Clang (version for iOS 9 and later).")
+  Known compilers that support this feature: GCC 4.8+, Visual Studio 2015+, 
Clang (community \
+  version 3.3+), Clang (version for Xcode 8+ and iOS 9+).")
 endif (NOT THREAD_LOCAL_SUPPORTED)
 
 # Check if PROTOC library was compiled with the compatible compiler by trying


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[01/50] [abbrv] hadoop git commit: HDFS-14027. DFSStripedOutputStream should implement both hsync methods.

2018-11-05 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/HDDS-4 2115256ab -> 7119be30b


HDFS-14027. DFSStripedOutputStream should implement both hsync methods.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db7e6368
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db7e6368
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db7e6368

Branch: refs/heads/HDDS-4
Commit: db7e636824a36b90ba1c8e9b2fba1162771700fe
Parents: 496f0ff
Author: Xiao Chen 
Authored: Mon Oct 29 19:05:52 2018 -0700
Committer: Xiao Chen 
Committed: Mon Oct 29 19:06:15 2018 -0700

--
 .../hadoop/hdfs/DFSStripedOutputStream.java | 12 +++
 .../hadoop/hdfs/TestDFSStripedOutputStream.java | 36 +---
 2 files changed, 35 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db7e6368/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
index ed875bb..df9770e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.StreamCapabilities;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
@@ -956,11 +957,22 @@ public class DFSStripedOutputStream extends 
DFSOutputStream
   @Override
   public void hflush() {
 // not supported yet
+LOG.debug("DFSStripedOutputStream does not support hflush. "
++ "Caller should check StreamCapabilities before calling.");
   }
 
   @Override
   public void hsync() {
 // not supported yet
+LOG.debug("DFSStripedOutputStream does not support hsync. "
++ "Caller should check StreamCapabilities before calling.");
+  }
+
+  @Override
+  public void hsync(EnumSet<SyncFlag> syncFlags) {
+// not supported yet
+LOG.debug("DFSStripedOutputStream does not support hsync {}. "
++ "Caller should check StreamCapabilities before calling.", syncFlags);
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db7e6368/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
index 865a736..092aa0a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java
@@ -18,12 +18,14 @@
 package org.apache.hadoop.hdfs;
 
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.ArrayList;
+import java.util.EnumSet;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -31,6 +33,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.StreamCapabilities.StreamCapability;
+import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.io.IOUtils;
@@ -196,19 +199,26 @@ public class TestDFSStripedOutputStream {
   public void testStreamFlush() throws Exception {
 final byte[] bytes = StripedFileTestUtil.generateBytes(blockSize *
 dataBlocks * 3 + cellSize * dataBlocks + cellSize + 123);
-FSDataOutputStream os = fs.create(new Path("/ec-file-1"));
-assertFalse("DFSStripedOutputStream should not have hflush() " +
-"capability yet!", os.hasCapability(
-StreamCapability.HFLUSH.getValue()));
-assertFalse("DFSStripedOutputStream should not have hsync() " +
-
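
The message is cut off above, but the new debug text tells callers to probe
StreamCapabilities before flushing. A small sketch of that guard, using the
public hasCapability API; the wrapper class and the plain-flush() fallback are
illustrative choices, not part of the patch.

```java
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.StreamCapabilities;

public final class FlushGuardSketch {
  // Probe the stream before calling hflush(); striped streams report
  // no hflush/hsync capability and treat those calls as no-ops.
  static void safeFlush(FSDataOutputStream out) throws IOException {
    if (out.hasCapability(StreamCapabilities.HFLUSH)) {
      out.hflush();
    } else {
      out.flush(); // fall back to a plain buffered flush
    }
  }
}
```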

[06/50] [abbrv] hadoop git commit: HADOOP-15855. Review hadoop credential doc, including object store details. Contributed by Steve Loughran.

2018-11-05 Thread aengineer
HADOOP-15855. Review hadoop credential doc, including object store details.
Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/62d98ca9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/62d98ca9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/62d98ca9

Branch: refs/heads/HDDS-4
Commit: 62d98ca92aee15d1790d169bfdf0043b05b748ce
Parents: 7757331
Author: Steve Loughran 
Authored: Tue Oct 30 15:58:04 2018 +
Committer: Steve Loughran 
Committed: Tue Oct 30 15:58:04 2018 +

--
 .../src/site/markdown/CredentialProviderAPI.md  | 130 ++-
 .../hadoop/crypto/key/TestKeyProvider.java  |  32 +++--
 2 files changed, 119 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/62d98ca9/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
index bd1c2c7..0c5f486 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
@@ -32,10 +32,12 @@ Overview
 
 Usage
 -
+
 ### Usage Overview
 Let's provide a quick overview of the use of the credential provider framework 
for protecting passwords or other sensitive tokens in hadoop.
 
 # Why is it used?
+
 There are certain deployments that are very sensitive to how sensitive tokens 
like passwords are stored and managed within the cluster. For instance, there 
may be security best practices and policies in place that require such things 
to never be stored in clear text, for example. Enterprise deployments may be 
required to use a preferred solution for managing credentials and we need a way 
to plug in integrations for them.
 
 # General Usage Pattern
@@ -48,46 +50,46 @@ There are numerous places within the Hadoop project and 
ecosystem that can lever
 3. Features or components that leverage the new 
[Configuration.getPassword](../../api/org/apache/hadoop/conf/Configuration.html#getPassword-java.lang.String-)
 method to resolve their credentials will automatically pick up support for the 
credential provider API.
 - By using the same property names as are used for existing clear text 
passwords, this mechanism allows for the migration to credential providers 
while providing backward compatibility for clear text.
 - The entire credential provider path is interrogated before falling back 
to clear text passwords in config.
-4. Features or components that do not use the hadoop Configuration class for 
config or have other internal uses for the credential providers may choose to 
write to the CredentialProvider API itself. An example of its use will be 
included in this document but may also be found within 
[Configuration.getPassword](../../api/org/apache/hadoop/conf/Configuration.html#getPassword-java.lang.String-)
 and within the unit tests of features that have added support and need to 
provision credentials for the tests.
+4. Features or components that do not use Hadoop's 
`org.apache.hadoop.conf.Configuration` class for configuration or have other 
internal uses for the credential providers may choose to use the 
`CredentialProvider` API itself. An example of its use can be found within 
[Configuration.getPassword](../../api/org/apache/hadoop/conf/Configuration.html#getPassword-java.lang.String-)
 and within its unit tests.
 
 # Provision Credentials
-Example: ssl.server.keystore.password
+Example: `ssl.server.keystore.password`
 
-```
-hadoop credential create ssl.server.keystore.password -value 123
-  -provider localjceks://file/home/lmccay/aws.jceks
+```bash
+hadoop credential create ssl.server.keystore.password -value 123 \
+  -provider localjceks://file/home/lmccay/aws.jceks
 ```
 
-Note that the alias names are the same as the configuration properties that 
were used to get the
-credentials from the Configuration.get method. Reusing these names allows for 
intuitive
-migration to the use of credential providers and fall back logic for backward 
compatibility.
+The alias names are the same as the configuration properties that were used to 
get the
+credentials from the `Configuration.get()` methods.
 
 # Configuring the Provider Path
+
 Now, we need to make sure that this provisioned credential store is known at 
runtime by the
 
[Configuration.getPassword](../../api/org/apache/hadoop/conf/Configuration.html#getPassword-java.lang.String-)
 method. If there is no credential provider path configuration then
-getPassword will skip the credential 
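
To make the lookup order above concrete, a minimal sketch of resolving a
credential through Configuration.getPassword. The provider path reuses the
example URI from the doc and the wrapper class is illustrative; the provider
path is interrogated first, with clear text in the config as the fallback.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public final class GetPasswordSketch {
  static char[] keystorePassword() throws IOException {
    Configuration conf = new Configuration();
    // Credential providers are consulted before any clear-text value.
    conf.set("hadoop.security.credential.provider.path",
        "localjceks://file/home/lmccay/aws.jceks");
    return conf.getPassword("ssl.server.keystore.password");
  }
}
```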

hadoop git commit: HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith Jalaparti.

2018-11-05 Thread gifuma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 b732ed355 -> a1321d020


HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith 
Jalaparti.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1321d02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1321d02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1321d02

Branch: refs/heads/branch-3.1
Commit: a1321d020a1a184624f4a2cb35e74d1aa3b85f64
Parents: b732ed3
Author: Giovanni Matteo Fumarola 
Authored: Mon Nov 5 11:02:31 2018 -0800
Committer: Giovanni Matteo Fumarola 
Committed: Mon Nov 5 11:39:15 2018 -0800

--
 .../hdfs/server/blockmanagement/BlockManager.java  | 13 -
 .../server/blockmanagement/DatanodeDescriptor.java |  4 ++--
 .../hdfs/server/blockmanagement/HeartbeatManager.java  |  2 +-
 .../hdfs/server/datanode/TestDataNodeLifeline.java |  5 +
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1321d02/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index fb30323..8ae1f50 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2430,7 +2430,7 @@ public class BlockManager implements BlockStatsMXBean {
 return providedStorageMap.getCapacity();
   }
 
-  public void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
+  void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
   long cacheCapacity, long cacheUsed, int xceiverCount, int failedVolumes,
   VolumeFailureSummary volumeFailureSummary) {
 
@@ -2441,6 +2441,17 @@ public class BlockManager implements BlockStatsMXBean {
 failedVolumes, volumeFailureSummary);
   }
 
+  void updateHeartbeatState(DatanodeDescriptor node,
+  StorageReport[] reports, long cacheCapacity, long cacheUsed,
+  int xceiverCount, int failedVolumes,
+  VolumeFailureSummary volumeFailureSummary) {
+for (StorageReport report: reports) {
+  providedStorageMap.updateStorage(node, report.getStorage());
+}
+node.updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
+failedVolumes, volumeFailureSummary);
+  }
+
   /**
* StatefulBlockInfo is used to build the "toUC" list, which is a list of
* updates to the information about under-construction blocks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1321d02/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 16ffb43..46a4a7e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -373,7 +373,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* Updates stats from datanode heartbeat.
*/
-  public void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
@@ -384,7 +384,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* process datanode heartbeat or stats initialization.
*/
-  public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateStorageStats(reports, cacheCapacity, cacheUsed, xceiverCount,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1321d02/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java

hadoop git commit: HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith Jalaparti.

2018-11-05 Thread gifuma
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 fcd221fed -> 6e1fad299


HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith 
Jalaparti.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e1fad29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e1fad29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e1fad29

Branch: refs/heads/branch-3.2
Commit: 6e1fad299a092ced6478a39827efe4a6e4335ffb
Parents: fcd221f
Author: Giovanni Matteo Fumarola 
Authored: Mon Nov 5 11:02:31 2018 -0800
Committer: Giovanni Matteo Fumarola 
Committed: Mon Nov 5 11:33:32 2018 -0800

--
 .../hdfs/server/blockmanagement/BlockManager.java  | 13 -
 .../server/blockmanagement/DatanodeDescriptor.java |  4 ++--
 .../hdfs/server/blockmanagement/HeartbeatManager.java  |  2 +-
 .../hdfs/server/datanode/TestDataNodeLifeline.java |  5 +
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e1fad29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 5e14247..ebbba3b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2444,7 +2444,7 @@ public class BlockManager implements BlockStatsMXBean {
 return providedStorageMap.getCapacity();
   }
 
-  public void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
+  void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
   long cacheCapacity, long cacheUsed, int xceiverCount, int failedVolumes,
   VolumeFailureSummary volumeFailureSummary) {
 
@@ -2455,6 +2455,17 @@ public class BlockManager implements BlockStatsMXBean {
 failedVolumes, volumeFailureSummary);
   }
 
+  void updateHeartbeatState(DatanodeDescriptor node,
+  StorageReport[] reports, long cacheCapacity, long cacheUsed,
+  int xceiverCount, int failedVolumes,
+  VolumeFailureSummary volumeFailureSummary) {
+for (StorageReport report: reports) {
+  providedStorageMap.updateStorage(node, report.getStorage());
+}
+node.updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
+failedVolumes, volumeFailureSummary);
+  }
+
   /**
* StatefulBlockInfo is used to build the "toUC" list, which is a list of
* updates to the information about under-construction blocks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e1fad29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 12b5c33..6aa2376 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -373,7 +373,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* Updates stats from datanode heartbeat.
*/
-  public void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
@@ -384,7 +384,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* process datanode heartbeat or stats initialization.
*/
-  public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateStorageStats(reports, cacheCapacity, cacheUsed, xceiverCount,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e1fad29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java

hadoop git commit: HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith Jalaparti.

2018-11-05 Thread gifuma
Repository: hadoop
Updated Branches:
  refs/heads/trunk 50f40e053 -> f3f5e7ad0


HDFS-14042. Fix NPE when PROVIDED storage is missing. Contributed by Virajith 
Jalaparti.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3f5e7ad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3f5e7ad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3f5e7ad

Branch: refs/heads/trunk
Commit: f3f5e7ad005a88afad6fa09602073eaa450e21ed
Parents: 50f40e0
Author: Giovanni Matteo Fumarola 
Authored: Mon Nov 5 11:02:31 2018 -0800
Committer: Giovanni Matteo Fumarola 
Committed: Mon Nov 5 11:02:31 2018 -0800

--
 .../hdfs/server/blockmanagement/BlockManager.java  | 13 -
 .../server/blockmanagement/DatanodeDescriptor.java |  4 ++--
 .../hdfs/server/blockmanagement/HeartbeatManager.java  |  2 +-
 .../hdfs/server/datanode/TestDataNodeLifeline.java |  5 +
 4 files changed, 20 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index d74b523..a5fb0b1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2447,7 +2447,7 @@ public class BlockManager implements BlockStatsMXBean {
 return providedStorageMap.getCapacity();
   }
 
-  public void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
+  void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
   long cacheCapacity, long cacheUsed, int xceiverCount, int failedVolumes,
   VolumeFailureSummary volumeFailureSummary) {
 
@@ -2458,6 +2458,17 @@ public class BlockManager implements BlockStatsMXBean {
 failedVolumes, volumeFailureSummary);
   }
 
+  void updateHeartbeatState(DatanodeDescriptor node,
+  StorageReport[] reports, long cacheCapacity, long cacheUsed,
+  int xceiverCount, int failedVolumes,
+  VolumeFailureSummary volumeFailureSummary) {
+for (StorageReport report: reports) {
+  providedStorageMap.updateStorage(node, report.getStorage());
+}
+node.updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
+failedVolumes, volumeFailureSummary);
+  }
+
   /**
* StatefulBlockInfo is used to build the "toUC" list, which is a list of
* updates to the information about under-construction blocks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 12b5c33..6aa2376 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -373,7 +373,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* Updates stats from datanode heartbeat.
*/
-  public void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeat(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateHeartbeatState(reports, cacheCapacity, cacheUsed, xceiverCount,
@@ -384,7 +384,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
   /**
* process datanode heartbeat or stats initialization.
*/
-  public void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
+  void updateHeartbeatState(StorageReport[] reports, long cacheCapacity,
   long cacheUsed, int xceiverCount, int volFailures,
   VolumeFailureSummary volumeFailureSummary) {
 updateStorageStats(reports, cacheCapacity, cacheUsed, xceiverCount,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3f5e7ad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java

hadoop git commit: HDDS-794. addendum patch to fix compilation failure. Contributed by Shashikant Banerjee.

2018-11-05 Thread shashikant
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5ddefdd50 -> 50f40e053


HDDS-794. addendum patch to fix compilation failure. Contributed by Shashikant 
Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50f40e05
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50f40e05
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50f40e05

Branch: refs/heads/trunk
Commit: 50f40e0536f38517aa33e8859f299bcf19f2f319
Parents: 5ddefdd
Author: Shashikant Banerjee 
Authored: Tue Nov 6 00:20:57 2018 +0530
Committer: Shashikant Banerjee 
Committed: Tue Nov 6 00:20:57 2018 +0530

--
 .../apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50f40e05/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
index 8f9d589..dc44dc5 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
@@ -139,7 +139,7 @@ public final class ChunkUtils {
   }
 }
 log.debug("Write Chunk completed for chunkFile: {}, size {}", chunkFile,
-data.length);
+bufferSize);
   }
 
   /**


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-794. Add configs to set StateMachineData write timeout in ContainerStateMachine. Contributed by Shashikant Banerjee.

2018-11-05 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 942693bdd -> 5ddefdd50


HDDS-794. Add configs to set StateMachineData write timeout in 
ContainerStateMachine. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ddefdd5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ddefdd5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ddefdd5

Branch: refs/heads/trunk
Commit: 5ddefdd50751ed316f2eb9046f294bbdcdfb2428
Parents: 942693b
Author: Arpit Agarwal 
Authored: Mon Nov 5 10:10:10 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 10:41:28 2018 -0800

--
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java |  6 ++
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java  |  9 +
 .../common/src/main/resources/ozone-default.xml   |  7 +++
 .../server/ratis/ContainerStateMachine.java   | 18 --
 .../server/ratis/XceiverServerRatis.java  | 14 ++
 .../container/keyvalue/helpers/ChunkUtils.java|  2 ++
 .../container/keyvalue/impl/ChunkManagerImpl.java |  3 ++-
 7 files changed, 56 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 56692af..38eec61 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -79,6 +79,12 @@ public final class ScmConfigKeys {
   "dfs.container.ratis.segment.preallocated.size";
   public static final int
  DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT = 128 * 1024 * 1024;
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  "dfs.container.ratis.statemachinedata.sync.timeout";
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  TimeDuration.valueOf(10, TimeUnit.SECONDS);
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =
   "dfs.ratis.client.request.timeout.duration";
   public static final TimeDuration

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index 3b4f017..54b1cf8 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -229,6 +229,15 @@ public final class OzoneConfigKeys {
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_KEY;
   public static final int DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT;
+
+  // config settings to enable stateMachineData write timeout
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT;
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT;
+
   public static final int DFS_CONTAINER_CHUNK_MAX_SIZE
   = ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE;
   public static final String DFS_CONTAINER_RATIS_DATANODE_STORAGE_DIR =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ddefdd5/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index eb68662..5ff60eb 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -53,6 +53,13 @@
     </description>
   </property>
   <property>
+    <name>dfs.container.ratis.statemachinedata.sync.timeout</name>
+    <value>10s</value>
+    <tag>OZONE, DEBUG, CONTAINER, RATIS</tag>
+    <description>Timeout for StateMachine data writes by Ratis.
+    </description>
+  </property>
+  <property>
     <name>dfs.container.ratis.datanode.storage.dir</name>
     <value/>
     <tag>OZONE, CONTAINER, STORAGE, MANAGEMENT, RATIS</tag>

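As background on how a suffixed timeout value such as the "10s" default above is typically consumed, here is a hedged sketch using Hadoop's Configuration.getTimeDuration. The SyncTimeoutDemo class and the surrounding wiring are illustrative assumptions, not the actual XceiverServerRatis code:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class SyncTimeoutDemo {
  static final String KEY = "dfs.container.ratis.statemachinedata.sync.timeout";

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(KEY, "10s"); // same value as the ozone-default.xml default above

    // getTimeDuration parses suffixed values ("10s", "500ms", "1m") and
    // converts them to the requested unit.
    long timeoutMs = conf.getTimeDuration(KEY, 10_000L, TimeUnit.MILLISECONDS);
    System.out.println("state machine data sync timeout = " + timeoutMs + " ms");

    // The server side would then hand a duration like this to the Ratis
    // server properties, so that slow StateMachineData (chunk) writes fail
    // the log sync after the timeout instead of hanging indefinitely.
  }
}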

hadoop git commit: HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in Datanode. Contributed by Mukul Kumar Singh.

2018-11-05 Thread shashikant
Repository: hadoop
Updated Branches:
  refs/heads/ozone-0.3 53d4aefae -> 4b0004488


HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in 
Datanode. Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b000448
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b000448
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b000448

Branch: refs/heads/ozone-0.3
Commit: 4b00044883f5d00eea99ee885fff0761d9b6392e
Parents: 53d4aef
Author: Shashikant Banerjee 
Authored: Tue Nov 6 00:00:23 2018 +0530
Committer: Shashikant Banerjee 
Committed: Tue Nov 6 00:00:23 2018 +0530

--
 .../container/keyvalue/KeyValueHandler.java | 11 +++---
 .../container/keyvalue/helpers/ChunkUtils.java  | 30 
 .../keyvalue/impl/ChunkManagerImpl.java |  2 +-
 .../keyvalue/interfaces/ChunkManager.java   |  3 +-
 .../keyvalue/TestChunkManagerImpl.java  | 37 ++--
 .../common/impl/TestContainerPersistence.java   | 28 ++-
 6 files changed, 63 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b000448/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
index 7c859d4..2377cd6 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ozone.container.keyvalue;
 
 import java.io.FileInputStream;
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -76,7 +77,7 @@ import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.protobuf.ByteString;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import static org.apache.hadoop.hdds.HddsConfigKeys
 .HDDS_DATANODE_VOLUME_CHOOSING_POLICY;
 import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
@@ -668,10 +669,10 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(chunkInfoProto);
   Preconditions.checkNotNull(chunkInfo);
 
-  byte[] data = null;
+  ByteBuffer data = null;
   if (request.getWriteChunk().getStage() == Stage.WRITE_DATA ||
   request.getWriteChunk().getStage() == Stage.COMBINED) {
-data = request.getWriteChunk().getData().toByteArray();
+data = request.getWriteChunk().getData().asReadOnlyByteBuffer();
   }
 
   chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
@@ -729,7 +730,7 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(
   putSmallFileReq.getChunkInfo());
   Preconditions.checkNotNull(chunkInfo);
-  byte[] data = putSmallFileReq.getData().toByteArray();
+  ByteBuffer data = putSmallFileReq.getData().asReadOnlyByteBuffer();
   // chunks will be committed as a part of handling putSmallFile
   // here. There is no need to maintain this info in openContainerBlockMap.
   chunkManager.writeChunk(
@@ -740,7 +741,7 @@ public class KeyValueHandler extends Handler {
   blockData.setChunks(chunks);
   // TODO: add bcsId as a part of putSmallFile transaction
   blockManager.putBlock(kvContainer, blockData);
-  metrics.incContainerBytesStats(Type.PutSmallFile, data.length);
+  metrics.incContainerBytesStats(Type.PutSmallFile, data.capacity());
 
 } catch (StorageContainerException ex) {
   return ContainerUtils.logAndReturnError(LOG, ex, request);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b000448/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
index 492a286..dc44dc5 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
+++ 
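
The cost this change avoids is the per-chunk heap copy made by ByteString.toByteArray(); asReadOnlyByteBuffer() instead wraps the existing bytes in a read-only view. A self-contained sketch of the difference (plain protobuf-java is used here only to keep the example compilable; the commit itself uses the ratis-thirdparty relocation of protobuf, and ByteStringDemo is an invented name):

import java.nio.ByteBuffer;
import com.google.protobuf.ByteString;

public class ByteStringDemo {
  public static void main(String[] args) {
    ByteString payload = ByteString.copyFromUtf8("chunk-data");

    byte[] copy = payload.toByteArray();              // O(n) copy per chunk
    ByteBuffer view = payload.asReadOnlyByteBuffer(); // zero-copy, read-only view

    System.out.println("copied bytes: " + copy.length);
    System.out.println("view remaining: " + view.remaining());
    // A FileChannel.write(view)-style call can then persist the chunk
    // without materializing a second byte[] in the datanode heap.
  }
}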

hadoop git commit: HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in Datanode. Contributed by Mukul Kumar Singh.

2018-11-05 Thread shashikant
Repository: hadoop
Updated Branches:
  refs/heads/trunk c8ca1747c -> 942693bdd


HDDS-799. Avoid ByteString to byte array conversion cost by using ByteBuffer in 
Datanode. Contributed by Mukul Kumar Singh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/942693bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/942693bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/942693bd

Branch: refs/heads/trunk
Commit: 942693bddd5fba51b85a5f677e3496a41817cff3
Parents: c8ca174
Author: Shashikant Banerjee 
Authored: Mon Nov 5 23:43:22 2018 +0530
Committer: Shashikant Banerjee 
Committed: Mon Nov 5 23:43:22 2018 +0530

--
 .../container/keyvalue/KeyValueHandler.java | 11 +++---
 .../container/keyvalue/helpers/ChunkUtils.java  | 28 ---
 .../keyvalue/impl/ChunkManagerImpl.java |  2 +-
 .../keyvalue/interfaces/ChunkManager.java   |  3 +-
 .../keyvalue/TestChunkManagerImpl.java  | 37 ++--
 .../common/impl/TestContainerPersistence.java   | 28 ++-
 6 files changed, 62 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/942693bd/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
index 4cb23ed..1271d99 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.ozone.container.keyvalue;
 
 import java.io.FileInputStream;
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -76,7 +77,7 @@ import org.apache.hadoop.util.ReflectionUtils;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import com.google.protobuf.ByteString;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
 import static org.apache.hadoop.hdds.HddsConfigKeys
 .HDDS_DATANODE_VOLUME_CHOOSING_POLICY;
 import static 
org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.*;
@@ -652,10 +653,10 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(chunkInfoProto);
   Preconditions.checkNotNull(chunkInfo);
 
-  byte[] data = null;
+  ByteBuffer data = null;
   if (request.getWriteChunk().getStage() == Stage.WRITE_DATA ||
   request.getWriteChunk().getStage() == Stage.COMBINED) {
-data = request.getWriteChunk().getData().toByteArray();
+data = request.getWriteChunk().getData().asReadOnlyByteBuffer();
   }
 
   chunkManager.writeChunk(kvContainer, blockID, chunkInfo, data,
@@ -713,7 +714,7 @@ public class KeyValueHandler extends Handler {
   ChunkInfo chunkInfo = ChunkInfo.getFromProtoBuf(
   putSmallFileReq.getChunkInfo());
   Preconditions.checkNotNull(chunkInfo);
-  byte[] data = putSmallFileReq.getData().toByteArray();
+  ByteBuffer data = putSmallFileReq.getData().asReadOnlyByteBuffer();
   // chunks will be committed as a part of handling putSmallFile
   // here. There is no need to maintain this info in openContainerBlockMap.
   chunkManager.writeChunk(
@@ -724,7 +725,7 @@ public class KeyValueHandler extends Handler {
   blockData.setChunks(chunks);
   // TODO: add bcsId as a part of putSmallFile transaction
   blockManager.putBlock(kvContainer, blockData);
-  metrics.incContainerBytesStats(Type.PutSmallFile, data.length);
+  metrics.incContainerBytesStats(Type.PutSmallFile, data.capacity());
 
 } catch (StorageContainerException ex) {
   return ContainerUtils.logAndReturnError(LOG, ex, request);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/942693bd/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
index 20598d9..718f5de 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/ChunkUtils.java
+++ 
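
On the ChunkUtils side, a ByteBuffer payload can be handed directly to a FileChannel, with remaining() driving the write loop instead of an array length. A hedged sketch of that pattern; the file name, class name, and loop shape are illustrative, not the exact Hadoop code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ChunkWriteDemo {
  public static void main(String[] args) throws IOException {
    // A read-only view, as produced by ByteString.asReadOnlyByteBuffer().
    ByteBuffer data =
        ByteBuffer.wrap("chunk-data".getBytes(StandardCharsets.UTF_8))
            .asReadOnlyBuffer();

    Path chunkFile = Paths.get("demo.chunk");
    try (FileChannel channel = FileChannel.open(chunkFile,
        StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
      while (data.hasRemaining()) { // a single write() may be partial
        channel.write(data);
      }
    }
    System.out.println("wrote " + Files.size(chunkFile) + " bytes");
  }
}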

hadoop git commit: HDDS-794. Add configs to set StateMachineData write timeout in ContainerStateMachine. Contributed by Shashikant Banerjee.

2018-11-05 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/ozone-0.3 db90350c9 -> 53d4aefae


HDDS-794. Add configs to set StateMachineData write timeout in 
ContainerStateMachine. Contributed by Shashikant Banerjee.

(cherry picked from commit 408f59caa9321be8a55afe44b1811c5dacf23206)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/53d4aefa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/53d4aefa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/53d4aefa

Branch: refs/heads/ozone-0.3
Commit: 53d4aefae8490acbd3e64dd791ffbe17afaf91c4
Parents: db90350
Author: Arpit Agarwal 
Authored: Mon Nov 5 10:10:10 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 10:10:17 2018 -0800

--
 .../org/apache/hadoop/hdds/scm/ScmConfigKeys.java |  6 ++
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java  |  9 +
 .../common/src/main/resources/ozone-default.xml   |  7 +++
 .../server/ratis/ContainerStateMachine.java   | 18 --
 .../server/ratis/XceiverServerRatis.java  | 14 ++
 .../container/keyvalue/helpers/ChunkUtils.java|  2 ++
 .../container/keyvalue/impl/ChunkManagerImpl.java |  3 ++-
 7 files changed, 56 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/53d4aefa/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index f95b748..11e6a23 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -74,6 +74,12 @@ public final class ScmConfigKeys {
   "dfs.container.ratis.segment.preallocated.size";
   public static final int
  DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT = 128 * 1024 * 1024;
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  "dfs.container.ratis.statemachinedata.sync.timeout";
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  TimeDuration.valueOf(10, TimeUnit.SECONDS);
   public static final String DFS_RATIS_CLIENT_REQUEST_TIMEOUT_DURATION_KEY =
   "dfs.ratis.client.request.timeout.duration";
   public static final TimeDuration

http://git-wip-us.apache.org/repos/asf/hadoop/blob/53d4aefa/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
--
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index c931dcf..5e9fe08 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -232,6 +232,15 @@ public final class OzoneConfigKeys {
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_KEY;
   public static final int DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT
   = ScmConfigKeys.DFS_CONTAINER_RATIS_SEGMENT_PREALLOCATED_SIZE_DEFAULT;
+
+  // config settings to enable stateMachineData write timeout
+  public static final String
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT;
+  public static final TimeDuration
+  DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT =
+  ScmConfigKeys.DFS_CONTAINER_RATIS_STATEMACHINEDATA_SYNC_TIMEOUT_DEFAULT;
+
   public static final int DFS_CONTAINER_CHUNK_MAX_SIZE
   = ScmConfigKeys.OZONE_SCM_CHUNK_MAX_SIZE;
   public static final String DFS_CONTAINER_RATIS_DATANODE_STORAGE_DIR =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/53d4aefa/hadoop-hdds/common/src/main/resources/ozone-default.xml
--
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 237f8d8..2e250fa 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -53,6 +53,13 @@
     </description>
   </property>
   <property>
+    <name>dfs.container.ratis.statemachinedata.sync.timeout</name>
+    <value>10s</value>
+    <tag>OZONE, DEBUG, CONTAINER, RATIS</tag>
+    <description>Timeout for StateMachine data writes by Ratis.
+    </description>
+  </property>
+  <property>
     <name>dfs.container.ratis.datanode.storage.dir</name>
     <value/>
     <tag>OZONE, CONTAINER, STORAGE, MANAGEMENT, RATIS</tag>


hadoop git commit: HDDS-797. If DN is started before SCM, it does not register. Contributed by Hanisha Koneru.

2018-11-05 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/ozone-0.3 732df81a6 -> db90350c9


HDDS-797. If DN is started before SCM, it does not register. Contributed by 
Hanisha Koneru.

(cherry picked from commit c8ca1747c08d905cdefaa5566dd58d770a6b71bd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db90350c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db90350c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db90350c

Branch: refs/heads/ozone-0.3
Commit: db90350c97cfe1f3cffb2f1e6df53e353e1c25af
Parents: 732df81
Author: Arpit Agarwal 
Authored: Mon Nov 5 09:40:00 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 10:07:43 2018 -0800

--
 .../states/endpoint/VersionEndpointTask.java| 79 +++-
 .../hadoop/ozone/TestMiniOzoneCluster.java  | 52 -
 2 files changed, 94 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db90350c/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
index 79fa174..2d00da8 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
@@ -64,50 +64,57 @@ public class VersionEndpointTask implements
   public EndpointStateMachine.EndPointStates call() throws Exception {
 rpcEndPoint.lock();
 try{
-  SCMVersionResponseProto versionResponse =
-  rpcEndPoint.getEndPoint().getVersion(null);
-  VersionResponse response = VersionResponse.getFromProtobuf(
-  versionResponse);
-  rpcEndPoint.setVersion(response);
+  if (rpcEndPoint.getState().equals(
+  EndpointStateMachine.EndPointStates.GETVERSION)) {
+SCMVersionResponseProto versionResponse =
+rpcEndPoint.getEndPoint().getVersion(null);
+VersionResponse response = VersionResponse.getFromProtobuf(
+versionResponse);
+rpcEndPoint.setVersion(response);
 
-  String scmId = response.getValue(OzoneConsts.SCM_ID);
-  String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
+String scmId = response.getValue(OzoneConsts.SCM_ID);
+String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
 
-  // Check volumes
-  VolumeSet volumeSet = ozoneContainer.getVolumeSet();
-  volumeSet.writeLock();
-  try {
-Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
+// Check volumes
+VolumeSet volumeSet = ozoneContainer.getVolumeSet();
+volumeSet.writeLock();
+try {
+  Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
 
-Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
-"null");
-Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
-"cannot be null");
+  Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
+  "null");
+  Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
+  "cannot be null");
 
-// If version file does not exist create version file and also set scmId
-for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
-  HddsVolume hddsVolume = entry.getValue();
-  boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
-  clusterId, LOG);
-  if (!result) {
-volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+  // If version file does not exist create version file and also set scmId
+
+  for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
+HddsVolume hddsVolume = entry.getValue();
+boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
+clusterId, LOG);
+if (!result) {
+  volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+}
   }
+  if (volumeSet.getVolumesList().size() == 0) {
+// All volumes are in inconsistent state
+throw new DiskOutOfSpaceException("All configured Volumes are in " +
+"Inconsistent State");
+  }
+} finally {
+  volumeSet.writeUnlock();
 }
-if (volumeSet.getVolumesList().size() == 0) {
-  // All volumes are in inconsistent 
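
The essence of this fix is that the version handshake is now guarded by the endpoint's state, so re-running the task after SCM finally comes up does not repeat work that already succeeded. A simplified sketch of the pattern; the class and enum below are invented stand-ins for the Hadoop types, not the real VersionEndpointTask:

public class GuardedTask {
  enum State { GETVERSION, REGISTER, HEARTBEAT }

  private State state = State.GETVERSION;

  public State call() {
    if (state == State.GETVERSION) {
      // ... fetch version, validate scmId/clusterId, check volumes ...
      state = State.REGISTER; // advance only after the work succeeds
    }
    // Re-invocations in any later state fall through harmlessly.
    return state;
  }

  public static void main(String[] args) {
    GuardedTask task = new GuardedTask();
    System.out.println(task.call()); // REGISTER: work performed once
    System.out.println(task.call()); // REGISTER: retry is a no-op
  }
}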

hadoop git commit: YARN-8687. Update YARN service file type in documentation. Contributed by Suma Shivaprasad

2018-11-05 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2.0 b990d5943 -> ded047b06


YARN-8687. Update YARN service file type in documentation.
   Contributed by Suma Shivaprasad

(cherry picked from commit ba7e81667ce12d5cf9d87ee18a8627323759cee0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ded047b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ded047b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ded047b0

Branch: refs/heads/branch-3.2.0
Commit: ded047b06faf917e39a5459de62865a51d2809a5
Parents: b990d59
Author: Eric Yang 
Authored: Thu Oct 18 12:02:10 2018 -0400
Committer: Sunil G 
Committed: Mon Nov 5 23:26:18 2018 +0530

--
 .../hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md| 2 +-
 .../src/site/markdown/yarn-service/YarnServiceAPI.md   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ded047b0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md
index 73e00b3..da7a9c4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Examples.md
@@ -48,7 +48,7 @@ Note this example requires registry DNS.
   "configuration": {
 "files": [
   {
-"type": "ENV",
+"type": "TEMPLATE",
 "dest_file": "/var/www/html/index.html",
 "properties": {
   "content": 
"TitleHello from 
${COMPONENT_INSTANCE_NAME}!"

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ded047b0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
index 7b1e74a..fe49158 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/YarnServiceAPI.md
@@ -252,7 +252,7 @@ A config file that needs to be created and made available as a volume in a servi

 |Name|Description|Required|Schema|Default|
 |----|----|----|----|----|
-|type|Config file in the standard format like xml, properties, json, yaml, template or static/archive resource files. When static/archive types are specified, file must be uploaded to remote file system before launching the job, and YARN service framework will localize files prior to launching containers. Archive files are unwrapped during localization |false|enum (XML, PROPERTIES, JSON, YAML, TEMPLATE, ENV, HADOOP_XML, STATIC, ARCHIVE)||
+|type|Config file in the standard format like xml, properties, json, yaml, template or static/archive resource files. When static/archive types are specified, file must be uploaded to remote file system before launching the job, and YARN service framework will localize files prior to launching containers. Archive files are unwrapped during localization |false|enum (XML, PROPERTIES, JSON, YAML, TEMPLATE, HADOOP_XML, STATIC, ARCHIVE)||
 |dest_file|The path that this configuration file should be created as. If it is an absolute path, it will be mounted into the DOCKER container. Absolute paths are only allowed for DOCKER containers.  If it is a relative path, only the file name should be provided, and the file will be created in the container local working directory under a folder named conf for all types other than static/archive. For static/archive resource types, the files are available under resources directory.|false|string||
 |src_file|This provides the source location of the configuration file, the content of which is dumped to dest_file post property substitutions, in the format as specified in type. Typically the src_file would point to a source controlled network accessible file maintained by tools like puppet, chef, or hdfs etc. Currently, only hdfs is supported.|false|string||
 |properties|A blob of key value pairs that will be dumped in the dest_file in the format as specified in type. If src_file is specified, src_file content are dumped in the dest_file and these properties will overwrite, if any, existing properties in src_file or be added as new

hadoop git commit: YARN-8938. Updated YARN service upgrade document. Contributed by Chandni Singh

2018-11-05 Thread sunilg
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2.0 83e5ebd94 -> b990d5943


YARN-8938.  Updated YARN service upgrade document.
Contributed by Chandni Singh

(cherry picked from commit bbc6dcd3d0976932a49d8650804fb0a4018b3a02)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b990d594
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b990d594
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b990d594

Branch: refs/heads/branch-3.2.0
Commit: b990d5943acf608a3236d2167eafcac769f29da7
Parents: 83e5ebd
Author: Eric Yang 
Authored: Wed Oct 24 11:50:09 2018 -0400
Committer: Sunil G 
Committed: Mon Nov 5 23:16:36 2018 +0530

--
 .../markdown/yarn-service/ServiceUpgrade.md | 38 ++--
 1 file changed, 36 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b990d594/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceUpgrade.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceUpgrade.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceUpgrade.md
index 839be22..559d9cd 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceUpgrade.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceUpgrade.md
@@ -47,8 +47,21 @@ A service can be auto-finalized when the upgrade is initialized with
 `-autoFinalize` option. With auto-finalization, when all the component-instances of
 the service have been upgraded, finalization will be performed automatically by the
 service framework.\
-\
-**NOTE**: Cancel of upgrade is not implemented yet.
+
+Hadoop 3.2.0 onwards canceling upgrade and express upgrade is also supported.
+
+1. Cancel upgrade.\
+Before the upgrade of the service is finalized, the user has an option to cancel
+the upgrade. This step resolves the dependencies between the components and then
+sequentially rolls back each component which was upgraded.
+
+2. Express upgrade.\
+This is a one-step process to upgrade all the components of the service. It involves
+providing the service spec of the newer version of the service. The service master
+then performs the following steps automatically:\
+a. Discovers all the components that require an upgrade.\
+b. Resolve dependencies between these components.\
+c. Triggers upgrade of the components sequentially.
 
 ## Upgrade Example
 This example shows upgrade of sleeper service. Below is the sleeper service
@@ -195,3 +208,24 @@ e.g. The command below finalizes the upgrade of `my-sleeper`:
 ```
 yarn app -upgrade my-sleeper -finalize
 ```
+
+### Cancel Upgrade
+User can cancel an upgrade before it is finalized using the below command:
+```
+yarn app -upgrade ${service_name} -cancel
+```
+e.g. Before the upgrade is finalized, the command below cancels the upgrade of
+`my-sleeper`:
+```
+yarn app -upgrade my-sleeper -cancel
+```
+
+### Express Upgrade
+User can upgrade a service in one step using the below command:
+```
+yarn app -upgrade ${service_name} -express ${path_to_new_service_def_file}
+```
+e.g. The command below express upgrades `my-sleeper`:
+```
+yarn app -upgrade my-sleeper -express sleeper_v101.json
+```
\ No newline at end of file


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-797. If DN is started before SCM, it does not register. Contributed by Hanisha Koneru.

2018-11-05 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/trunk 15df2e7a7 -> c8ca1747c


HDDS-797. If DN is started before SCM, it does not register. Contributed by 
Hanisha Koneru.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c8ca1747
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c8ca1747
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c8ca1747

Branch: refs/heads/trunk
Commit: c8ca1747c08d905cdefaa5566dd58d770a6b71bd
Parents: 15df2e7
Author: Arpit Agarwal 
Authored: Mon Nov 5 09:40:00 2018 -0800
Committer: Arpit Agarwal 
Committed: Mon Nov 5 09:40:00 2018 -0800

--
 .../states/endpoint/VersionEndpointTask.java| 79 +++-
 .../hadoop/ozone/TestMiniOzoneCluster.java  | 52 -
 2 files changed, 94 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8ca1747/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
index 79fa174..2d00da8 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/states/endpoint/VersionEndpointTask.java
@@ -64,50 +64,57 @@ public class VersionEndpointTask implements
   public EndpointStateMachine.EndPointStates call() throws Exception {
 rpcEndPoint.lock();
 try{
-  SCMVersionResponseProto versionResponse =
-  rpcEndPoint.getEndPoint().getVersion(null);
-  VersionResponse response = VersionResponse.getFromProtobuf(
-  versionResponse);
-  rpcEndPoint.setVersion(response);
+  if (rpcEndPoint.getState().equals(
+  EndpointStateMachine.EndPointStates.GETVERSION)) {
+SCMVersionResponseProto versionResponse =
+rpcEndPoint.getEndPoint().getVersion(null);
+VersionResponse response = VersionResponse.getFromProtobuf(
+versionResponse);
+rpcEndPoint.setVersion(response);
 
-  String scmId = response.getValue(OzoneConsts.SCM_ID);
-  String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
+String scmId = response.getValue(OzoneConsts.SCM_ID);
+String clusterId = response.getValue(OzoneConsts.CLUSTER_ID);
 
-  // Check volumes
-  VolumeSet volumeSet = ozoneContainer.getVolumeSet();
-  volumeSet.writeLock();
-  try {
-Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
+// Check volumes
+VolumeSet volumeSet = ozoneContainer.getVolumeSet();
+volumeSet.writeLock();
+try {
+  Map<String, HddsVolume> volumeMap = volumeSet.getVolumeMap();
 
-Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
-"null");
-Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
-"cannot be null");
+  Preconditions.checkNotNull(scmId, "Reply from SCM: scmId cannot be " +
+  "null");
+  Preconditions.checkNotNull(clusterId, "Reply from SCM: clusterId " +
+  "cannot be null");
 
-// If version file does not exist create version file and also set scmId
-for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
-  HddsVolume hddsVolume = entry.getValue();
-  boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
-  clusterId, LOG);
-  if (!result) {
-volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+  // If version file does not exist create version file and also set scmId
+
+  for (Map.Entry<String, HddsVolume> entry : volumeMap.entrySet()) {
+HddsVolume hddsVolume = entry.getValue();
+boolean result = HddsVolumeUtil.checkVolume(hddsVolume, scmId,
+clusterId, LOG);
+if (!result) {
+  volumeSet.failVolume(hddsVolume.getHddsRootDir().getPath());
+}
   }
+  if (volumeSet.getVolumesList().size() == 0) {
+// All volumes are in inconsistent state
+throw new DiskOutOfSpaceException("All configured Volumes are in " +
+"Inconsistent State");
+  }
+} finally {
+  volumeSet.writeUnlock();
 }
-if (volumeSet.getVolumesList().size() == 0) {
-  // All volumes are in inconsistent state
-  throw new DiskOutOfSpaceException("All configured Volumes are in 

hadoop git commit: HDDS-524. log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images. Contributed by Dinesh Chitlangia.

2018-11-05 Thread elek
Repository: hadoop
Updated Branches:
  refs/heads/docker-hadoop-2 5a998aa0f -> d3a7dc87b


HDDS-524. log4j is added with root to apache/hadoop:2 and apache/hadoop:3 
images. Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d3a7dc87
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d3a7dc87
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d3a7dc87

Branch: refs/heads/docker-hadoop-2
Commit: d3a7dc87beef255a9b738fc6bee6174628b1d2ca
Parents: 5a998aa
Author: Márton Elek 
Authored: Mon Nov 5 15:36:49 2018 +0100
Committer: Márton Elek 
Committed: Mon Nov 5 15:36:49 2018 +0100

--
 Dockerfile | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3a7dc87/Dockerfile
--
diff --git a/Dockerfile b/Dockerfile
index 7d3d755..282a2ca 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -19,3 +19,4 @@ WORKDIR /opt
 RUN sudo rm -rf /opt/hadoop && wget $HADOOP_URL -O hadoop.tar.gz && tar zxf hadoop.tar.gz && rm hadoop.tar.gz && mv hadoop* hadoop && rm -rf /opt/hadoop/share/doc
 WORKDIR /opt/hadoop
 ADD log4j.properties /opt/hadoop/etc/hadoop/log4j.properties
+RUN sudo chown -R hadoop:users /opt/hadoop/etc/hadoop/*


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-524. log4j is added with root to apache/hadoop:2 and apache/hadoop:3 images. Contributed by Dinesh Chitlangia.

2018-11-05 Thread elek
Repository: hadoop
Updated Branches:
  refs/heads/docker-hadoop-3 eb740e3e3 -> 1c9bd2475


HDDS-524. log4j is added with root to apache/hadoop:2 and apache/hadoop:3 
images. Contributed by Dinesh Chitlangia.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1c9bd247
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1c9bd247
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1c9bd247

Branch: refs/heads/docker-hadoop-3
Commit: 1c9bd24755cb575684be59b998799e5b99e9cbb0
Parents: eb740e3
Author: Márton Elek 
Authored: Mon Nov 5 15:34:06 2018 +0100
Committer: Márton Elek 
Committed: Mon Nov 5 15:34:06 2018 +0100

--
 Dockerfile | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c9bd247/Dockerfile
--
diff --git a/Dockerfile b/Dockerfile
index b4c56b8..044fe72 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -19,3 +19,4 @@ WORKDIR /opt
 RUN sudo rm -rf /opt/hadoop && wget $HADOOP_URL -O hadoop.tar.gz && tar zxf hadoop.tar.gz && rm hadoop.tar.gz && mv hadoop* hadoop && rm -rf /opt/hadoop/share/doc
 WORKDIR /opt/hadoop
 ADD log4j.properties /opt/hadoop/etc/hadoop/log4j.properties
+RUN sudo chown -R hadoop:users /opt/hadoop/etc/hadoop/*


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.

2018-11-05 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk d43cc5db0 -> 15df2e7a7


HDDS-796. Fix failed test TestStorageContainerManagerHttpServer#testHttpPolicy.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15df2e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15df2e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15df2e7a

Branch: refs/heads/trunk
Commit: 15df2e7a7547e12e884b624d9f17ad2799d9ccf9
Parents: d43cc5d
Author: Yiqun Lin 
Authored: Mon Nov 5 17:31:06 2018 +0800
Committer: Yiqun Lin 
Committed: Mon Nov 5 17:31:06 2018 +0800

--
 .../java/org/apache/hadoop/hdds/server/BaseHttpServer.java| 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15df2e7a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
--
diff --git 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
index 2726fc3..5e7d7b8 100644
--- 
a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
+++ 
b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
@@ -115,13 +115,10 @@ public abstract class BaseHttpServer {
 final Optional<Integer> addressPort =
 getPortNumberFromConfigKeys(conf, addressKey);
 
-final Optional<String> addresHost =
+final Optional<String> addressHost =
 getHostNameFromConfigKeys(conf, addressKey);
 
-String hostName = bindHost.orElse(addresHost.get());
-if (hostName == null || hostName.isEmpty()) {
-  hostName = bindHostDefault;
-}
+String hostName = bindHost.orElse(addressHost.orElse(bindHostDefault));
 
 return NetUtils.createSocketAddr(
 hostName + ":" + addressPort.orElse(bindPortdefault));


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9.2 e10a1886c -> 80971803f


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)

Conflicts:
LICENSE.txt
(cherry picked from commit 939827daa65b61431cdb66d0ac266b902b8734c4)
(cherry picked from commit 1f3899fa3b46543e49702f71ce7641463a893886)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/80971803
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/80971803
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/80971803

Branch: refs/heads/branch-2.9.2
Commit: 80971803f76f09152d524392e90540d111326ef5
Parents: e10a188
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 18:02:47 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/80971803/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 23c63be..5283f67 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1716,7 +1716,7 @@ any resulting litigation.
 The binary distribution of this product bundles these dependencies under the
 following license:
 ASM Core 3.2
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 bd79bfc3a -> 29d969eff


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)

Conflicts:
LICENSE.txt
(cherry picked from commit 939827daa65b61431cdb66d0ac266b902b8734c4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/29d969ef
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/29d969ef
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/29d969ef

Branch: refs/heads/branch-2.7
Commit: 29d969eff405f0af63aa14458dad89437d61cbea
Parents: bd79bfc
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 18:00:41 2018 +0900

--
 LICENSE.txt |  2 +-
 hadoop-common-project/hadoop-common/CHANGES.txt | 15 +++
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d969ef/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index bc7ef3d..73cf666 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1536,7 +1536,7 @@ any resulting litigation.
 The binary distribution of this product bundles these dependencies under the
 following license:
 ASM Core 3.2
-JSch 0.1.42
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8

http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d969ef/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8a11aee..b69a3cf 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1,5 +1,20 @@
 Hadoop Change Log
 
+Release 2.7.8 - UNRELEASED
+
+  INCOMPATIBLE CHANGES
+
+  NEW FEATURES
+
+  IMPROVEMENTS
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+
+HADOOP-15900. Update JSch versions in LICENSE.txt.
+Contributed by Akira Ajisaka.
+
 Release 2.7.7 - 2018-07-18
 
   INCOMPATIBLE CHANGES


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 2785be987 -> 3d76d4785


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)

Conflicts:
LICENSE.txt
(cherry picked from commit 939827daa65b61431cdb66d0ac266b902b8734c4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d76d478
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d76d478
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d76d478

Branch: refs/heads/branch-2.8
Commit: 3d76d4785b4f2ce787f6dace7c945ede134716ba
Parents: 2785be9
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:54:51 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d76d478/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 61ebbd6..a5036f6 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1566,7 +1566,7 @@ any resulting litigation.
 The binary distribution of this product bundles these dependencies under the
 following license:
 ASM Core 3.2
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 72ee45acf -> 939827daa


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)

Conflicts:
LICENSE.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/939827da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/939827da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/939827da

Branch: refs/heads/branch-2
Commit: 939827daa65b61431cdb66d0ac266b902b8734c4
Parents: 72ee45a
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:53:53 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/939827da/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 23c63be..5283f67 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1716,7 +1716,7 @@ any resulting litigation.
 The binary distribution of this product bundles these dependencies under the
 following license:
 ASM Core 3.2
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 9c64d4291 -> 1f3899fa3


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)

Conflicts:
LICENSE.txt
(cherry picked from commit 939827daa65b61431cdb66d0ac266b902b8734c4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1f3899fa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1f3899fa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1f3899fa

Branch: refs/heads/branch-2.9
Commit: 1f3899fa3b46543e49702f71ce7641463a893886
Parents: 9c64d42
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:54:30 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1f3899fa/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 23c63be..5283f67 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1716,7 +1716,7 @@ any resulting litigation.
 The binary distribution of this product bundles these dependencies under the
 following license:
 ASM Core 3.2
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.2 ea6b9d6c6 -> fcd221fed


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fcd221fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fcd221fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fcd221fe

Branch: refs/heads/branch-3.2
Commit: fcd221fed786c0f47fcc6208e0f784058128af20
Parents: ea6b9d6
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:52:04 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fcd221fe/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 393ed0e..ac9ba24 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1835,7 +1835,7 @@ any resulting litigation.
 
 The binary distribution of this product bundles these dependencies under the
 following license:
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 869b1cee6 -> aee665f2d


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aee665f2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aee665f2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aee665f2

Branch: refs/heads/branch-3.0
Commit: aee665f2db102e72a3c67733acf53cb101b5722f
Parents: 869b1ce
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:52:36 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aee665f2/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 54ad821..c13133d 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1767,7 +1767,7 @@ any resulting litigation.
 
 The binary distribution of this product bundles these dependencies under the
 following license:
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 73d8f8761 -> b732ed355


HADOOP-15900. Update JSch versions in LICENSE.txt.

(cherry picked from commit d43cc5db0ff80958ca873df1dc2fa00054e37175)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b732ed35
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b732ed35
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b732ed35

Branch: refs/heads/branch-3.1
Commit: b732ed3553c8ebca75d0da2c5732fb967ac74530
Parents: 73d8f87
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:52:21 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b732ed35/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 7a88148..7e05e90 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1767,7 +1767,7 @@ any resulting litigation.
 
 The binary distribution of this product bundles these dependencies under the
 following license:
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15900. Update JSch versions in LICENSE.txt.

2018-11-05 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk a5e21cd6a -> d43cc5db0


HADOOP-15900. Update JSch versions in LICENSE.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d43cc5db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d43cc5db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d43cc5db

Branch: refs/heads/trunk
Commit: d43cc5db0ff80958ca873df1dc2fa00054e37175
Parents: a5e21cd
Author: Akira Ajisaka 
Authored: Mon Nov 5 17:51:16 2018 +0900
Committer: Akira Ajisaka 
Committed: Mon Nov 5 17:51:26 2018 +0900

--
 LICENSE.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d43cc5db/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 1a97528..81eb32f 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -1836,7 +1836,7 @@ any resulting litigation.
 
 The binary distribution of this product bundles these dependencies under the
 following license:
-JSch 0.1.51
+JSch 0.1.54
 ParaNamer Core 2.3
 JLine 0.9.94
 leveldbjni-all 1.8


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org