[hadoop] branch trunk updated: YARN-10331. Upgrade node.js to 10.21.0. (#2106)

2020-06-30 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cd188ea  YARN-10331. Upgrade node.js to 10.21.0. (#2106)
cd188ea is described below

commit cd188ea9f0e807df1e2cc13f62be3e4c956b1e69
Author: Akira Ajisaka 
AuthorDate: Tue Jun 30 16:52:57 2020 +0900

YARN-10331. Upgrade node.js to 10.21.0. (#2106)
---
 dev-support/docker/Dockerfile  | 6 +++---
 dev-support/docker/Dockerfile_aarch64  | 6 +++---
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 818d394..fd2d293 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -123,10 +123,10 @@ RUN pip2 install \
 RUN pip2 install python-dateutil==2.7.3
 
 ###
-# Install node.js 8.17.0 for web UI framework (4.2.6 ships with Xenial)
+# Install node.js 10.21.0 for web UI framework (4.2.6 ships with Xenial)
 ###
-RUN curl -L -s -S https://deb.nodesource.com/setup_8.x | bash - \
-&& apt-get install -y --no-install-recommends nodejs=8.17.0-1nodesource1 \
+RUN curl -L -s -S https://deb.nodesource.com/setup_10.x | bash - \
+&& apt-get install -y --no-install-recommends nodejs=10.21.0-1nodesource1 \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* \
 && npm install -g bower@1.8.8
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index d0cfa5a..5628c60 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -184,10 +184,10 @@ RUN pip2 install \
 RUN pip2 install python-dateutil==2.7.3
 
 ###
-# Install node.js 8.17.0 for web UI framework (4.2.6 ships with Xenial)
+# Install node.js 10.21.0 for web UI framework (4.2.6 ships with Xenial)
 ###
-RUN curl -L -s -S https://deb.nodesource.com/setup_8.x | bash - \
-&& apt-get install -y --no-install-recommends nodejs=8.17.0-1nodesource1 \
+RUN curl -L -s -S https://deb.nodesource.com/setup_10.x | bash - \
+&& apt-get install -y --no-install-recommends nodejs=10.21.0-1nodesource1 \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* \
 && npm install -g bower@1.8.8
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 58b4b3d..f3bce21 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -184,7 +184,7 @@
   <goal>install-node-and-yarn</goal>
 
 
-  <nodeVersion>v8.17.0</nodeVersion>
+  <nodeVersion>v10.21.0</nodeVersion>
   <yarnVersion>v1.21.1</yarnVersion>
 
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.3 updated: YARN-10331. Upgrade node.js to 10.21.0. (#2106)

2020-06-30 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new aa283fc  YARN-10331. Upgrade node.js to 10.21.0. (#2106)
aa283fc is described below

commit aa283fc2c2cedcd4680a402373effd51fbb753c3
Author: Akira Ajisaka 
AuthorDate: Tue Jun 30 16:52:57 2020 +0900

YARN-10331. Upgrade node.js to 10.21.0. (#2106)

(cherry picked from commit cd188ea9f0e807df1e2cc13f62be3e4c956b1e69)
---
 dev-support/docker/Dockerfile  | 6 +++---
 dev-support/docker/Dockerfile_aarch64  | 6 +++---
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index e16fed1..5aa0b65 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -176,10 +176,10 @@ RUN pip2 install \
 RUN pip2 install python-dateutil==2.7.3
 
 ###
-# Install node.js 8.17.0 for web UI framework (4.2.6 ships with Xenial)
+# Install node.js 10.21.0 for web UI framework (4.2.6 ships with Xenial)
 ###
-RUN curl -L -s -S https://deb.nodesource.com/setup_8.x | bash - \
-&& apt-get install -y --no-install-recommends nodejs=8.17.0-1nodesource1 \
+RUN curl -L -s -S https://deb.nodesource.com/setup_10.x | bash - \
+&& apt-get install -y --no-install-recommends nodejs=10.21.0-1nodesource1 \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* \
 && npm install -g bower@1.8.8
diff --git a/dev-support/docker/Dockerfile_aarch64 
b/dev-support/docker/Dockerfile_aarch64
index d0cfa5a..5628c60 100644
--- a/dev-support/docker/Dockerfile_aarch64
+++ b/dev-support/docker/Dockerfile_aarch64
@@ -184,10 +184,10 @@ RUN pip2 install \
 RUN pip2 install python-dateutil==2.7.3
 
 ###
-# Install node.js 8.17.0 for web UI framework (4.2.6 ships with Xenial)
+# Install node.js 10.21.0 for web UI framework (4.2.6 ships with Xenial)
 ###
-RUN curl -L -s -S https://deb.nodesource.com/setup_8.x | bash - \
-&& apt-get install -y --no-install-recommends nodejs=8.17.0-1nodesource1 \
+RUN curl -L -s -S https://deb.nodesource.com/setup_10.x | bash - \
+&& apt-get install -y --no-install-recommends nodejs=10.21.0-1nodesource1 \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* \
 && npm install -g bower@1.8.8
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index d38be5d..accd7c8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -184,7 +184,7 @@
   <goal>install-node-and-yarn</goal>
 
 
-  <nodeVersion>v8.17.0</nodeVersion>
+  <nodeVersion>v10.21.0</nodeVersion>
   <yarnVersion>v1.21.1</yarnVersion>
 
   





[hadoop] branch branch-3.3 updated: YARN-10277. CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy. Contributed by Szilard Nemeth

2020-06-30 Thread snemeth
This is an automated email from the ASF dual-hosted git repository.

snemeth pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8b48274  YARN-10277. CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy. Contributed by Szilard Nemeth
8b48274 is described below

commit 8b482744e999084ce6f90bedd6407ae2a69d302a
Author: Szilard Nemeth 
AuthorDate: Tue Jun 30 11:32:59 2020 +0200

YARN-10277. CapacityScheduler test TestUserGroupMappingPlacementRule should build proper hierarchy. Contributed by Szilard Nemeth
---
 .../TestUserGroupMappingPlacementRule.java | 204 +++--
 1 file changed, 151 insertions(+), 53 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestUserGroupMappingPlacementRule.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestUserGroupMappingPlacementRule.java
index 9cd7ae0..e436b6e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestUserGroupMappingPlacementRule.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestUserGroupMappingPlacementRule.java
@@ -24,7 +24,11 @@ import static org.mockito.Mockito.when;
 import static org.mockito.Mockito.isNull;
 
 import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
 
+import com.google.common.collect.Maps;
+import org.apache.commons.compress.utils.Lists;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.security.GroupMappingServiceProvider;
 import org.apache.hadoop.security.Groups;
@@ -35,6 +39,7 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.server.resourcemanager.placement.QueueMapping.MappingType;
 import org.apache.hadoop.yarn.server.resourcemanager.placement.QueueMapping.QueueMappingBuilder;
 import org.apache.hadoop.yarn.server.resourcemanager.placement.TestUserGroupMappingPlacementRule.QueueMappingTestData.QueueMappingTestDataBuilder;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractCSQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ManagedParentQueue;
@@ -45,8 +50,147 @@ import org.apache.hadoop.yarn.util.Records;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TestUserGroupMappingPlacementRule {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestUserGroupMappingPlacementRule.class);
+
+  private static class MockQueueHierarchyBuilder {
+private static final String ROOT = "root";
+private static final String QUEUE_SEP = ".";
+private List<String> queuePaths = Lists.newArrayList();
+private List<String> managedParentQueues = Lists.newArrayList();
+private CapacitySchedulerQueueManager queueManager;
+
+public static MockQueueHierarchyBuilder create() {
+  return new MockQueueHierarchyBuilder();
+}
+
+public MockQueueHierarchyBuilder withQueueManager(
+CapacitySchedulerQueueManager queueManager) {
+  this.queueManager = queueManager;
+  return this;
+}
+
+public MockQueueHierarchyBuilder withQueue(String queue) {
+  this.queuePaths.add(queue);
+  return this;
+}
+
+public MockQueueHierarchyBuilder withManagedParentQueue(
+String managedQueue) {
+  this.managedParentQueues.add(managedQueue);
+  return this;
+}
+
+public void build() {
+  if (this.queueManager == null) {
+throw new IllegalStateException(
+"QueueManager instance is not provided!");
+  }
+
+  for (String managedParentQueue : managedParentQueues) {
+if (!queuePaths.contains(managedParentQueue)) {
+  queuePaths.add(managedParentQueue);
+} else {
+  throw new IllegalStateException("Cannot add a managed parent " +
+  "and a simple queue with the same path");
+}
+  }
+
+  Map<String, AbstractCSQueue> queues = Maps.newHashMap();
+  for (String queuePath : queuePaths) {
+LOG.info("Processing queue path: " + queuePath);
+addQueues(queues, queuePath);
+  }
+}
+
+private void addQueues(Map<String, AbstractCSQueue> queues,
+String queuePath) {
+  final String[] pathComponents = 

[hadoop] branch branch-3.3 updated: HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)

2020-06-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 7de1ac0  HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)
7de1ac0 is described below

commit 7de1ac054741dbc3827a64a74d89fa32b3511872
Author: Steve Loughran 
AuthorDate: Tue Jun 30 10:44:51 2020 +0100

HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)

Contributed by Steve Loughran.

Fixes a condition which can cause job commit to fail if a task was
aborted < 60s before the job commit commenced: the task abort
will shut down the thread pool with a hard exit after 60s; the
job commit POST requests would be scheduled through the same pool,
so be interrupted and fail. At present the access is synchronized,
but presumably the executor shutdown code is calling wait() and releasing
locks.

Task abort is triggered from the AM when task attempts succeed but
there are still active speculative task attempts running. Thus it
only surfaces when speculation is enabled and the final tasks are
speculating, which, given they are the stragglers, is not unheard of.

Note: this problem has never been seen in production; it has surfaced
in the hadoop-aws tests on a heavily overloaded desktop

Change-Id: I3b433356d01fcc50d88b4353dbca018484984bc8
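The failure mode described in the commit message can be sketched in a few lines of plain Java (a minimal, hypothetical illustration, not Hadoop's actual committer code; the class and method names here are invented for the sketch): once the "task abort" path hard-stops the shared pool, work the "job commit" path tries to schedule through that same pool is rejected.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class SharedPoolShutdown {
    // Hypothetical sketch: "task abort" and "job commit" share one pool.
    static String commitAfterAbort() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Task-abort path: run its cleanup, then hard-stop the shared pool.
        pool.submit(() -> { }).get();
        pool.shutdownNow(); // interrupts workers, rejects any new submissions
        // Job-commit path: requests scheduled through the same pool now fail.
        try {
            pool.submit(() -> { });
            return "committed";
        } catch (RejectedExecutionException e) {
            return "rejected";
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("job commit outcome: " + commitAfterAbort());
    }
}
```

The fix in the diff below avoids this by routing commit work through its own submitter (`buildSubmitter()` / `singleThreadSubmitter()`) rather than the pool that task abort may tear down.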
---
 .../hadoop/fs/s3a/commit/AbstractS3ACommitter.java | 129 -
 .../org/apache/hadoop/fs/s3a/commit/Tasks.java |  23 +++-
 .../staging/PartitionedStagingCommitter.java   |   7 +-
 .../fs/s3a/commit/staging/StagingCommitter.java|   2 +-
 .../org/apache/hadoop/fs/s3a/commit/TestTasks.java |  21 +++-
 5 files changed, 142 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
index e82fbda..32d00a4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
@@ -24,6 +24,7 @@ import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
@@ -472,7 +473,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pending.getSourceFiles())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(buildThreadPool(context))
+  .executeWith(buildSubmitter(context))
   .abortWith(path ->
   loadAndAbort(commitContext, pending, path, true, false))
   .revertWith(path ->
@@ -502,7 +503,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pending.getSourceFiles())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(buildThreadPool(context))
+  .executeWith(buildSubmitter(context))
   .run(path -> PendingSet.load(sourceFS, path));
 }
   }
@@ -525,7 +526,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pendingSet.getCommits())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(singleCommitThreadPool())
+  .executeWith(singleThreadSubmitter())
   .onFailure((commit, exception) ->
   commitContext.abortSingleCommit(commit))
   .abortWith(commitContext::abortSingleCommit)
@@ -580,7 +581,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   path);
   FileSystem fs = getDestFS();
   Tasks.foreach(pendingSet.getCommits())
-  .executeWith(singleCommitThreadPool())
+  .executeWith(singleThreadSubmitter())
   .suppressExceptions(suppressExceptions)
   .run(commit -> {
 try {
@@ -674,7 +675,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
 return;
   }
   Tasks.foreach(pending)
-  .executeWith(buildThreadPool(getJobContext()))
+  .executeWith(buildSubmitter(getJobContext()))
   .suppressExceptions(suppressExceptions)
   .run(u -> commitContext.abortMultipartCommit(
   u.getKey(), u.getUploadId()));
@@ -838,44 +839,116 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   }
 
   /**
-   * Returns an {@link ExecutorService} for parallel tasks. The number of
+   * Returns an {@link Tasks.Submitter} for parallel tasks. The number of
* threads in the thread-pool is set 

[hadoop] branch trunk updated: HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)

2020-06-30 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4249c04  HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)
4249c04 is described below

commit 4249c04d454ca82aadeed152ab777e93474754ab
Author: Steve Loughran 
AuthorDate: Tue Jun 30 10:44:51 2020 +0100

HADOOP-16798. S3A Committer thread pool shutdown problems. (#1963)



Contributed by Steve Loughran.

Fixes a condition which can cause job commit to fail if a task was
aborted < 60s before the job commit commenced: the task abort
will shut down the thread pool with a hard exit after 60s; the
job commit POST requests would be scheduled through the same pool,
so be interrupted and fail. At present the access is synchronized,
but presumably the executor shutdown code is calling wait() and releasing
locks.

Task abort is triggered from the AM when task attempts succeed but
there are still active speculative task attempts running. Thus it
only surfaces when speculation is enabled and the final tasks are
speculating, which, given they are the stragglers, is not unheard of.

Note: this problem has never been seen in production; it has surfaced
in the hadoop-aws tests on a heavily overloaded desktop
---
 .../hadoop/fs/s3a/commit/AbstractS3ACommitter.java | 129 -
 .../org/apache/hadoop/fs/s3a/commit/Tasks.java |  23 +++-
 .../staging/PartitionedStagingCommitter.java   |   7 +-
 .../fs/s3a/commit/staging/StagingCommitter.java|   2 +-
 .../org/apache/hadoop/fs/s3a/commit/TestTasks.java |  21 +++-
 5 files changed, 142 insertions(+), 40 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
index e82fbda..32d00a4 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java
@@ -24,6 +24,7 @@ import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
@@ -472,7 +473,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pending.getSourceFiles())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(buildThreadPool(context))
+  .executeWith(buildSubmitter(context))
   .abortWith(path ->
   loadAndAbort(commitContext, pending, path, true, false))
   .revertWith(path ->
@@ -502,7 +503,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pending.getSourceFiles())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(buildThreadPool(context))
+  .executeWith(buildSubmitter(context))
   .run(path -> PendingSet.load(sourceFS, path));
 }
   }
@@ -525,7 +526,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   Tasks.foreach(pendingSet.getCommits())
   .stopOnFailure()
   .suppressExceptions(false)
-  .executeWith(singleCommitThreadPool())
+  .executeWith(singleThreadSubmitter())
   .onFailure((commit, exception) ->
   commitContext.abortSingleCommit(commit))
   .abortWith(commitContext::abortSingleCommit)
@@ -580,7 +581,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   path);
   FileSystem fs = getDestFS();
   Tasks.foreach(pendingSet.getCommits())
-  .executeWith(singleCommitThreadPool())
+  .executeWith(singleThreadSubmitter())
   .suppressExceptions(suppressExceptions)
   .run(commit -> {
 try {
@@ -674,7 +675,7 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
 return;
   }
   Tasks.foreach(pending)
-  .executeWith(buildThreadPool(getJobContext()))
+  .executeWith(buildSubmitter(getJobContext()))
   .suppressExceptions(suppressExceptions)
   .run(u -> commitContext.abortMultipartCommit(
   u.getKey(), u.getUploadId()));
@@ -838,44 +839,116 @@ public abstract class AbstractS3ACommitter extends PathOutputCommitter {
   }
 
   /**
-   * Returns an {@link ExecutorService} for parallel tasks. The number of
+   * Returns an {@link Tasks.Submitter} for parallel tasks. The number of
* threads in the thread-pool is set by fs.s3a.committer.threads.
* If num-threads is 0, this 

[hadoop] branch trunk updated: HDFS-15160. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock. Contributed by Stephen O'Donnell.

2020-06-30 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2a67e2b  HDFS-15160. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock. Contributed by Stephen O'Donnell.
2a67e2b is described below

commit 2a67e2b1a0e3a5f91056f5b977ef9c4c07ba6718
Author: Stephen O'Donnell 
AuthorDate: Tue Jun 30 07:09:26 2020 -0700

HDFS-15160. ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock. Contributed by Stephen O'Donnell.

Signed-off-by: Wei-Chiu Chuang 
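The locking pattern this change moves to can be sketched as follows (a simplified, hypothetical stand-in for Hadoop's `AutoCloseableLock` wrapper and `FsDatasetImpl`, not the actual classes): read-only paths such as `BlockSender` and `DirectoryScanner` take a shared read lock and no longer serialize behind one another, while mutating paths still take the exclusive write lock.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DatasetLockSketch {
    // Simplified stand-in for Hadoop's AutoCloseableLock (illustration only).
    static class AutoCloseableLock implements AutoCloseable {
        private final Lock lock;
        AutoCloseableLock(Lock lock) {
            this.lock = lock;
            lock.lock();
        }
        @Override public void close() { lock.unlock(); }
    }

    // Fair mode mirrors the dfs.datanode.lock.fair default.
    private final ReentrantReadWriteLock datasetLock =
        new ReentrantReadWriteLock(true);

    AutoCloseableLock acquireDatasetReadLock() {
        return new AutoCloseableLock(datasetLock.readLock());
    }

    AutoCloseableLock acquireDatasetWriteLock() {
        return new AutoCloseableLock(datasetLock.writeLock());
    }

    public static void main(String[] args) {
        DatasetLockSketch data = new DatasetLockSketch();
        // Read-only callers (e.g. reading a replica's visible length) share the lock.
        try (AutoCloseableLock l = data.acquireDatasetReadLock()) {
            System.out.println("read path holds shared lock");
        }
        // Mutations still take the exclusive write lock.
        try (AutoCloseableLock l = data.acquireDatasetWriteLock()) {
            System.out.println("write path holds exclusive lock");
        }
    }
}
```

The try-with-resources shape matches the call sites in the diff below, which only swap `acquireDatasetLock()` for `acquireDatasetReadLock()` on read-only paths.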
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |   4 +
 .../hadoop/hdfs/server/datanode/BlockSender.java   |   2 +-
 .../hadoop/hdfs/server/datanode/DataNode.java  |   2 +-
 .../hdfs/server/datanode/DirectoryScanner.java |   2 +-
 .../hadoop/hdfs/server/datanode/DiskBalancer.java  |   2 +-
 .../server/datanode/fsdataset/FsDatasetSpi.java|   8 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java |  64 -
 .../server/datanode/fsdataset/impl/ReplicaMap.java |  31 +--
 .../src/main/resources/hdfs-default.xml|  13 +++
 .../datanode/fsdataset/impl/TestFsDatasetImpl.java | 101 -
 10 files changed, 187 insertions(+), 42 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 3a0a678..9de33ff 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -606,6 +606,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_DATANODE_LOCK_FAIR_KEY =
   "dfs.datanode.lock.fair";
   public static final boolean DFS_DATANODE_LOCK_FAIR_DEFAULT = true;
+  public static final String DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY =
+  "dfs.datanode.lock.read.write.enabled";
+  public static final Boolean DFS_DATANODE_LOCK_READ_WRITE_ENABLED_DEFAULT =
+  true;
   public static final String  DFS_DATANODE_LOCK_REPORTING_THRESHOLD_MS_KEY =
   "dfs.datanode.lock-reporting-threshold-ms";
   public static final long
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index 6102a59..b396bf9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -255,7 +255,7 @@ class BlockSender implements java.io.Closeable {
   // the append write.
   ChunkChecksum chunkChecksum = null;
   final long replicaVisibleLength;
-  try(AutoCloseableLock lock = datanode.data.acquireDatasetLock()) {
+  try(AutoCloseableLock lock = datanode.data.acquireDatasetReadLock()) {
 replica = getReplica(block, datanode);
 replicaVisibleLength = replica.getVisibleLength();
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 2e498e4..e242cc8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -3060,7 +3060,7 @@ public class DataNode extends ReconfigurableBase
 final BlockConstructionStage stage;
 
 //get replica information
-try(AutoCloseableLock lock = data.acquireDatasetLock()) {
+try(AutoCloseableLock lock = data.acquireDatasetReadLock()) {
   Block storedBlock = data.getStoredBlock(b.getBlockPoolId(),
   b.getBlockId());
   if (null == storedBlock) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 35625ce..b2e521c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -473,7 +473,7 @@ public class DirectoryScanner implements Runnable {
 blockPoolReport.sortBlocks();
 
 // Hold FSDataset lock to prevent further changes to the block map
-try (AutoCloseableLock lock = 

[hadoop] branch trunk updated: YARN-9809. Added node manager health status to resource manager registration call. Contributed by Eric Badger via eyang

2020-06-30 Thread eyang
This is an automated email from the ASF dual-hosted git repository.

eyang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e8dc862  YARN-9809. Added node manager health status to resource manager registration call. Contributed by Eric Badger via eyang
e8dc862 is described below

commit e8dc862d3856e9eaea124c625dade36f1dd53fe2
Author: Eric Yang 
AuthorDate: Tue Jun 30 11:39:16 2020 -0700

YARN-9809. Added node manager health status to resource manager registration call.
   Contributed by Eric Badger via eyang
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  7 +++
 .../src/main/resources/yarn-default.xml|  7 +++
 .../RegisterNodeManagerRequest.java| 19 ++-
 .../impl/pb/RegisterNodeManagerRequestPBImpl.java  | 39 -
 .../proto/yarn_server_common_service_protos.proto  |  1 +
 .../server/nodemanager/NodeStatusUpdaterImpl.java  |  3 +-
 .../nodemanager/health/NodeHealthScriptRunner.java | 11 +++-
 .../health/TimedHealthReporterService.java | 20 ++-
 .../yarn/server/nodemanager/TestEventFlow.java |  5 ++
 .../containermanager/BaseContainerManagerTest.java | 66 +-
 .../containermanager/TestContainerManager.java |  6 +-
 .../nodemanager/containermanager/TestNMProxy.java  |  4 +-
 .../scheduler/TestContainerSchedulerQueuing.java   |  2 +-
 .../resourcemanager/ResourceTrackerService.java|  5 +-
 .../server/resourcemanager/rmnode/RMNodeImpl.java  | 58 +++
 .../resourcemanager/rmnode/RMNodeStartedEvent.java | 10 +++-
 .../hadoop/yarn/server/resourcemanager/MockNM.java | 22 
 .../hadoop/yarn/server/resourcemanager/MockRM.java |  7 ++-
 .../yarn/server/resourcemanager/NodeManager.java   |  3 +-
 .../resourcemanager/TestRMNodeTransitions.java | 55 +++---
 .../resourcemanager/TestResourceManager.java   | 29 ++
 .../TestResourceTrackerService.java|  6 ++
 .../TestRMAppLogAggregationStatus.java |  7 ++-
 .../resourcetracker/TestNMExpiry.java  |  7 +++
 .../resourcetracker/TestNMReconnect.java   |  7 +++
 .../scheduler/TestAbstractYarnScheduler.java   |  5 ++
 .../scheduler/TestSchedulerHealth.java | 18 --
 .../scheduler/capacity/TestCapacityScheduler.java  | 63 ++---
 .../scheduler/fair/TestFairScheduler.java  | 21 +--
 .../scheduler/fifo/TestFifoScheduler.java  | 25 +---
 .../webapp/TestRMWebServicesNodes.java |  5 +-
 31 files changed, 429 insertions(+), 114 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 85d5a58..54e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2013,6 +2013,13 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "health-checker.interval-ms";
  public static final long DEFAULT_NM_HEALTH_CHECK_INTERVAL_MS = 10 * 60 * 1000;
 
+  /** Whether or not to run the node health script before the NM
+   *  starts up.*/
+  public static final String NM_HEALTH_CHECK_RUN_BEFORE_STARTUP =
+  NM_PREFIX + "health-checker.run-before-startup";
+  public static final boolean DEFAULT_NM_HEALTH_CHECK_RUN_BEFORE_STARTUP =
+  false;
+
   /** Health check time out period for all scripts.*/
   public static final String NM_HEALTH_CHECK_TIMEOUT_MS =
   NM_PREFIX + "health-checker.timeout-ms";
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index f09186e..2f97a7c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1669,6 +1669,13 @@
   
 
   
+    <description>Whether or not to run the node health script
+    before the NM starts up.</description>
+    <name>yarn.nodemanager.health-checker.run-before-startup</name>
+    <value>false</value>
+  </property>
+
+  <property>
     <description>Frequency of running node health scripts.</description>
     <name>yarn.nodemanager.health-checker.interval-ms</name>
     <value>600000</value>
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RegisterNodeManagerRequest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RegisterNodeManagerRequest.java
index 

[hadoop] branch trunk updated: HDFS-15416. Improve DataStorage#addStorageLocations() for empty locations. Contributed by jianghua zhu.

2020-06-30 Thread hexiaoqiao
This is an automated email from the ASF dual-hosted git repository.

hexiaoqiao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9ac498e  HDFS-15416. Improve DataStorage#addStorageLocations() for empty locations. Contributed by jianghua zhu.
9ac498e is described below

commit 9ac498e30057de1291c3e3128bceaa1af9547c67
Author: He Xiaoqiao 
AuthorDate: Wed Jul 1 12:30:10 2020 +0800

HDFS-15416. Improve DataStorage#addStorageLocations() for empty locations. Contributed by jianghua zhu.
---
 .../hadoop/hdfs/server/datanode/DataStorage.java   |  5 
 .../hdfs/server/datanode/TestDataStorage.java  | 28 ++
 2 files changed, 33 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
index 2447fd7..b7faecb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
@@ -388,6 +388,11 @@ public class DataStorage extends Storage {
 try {
   final List<StorageLocation> successLocations = loadDataStorage(
   datanode, nsInfo, dataDirs, startOpt, executor);
+
+  if (successLocations.isEmpty()) {
+return Lists.newArrayList();
+  }
+
   return loadBlockPoolSliceStorage(
   datanode, nsInfo, successLocations, startOpt, executor);
 } finally {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
index 6c49451..f82462a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataStorage.java
@@ -44,6 +44,7 @@ import static org.junit.Assert.fail;
 public class TestDataStorage {
   private final static String DEFAULT_BPID = "bp-0";
   private final static String CLUSTER_ID = "cluster0";
+  private final static String CLUSTER_ID2 = "cluster1";
   private final static String BUILD_VERSION = "2.0";
   private final static String SOFTWARE_VERSION = "2.0";
   private final static long CTIME = 1;
@@ -166,6 +167,33 @@ public class TestDataStorage {
   }
 
   @Test
+  public void testAddStorageDirectoriesFailure() throws IOException {
+final int numLocations = 1;
+List<StorageLocation> locations = createStorageLocations(numLocations);
+assertEquals(numLocations, locations.size());
+
+NamespaceInfo namespaceInfo = new NamespaceInfo(0, CLUSTER_ID,
+DEFAULT_BPID, CTIME, BUILD_VERSION, SOFTWARE_VERSION);
+List<StorageLocation> successLocations = storage.addStorageLocations(
+mockDN, namespaceInfo, locations, START_OPT);
+assertEquals(1, successLocations.size());
+
+// After the DataNode restarts, the value of the clusterId is different
+// from the value before the restart.
+storage.unlockAll();
+DataNode newMockDN = Mockito.mock(DataNode.class);
+Mockito.when(newMockDN.getConf()).thenReturn(new HdfsConfiguration());
+DataStorage newStorage = new DataStorage();
+NamespaceInfo newNamespaceInfo = new NamespaceInfo(0, CLUSTER_ID2,
+DEFAULT_BPID, CTIME, BUILD_VERSION, SOFTWARE_VERSION);
+successLocations = newStorage.addStorageLocations(
+newMockDN, newNamespaceInfo, locations, START_OPT);
+assertEquals(0, successLocations.size());
+newStorage.unlockAll();
+newMockDN.shutdown();
+  }
+
+  @Test
   public void testMissingVersion() throws IOException,
   URISyntaxException {
 final int numLocations = 1;

