[hadoop] branch trunk updated: HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider

2020-03-25 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 25a03bf  HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider
25a03bf is described below

commit 25a03bfeced370faf0c9645c474be36b0a78074f
Author: Nicholas Chammas 
AuthorDate: Wed Mar 25 06:39:35 2020 -0400

HADOOP-16930. Add hadoop-aws documentation for ProfileCredentialsProvider


Contributed by Nicholas Chammas.
---
 .../src/site/markdown/tools/hadoop-aws/index.md| 25 ++
 1 file changed, 25 insertions(+)

diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
index aec778e..22b98ed 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md
@@ -400,6 +400,31 @@ for credentials to access S3.  Within the AWS SDK, this functionality is
 provided by `InstanceProfileCredentialsProvider`, which internally enforces a
 singleton instance in order to prevent throttling problems.
 
+### Using Named Profile Credentials with `ProfileCredentialsProvider`
+
+You can configure Hadoop to authenticate to AWS using a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
+
+To authenticate with a named profile:
+
+1. Declare `com.amazonaws.auth.profile.ProfileCredentialsProvider` as the provider.
+1. Set your profile via the `AWS_PROFILE` environment variable.
+1. Due to a [bug in version 1 of the AWS Java SDK](https://github.com/aws/aws-sdk-java/issues/803),
+you'll need to remove the `profile` prefix from the AWS configuration section heading.
+
+Here's an example of what your AWS configuration files should look like:
+
+```
+$ cat ~/.aws/config
+[user1]
+region = us-east-1
+$ cat ~/.aws/credentials
+[user1]
+aws_access_key_id = ...
+aws_secret_access_key = ...
+aws_session_token = ...
+aws_security_token = ...
+```
+
 ### Using Session Credentials with `TemporaryAWSCredentialsProvider`
 
 [Temporary Security Credentials](http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html)
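
For the named-profile section added above, a minimal client-side sketch of the documented steps, assuming only the standard `fs.s3a.aws.credentials.provider` option (the profile name `user1` matches the example config; `AWS_PROFILE` is still set in the environment, not in code):

```java
import org.apache.hadoop.conf.Configuration;

public class S3AProfileAuthSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Step 1: declare the AWS SDK v1 named-profile provider for S3A.
    conf.set("fs.s3a.aws.credentials.provider",
        "com.amazonaws.auth.profile.ProfileCredentialsProvider");
    // Step 2 happens outside the JVM: export AWS_PROFILE=user1
    // The provider then reads ~/.aws/config and ~/.aws/credentials,
    // which must use the prefix-less section headings shown above.
  }
}
```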





[hadoop] branch trunk updated: HADOOP-16938. Make non-HA proxy providers pluggable

2020-03-25 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2d294bd  HADOOP-16938. Make non-HA proxy providers pluggable
2d294bd is described below

commit 2d294bd575f81ced4b562ac7275b014c267e480d
Author: RogPodge <39840334+rogpo...@users.noreply.github.com>
AuthorDate: Wed Mar 25 08:06:58 2020 -0700

HADOOP-16938. Make non-HA proxy providers pluggable
---
 .../apache/hadoop/yarn/conf/YarnConfiguration.java |  4 +
 .../client/DefaultNoHARMFailoverProxyProvider.java | 98 ++
 .../org/apache/hadoop/yarn/client/RMProxy.java | 36 ++--
 .../src/main/resources/yarn-default.xml|  8 ++
 .../src/site/markdown/ResourceManagerHA.md |  3 +-
 5 files changed, 139 insertions(+), 10 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 63031df..67d1841 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -958,6 +958,10 @@ public class YarnConfiguration extends Configuration {
   CLIENT_FAILOVER_PREFIX + "proxy-provider";
   public static final String DEFAULT_CLIENT_FAILOVER_PROXY_PROVIDER =
   "org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider";
+  public static final String CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER =
+  CLIENT_FAILOVER_PREFIX + "no-ha-proxy-provider";
+  public static final String DEFAULT_CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER =
+  "org.apache.hadoop.yarn.client.DefaultNoHARMFailoverProxyProvider";
 
   public static final String CLIENT_FAILOVER_MAX_ATTEMPTS =
   CLIENT_FAILOVER_PREFIX + "max-attempts";
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java
new file mode 100644
index 0000000..e5197cf
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/DefaultNoHARMFailoverProxyProvider.java
@@ -0,0 +1,98 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.client;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.retry.DefaultFailoverProxyProvider;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+/**
+ * An implementation of {@link RMFailoverProxyProvider} which does nothing in
+ * the event of failover, and always returns the same proxy object.
+ * This is the default non-HA RM Failover proxy provider. It is used to replace
+ * {@link DefaultFailoverProxyProvider} which was used as Yarn default non-HA.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public class DefaultNoHARMFailoverProxyProvider<T>
+    implements RMFailoverProxyProvider<T> {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DefaultNoHARMFailoverProxyProvider.class);
+
+  protected T proxy;
+  protected Class<T> protocol;
+
+  /**
+   * Initialize internal data structures, invoked right after instantiation.
+   *
+   * @param conf Configuration to use
+   * @param proxy    The {@link RMProxy} instance to use
+   * @param protocol The communication protocol to use
+   */
+  @Override
+  public void init(Configuration conf, RMProxy<T> proxy,
+      Class<T> protocol) {
+    this.protocol = protocol;
+    try {
+      YarnConfiguration yarnCon
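
The message truncates mid-file. As a hedged sketch of how the new knob from the YarnConfiguration hunk would be selected (the key resolves to `yarn.client.failover-no-ha-proxy-provider`; `org.example.MyProxyProvider` is a hypothetical implementation):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class PluggableNoHaProviderSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Point the non-HA failover path at a custom provider class
    // (hypothetical name); when unset, RMProxy falls back to
    // DefaultNoHARMFailoverProxyProvider per the defaults above.
    conf.set(YarnConfiguration.CLIENT_FAILOVER_NO_HA_PROXY_PROVIDER,
        "org.example.MyProxyProvider");
  }
}
```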

[hadoop] branch trunk updated: HADOOP-16912. Emit per priority RPC queue time and processing time from DecayRpcScheduler. Contributed by Fengnan Li.

2020-03-25 Thread sunchao
This is an automated email from the ASF dual-hosted git repository.

sunchao pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new e3fbdcb  HADOOP-16912. Emit per priority RPC queue time and processing time from DecayRpcScheduler. Contributed by Fengnan Li.
e3fbdcb is described below

commit e3fbdcbc141bff6c78c24387906a277d518660ae
Author: Chao Sun 
AuthorDate: Wed Mar 25 10:21:20 2020 -0700

HADOOP-16912. Emit per priority RPC queue time and processing time from DecayRpcScheduler. Contributed by Fengnan Li.
---
 .../org/apache/hadoop/ipc/DecayRpcScheduler.java   |  21 
 .../metrics/DecayRpcSchedulerDetailedMetrics.java  | 135 +
 .../metrics2/lib/MutableRatesWithAggregation.java  |  12 ++
 .../hadoop-common/src/site/markdown/Metrics.md |  11 ++
 .../apache/hadoop/ipc/TestDecayRpcScheduler.java   |  61 +-
 .../TestDecayRpcSchedulerDetailedMetrics.java  |  45 +++
 .../hadoop/metrics2/lib/TestMutableMetrics.java|  13 ++
 7 files changed, 269 insertions(+), 29 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
index ffeafb5..3e952eb 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
@@ -42,6 +42,7 @@ import com.google.common.util.concurrent.AtomicDoubleArray;
 import org.apache.commons.lang3.exception.ExceptionUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.ipc.metrics.DecayRpcSchedulerDetailedMetrics;
 import org.apache.hadoop.metrics2.MetricsCollector;
 import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.metrics2.MetricsSource;
@@ -154,6 +155,10 @@ public class DecayRpcScheduler implements RpcScheduler,
   private final AtomicDoubleArray responseTimeAvgInLastWindow;
   private final AtomicLongArray responseTimeCountInLastWindow;
 
+  // RPC queue time rates per queue
+  private final DecayRpcSchedulerDetailedMetrics
+  decayRpcSchedulerDetailedMetrics;
+
   // Pre-computed scheduling decisions during the decay sweep are
   // atomically swapped in as a read-only map
   private final AtomicReference<Map<Object, Integer>> scheduleCacheRef =
@@ -236,6 +241,10 @@ public class DecayRpcScheduler implements RpcScheduler,
 Preconditions.checkArgument(topUsersCount > 0,
 "the number of top users for scheduler metrics must be at least 1");
 
+decayRpcSchedulerDetailedMetrics =
+DecayRpcSchedulerDetailedMetrics.create(ns);
+decayRpcSchedulerDetailedMetrics.init(numLevels);
+
 // Setup delay timer
 Timer timer = new Timer(true);
 DecayTask task = new DecayTask(this, timer);
@@ -626,6 +635,11 @@ public class DecayRpcScheduler implements RpcScheduler,
 long queueTime = details.get(Timing.QUEUE, TimeUnit.MILLISECONDS);
 long processingTime = details.get(Timing.PROCESSING, 
TimeUnit.MILLISECONDS);
 
+this.decayRpcSchedulerDetailedMetrics.addQueueTime(
+priorityLevel, queueTime);
+this.decayRpcSchedulerDetailedMetrics.addProcessingTime(
+priorityLevel, processingTime);
+
 responseTimeCountInCurrWindow.getAndIncrement(priorityLevel);
 responseTimeTotalInCurrWindow.getAndAdd(priorityLevel,
 queueTime+processingTime);
@@ -987,9 +1001,16 @@ public class DecayRpcScheduler implements RpcScheduler,
 return decayedCallCosts;
   }
 
+  @VisibleForTesting
+  public DecayRpcSchedulerDetailedMetrics
+  getDecayRpcSchedulerDetailedMetrics() {
+return decayRpcSchedulerDetailedMetrics;
+  }
+
   @Override
   public void stop() {
 metricsProxy.unregisterSource(namespace);
 MetricsProxy.removeInstance(namespace);
+decayRpcSchedulerDetailedMetrics.shutdown();
   }
 }
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/DecayRpcSchedulerDetailedMetrics.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/DecayRpcSchedulerDetailedMetrics.java
new file mode 100644
index 0000000..04a6c0e
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/DecayRpcSchedulerDetailedMetrics.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *

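The message truncates in the license header of the new metrics class. A stripped-down sketch of the per-priority accumulation pattern the DecayRpcScheduler hunks add (not the Hadoop class itself, which delegates to `MutableRatesWithAggregation`):

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class PerPriorityTimesSketch {
  // One slot per RPC priority level, as in the diff above.
  private final AtomicLongArray queueTimeMs;
  private final AtomicLongArray processingTimeMs;

  public PerPriorityTimesSketch(int numLevels) {
    queueTimeMs = new AtomicLongArray(numLevels);
    processingTimeMs = new AtomicLongArray(numLevels);
  }

  // Called once per completed call with its measured timings.
  public void addResponseTime(int priorityLevel, long queueMs,
      long processingMs) {
    queueTimeMs.getAndAdd(priorityLevel, queueMs);
    processingTimeMs.getAndAdd(priorityLevel, processingMs);
  }

  public long queueTimeTotal(int priorityLevel) {
    return queueTimeMs.get(priorityLevel);
  }
}
```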
[hadoop] branch trunk updated: HDFS-15154. Allow only hdfs superusers the ability to assign HDFS storage policies. Contributed by Siddharth Wagle.

2020-03-25 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a700803  HDFS-15154. Allow only hdfs superusers the ability to assign HDFS storage policies. Contributed by Siddharth Wagle.
a700803 is described below

commit a700803a18fb957d2799001a2ce1dcb70f75c080
Author: Arpit Agarwal 
AuthorDate: Wed Mar 25 10:28:30 2020 -0700

HDFS-15154. Allow only hdfs superusers the ability to assign HDFS storage policies. Contributed by Siddharth Wagle.

Change-Id: I32d6dd2837945b8fc026a759aa367c55daefe348
---
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java |   4 +
 .../hadoop/hdfs/server/namenode/FSDirAttrOp.java   |  12 +-
 .../hadoop/hdfs/server/namenode/FSDirectory.java   |  13 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |  61 ++--
 .../src/main/resources/hdfs-default.xml|   9 ++
 .../hdfs/TestStoragePolicyPermissionSettings.java  | 157 +
 6 files changed, 222 insertions(+), 34 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index e3f4d1e..73cddee 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1114,6 +1114,10 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
 
   public static final String  DFS_STORAGE_POLICY_ENABLED_KEY = "dfs.storage.policy.enabled";
   public static final boolean DFS_STORAGE_POLICY_ENABLED_DEFAULT = true;
+  public static final String DFS_STORAGE_POLICY_PERMISSIONS_SUPERUSER_ONLY_KEY =
+      "dfs.storage.policy.permissions.superuser-only";
+  public static final boolean
+      DFS_STORAGE_POLICY_PERMISSIONS_SUPERUSER_ONLY_DEFAULT = false;
 
   public static final String  DFS_QUOTA_BY_STORAGETYPE_ENABLED_KEY = "dfs.quota.by.storage.type.enabled";
   public static final boolean DFS_QUOTA_BY_STORAGETYPE_ENABLED_DEFAULT = true;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 83df0aa..8e9606d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -47,7 +47,6 @@ import java.util.EnumSet;
 import java.util.List;
 
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_QUOTA_BY_STORAGETYPE_ENABLED_KEY;
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_STORAGE_POLICY_ENABLED_KEY;
 
 public class FSDirAttrOp {
   static FileStatus setPermission(
@@ -151,7 +150,7 @@ public class FSDirAttrOp {
   static FileStatus unsetStoragePolicy(FSDirectory fsd, FSPermissionChecker pc,
   BlockManager bm, String src) throws IOException {
 return setStoragePolicy(fsd, pc, bm, src,
-HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED, "unset");
+HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED);
   }
 
   static FileStatus setStoragePolicy(FSDirectory fsd, FSPermissionChecker pc,
@@ -162,17 +161,12 @@ public class FSDirAttrOp {
   throw new HadoopIllegalArgumentException(
   "Cannot find a block policy with the name " + policyName);
 }
-return setStoragePolicy(fsd, pc, bm, src, policy.getId(), "set");
+return setStoragePolicy(fsd, pc, bm, src, policy.getId());
   }
 
   static FileStatus setStoragePolicy(FSDirectory fsd, FSPermissionChecker pc,
-  BlockManager bm, String src, final byte policyId, final String operation)
+  BlockManager bm, String src, final byte policyId)
   throws IOException {
-if (!fsd.isStoragePolicyEnabled()) {
-  throw new IOException(String.format(
-  "Failed to %s storage policy since %s is set to false.", operation,
-  DFS_STORAGE_POLICY_ENABLED_KEY));
-}
 INodesInPath iip;
 fsd.writeLock();
 try {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index 77d8518..c06b59f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -88,8 +88,6 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECI
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_ACCESSTIME_PRECISION_KEY;
 import static org.apach
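
The message truncates mid-import. For orientation, a hedged sketch of how the new restriction is exercised: the key from the DFSConfigKeys hunk belongs in the NameNode's hdfs-site.xml, and once it is `true`, a non-superuser call like the one below is expected to fail with an `AccessControlException` (`/data/cold` and the `COLD` policy name are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Server-side key from the diff, named here for reference; it is
    // read by the NameNode, not the client:
    //   dfs.storage.policy.permissions.superuser-only = true
    FileSystem fs = FileSystem.get(conf);
    // As a non-superuser this should now be rejected by the NameNode.
    fs.setStoragePolicy(new Path("/data/cold"), "COLD");
    fs.close();
  }
}
```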

[hadoop] branch trunk updated: YARN-10200. Add number of containers to RMAppManager summary

2020-03-25 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6ce189c  YARN-10200. Add number of containers to RMAppManager summary
6ce189c is described below

commit 6ce189c62132706d9aaee5abf020ae4dc783ba26
Author: Jonathan Hung 
AuthorDate: Mon Mar 23 14:35:43 2020 -0700

YARN-10200. Add number of containers to RMAppManager summary
---
 .../yarn/server/resourcemanager/RMAppManager.java  |  4 ++-
 .../resourcemanager/recovery/RMStateStore.java |  3 ++-
 .../records/ApplicationAttemptStateData.java   | 30 +++---
 .../impl/pb/ApplicationAttemptStateDataPBImpl.java | 12 +
 .../server/resourcemanager/rmapp/RMAppImpl.java|  6 -
 .../server/resourcemanager/rmapp/RMAppMetrics.java |  8 +-
 .../rmapp/attempt/RMAppAttemptImpl.java|  8 --
 .../rmapp/attempt/RMAppAttemptMetrics.java |  4 +++
 .../yarn_server_resourcemanager_recovery.proto |  1 +
 .../server/resourcemanager/TestAppManager.java |  3 ++-
 .../TestContainerResourceUsage.java|  5 
 .../applicationsmanager/MockAsm.java   |  2 +-
 .../TestCombinedSystemMetricsPublisher.java|  2 +-
 .../metrics/TestSystemMetricsPublisher.java|  3 ++-
 .../metrics/TestSystemMetricsPublisherForV2.java   |  2 +-
 .../recovery/RMStateStoreTestBase.java |  6 +++--
 .../recovery/TestZKRMStateStore.java   |  4 +--
 .../server/resourcemanager/webapp/TestAppPage.java |  2 +-
 .../webapp/TestRMWebAppFairScheduler.java  |  2 +-
 .../resourcemanager/webapp/TestRMWebServices.java  |  2 +-
 20 files changed, 88 insertions(+), 21 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index 2b806dd..4413e9d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -228,7 +228,9 @@ public class RMAppManager implements EventHandler<RMAppManagerEvent>,
   ? ""
   : app.getApplicationSubmissionContext()
   .getNodeLabelExpression())
-  .add("diagnostics", app.getDiagnostics());
+  .add("diagnostics", app.getDiagnostics())
+  .add("totalAllocatedContainers",
+  metrics.getTotalAllocatedContainers());
   return summary;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
index e88d2b4..052f6b0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
@@ -993,7 +993,8 @@ public abstract class RMStateStore extends AbstractService {
 appAttempt.getMasterContainer(),
 credentials, appAttempt.getStartTime(),
 resUsage.getResourceUsageSecondsMap(),
-attempMetrics.getPreemptedResourceSecondsMap());
+attempMetrics.getPreemptedResourceSecondsMap(),
+attempMetrics.getTotalAllocatedContainers());
 
 getRMStateStoreEventHandler().handle(
   new RMStateStoreAppAttemptEvent(attemptState));
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
index 2de071a..27e80cd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/sr
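
The message truncates in the diff header. The RMAppManager hunk above extends a fluent key=value summary; a hedged, self-contained illustration of that pattern (not the actual `SummaryBuilder` inside RMAppManager):

```java
public class AppSummarySketch {
  private final StringBuilder buffer = new StringBuilder();

  public AppSummarySketch add(String key, Object value) {
    if (buffer.length() > 0) {
      buffer.append(',');
    }
    buffer.append(key).append('=').append(value);
    return this;
  }

  @Override
  public String toString() {
    return buffer.toString();
  }

  public static void main(String[] args) {
    // Mirrors the new field added by the diff above.
    System.out.println(new AppSummarySketch()
        .add("appId", "application_1585000000000_0001")
        .add("diagnostics", "")
        .add("totalAllocatedContainers", 42));
  }
}
```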

[hadoop] branch branch-3.2 updated: YARN-10200. Add number of containers to RMAppManager summary

2020-03-25 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 5d3fb0e  YARN-10200. Add number of containers to RMAppManager summary
5d3fb0e is described below

commit 5d3fb0ebe9d3f3395320b82a76194ba6fad01e00
Author: Jonathan Hung 
AuthorDate: Mon Mar 23 14:35:43 2020 -0700

YARN-10200. Add number of containers to RMAppManager summary

(cherry picked from commit 2de0572cdc1c6fdbfaab108b169b2d5b0c077e86)
---
 .../yarn/server/resourcemanager/RMAppManager.java  |  4 ++-
 .../resourcemanager/recovery/RMStateStore.java |  3 ++-
 .../records/ApplicationAttemptStateData.java   | 30 +++---
 .../impl/pb/ApplicationAttemptStateDataPBImpl.java | 12 +
 .../server/resourcemanager/rmapp/RMAppImpl.java|  6 -
 .../server/resourcemanager/rmapp/RMAppMetrics.java |  8 +-
 .../rmapp/attempt/RMAppAttemptImpl.java|  8 --
 .../rmapp/attempt/RMAppAttemptMetrics.java |  4 +++
 .../yarn_server_resourcemanager_recovery.proto |  1 +
 .../server/resourcemanager/TestAppManager.java |  3 ++-
 .../TestContainerResourceUsage.java|  5 
 .../applicationsmanager/MockAsm.java   |  2 +-
 .../TestCombinedSystemMetricsPublisher.java|  2 +-
 .../metrics/TestSystemMetricsPublisher.java|  3 ++-
 .../metrics/TestSystemMetricsPublisherForV2.java   |  2 +-
 .../recovery/RMStateStoreTestBase.java |  6 +++--
 .../recovery/TestZKRMStateStore.java   |  4 +--
 .../server/resourcemanager/webapp/TestAppPage.java |  2 +-
 .../webapp/TestRMWebAppFairScheduler.java  |  2 +-
 19 files changed, 87 insertions(+), 20 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index c8b36c4..6623ab1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -224,7 +224,9 @@ public class RMAppManager implements EventHandler<RMAppManagerEvent>,
   ? ""
   : app.getApplicationSubmissionContext()
   .getNodeLabelExpression())
-  .add("diagnostics", app.getDiagnostics());
+  .add("diagnostics", app.getDiagnostics())
+  .add("totalAllocatedContainers",
+  metrics.getTotalAllocatedContainers());
   return summary;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
index dc033fe..d894365 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
@@ -915,7 +915,8 @@ public abstract class RMStateStore extends AbstractService {
 appAttempt.getMasterContainer(),
 credentials, appAttempt.getStartTime(),
 resUsage.getResourceUsageSecondsMap(),
-attempMetrics.getPreemptedResourceSecondsMap());
+attempMetrics.getPreemptedResourceSecondsMap(),
+attempMetrics.getTotalAllocatedContainers());
 
 getRMStateStoreEventHandler().handle(
   new RMStateStoreAppAttemptEvent(attemptState));
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
index 2de071a..27e80cd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-ya

[hadoop] branch branch-3.1 updated: YARN-10200. Add number of containers to RMAppManager summary

2020-03-25 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 9c6dd8c  YARN-10200. Add number of containers to RMAppManager summary
9c6dd8c is described below

commit 9c6dd8c83a29183d70cd4a69a8317a9303954cc1
Author: Jonathan Hung 
AuthorDate: Mon Mar 23 14:35:43 2020 -0700

YARN-10200. Add number of containers to RMAppManager summary

(cherry picked from commit 2de0572cdc1c6fdbfaab108b169b2d5b0c077e86)
(cherry picked from commit 5d3fb0ebe9d3f3395320b82a76194ba6fad01e00)
---
 .../yarn/server/resourcemanager/RMAppManager.java  |  4 ++-
 .../resourcemanager/recovery/RMStateStore.java |  3 ++-
 .../records/ApplicationAttemptStateData.java   | 30 +++---
 .../impl/pb/ApplicationAttemptStateDataPBImpl.java | 12 +
 .../server/resourcemanager/rmapp/RMAppImpl.java|  6 -
 .../server/resourcemanager/rmapp/RMAppMetrics.java |  8 +-
 .../rmapp/attempt/RMAppAttemptImpl.java|  8 --
 .../rmapp/attempt/RMAppAttemptMetrics.java |  4 +++
 .../yarn_server_resourcemanager_recovery.proto |  1 +
 .../server/resourcemanager/TestAppManager.java |  3 ++-
 .../TestContainerResourceUsage.java|  5 
 .../applicationsmanager/MockAsm.java   |  2 +-
 .../TestCombinedSystemMetricsPublisher.java|  2 +-
 .../metrics/TestSystemMetricsPublisher.java|  3 ++-
 .../metrics/TestSystemMetricsPublisherForV2.java   |  2 +-
 .../recovery/RMStateStoreTestBase.java |  6 +++--
 .../recovery/TestZKRMStateStore.java   |  4 +--
 .../server/resourcemanager/webapp/TestAppPage.java |  2 +-
 .../webapp/TestRMWebAppFairScheduler.java  |  2 +-
 19 files changed, 87 insertions(+), 20 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index 2972219..668eda2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -218,7 +218,9 @@ public class RMAppManager implements EventHandler<RMAppManagerEvent>,
   ? ""
   : app.getApplicationSubmissionContext()
   .getNodeLabelExpression())
-  .add("diagnostics", app.getDiagnostics());
+  .add("diagnostics", app.getDiagnostics())
+  .add("totalAllocatedContainers",
+  metrics.getTotalAllocatedContainers());
   return summary;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
index 161e317..1391357 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
@@ -898,7 +898,8 @@ public abstract class RMStateStore extends AbstractService {
 appAttempt.getMasterContainer(),
 credentials, appAttempt.getStartTime(),
 resUsage.getResourceUsageSecondsMap(),
-attempMetrics.getPreemptedResourceSecondsMap());
+attempMetrics.getPreemptedResourceSecondsMap(),
+attempMetrics.getTotalAllocatedContainers());
 
 getRMStateStoreEventHandler().handle(
   new RMStateStoreAppAttemptEvent(attemptState));
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
index 2de071a..27e80cd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData

[hadoop] branch branch-2.10 updated: YARN-10200. Add number of containers to RMAppManager summary

2020-03-25 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 1c8529f  YARN-10200. Add number of containers to RMAppManager summary
1c8529f is described below

commit 1c8529f03053e591515b02ca50aa2cccabe04b6d
Author: Jonathan Hung 
AuthorDate: Mon Mar 23 14:35:43 2020 -0700

YARN-10200. Add number of containers to RMAppManager summary

(cherry picked from commit 2de0572cdc1c6fdbfaab108b169b2d5b0c077e86)
(cherry picked from commit 5d3fb0ebe9d3f3395320b82a76194ba6fad01e00)
(cherry picked from commit 9c6dd8c83a29183d70cd4a69a8317a9303954cc1)
---
 .../yarn/server/resourcemanager/RMAppManager.java  |  4 ++-
 .../resourcemanager/recovery/RMStateStore.java |  3 ++-
 .../records/ApplicationAttemptStateData.java   | 30 +++---
 .../impl/pb/ApplicationAttemptStateDataPBImpl.java | 12 +
 .../server/resourcemanager/rmapp/RMAppImpl.java|  7 +++--
 .../server/resourcemanager/rmapp/RMAppMetrics.java |  8 +-
 .../rmapp/attempt/RMAppAttemptImpl.java|  8 --
 .../rmapp/attempt/RMAppAttemptMetrics.java |  4 +++
 .../yarn_server_resourcemanager_recovery.proto |  1 +
 .../server/resourcemanager/TestAppManager.java |  3 ++-
 .../TestContainerResourceUsage.java|  5 
 .../applicationsmanager/MockAsm.java   |  2 +-
 .../TestCombinedSystemMetricsPublisher.java|  2 +-
 .../metrics/TestSystemMetricsPublisher.java|  3 ++-
 .../metrics/TestSystemMetricsPublisherForV2.java   |  2 +-
 .../recovery/RMStateStoreTestBase.java |  4 +--
 .../recovery/TestZKRMStateStore.java   |  4 +--
 .../server/resourcemanager/webapp/TestAppPage.java |  2 +-
 .../webapp/TestRMWebAppFairScheduler.java  |  2 +-
 19 files changed, 85 insertions(+), 21 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index b0afb23..fde9578 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -209,7 +209,9 @@ public class RMAppManager implements EventHandler<RMAppManagerEvent>,
   ? ""
   : app.getApplicationSubmissionContext()
   .getNodeLabelExpression())
-  .add("diagnostics", app.getDiagnostics());
+  .add("diagnostics", app.getDiagnostics())
+  .add("totalAllocatedContainers",
+  metrics.getTotalAllocatedContainers());
   return summary;
 }
 
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
index 992c8a0..d019626 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/RMStateStore.java
@@ -861,7 +861,8 @@ public abstract class RMStateStore extends AbstractService {
 appAttempt.getMasterContainer(),
 credentials, appAttempt.getStartTime(),
 resUsage.getResourceUsageSecondsMap(),
-attempMetrics.getPreemptedResourceSecondsMap());
+attempMetrics.getPreemptedResourceSecondsMap(),
+attempMetrics.getTotalAllocatedContainers());
 
 getRMStateStoreEventHandler().handle(
   new RMStateStoreAppAttemptEvent(attemptState));
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/records/ApplicationAttemptStateData.java
index 2de071a..27e80cd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop

[hadoop] branch trunk updated: HDFS-15075. Remove process command timing from BPServiceActor. Contributed by Xiaoqiao He.

2020-03-25 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cdcb77a  HDFS-15075. Remove process command timing from BPServiceActor. Contributed by Xiaoqiao He.
cdcb77a is described below

commit cdcb77a2c5ca99502d2ac2fbf803f22463eb1343
Author: Inigo Goiri 
AuthorDate: Wed Mar 25 11:30:54 2020 -0700

HDFS-15075. Remove process command timing from BPServiceActor. Contributed by Xiaoqiao He.
---
 .../main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java |  5 +
 .../hadoop/hdfs/server/datanode/BPServiceActor.java | 17 +
 .../org/apache/hadoop/hdfs/server/datanode/DNConf.java  | 14 ++
 .../hdfs/server/datanode/metrics/DataNodeMetrics.java   | 10 ++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml |  9 +
 .../hadoop/hdfs/server/datanode/TestBPOfferService.java |  4 
 6 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 73cddee..b2f8ad2 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -443,6 +443,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final boolean DFS_DATANODE_PMEM_CACHE_RECOVERY_DEFAULT =
   true;
 
+  public static final String DFS_DATANODE_PROCESS_COMMANDS_THRESHOLD_KEY =
+  "dfs.datanode.processcommands.threshold";
+  public static final long DFS_DATANODE_PROCESS_COMMANDS_THRESHOLD_DEFAULT =
+  TimeUnit.SECONDS.toMillis(2);
+
   public static final String DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_KEY = "dfs.namenode.datanode.registration.ip-hostname-check";
   public static final boolean DFS_NAMENODE_DATANODE_REGISTRATION_IP_HOSTNAME_CHECK_DEFAULT = true;
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 222ee49..a436c94 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -702,15 +702,7 @@ class BPServiceActor implements Runnable {
 if (state == HAServiceState.ACTIVE) {
   handleRollingUpgradeStatus(resp);
 }
-
-long startProcessCommands = monotonicNow();
 commandProcessingThread.enqueue(resp.getCommands());
-long endProcessCommands = monotonicNow();
-if (endProcessCommands - startProcessCommands > 2000) {
-  LOG.info("Took " + (endProcessCommands - startProcessCommands)
-  + "ms to process " + resp.getCommands().length
-  + " commands from NN");
-}
   }
 }
 if (!dn.areIBRDisabledForTests() &&
@@ -1353,6 +1345,7 @@ class BPServiceActor implements Runnable {
  */
 private boolean processCommand(DatanodeCommand[] cmds) {
   if (cmds != null) {
+long startProcessCommands = monotonicNow();
 for (DatanodeCommand cmd : cmds) {
   try {
 if (!bpos.processCommandFromActor(cmd, actor)) {
@@ -1371,6 +1364,14 @@ class BPServiceActor implements Runnable {
 LOG.warn("Error processing datanode Command", ioe);
   }
 }
+long processCommandsMs = monotonicNow() - startProcessCommands;
+if (cmds.length > 0) {
+  dn.getMetrics().addNumProcessedCommands(processCommandsMs);
+}
+if (processCommandsMs > dnConf.getProcessCommandsThresholdMs()) {
+  LOG.info("Took {} ms to process {} commands from NN",
+  processCommandsMs, cmds.length);
+}
   }
   return true;
 }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
index 487c97d..b56dd4e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java
@@ -35,6 +35,8 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_OUTLIERS_REPORT_
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_PMEM_CACHE_DIRS_KEY;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_PMEM_CACHE_RECOVERY_DEFAULT;
 import st
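
The message truncates mid-import. The BPServiceActor hunk moves the timing into `processCommand` and gates the log line on a configurable threshold (`dfs.datanode.processcommands.threshold`, default 2 s). A hedged sketch of that pattern:

```java
import java.util.concurrent.TimeUnit;

public class ThresholdTimingSketch {
  // Stand-in for DNConf.getProcessCommandsThresholdMs(); 2 s default.
  static final long THRESHOLD_MS = TimeUnit.SECONDS.toMillis(2);

  static void processCommands(Runnable[] cmds) {
    long start = System.nanoTime();
    for (Runnable cmd : cmds) {
      cmd.run();
    }
    long elapsedMs =
        TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    // A metric would be recorded unconditionally here; the log line
    // fires only when the batch is unusually slow.
    if (elapsedMs > THRESHOLD_MS) {
      System.out.printf("Took %d ms to process %d commands from NN%n",
          elapsedMs, cmds.length);
    }
  }
}
```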

[hadoop] branch trunk updated: HDFS-15234. Add a default method body for the INodeAttributeProvider#checkPermissionWithContext API. (#1909)

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0fa7bf4  HDFS-15234. Add a default method body for the INodeAttributeProvider#checkPermissionWithContext API. (#1909)
0fa7bf4 is described below

commit 0fa7bf47dfe6d95fc520ef8fd19b0a601b660717
Author: Wei-Chiu Chuang 
AuthorDate: Wed Mar 25 16:03:26 2020 -0700

HDFS-15234. Add a default method body for the INodeAttributeProvider#checkPermissionWithContext API. (#1909)
---
 .../hadoop/hdfs/server/namenode/INodeAttributeProvider.java   | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
index 80d4967..63c5b46 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java
@@ -399,8 +399,12 @@ public abstract class INodeAttributeProvider {
  * operation.
  * @throws AccessControlException
  */
-void checkPermissionWithContext(AuthorizationContext authzContext)
-throws AccessControlException;
+default void checkPermissionWithContext(AuthorizationContext authzContext)
+throws AccessControlException {
+  throw new AccessControlException("The authorization provider does not "
+  + "implement the checkPermissionWithContext(AuthorizationContext) "
+  + "API.");
+}
   }
   /**
* Initialize the provider. This method is called at NameNode startup

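A minimal sketch of why the default body matters: providers compiled against the old interface keep loading, and callers of the new API get a clear exception instead of an `AbstractMethodError` (the interface and class names here are illustrative, not the HDFS ones):

```java
interface AccessChecker {
  void checkPermission(String path); // long-standing API

  // New API with a default body, mirroring the diff above: old
  // implementations keep working; callers get a clear exception.
  default void checkPermissionWithContext(Object context) {
    throw new UnsupportedOperationException(
        "this checker does not implement the context-based API");
  }
}

// An old implementation, untouched, still satisfies the interface.
public class DefaultMethodSketch implements AccessChecker {
  @Override
  public void checkPermission(String path) {
    // legacy permission logic
  }

  public static void main(String[] args) {
    AccessChecker checker = new DefaultMethodSketch();
    checker.checkPermission("/tmp/x");                // works as before
    checker.checkPermissionWithContext(new Object()); // throws clearly
  }
}
```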




[hadoop] 04/04: HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error (#1902). Contributed by zhengchenyu.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 152cbc64578eb518eb7f99c8ad16dd926e5fab42
Author: Isa Hekmatizadeh 
AuthorDate: Tue Mar 24 17:47:22 2020 +

HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error (#1902). Contributed by zhengchenyu.

(cherry picked from commit d9c4f1129c0814ab61fce6ea8baf4b272f84c252)
---
 .../src/main/java/org/apache/hadoop/hdfs/DataStreamer.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 4c733bf..6637490 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -1184,7 +1184,7 @@ class DataStreamer extends Daemon {
 
 one.releaseBuffer(byteArrayManager);
   }
-} catch (Exception e) {
+} catch (Throwable e) {
   if (!responderClosed) {
 lastException.set(e);
 errorState.setInternalError();





[hadoop] 03/04: HDFS-15158. The number of failed volumes mismatch with volumeFailures of Datanode metrics. Contributed by Yang Yun.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1e3b0df6abcc1252907c41aaedb3e7e257bce497
Author: Ayush Saxena 
AuthorDate: Sun Feb 9 23:19:40 2020 +0530

HDFS-15158. The number of failed volumes mismatch with volumeFailures of Datanode metrics. Contributed by Yang Yun.

(cherry picked from commit 6191d4b4a0919863fda78e549ab6c60022e3ebc2)
---
 .../hadoop/hdfs/server/datanode/DataNode.java  | 12 +-
 .../server/datanode/metrics/DataNodeMetrics.java   |  6 ++---
 .../server/datanode/TestDataNodeVolumeFailure.java | 26 ++
 3 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 8cd8f98..3ccb226 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -2192,7 +2192,7 @@ public class DataNode extends ReconfigurableBase
 });
   }
 
-  private void handleDiskError(String failedVolumes) {
+  private void handleDiskError(String failedVolumes, int failedNumber) {
 final boolean hasEnoughResources = data.hasEnoughResource();
 LOG.warn("DataNode.handleDiskError on: " +
 "[{}] Keep Running: {}", failedVolumes, hasEnoughResources);
@@ -2201,7 +2201,7 @@ public class DataNode extends ReconfigurableBase
 // shutdown the DN completely.
 int dpError = hasEnoughResources ? DatanodeProtocol.DISK_ERROR  
  : DatanodeProtocol.FATAL_DISK_ERROR;  
-metrics.incrVolumeFailures();
+metrics.incrVolumeFailures(failedNumber);
 
 //inform NameNodes
 for(BPOfferService bpos: blockPoolManager.getAllNamenodeThreads()) {
@@ -3408,8 +3408,8 @@ public class DataNode extends ReconfigurableBase
 }
 
 data.handleVolumeFailures(unhealthyVolumes);
-    Set<StorageLocation> unhealthyLocations = new HashSet<>(
-        unhealthyVolumes.size());
+    int failedNumber = unhealthyVolumes.size();
+    Set<StorageLocation> unhealthyLocations = new HashSet<>(failedNumber);
 
 StringBuilder sb = new StringBuilder("DataNode failed volumes:");
 for (FsVolumeSpi vol : unhealthyVolumes) {
@@ -3424,8 +3424,8 @@ public class DataNode extends ReconfigurableBase
   LOG.warn("Error occurred when removing unhealthy storage dirs", e);
 }
 LOG.debug("{}", sb);
-  // send blockreport regarding volume failure
-handleDiskError(sb.toString());
+// send blockreport regarding volume failure
+handleDiskError(sb.toString(), failedNumber);
   }
 
   /**
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
index 8f445a6..b02fc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
@@ -374,9 +374,9 @@ public class DataNodeMetrics {
   remoteBytesRead.incr(size);
 }
   }
-  
-  public void incrVolumeFailures() {
-volumeFailures.incr();
+
+  public void incrVolumeFailures(int size) {
+volumeFailures.incr(size);
   }
 
   public void incrDatanodeNetworkErrors() {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
index 4b4002b..2508eef 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs.server.datanode;
 
+import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
+import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
 import static org.apache.hadoop.test.PlatformAssumptions.assumeNotWindows;
 import static org.hamcrest.core.Is.is;
 import static org.junit.Assert.assertEquals;
@@ -77,6 +79,7 @@ import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
 import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.token.Token;
 imp
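
The message truncates in the test's import block. A hedged sketch of how the imported `MetricsAsserts` helpers would verify the fix (the `VolumeFailures` counter name and the metrics record name are assumptions for illustration):

```java
import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.metrics2.MetricsRecordBuilder;

public class VolumeFailuresAssertSketch {
  // After failing N volumes at once, the counter should grow by N,
  // not by 1; that is what the incrVolumeFailures(size) change fixes.
  static void assertVolumeFailures(String dnMetricsRecordName,
      long expected) {
    MetricsRecordBuilder rb = getMetrics(dnMetricsRecordName);
    assertEquals(expected, getLongCounter("VolumeFailures", rb));
  }
}
```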

[hadoop] branch branch-3.1 updated (9c6dd8c -> d64f688)

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 9c6dd8c  YARN-10200. Add number of containers to RMAppManager summary
 new f1a19b7  HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.
 new 4891f24  HDFS-15158. The number of failed volumes mismatch with volumeFailures of Datanode metrics. Contributed by Yang Yun.
 new d64f688  HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error (#1902). Contributed by zhengchenyu.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../java/org/apache/hadoop/hdfs/DataStreamer.java  |  2 +-
 .../hadoop/hdfs/server/datanode/DataNode.java  | 12 ++---
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../server/datanode/metrics/DataNodeMetrics.java   |  6 +--
 .../server/datanode/TestDataNodeVolumeFailure.java | 26 ++
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 9 files changed, 134 insertions(+), 20 deletions(-)





[hadoop] 01/04: HDFS-15223. FSCK fails if one namenode is not available. Contributed by Ayush Saxena.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 853eafa81afcb17e27a14807af27d7f0af2d5dcd
Author: Ayush Saxena 
AuthorDate: Thu Mar 19 21:23:13 2020 +0530

HDFS-15223. FSCK fails if one namenode is not available. Contributed by Ayush Saxena.

(cherry picked from commit bb41ddaf1e0c9bf44830b2cf0ac653b7354abf46)
---
 .../src/main/java/org/apache/hadoop/hdfs/HAUtil.java  | 15 ---
 .../apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java |  7 ++-
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
index 79275b0..aebc28a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HAUtil.java
@@ -57,10 +57,14 @@ import org.apache.hadoop.security.UserGroupInformation;
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
+import org.slf4j.LoggerFactory;
 
 @InterfaceAudience.Private
 public class HAUtil {
-  
+
+  public static final org.slf4j.Logger LOG =
+  LoggerFactory.getLogger(HAUtil.class.getName());
+
   private static final String[] HA_SPECIAL_INDEPENDENT_KEYS = new String[]{
 DFS_NAMENODE_RPC_ADDRESS_KEY,
 DFS_NAMENODE_RPC_BIND_HOST_KEY,
@@ -273,8 +277,13 @@ public class HAUtil {
      List<ClientProtocol> namenodes =
   getProxiesForAllNameNodesInNameservice(dfsConf, nsId);
   for (ClientProtocol proxy : namenodes) {
-if (proxy.getHAServiceState().equals(HAServiceState.ACTIVE)) {
-  inAddr = RPC.getServerAddress(proxy);
+try {
+  if (proxy.getHAServiceState().equals(HAServiceState.ACTIVE)) {
+inAddr = RPC.getServerAddress(proxy);
+  }
+} catch (Exception e) {
+  //Ignore the exception while connecting to a namenode.
+  LOG.debug("Error while connecting to namenode", e);
 }
   }
 } else {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
index cc8ead1..46ebb8f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAFsck.java
@@ -75,7 +75,12 @@ public class TestHAFsck {
   
   cluster.transitionToStandby(0);
   cluster.transitionToActive(1);
-  
+
+  runFsck(conf);
+  // Stop one standby namenode, FSCK should still be successful, since there
+  // is one Active namenode available
+  cluster.getNameNode(0).stop();
+
   runFsck(conf);
 } finally {
   if (fs != null) {
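
A hedged sketch of the probing pattern the HAUtil hunk adopts: ask every candidate namenode for its state, ignore the ones that are unreachable, and keep the address of the one that reports active (the types here are illustrative stand-ins for the `ClientProtocol` proxies):

```java
import java.util.List;

public class ActivePickerSketch {
  interface Candidate {
    boolean isActive() throws Exception; // may throw if the node is down
    String address();
  }

  static String pickActive(List<Candidate> candidates) {
    String active = null;
    for (Candidate c : candidates) {
      try {
        if (c.isActive()) {
          active = c.address();
        }
      } catch (Exception e) {
        // Ignore the exception while connecting, as in the fix above.
      }
    }
    return active; // null if no candidate reported active
  }
}
```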





[hadoop] branch branch-3.2 updated (5d3fb0e -> 152cbc6)

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 5d3fb0e  YARN-10200. Add number of containers to RMAppManager summary
 new 853eafa  HDFS-15223. FSCK fails if one namenode is not available. Contributed by Ayush Saxena.
 new 26b51f3  HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.
 new 1e3b0df  HDFS-15158. The number of failed volumes mismatch with volumeFailures of Datanode metrics. Contributed by Yang Yun.
 new 152cbc6  HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error (#1902). Contributed by zhengchenyu.

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../java/org/apache/hadoop/hdfs/DataStreamer.java  |  2 +-
 .../main/java/org/apache/hadoop/hdfs/HAUtil.java   | 15 --
 .../hadoop/hdfs/server/datanode/DataNode.java  | 12 ++---
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../server/datanode/metrics/DataNodeMetrics.java   |  6 +--
 .../server/datanode/TestDataNodeVolumeFailure.java | 26 ++
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 .../hadoop/hdfs/server/namenode/ha/TestHAFsck.java |  7 ++-
 11 files changed, 152 insertions(+), 24 deletions(-)





[hadoop] 03/03: HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error (#1902). Contributed by zhengchenyu.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit d64f6881987085acb927248730bea26d5db959d3
Author: Isa Hekmatizadeh 
AuthorDate: Tue Mar 24 17:47:22 2020 +

HDFS-15219. DFS Client will stuck when ResponseProcessor.run throw Error 
(#1902). Contributed by  zhengchenyu.

(cherry picked from commit d9c4f1129c0814ab61fce6ea8baf4b272f84c252)
---
 .../src/main/java/org/apache/hadoop/hdfs/DataStreamer.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 4c733bf..6637490 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -1184,7 +1184,7 @@ class DataStreamer extends Daemon {
 
 one.releaseBuffer(byteArrayManager);
   }
-} catch (Exception e) {
+} catch (Throwable e) {
   if (!responderClosed) {
 lastException.set(e);
 errorState.setInternalError();
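
The one-word change matters because ResponseProcessor can die with an
Error (for example an OutOfMemoryError) as well as an Exception; with
catch (Exception) the thread exited without setting the error state,
leaving the writing client stuck forever. The underlying pattern, as a
self-contained sketch rather than the DataStreamer code itself:

import java.util.concurrent.atomic.AtomicReference;

// A worker that reports *any* Throwable to its owner instead of dying
// silently; catching only Exception would let Errors slip through.
final class ReportingWorker implements Runnable {
  private final AtomicReference<Throwable> lastFailure =
      new AtomicReference<>();
  private final Runnable task;

  ReportingWorker(Runnable task) {
    this.task = task;
  }

  @Override
  public void run() {
    try {
      task.run();
    } catch (Throwable t) { // includes Error, unlike catch (Exception)
      lastFailure.compareAndSet(null, t);
    }
  }

  Throwable failure() {
    return lastFailure.get();
  }
}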


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/03: HDFS-15158. The number of failed volumes mismatch with volumeFailures of Datanode metrics. Contributed by Yang Yun.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 4891f24a94a7a2a293dc379d23e93907bbb40526
Author: Ayush Saxena 
AuthorDate: Sun Feb 9 23:19:40 2020 +0530

HDFS-15158. The number of failed volumes mismatch with volumeFailures of 
Datanode metrics. Contributed by Yang Yun.

(cherry picked from commit 6191d4b4a0919863fda78e549ab6c60022e3ebc2)
(cherry picked from commit 1e3b0df6abcc1252907c41aaedb3e7e257bce497)
---
 .../hadoop/hdfs/server/datanode/DataNode.java  | 12 +-
 .../server/datanode/metrics/DataNodeMetrics.java   |  6 ++---
 .../server/datanode/TestDataNodeVolumeFailure.java | 26 ++
 3 files changed, 35 insertions(+), 9 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 4cb2d93..62e4262 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -2192,7 +2192,7 @@ public class DataNode extends ReconfigurableBase
 });
   }
 
-  private void handleDiskError(String failedVolumes) {
+  private void handleDiskError(String failedVolumes, int failedNumber) {
 final boolean hasEnoughResources = data.hasEnoughResource();
 LOG.warn("DataNode.handleDiskError on: " +
 "[{}] Keep Running: {}", failedVolumes, hasEnoughResources);
@@ -2201,7 +2201,7 @@ public class DataNode extends ReconfigurableBase
 // shutdown the DN completely.
 int dpError = hasEnoughResources ? DatanodeProtocol.DISK_ERROR  
  : DatanodeProtocol.FATAL_DISK_ERROR;  
-metrics.incrVolumeFailures();
+metrics.incrVolumeFailures(failedNumber);
 
 //inform NameNodes
 for(BPOfferService bpos: blockPoolManager.getAllNamenodeThreads()) {
@@ -3408,8 +3408,8 @@ public class DataNode extends ReconfigurableBase
 }
 
 data.handleVolumeFailures(unhealthyVolumes);
-Set<StorageLocation> unhealthyLocations = new HashSet<>(
-unhealthyVolumes.size());
+int failedNumber = unhealthyVolumes.size();
+Set<StorageLocation> unhealthyLocations = new HashSet<>(failedNumber);
 
 StringBuilder sb = new StringBuilder("DataNode failed volumes:");
 for (FsVolumeSpi vol : unhealthyVolumes) {
@@ -3424,8 +3424,8 @@ public class DataNode extends ReconfigurableBase
   LOG.warn("Error occurred when removing unhealthy storage dirs", e);
 }
 LOG.debug("{}", sb);
-  // send blockreport regarding volume failure
-handleDiskError(sb.toString());
+// send blockreport regarding volume failure
+handleDiskError(sb.toString(), failedNumber);
   }
 
   /**
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
index 58a2f65..00590ac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
@@ -370,9 +370,9 @@ public class DataNodeMetrics {
   remoteBytesRead.incr(size);
 }
   }
-  
-  public void incrVolumeFailures() {
-volumeFailures.incr();
+
+  public void incrVolumeFailures(int size) {
+volumeFailures.incr(size);
   }
 
   public void incrDatanodeNetworkErrors() {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
index 4b4002b..2508eef 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailure.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs.server.datanode;
 
+import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
+import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
 import static org.apache.hadoop.test.PlatformAssumptions.assumeNotWindows;
 import static org.hamcrest.core.Is.is;
 import static org.junit.Assert.assertEquals;
@@ -77,6 +79,7 @@ import 
org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
 import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
 import org.apache.
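
The message is truncated here, but the newly imported MetricsAsserts
helpers show the shape of the added assertion: after failing N volumes in
one pass, the VolumeFailures counter should grow by N, not by 1. A hedged
sketch of such a check (the metric key and helper method are assumptions,
not quotes from the truncated test):

import static org.apache.hadoop.test.MetricsAsserts.getLongCounter;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.metrics2.MetricsRecordBuilder;

final class VolumeFailureCheck {
  // Assert the counter moved by exactly the number of failed volumes.
  static void assertVolumeFailures(DataNode dn, long expected) {
    MetricsRecordBuilder rb = getMetrics(dn.getMetrics().name());
    assertEquals(expected, getLongCounter("VolumeFailures", rb));
  }
}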

[hadoop] 02/04: HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 26b51f3e2295b9a85ee1fc9a7f475cb3dc181933
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 10:43:35 2019 +0800

HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.

(cherry picked from commit 2b452b4e6063072b2bec491edd3f412eb7ac21f3)
---
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 5 files changed, 98 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
index 92476d7..58dc82d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
@@ -47,6 +47,7 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private final long jitter;
   private final String dirPath;
   private Thread refreshUsed;
+  private boolean shouldFirstRefresh;
 
   /**
* This is the constructor used by the builder.
@@ -79,16 +80,30 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
 this.refreshInterval = interval;
 this.jitter = jitter;
 this.used.set(initialUsed);
+this.shouldFirstRefresh = true;
   }
 
   void init() {
 if (used.get() < 0) {
   used.set(0);
+  if (!shouldFirstRefresh) {
+// Skip initial refresh operation, so we need to do first refresh
+// operation immediately in refresh thread.
+initRefeshThread(true);
+return;
+  }
   refresh();
 }
+initRefeshThread(false);
+  }
 
+  /**
+   * RunImmediately should set true, if we skip the first refresh.
+   * @param runImmediately The param default should be false.
+   */
+  private void initRefeshThread (boolean runImmediately) {
 if (refreshInterval > 0) {
-  refreshUsed = new Thread(new RefreshThread(this),
+  refreshUsed = new Thread(new RefreshThread(this, runImmediately),
   "refreshUsed-" + dirPath);
   refreshUsed.setDaemon(true);
   refreshUsed.start();
@@ -101,6 +116,14 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   protected abstract void refresh();
 
   /**
+   * Reset that if we need to do the first refresh.
+   * @param shouldFirstRefresh The flag value to set.
+   */
+  protected void setShouldFirstRefresh(boolean shouldFirstRefresh) {
+this.shouldFirstRefresh = shouldFirstRefresh;
+  }
+
+  /**
* @return an estimate of space used in the directory path.
*/
   @Override public long getUsed() throws IOException {
@@ -156,9 +179,11 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private static final class RefreshThread implements Runnable {
 
 final CachingGetSpaceUsed spaceUsed;
+private boolean runImmediately;
 
-RefreshThread(CachingGetSpaceUsed spaceUsed) {
+RefreshThread(CachingGetSpaceUsed spaceUsed, boolean runImmediately) {
   this.spaceUsed = spaceUsed;
+  this.runImmediately = runImmediately;
 }
 
 @Override
@@ -176,7 +201,10 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   }
   // Make sure that after the jitter we didn't end up at 0.
   refreshInterval = Math.max(refreshInterval, 1);
-  Thread.sleep(refreshInterval);
+  if (!runImmediately) {
+Thread.sleep(refreshInterval);
+  }
+  runImmediately = false;
   // update the used variable
   spaceUsed.refresh();
 } catch (InterruptedException e) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 78a5cfc..578c390 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -661,5 +661,11 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> 
extends FSDatasetMBean {
*/
   AutoCloseableLock acquireDatasetLock();
 
+  /**
+   * Deep copy the replica info belonging to given block pool.
+   * @param bpid Specified block pool id.
+   * @return A set of replica info.
+   * @throws I
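
The point of the new flag: a subclass whose refresh() is expensive (here,
ReplicaCachingGetSpaceUsed deep-copies replica info under the dataset
lock) can opt out of the synchronous first refresh in init() and let the
refresh thread run it immediately instead. A sketch of how a subclass
opts in; the class name is illustrative and the Builder-style constructor
is an assumption based on the existing CachingGetSpaceUsed subclasses:

import java.io.IOException;
import org.apache.hadoop.fs.CachingGetSpaceUsed;

public class DeferredSpaceUsed extends CachingGetSpaceUsed {
  public DeferredSpaceUsed(CachingGetSpaceUsed.Builder builder)
      throws IOException {
    super(builder);
    // Skip the blocking refresh inside init(); the refresh thread will
    // run one refresh immediately (runImmediately == true) instead.
    setShouldFirstRefresh(false);
  }

  @Override
  protected void refresh() {
    // Compute and cache the space used here.
  }
}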

[hadoop] 01/03: HDFS-14986. ReplicaCachingGetSpaceUsed throws ConcurrentModificationException. Contributed by Aiphago.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit f1a19b7a3f590f8b1876deb1bae6dfb4bf840edb
Author: Yiqun Lin 
AuthorDate: Thu Nov 28 10:43:35 2019 +0800

HDFS-14986. ReplicaCachingGetSpaceUsed throws 
ConcurrentModificationException. Contributed by Aiphago.

(cherry picked from commit 2b452b4e6063072b2bec491edd3f412eb7ac21f3)
---
 .../org/apache/hadoop/fs/CachingGetSpaceUsed.java  | 34 +++--
 .../server/datanode/fsdataset/FsDatasetSpi.java|  6 +++
 .../datanode/fsdataset/impl/FsDatasetImpl.java | 12 ++---
 .../fsdataset/impl/ReplicaCachingGetSpaceUsed.java |  1 +
 .../impl/TestReplicaCachingGetSpaceUsed.java   | 55 ++
 5 files changed, 98 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
index 92476d7..58dc82d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CachingGetSpaceUsed.java
@@ -47,6 +47,7 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private final long jitter;
   private final String dirPath;
   private Thread refreshUsed;
+  private boolean shouldFirstRefresh;
 
   /**
* This is the constructor used by the builder.
@@ -79,16 +80,30 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
 this.refreshInterval = interval;
 this.jitter = jitter;
 this.used.set(initialUsed);
+this.shouldFirstRefresh = true;
   }
 
   void init() {
 if (used.get() < 0) {
   used.set(0);
+  if (!shouldFirstRefresh) {
+// Skip initial refresh operation, so we need to do first refresh
+// operation immediately in refresh thread.
+initRefeshThread(true);
+return;
+  }
   refresh();
 }
+initRefeshThread(false);
+  }
 
+  /**
+   * RunImmediately should set true, if we skip the first refresh.
+   * @param runImmediately The param default should be false.
+   */
+  private void initRefeshThread (boolean runImmediately) {
 if (refreshInterval > 0) {
-  refreshUsed = new Thread(new RefreshThread(this),
+  refreshUsed = new Thread(new RefreshThread(this, runImmediately),
   "refreshUsed-" + dirPath);
   refreshUsed.setDaemon(true);
   refreshUsed.start();
@@ -101,6 +116,14 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   protected abstract void refresh();
 
   /**
+   * Reset that if we need to do the first refresh.
+   * @param shouldFirstRefresh The flag value to set.
+   */
+  protected void setShouldFirstRefresh(boolean shouldFirstRefresh) {
+this.shouldFirstRefresh = shouldFirstRefresh;
+  }
+
+  /**
* @return an estimate of space used in the directory path.
*/
   @Override public long getUsed() throws IOException {
@@ -156,9 +179,11 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   private static final class RefreshThread implements Runnable {
 
 final CachingGetSpaceUsed spaceUsed;
+private boolean runImmediately;
 
-RefreshThread(CachingGetSpaceUsed spaceUsed) {
+RefreshThread(CachingGetSpaceUsed spaceUsed, boolean runImmediately) {
   this.spaceUsed = spaceUsed;
+  this.runImmediately = runImmediately;
 }
 
 @Override
@@ -176,7 +201,10 @@ public abstract class CachingGetSpaceUsed implements 
Closeable, GetSpaceUsed {
   }
   // Make sure that after the jitter we didn't end up at 0.
   refreshInterval = Math.max(refreshInterval, 1);
-  Thread.sleep(refreshInterval);
+  if (!runImmediately) {
+Thread.sleep(refreshInterval);
+  }
+  runImmediately = false;
   // update the used variable
   spaceUsed.refresh();
 } catch (InterruptedException e) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 78a5cfc..578c390 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -661,5 +661,11 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> 
extends FSDatasetMBean {
*/
   AutoCloseableLock acquireDatasetLock();
 
+  /**
+   * Deep copy the replica info belonging to given block pool.
+   * @param bpid Specified block pool id.
+   * @return A set of replica info.
+   * @throws I

[hadoop] 03/03: HDFS-14647. NPE during secure namenode startup. Contributed by Fengnan Li.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6a4c3fad6681efec58f89ce91fd4e2d7824507fc
Author: Ayush Saxena 
AuthorDate: Thu Jul 25 06:51:07 2019 +0530

HDFS-14647. NPE during secure namenode startup. Contributed by Fengnan Li.

(cherry picked from commit 62deab17a33cef723d73f8d8b9e37e5bddbc1813)
(cherry picked from commit 6cd9290401735a2c33a0ff0ae7324876ef9615e9)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  4 +++
 .../hadoop/hdfs/server/common/TestJspHelper.java   | 34 ++
 2 files changed, 38 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 6ba26ec..4741c6c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -675,6 +675,10 @@ public class NameNode extends ReconfigurableBase implements
   @Override
   public void verifyToken(DelegationTokenIdentifier id, byte[] password)
   throws IOException {
+// during startup namesystem is null, let client retry
+if (namesystem == null) {
+  throw new RetriableException("Namenode is in startup mode");
+}
 namesystem.verifyToken(id, password);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
index 5a1661c..1aff766 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
@@ -21,12 +21,14 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
 import org.apache.hadoop.hdfs.web.resources.DoAsParam;
 import org.apache.hadoop.hdfs.web.resources.UserParam;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
 import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ipc.RetriableException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.security.authorize.AuthorizationException;
@@ -36,9 +38,11 @@ import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.mockito.Mockito;
 
 import javax.servlet.ServletContext;
 import javax.servlet.http.HttpServletRequest;
@@ -371,8 +375,38 @@ public class TestJspHelper {
 }
   }
 
+  @Test
+  public void testGetUgiDuringStartup() throws Exception {
+conf.set(DFSConfigKeys.FS_DEFAULT_NAME_KEY, "hdfs://localhost:4321/");
+ServletContext context = mock(ServletContext.class);
+String realUser = "TheDoctor";
+String user = "TheNurse";
+conf.set(DFSConfigKeys.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
+UserGroupInformation.setConfiguration(conf);
+HttpServletRequest request;
 
+Text ownerText = new Text(user);
+DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(
+ownerText, ownerText, new Text(realUser));
+Token<DelegationTokenIdentifier> token =
+new Token<DelegationTokenIdentifier>(dtId,
+new DummySecretManager(0, 0, 0, 0));
+String tokenString = token.encodeToUrlString();
 
+// token with auth-ed user
+request = getMockRequest(realUser, null, null);
+when(request.getParameter(JspHelper.DELEGATION_PARAMETER_NAME)).thenReturn(
+tokenString);
+
+NameNode mockNN = mock(NameNode.class);
+Mockito.doCallRealMethod().when(mockNN)
+.verifyToken(Mockito.any(), Mockito.any());
+when(context.getAttribute("name.node")).thenReturn(mockNN);
+
+LambdaTestUtils.intercept(RetriableException.class,
+"Namenode is in startup mode",
+() -> JspHelper.getUGI(context, request, conf));
+  }
 
   private HttpServletRequest getMockRequest(String remoteUser, String user, 
String doAs) {
 HttpServletRequest request = mock(HttpServletRequest.class);
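
The production change is the four-line guard: while namesystem is still
null during startup, WebHDFS token verification now surfaces a
RetriableException, which the IPC retry machinery treats as transient,
instead of a NullPointerException. The guard pattern in isolation, as a
sketch:

import java.io.IOException;
import org.apache.hadoop.ipc.RetriableException;

final class StartupGuard {
  // Fail with a *retriable* error while the service is initializing, so
  // clients back off and retry rather than dying on an NPE.
  static void checkInitialized(Object namesystem) throws IOException {
    if (namesystem == null) {
      throw new RetriableException("Namenode is in startup mode");
    }
  }
}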


-

[hadoop] 03/03: HDFS-14647. NPE during secure namenode startup. Contributed by Fengnan Li.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 6cd9290401735a2c33a0ff0ae7324876ef9615e9
Author: Ayush Saxena 
AuthorDate: Thu Jul 25 06:51:07 2019 +0530

HDFS-14647. NPE during secure namenode startup. Contributed by Fengnan Li.

(cherry picked from commit 62deab17a33cef723d73f8d8b9e37e5bddbc1813)
---
 .../hadoop/hdfs/server/namenode/NameNode.java  |  4 +++
 .../hadoop/hdfs/server/common/TestJspHelper.java   | 34 ++
 2 files changed, 38 insertions(+)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index ba4b730..0fff970 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -679,6 +679,10 @@ public class NameNode extends ReconfigurableBase implements
   @Override
   public void verifyToken(DelegationTokenIdentifier id, byte[] password)
   throws IOException {
+// during startup namesystem is null, let client retry
+if (namesystem == null) {
+  throw new RetriableException("Namenode is in startup mode");
+}
 namesystem.verifyToken(id, password);
   }
 
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
index 5a1661c..1aff766 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
@@ -21,12 +21,14 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
 import org.apache.hadoop.hdfs.web.resources.DoAsParam;
 import org.apache.hadoop.hdfs.web.resources.UserParam;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
 import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ipc.RetriableException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.security.authorize.AuthorizationException;
@@ -36,9 +38,11 @@ import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.mockito.Mockito;
 
 import javax.servlet.ServletContext;
 import javax.servlet.http.HttpServletRequest;
@@ -371,8 +375,38 @@ public class TestJspHelper {
 }
   }
 
+  @Test
+  public void testGetUgiDuringStartup() throws Exception {
+conf.set(DFSConfigKeys.FS_DEFAULT_NAME_KEY, "hdfs://localhost:4321/");
+ServletContext context = mock(ServletContext.class);
+String realUser = "TheDoctor";
+String user = "TheNurse";
+conf.set(DFSConfigKeys.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
+UserGroupInformation.setConfiguration(conf);
+HttpServletRequest request;
 
+Text ownerText = new Text(user);
+DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(
+ownerText, ownerText, new Text(realUser));
+Token<DelegationTokenIdentifier> token =
+new Token<DelegationTokenIdentifier>(dtId,
+new DummySecretManager(0, 0, 0, 0));
+String tokenString = token.encodeToUrlString();
 
+// token with auth-ed user
+request = getMockRequest(realUser, null, null);
+when(request.getParameter(JspHelper.DELEGATION_PARAMETER_NAME)).thenReturn(
+tokenString);
+
+NameNode mockNN = mock(NameNode.class);
+Mockito.doCallRealMethod().when(mockNN)
+.verifyToken(Mockito.any(), Mockito.any());
+when(context.getAttribute("name.node")).thenReturn(mockNN);
+
+LambdaTestUtils.intercept(RetriableException.class,
+"Namenode is in startup mode",
+() -> JspHelper.getUGI(context, request, conf));
+  }
 
   private HttpServletRequest getMockRequest(String remoteUser, String user, 
String doAs) {
 HttpServletRequest request = mock(HttpServletRequest.class);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoo

[hadoop] branch branch-3.2 updated (152cbc6 -> 6cd9290)

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 152cbc6  HDFS-15219. DFS Client will stuck when ResponseProcessor.run 
throw Error (#1902). Contributed by  zhengchenyu.
 new eca7bc7  HDFS-14006. Refactor name node to allow different token 
verification implementations. Contributed by CR Hota.
 new ba6b3a3  HDFS-14434.  Ignore user.name query parameter in secure 
WebHDFS.  Contributed by KWON BYUNGCHANG
 new 6cd9290  HDFS-14647. NPE during secure namenode startup. Contributed 
by Fengnan Li.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  |  16 +-
 .../hadoop/hdfs/server/common/JspHelper.java   |  16 +-
 ...pPutFailedException.java => TokenVerifier.java} |  24 ++-
 .../hadoop/hdfs/server/namenode/NameNode.java  |  13 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |   6 +
 .../hadoop/hdfs/server/common/TestJspHelper.java   | 122 
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  | 218 +
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java |  47 +++--
 8 files changed, 303 insertions(+), 159 deletions(-)
 copy 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/{HttpPutFailedException.java
 => TokenVerifier.java} (63%)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/03: HDFS-14006. Refactor name node to allow different token verification implementations. Contributed by CR Hota.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit eca7bc7ac4292ff08b9e1ea2e22116d6e58e8b95
Author: Giovanni Matteo Fumarola 
AuthorDate: Fri Dec 14 11:10:54 2018 -0800

HDFS-14006. Refactor name node to allow different token verification 
implementations. Contributed by CR Hota.

(cherry picked from commit 00d5e631b596f8712600879366e5283829e7ee5d)
---
 .../hadoop/hdfs/server/common/JspHelper.java   |  8 ++---
 .../hadoop/hdfs/server/common/TokenVerifier.java   | 35 ++
 .../hadoop/hdfs/server/namenode/NameNode.java  |  9 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  6 
 4 files changed, 53 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
index 498a093..eb488e8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
@@ -23,7 +23,6 @@ import org.slf4j.LoggerFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
 import org.apache.hadoop.hdfs.web.resources.DelegationParam;
 import org.apache.hadoop.hdfs.web.resources.DoAsParam;
@@ -176,10 +175,11 @@ public class JspHelper {
 DelegationTokenIdentifier id = new DelegationTokenIdentifier();
 id.readFields(in);
 if (context != null) {
-  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
-  if (nn != null) {
+  final TokenVerifier<DelegationTokenIdentifier> tokenVerifier =
+  NameNodeHttpServer.getTokenVerifierFromContext(context);
+  if (tokenVerifier != null) {
 // Verify the token.
-nn.getNamesystem().verifyToken(id, token.getPassword());
+tokenVerifier.verifyToken(id, token.getPassword());
   }
 }
 UserGroupInformation ugi = id.getUser();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
new file mode 100644
index 000..5691f0c
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.IOException;
+import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
+
+/**
+ * Interface to verify delegation tokens passed through WebHDFS.
+ * Implementations are intercepted by JspHelper that pass delegation token
+ * for verification.
+ */
+public interface TokenVerifier<T extends AbstractDelegationTokenIdentifier> {
+
+  /* Verify delegation token passed through WebHDFS
+   * Name node, Router implement this for JspHelper to verify token
+   */
+  void verifyToken(T t, byte[] password) throws IOException;
+
+}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 4556b89..ba4b730 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.protocol.HdfsConstants.StoragePolicySatisfierMode;
+import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTok
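
The interface is the heart of the refactor: JspHelper now calls whatever
TokenVerifier the HTTP server registered in the servlet context, so the
NameNode (and, later, the RBF Router) can each plug in their own check. A
sketch of an implementor, with an illustrative class name:

import java.io.IOException;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.common.TokenVerifier;

class SketchVerifier implements TokenVerifier<DelegationTokenIdentifier> {
  @Override
  public void verifyToken(DelegationTokenIdentifier id, byte[] password)
      throws IOException {
    // A real implementation delegates to its secret manager, as the
    // NameNode does via namesystem.verifyToken(id, password).
  }
}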

[hadoop] branch branch-3.1 updated (d64f688 -> 6a4c3fa)

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from d64f688  HDFS-15219. DFS Client will stuck when ResponseProcessor.run 
throw Error (#1902). Contributed by  zhengchenyu.
 new 4aa7734  HDFS-14006. Refactor name node to allow different token 
verification implementations. Contributed by CR Hota.
 new b837431  HDFS-14434.  Ignore user.name query parameter in secure 
WebHDFS.  Contributed by KWON BYUNGCHANG
 new 6a4c3fa  HDFS-14647. NPE during secure namenode startup. Contributed 
by Fengnan Li.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  |  16 +-
 .../hadoop/hdfs/server/common/JspHelper.java   |  16 +-
 ...pGetFailedException.java => TokenVerifier.java} |  26 ++-
 .../hadoop/hdfs/server/namenode/NameNode.java  |  13 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |   6 +
 .../hadoop/hdfs/server/common/TestJspHelper.java   | 122 
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  | 218 +
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java |  47 +++--
 8 files changed, 303 insertions(+), 161 deletions(-)
 copy 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/{HttpGetFailedException.java
 => TokenVerifier.java} (63%)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 02/03: HDFS-14434. Ignore user.name query parameter in secure WebHDFS. Contributed by KWON BYUNGCHANG

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ba6b3a384863b57bc7eeeb736950f544e6ed8d6d
Author: Eric Yang 
AuthorDate: Tue May 28 17:31:35 2019 -0400

HDFS-14434.  Ignore user.name query parameter in secure WebHDFS.
 Contributed by KWON BYUNGCHANG

(cherry picked from commit d78854b928bb877f26b11b5b212a100a79941f35)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
---
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  |  16 +-
 .../hadoop/hdfs/server/common/JspHelper.java   |   8 +-
 .../hadoop/hdfs/server/common/TestJspHelper.java   |  88 +
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  | 218 +
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java |  47 +++--
 5 files changed, 236 insertions(+), 141 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index b316bf1..90b15ff 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -167,6 +167,7 @@ public class WebHdfsFileSystem extends FileSystem
   private InetSocketAddress nnAddrs[];
   private int currentNNAddrIndex;
   private boolean disallowFallbackToInsecureCluster;
+  private boolean isInsecureCluster;
   private String restCsrfCustomHeader;
  private Set<String> restCsrfMethodsToIgnore;
 
@@ -279,6 +280,7 @@ public class WebHdfsFileSystem extends FileSystem
 
 this.workingDir = makeQualified(new Path(getHomeDirectoryString(ugi)));
 this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled();
+this.isInsecureCluster = !this.canRefreshDelegationToken;
 this.disallowFallbackToInsecureCluster = !conf.getBoolean(
 CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY,
 
CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT);
@@ -364,6 +366,7 @@ public class WebHdfsFileSystem extends FileSystem
 LOG.debug("Fetched new token: {}", token);
   } else { // security is disabled
 canRefreshDelegationToken = false;
+isInsecureCluster = true;
   }
 }
   }
@@ -410,8 +413,7 @@ public class WebHdfsFileSystem extends FileSystem
 if (cachedHomeDirectory == null) {
   final HttpOpParam.Op op = GetOpParam.Op.GETHOMEDIRECTORY;
   try {
-String pathFromDelegatedFS = new FsPathResponseRunner<String>(op, null,
-new UserParam(ugi)) {
+String pathFromDelegatedFS = new FsPathResponseRunner<String>(op, 
null){
   @Override
String decodeResponse(Map<?, ?> json) throws IOException {
 return JsonUtilClient.getPath(json);
@@ -573,7 +575,8 @@ public class WebHdfsFileSystem extends FileSystem
 return url;
   }
 
-  Param<?,?>[] getAuthParameters(final HttpOpParam.Op op) throws IOException {
+  private synchronized Param<?,?>[] getAuthParameters(final HttpOpParam.Op op)
+  throws IOException {
List<Param<?,?>> authParams = Lists.newArrayList();
 // Skip adding delegation token for token operations because these
 // operations require authentication.
@@ -590,7 +593,12 @@ public class WebHdfsFileSystem extends FileSystem
 authParams.add(new DoAsParam(userUgi.getShortUserName()));
 userUgi = realUgi;
   }
-  authParams.add(new UserParam(userUgi.getShortUserName()));
+  UserParam userParam = new UserParam((userUgi.getShortUserName()));
+
+  //in insecure, use user.name parameter, in secure, use spnego auth
+  if(isInsecureCluster) {
+authParams.add(userParam);
+  }
 }
return authParams.toArray(new Param<?,?>[0]);
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
index eb488e8..2c65c3f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
@@ -118,12 +118,9 @@ public class JspHelper {
   remoteUser = request.getRemoteUser();
   final String tokenString = 
request.getParameter(DELEGATION_PARAMETER_NAME);
   if (tokenString != null) {
-// Token-based connections need only verify the effective user, and
-// disallow proxying to different user.  Proxy authorization checks
-// are not required since the checks apply to issuing a token.
+
+// user.name, doas param i
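
The message is cut off here, but the client-side effect is simple to
state: user.name is attached only when the cluster is insecure, while
secure clusters rely on SPNEGO alone. Illustrative request URLs (host and
user are made up; 9870 is the default NameNode HTTP port in Hadoop 3):

// Insecure cluster: identity travels as a query parameter.
String insecureUrl =
    "http://nn.example.com:9870/webhdfs/v1/tmp?op=LISTSTATUS&user.name=alice";
// Secure cluster after this change: no user.name; SPNEGO supplies identity.
String secureUrl =
    "http://nn.example.com:9870/webhdfs/v1/tmp?op=LISTSTATUS";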

[hadoop] 02/03: HDFS-14434. Ignore user.name query parameter in secure WebHDFS. Contributed by KWON BYUNGCHANG

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b837431a08524145865f9bd542527d466b1f774b
Author: Eric Yang 
AuthorDate: Tue May 28 17:31:35 2019 -0400

HDFS-14434.  Ignore user.name query parameter in secure WebHDFS.
 Contributed by KWON BYUNGCHANG

(cherry picked from commit d78854b928bb877f26b11b5b212a100a79941f35)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java

(cherry picked from commit ba6b3a384863b57bc7eeeb736950f544e6ed8d6d)
---
 .../apache/hadoop/hdfs/web/WebHdfsFileSystem.java  |  16 +-
 .../hadoop/hdfs/server/common/JspHelper.java   |   8 +-
 .../hadoop/hdfs/server/common/TestJspHelper.java   |  88 +
 .../apache/hadoop/hdfs/web/TestWebHdfsTokens.java  | 218 +
 .../org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java |  47 +++--
 5 files changed, 236 insertions(+), 141 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 6fa7c97..37b66e6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -167,6 +167,7 @@ public class WebHdfsFileSystem extends FileSystem
   private InetSocketAddress nnAddrs[];
   private int currentNNAddrIndex;
   private boolean disallowFallbackToInsecureCluster;
+  private boolean isInsecureCluster;
   private String restCsrfCustomHeader;
  private Set<String> restCsrfMethodsToIgnore;
 
@@ -280,6 +281,7 @@ public class WebHdfsFileSystem extends FileSystem
 
 this.workingDir = makeQualified(new Path(getHomeDirectoryString(ugi)));
 this.canRefreshDelegationToken = UserGroupInformation.isSecurityEnabled();
+this.isInsecureCluster = !this.canRefreshDelegationToken;
 this.disallowFallbackToInsecureCluster = !conf.getBoolean(
 CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY,
 
CommonConfigurationKeys.IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT);
@@ -365,6 +367,7 @@ public class WebHdfsFileSystem extends FileSystem
 LOG.debug("Fetched new token: {}", token);
   } else { // security is disabled
 canRefreshDelegationToken = false;
+isInsecureCluster = true;
   }
 }
   }
@@ -411,8 +414,7 @@ public class WebHdfsFileSystem extends FileSystem
 if (cachedHomeDirectory == null) {
   final HttpOpParam.Op op = GetOpParam.Op.GETHOMEDIRECTORY;
   try {
-String pathFromDelegatedFS = new FsPathResponseRunner<String>(op, null,
-new UserParam(ugi)) {
+String pathFromDelegatedFS = new FsPathResponseRunner<String>(op, 
null){
   @Override
String decodeResponse(Map<?, ?> json) throws IOException {
 return JsonUtilClient.getPath(json);
@@ -574,7 +576,8 @@ public class WebHdfsFileSystem extends FileSystem
 return url;
   }
 
-  Param<?,?>[] getAuthParameters(final HttpOpParam.Op op) throws IOException {
+  private synchronized Param<?,?>[] getAuthParameters(final HttpOpParam.Op op)
+  throws IOException {
List<Param<?,?>> authParams = Lists.newArrayList();
 // Skip adding delegation token for token operations because these
 // operations require authentication.
@@ -591,7 +594,12 @@ public class WebHdfsFileSystem extends FileSystem
 authParams.add(new DoAsParam(userUgi.getShortUserName()));
 userUgi = realUgi;
   }
-  authParams.add(new UserParam(userUgi.getShortUserName()));
+  UserParam userParam = new UserParam((userUgi.getShortUserName()));
+
+  //in insecure, use user.name parameter, in secure, use spnego auth
+  if(isInsecureCluster) {
+authParams.add(userParam);
+  }
 }
return authParams.toArray(new Param<?,?>[0]);
   }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
index 2d1d736..e56f1e1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
@@ -118,12 +118,9 @@ public class JspHelper {
   remoteUser = request.getRemoteUser();
   final String tokenString = 
request.getParameter(DELEGATION_PARAMETER_NAME);
   if (tokenString != null) {
-// Token-based connections need only verify the effective user, and
-// disallow proxying to different user.  Proxy authorization checks
-// are not required si

[hadoop] 01/03: HDFS-14006. Refactor name node to allow different token verification implementations. Contributed by CR Hota.

2020-03-25 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 4aa7734fc70d3f1ad2e9cdf7b0fc4d86616442b1
Author: Giovanni Matteo Fumarola 
AuthorDate: Fri Dec 14 11:10:54 2018 -0800

HDFS-14006. Refactor name node to allow different token verification 
implementations. Contributed by CR Hota.

(cherry picked from commit 00d5e631b596f8712600879366e5283829e7ee5d)
(cherry picked from commit eca7bc7ac4292ff08b9e1ea2e22116d6e58e8b95)

 Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
---
 .../hadoop/hdfs/server/common/JspHelper.java   |  8 ++---
 .../hadoop/hdfs/server/common/TokenVerifier.java   | 35 ++
 .../hadoop/hdfs/server/namenode/NameNode.java  |  9 +-
 .../hdfs/server/namenode/NameNodeHttpServer.java   |  6 
 4 files changed, 53 insertions(+), 5 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
index 637c679..2d1d736 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
@@ -23,7 +23,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer;
 import org.apache.hadoop.hdfs.web.resources.DelegationParam;
 import org.apache.hadoop.hdfs.web.resources.DoAsParam;
@@ -176,10 +175,11 @@ public class JspHelper {
 DelegationTokenIdentifier id = new DelegationTokenIdentifier();
 id.readFields(in);
 if (context != null) {
-  final NameNode nn = NameNodeHttpServer.getNameNodeFromContext(context);
-  if (nn != null) {
+  final TokenVerifier<DelegationTokenIdentifier> tokenVerifier =
+  NameNodeHttpServer.getTokenVerifierFromContext(context);
+  if (tokenVerifier != null) {
 // Verify the token.
-nn.getNamesystem().verifyToken(id, token.getPassword());
+tokenVerifier.verifyToken(id, token.getPassword());
   }
 }
 UserGroupInformation ugi = id.getUser();
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
new file mode 100644
index 000..5691f0c
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TokenVerifier.java
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.IOException;
+import 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
+
+/**
+ * Interface to verify delegation tokens passed through WebHDFS.
+ * Implementations are intercepted by JspHelper that pass delegation token
+ * for verification.
+ */
+public interface TokenVerifier<T extends AbstractDelegationTokenIdentifier> {
+
+  /* Verify delegation token passed through WebHDFS
+   * Name node, Router implement this for JspHelper to verify token
+   */
+  void verifyToken(T t, byte[] password) throws IOException;
+
+}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index e577185..6ba26ec 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;