[hadoop] branch trunk updated: HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.

2019-07-22 Thread ayushsaxena
This is an automated email from the ASF dual-hosted git repository.

ayushsaxena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 377f95b  HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.
377f95b is described below

commit 377f95bbe8d2d171b5d7b0bfa7559e67ca4aae46
Author: Ayush Saxena 
AuthorDate: Tue Jul 23 08:37:55 2019 +0530

HDFS-13693. Remove unnecessary search in INodeDirectory.addChild during image loading. Contributed by Lisheng Sun.
---
 .../hdfs/server/namenode/FSImageFormatPBINode.java   |  4 +++-
 .../hadoop/hdfs/server/namenode/INodeDirectory.java  | 16 ++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
index bc455e0..6825a5c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
@@ -269,7 +269,7 @@ public final class FSImageFormatPBINode {
             + "name before upgrading to this release.");
       }
       // NOTE: This does not update space counts for parents
-      if (!parent.addChild(child)) {
+      if (!parent.addChildAtLoading(child)) {
         return;
       }
       dir.cacheName(child);
@@ -551,6 +551,8 @@ public final class FSImageFormatPBINode {
           ++numImageErrors;
         }
         if (!inode.isReference()) {
+          // Serialization must ensure that children are in order, related
+          // to HDFS-13693
           b.addChildren(inode.getId());
         } else {
           refList.add(inode.asReference());
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
index 433abcb..28eb3d2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
@@ -573,6 +573,22 @@ public class INodeDirectory extends INodeWithAdditionalFields
   }
 
   /**
+   * During image loading, the search is unnecessary since the insert position
+   * should always be at the end of the map given the sequence they are
+   * serialized on disk.
+   */
+  public boolean addChildAtLoading(INode node) {
+    int pos;
+    if (!node.isReference()) {
+      pos = (children == null) ? (-1) : (-children.size() - 1);
+      addChild(node, pos);
+      return true;
+    } else {
+      return addChild(node);
+    }
+  }
+
+  /**
    * Add the node to the children list at the given insertion point.
    * The basic add method which actually calls children.add(..).
    */
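
The negative pos above follows the Collections.binarySearch convention used by INodeDirectory's child search: an absent key is reported as -(insertionPoint) - 1, so pos = -children.size() - 1 decodes to an append at the end without searching. A runnable sketch of that encoding (the decode helper and demo class are illustrative, not from the patch):

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;

  public class InsertionPointDemo {
    // Decode a binarySearch-style negative result into the insert index.
    static int decode(int pos) {
      return -pos - 1;
    }

    public static void main(String[] args) {
      List<String> children = new ArrayList<>(List.of("a", "b", "d"));
      int pos = Collections.binarySearch(children, "c"); // not found: -3
      children.add(decode(pos), "c");                    // insert at index 2
      // During image loading children arrive already sorted, so the patch
      // skips the search and uses pos = -children.size() - 1 (append).
      System.out.println(children); // [a, b, c, d]
    }
  }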





[hadoop] branch branch-2 updated: MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

2019-07-22 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 2cbd7eb  MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build
2cbd7eb is described below

commit 2cbd7eb5b658d6633e285f598951d52b9d626f0d
Author: pingsutw 
AuthorDate: Sun Jul 14 00:27:46 2019 +0800

MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

This closes #1089

Signed-off-by: Akira Ajisaka 
(cherry picked from commit ee87e9a42e4ff1f27a6f1e5b7c7de97f8989d9b2)
---
 .../src/test/java/org/apache/hadoop/hdfs/NNBench.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
index 29eac43..d86b824 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
@@ -669,7 +669,7 @@ public class NNBench extends Configured implements Tool {
   long startTime = getConf().getLong("test.nnbench.starttime", 0l);
   long currentTime = System.currentTimeMillis();
   long sleepTime = startTime - currentTime;
-  boolean retVal = false;
+  boolean retVal = true;
   
   // If the sleep time is greater than 0, then sleep and return
   if (sleepTime > 0) {





[hadoop] branch branch-3.1 updated (3ff2148 -> 1a2aba8)

2019-07-22 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a change to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 3ff2148  YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
 add 1a2aba8  MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

No new revisions were added by this update.

Summary of changes:
 .../src/test/java/org/apache/hadoop/hdfs/NNBench.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)





[hadoop] branch branch-3.2 updated: MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

2019-07-22 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 772cacd  MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build
772cacd is described below

commit 772cacdbdd7d85519161a18de05a4c13e2c8b4ef
Author: pingsutw 
AuthorDate: Sun Jul 14 00:27:46 2019 +0800

MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

This closes #1089

Signed-off-by: Akira Ajisaka 
(cherry picked from commit ee87e9a42e4ff1f27a6f1e5b7c7de97f8989d9b2)
---
 .../src/test/java/org/apache/hadoop/hdfs/NNBench.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
index 2346c3c..e339d48 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
@@ -668,7 +668,7 @@ public class NNBench extends Configured implements Tool {
   long startTime = getConf().getLong("test.nnbench.starttime", 0l);
   long currentTime = System.currentTimeMillis();
   long sleepTime = startTime - currentTime;
-  boolean retVal = false;
+  boolean retVal = true;
   
   // If the sleep time is greater than 0, then sleep and return
   if (sleepTime > 0) {





[hadoop] branch trunk updated: MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

2019-07-22 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ee87e9a  MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build
ee87e9a is described below

commit ee87e9a42e4ff1f27a6f1e5b7c7de97f8989d9b2
Author: pingsutw 
AuthorDate: Sun Jul 14 00:27:46 2019 +0800

MAPREDUCE-7076. TestNNBench#testNNBenchCreateReadAndDelete failing in our internal build

This closes #1089

Signed-off-by: Akira Ajisaka 
---
 .../src/test/java/org/apache/hadoop/hdfs/NNBench.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
index 2346c3c..e339d48 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
@@ -668,7 +668,7 @@ public class NNBench extends Configured implements Tool {
   long startTime = getConf().getLong("test.nnbench.starttime", 0l);
   long currentTime = System.currentTimeMillis();
   long sleepTime = startTime - currentTime;
-  boolean retVal = false;
+  boolean retVal = true;
   
   // If the sleep time is greater than 0, then sleep and return
   if (sleepTime > 0) {
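
The one-character fix flips the barrier's default result: each worker sleeps until the shared test.nnbench.starttime and should report success when it wakes normally, but retVal was initialized to false and returned as-is. A hedged reconstruction of the barrier shape after the fix (StartTimeBarrierDemo and the exact method body are illustrative, not copied from NNBench):

  public class StartTimeBarrierDemo {
    // Illustrative reconstruction of the start-time barrier after the fix;
    // the real method lives in NNBench.
    static boolean barrier(long startTime) {
      long sleepTime = startTime - System.currentTimeMillis();
      boolean retVal = true; // was false before MAPREDUCE-7076
      if (sleepTime > 0) {
        try {
          Thread.sleep(sleepTime);
        } catch (InterruptedException e) {
          retVal = false; // interrupted: report failure
        }
      }
      return retVal;
    }

    public static void main(String[] args) {
      System.out.println(barrier(System.currentTimeMillis() + 100)); // true
    }
  }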





[hadoop] branch ozone-0.4.1 updated: HDDS-1710. Publish JVM metrics via Hadoop metrics Signed-off-by: Anu Engineer

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 415314b  HDDS-1710. Publish JVM metrics via Hadoop metrics Signed-off-by: Anu Engineer 
415314b is described below

commit 415314b03b002772f55c971e30a62bc17d02c354
Author: Márton Elek 
AuthorDate: Thu Jun 20 10:28:42 2019 +0200

HDDS-1710. Publish JVM metrics via Hadoop metrics
Signed-off-by: Anu Engineer 

(cherry picked from commit c533b79c328a3b0a28028761d8a50942b9758636)
---
 .../main/java/org/apache/hadoop/hdds/HddsUtils.java| 18 ++
 .../org/apache/hadoop/ozone/HddsDatanodeService.java   |  2 +-
 .../hdds/scm/server/StorageContainerManager.java   |  5 -
 .../java/org/apache/hadoop/ozone/om/OzoneManager.java  |  4 +++-
 .../main/java/org/apache/hadoop/ozone/freon/Freon.java |  2 ++
 5 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index a284caa..8b239a4 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -42,9 +42,13 @@ import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
 import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.ipc.Client;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.net.DNS;
 import org.apache.hadoop.net.NetUtils;
@@ -475,4 +479,18 @@ public final class HddsUtils {
 .orElse(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_DEFAULT));
   }
 
+  /**
+   * Initialize hadoop metrics system for Ozone servers.
+   * @param configuration OzoneConfiguration to use.
+   * @param serverName    The logical name of the server components. (eg.
+   * @return
+   */
+  public static MetricsSystem initializeMetrics(OzoneConfiguration configuration,
+      String serverName) {
+    MetricsSystem metricsSystem = DefaultMetricsSystem.initialize(serverName);
+    JvmMetrics.create(serverName,
+        configuration.get(DFSConfigKeys.DFS_METRICS_SESSION_ID_KEY),
+        DefaultMetricsSystem.instance());
+    return metricsSystem;
+  }
 }
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
index 93e9490..4c2bf1a 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
@@ -170,8 +170,8 @@ public class HddsDatanodeService extends GenericCli implements ServicePlugin {
   }
 
   public void start() {
-    DefaultMetricsSystem.initialize("HddsDatanode");
     OzoneConfiguration.activate();
+    HddsUtils.initializeMetrics(conf, "HddsDatanode");
     if (HddsUtils.isHddsEnabled(conf)) {
   try {
 String hostname = HddsUtils.getHostName(conf);
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 6296df8..ffe1b81 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -81,11 +81,13 @@ import org.apache.hadoop.hdds.security.x509.certificate.authority.DefaultCAServer
 import org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.hdds.server.events.EventQueue;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.metrics2.MetricsSystem;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.OzoneSecurityUtil;
@@ -760,7 +762,8 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
 

[hadoop] branch trunk updated: HDDS-1710. Publish JVM metrics via Hadoop metrics Signed-off-by: Anu Engineer

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c533b79  HDDS-1710. Publish JVM metrics via Hadoop metrics Signed-off-by: Anu Engineer 
c533b79 is described below

commit c533b79c328a3b0a28028761d8a50942b9758636
Author: Márton Elek 
AuthorDate: Thu Jun 20 10:28:42 2019 +0200

HDDS-1710. Publish JVM metrics via Hadoop metrics
Signed-off-by: Anu Engineer 
---
 .../main/java/org/apache/hadoop/hdds/HddsUtils.java| 18 ++
 .../org/apache/hadoop/ozone/HddsDatanodeService.java   |  2 +-
 .../hdds/scm/server/StorageContainerManager.java   |  5 -
 .../java/org/apache/hadoop/ozone/om/OzoneManager.java  |  4 +++-
 .../main/java/org/apache/hadoop/ozone/freon/Freon.java |  2 ++
 5 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
index a284caa..8b239a4 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
@@ -42,9 +42,13 @@ import org.apache.hadoop.hdds.scm.ScmConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.hdds.protocol.SCMSecurityProtocol;
 import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.ipc.Client;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.net.DNS;
 import org.apache.hadoop.net.NetUtils;
@@ -475,4 +479,18 @@ public final class HddsUtils {
 .orElse(ScmConfigKeys.OZONE_SCM_SECURITY_SERVICE_PORT_DEFAULT));
   }
 
+  /**
+   * Initialize hadoop metrics system for Ozone servers.
+   * @param configuration OzoneConfiguration to use.
+   * @param serverName    The logical name of the server components. (eg.
+   * @return
+   */
+  public static MetricsSystem initializeMetrics(OzoneConfiguration configuration,
+      String serverName) {
+    MetricsSystem metricsSystem = DefaultMetricsSystem.initialize(serverName);
+    JvmMetrics.create(serverName,
+        configuration.get(DFSConfigKeys.DFS_METRICS_SESSION_ID_KEY),
+        DefaultMetricsSystem.instance());
+    return metricsSystem;
+  }
 }
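
The helper bundles two steps that servers previously did only half of, as the HddsDatanodeService hunk below shows. A small usage sketch, assuming Hadoop/HDDS jars on the classpath ("OzoneManager" is just an example server name; MetricsInitDemo is illustrative):

  import org.apache.hadoop.hdds.HddsUtils;
  import org.apache.hadoop.hdds.conf.OzoneConfiguration;
  import org.apache.hadoop.metrics2.MetricsSystem;

  public class MetricsInitDemo {
    public static void main(String[] args) {
      OzoneConfiguration conf = new OzoneConfiguration();
      // One call initializes the metrics system AND registers a JvmMetrics
      // source, so GC/memory/thread metrics are published too. Previously
      // servers called DefaultMetricsSystem.initialize(name) directly,
      // which skipped the JVM metrics registration.
      MetricsSystem ms = HddsUtils.initializeMetrics(conf, "OzoneManager");
    }
  }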
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
index 93e9490..4c2bf1a 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/HddsDatanodeService.java
@@ -170,8 +170,8 @@ public class HddsDatanodeService extends GenericCli implements ServicePlugin {
   }
 
   public void start() {
-    DefaultMetricsSystem.initialize("HddsDatanode");
     OzoneConfiguration.activate();
+    HddsUtils.initializeMetrics(conf, "HddsDatanode");
     if (HddsUtils.isHddsEnabled(conf)) {
   try {
 String hostname = HddsUtils.getHostName(conf);
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 6296df8..ffe1b81 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -81,11 +81,13 @@ import org.apache.hadoop.hdds.security.x509.certificate.authority.DefaultCAServer
 import org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl;
 import org.apache.hadoop.hdds.server.events.EventPublisher;
 import org.apache.hadoop.hdds.server.events.EventQueue;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.metrics2.MetricsSystem;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.source.JvmMetrics;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.ozone.OzoneSecurityUtil;
@@ -760,7 +762,8 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
 buildRpcServerStartMessage(
 "StorageContainerLocationProtocol RPC server",
 

[hadoop] branch ozone-0.4.1 updated: HDDS-1803. shellcheck.sh does not work on Mac

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 697338d  HDDS-1803. shellcheck.sh does not work on Mac
697338d is described below

commit 697338dd824dc458774410177bf0439c95f5557c
Author: Doroszlai, Attila 
AuthorDate: Tue Jul 16 05:06:26 2019 +0200

HDDS-1803. shellcheck.sh does not work on Mac

Signed-off-by: Anu Engineer 
(cherry picked from commit d59f2711e0f47befc536ad05442d098862e88cef)
---
 hadoop-ozone/dev-support/checks/shellcheck.sh | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/shellcheck.sh b/hadoop-ozone/dev-support/checks/shellcheck.sh
index 1284acd..637a4f8 100755
--- a/hadoop-ozone/dev-support/checks/shellcheck.sh
+++ b/hadoop-ozone/dev-support/checks/shellcheck.sh
@@ -19,9 +19,14 @@ cd "$DIR/../../.." || exit 1
 OUTPUT_FILE="$DIR/../../../target/shell-problems.txt"
 mkdir -p "$(dirname "$OUTPUT_FILE")"
 echo "" > "$OUTPUT_FILE"
-find "./hadoop-hdds" -type f -executable | grep -v target | grep -v 
node_modules | grep -v py | xargs -n1 shellcheck  | tee "$OUTPUT_FILE"
-find "./hadoop-ozone" -type f -executable | grep -v target | grep -v 
node_modules | grep -v py | xargs -n1 shellcheck  | tee "$OUTPUT_FILE"
-
+if [[ "$(uname -s)" = "Darwin" ]]; then
+  find hadoop-hdds hadoop-ozone -type f -perm '-500'
+else
+  find hadoop-hdds hadoop-ozone -type f -executable
+fi \
+  | grep -v -e target/ -e node_modules/ -e '\.\(ico\|py\|yml\)$' \
+  | xargs -n1 shellcheck \
+  | tee "$OUTPUT_FILE"
 
 if [ "$(cat "$OUTPUT_FILE")" ]; then
exit 1





[hadoop] branch trunk updated: HDDS-1803. shellcheck.sh does not work on Mac

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d59f271  HDDS-1803. shellcheck.sh does not work on Mac
d59f271 is described below

commit d59f2711e0f47befc536ad05442d098862e88cef
Author: Doroszlai, Attila 
AuthorDate: Tue Jul 16 05:06:26 2019 +0200

HDDS-1803. shellcheck.sh does not work on Mac

Signed-off-by: Anu Engineer 
---
 hadoop-ozone/dev-support/checks/shellcheck.sh | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/hadoop-ozone/dev-support/checks/shellcheck.sh b/hadoop-ozone/dev-support/checks/shellcheck.sh
index 1284acd..637a4f8 100755
--- a/hadoop-ozone/dev-support/checks/shellcheck.sh
+++ b/hadoop-ozone/dev-support/checks/shellcheck.sh
@@ -19,9 +19,14 @@ cd "$DIR/../../.." || exit 1
 OUTPUT_FILE="$DIR/../../../target/shell-problems.txt"
 mkdir -p "$(dirname "$OUTPUT_FILE")"
 echo "" > "$OUTPUT_FILE"
-find "./hadoop-hdds" -type f -executable | grep -v target | grep -v 
node_modules | grep -v py | xargs -n1 shellcheck  | tee "$OUTPUT_FILE"
-find "./hadoop-ozone" -type f -executable | grep -v target | grep -v 
node_modules | grep -v py | xargs -n1 shellcheck  | tee "$OUTPUT_FILE"
-
+if [[ "$(uname -s)" = "Darwin" ]]; then
+  find hadoop-hdds hadoop-ozone -type f -perm '-500'
+else
+  find hadoop-hdds hadoop-ozone -type f -executable
+fi \
+  | grep -v -e target/ -e node_modules/ -e '\.\(ico\|py\|yml\)$' \
+  | xargs -n1 shellcheck \
+  | tee "$OUTPUT_FILE"
 
 if [ "$(cat "$OUTPUT_FILE")" ]; then
exit 1





[hadoop] branch ozone-0.4.1 updated: HDDS-1799. Add goofyfs to the ozone-runner docker image

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new b944b40  HDDS-1799. Add goofyfs to the ozone-runner docker image
b944b40 is described below

commit b944b40e33987a0bb301b2bed4a19ef9dfbb05a2
Author: Márton Elek 
AuthorDate: Tue Jul 16 15:05:56 2019 +0200

HDDS-1799. Add goofyfs to the ozone-runner docker image

Signed-off-by: Anu Engineer 
(cherry picked from commit d70ec4b5fa5644d4acface78b826a2601596e030)
---
 hadoop-ozone/dist/pom.xml| 2 +-
 hadoop-ozone/dist/src/main/docker/Dockerfile | 4 ----
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index 2ec6cc0..4027a68 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -28,7 +28,7 @@
   
 UTF-8
 true
-    <docker.ozone-runner.version>20190617-2</docker.ozone-runner.version>
+    <docker.ozone-runner.version>20190717-1</docker.ozone-runner.version>
   
 
   
diff --git a/hadoop-ozone/dist/src/main/docker/Dockerfile b/hadoop-ozone/dist/src/main/docker/Dockerfile
index 4937c8e..3b0e8fe 100644
--- a/hadoop-ozone/dist/src/main/docker/Dockerfile
+++ b/hadoop-ozone/dist/src/main/docker/Dockerfile
@@ -19,7 +19,3 @@ FROM apache/ozone-runner:@docker.ozone-runner.version@
 ADD --chown=hadoop . /opt/hadoop
 
 WORKDIR /opt/hadoop
-
-RUN sudo wget https://os.anzix.net/goofys -O /usr/bin/goofys
-RUN sudo chmod 755 /usr/bin/goofys
-RUN sudo yum install -y fuse





[hadoop] branch trunk updated: HDDS-1799. Add goofyfs to the ozone-runner docker image

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d70ec4b  HDDS-1799. Add goofyfs to the ozone-runner docker image
d70ec4b is described below

commit d70ec4b5fa5644d4acface78b826a2601596e030
Author: Márton Elek 
AuthorDate: Tue Jul 16 15:05:56 2019 +0200

HDDS-1799. Add goofyfs to the ozone-runner docker image

Signed-off-by: Anu Engineer 
---
 hadoop-ozone/dist/pom.xml| 2 +-
 hadoop-ozone/dist/src/main/docker/Dockerfile | 4 ----
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/hadoop-ozone/dist/pom.xml b/hadoop-ozone/dist/pom.xml
index 380b2d4..a95c1c7 100644
--- a/hadoop-ozone/dist/pom.xml
+++ b/hadoop-ozone/dist/pom.xml
@@ -28,7 +28,7 @@
   
 UTF-8
 true
-    <docker.ozone-runner.version>20190617-2</docker.ozone-runner.version>
+    <docker.ozone-runner.version>20190717-1</docker.ozone-runner.version>
   
 
   
diff --git a/hadoop-ozone/dist/src/main/docker/Dockerfile b/hadoop-ozone/dist/src/main/docker/Dockerfile
index 4937c8e..3b0e8fe 100644
--- a/hadoop-ozone/dist/src/main/docker/Dockerfile
+++ b/hadoop-ozone/dist/src/main/docker/Dockerfile
@@ -19,7 +19,3 @@ FROM apache/ozone-runner:@docker.ozone-runner.version@
 ADD --chown=hadoop . /opt/hadoop
 
 WORKDIR /opt/hadoop
-
-RUN sudo wget https://os.anzix.net/goofys -O /usr/bin/goofys
-RUN sudo chmod 755 /usr/bin/goofys
-RUN sudo yum install -y fuse





[hadoop] 01/02: HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 9108efa8afbee80420c76c625237af22dd4fab84
Author: Bharat Viswanadham 
AuthorDate: Mon Jul 22 15:11:10 2019 -0700

HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)



(cherry picked from commit 2ea71d953b46221f90b38d75a2999056f044471f)
---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java | 43 --
 1 file changed, 8 insertions(+), 35 deletions(-)
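
The diff below collapses the ACL read path to the committed-key table only: a key missing from keyTable now fails fast with KEY_NOT_FOUND instead of falling back to openKeyTable. A minimal sketch of the resulting lookup, as it would sit inside KeyManagerImpl (the helper name resolveKey is illustrative; KEY_NOT_FOUND is the statically imported OMException result code used in the diff):

  // Illustrative helper: fetch committed key info or fail fast.
  private OmKeyInfo resolveKey(String volume, String bucket, String keyName)
      throws IOException {
    String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
    OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
    if (keyInfo == null) {
      // No openKeyTable fallback: uncommitted keys no longer serve ACLs.
      throw new OMException("Key not found. Key:" + objectKey, KEY_NOT_FOUND);
    }
    return keyInfo;
  }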

diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
index 9b4eac3..ce55210 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
@@ -1398,17 +1398,10 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
+
       List<OzoneAclInfo> newAcls = new ArrayList<>(keyInfo.getAcls());
   OzoneAclInfo newAcl = null;
   for(OzoneAclInfo a: keyInfo.getAcls()) {
@@ -1444,7 +1437,7 @@ public class KeyManagerImpl implements KeyManager {
   .setDataSize(keyInfo.getDataSize())
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Add acl operation failed for key:{}/{}/{}", volume,
@@ -1477,16 +1470,8 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
 
       List<OzoneAclInfo> newAcls = new ArrayList<>(keyInfo.getAcls());
@@ -1531,7 +1516,7 @@ public class KeyManagerImpl implements KeyManager {
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
 
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Remove acl operation failed for key:{}/{}/{}", volume,
@@ -1564,16 +1549,8 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
 
       List<OzoneAclInfo> newAcls = new ArrayList<>();
@@ -1594,7 +1571,7 @@ public class KeyManagerImpl implements KeyManager {
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
 
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Set acl operation failed for key:{}/{}/{}", volume,
@@ -1626,11 +1603,7 @@ public class KeyManagerImpl implements KeyManager {
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = 

[hadoop] branch ozone-0.4.1 updated (2224072 -> 8f9c44d)

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a change to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 2224072  HDDS-1811. Prometheus metrics are broken.
 new 9108efa  HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)
 new 8f9c44d  HDDS-1585. Add LICENSE.txt and NOTICE.txt to Ozone Recon Web

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../dist/dev-support/bin/dist-layout-stitching | 3 +-
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java |43 +-
 .../webapps/recon/ozone-recon-web/LICENSE  | 17279 +++
 .../resources/webapps/recon/ozone-recon-web/NOTICE | 5 +
 4 files changed, 17294 insertions(+), 36 deletions(-)
 create mode 100644 hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/LICENSE
 create mode 100644 hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/NOTICE





[hadoop] branch trunk updated: HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)

2019-07-22 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2ea71d9  HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)
2ea71d9 is described below

commit 2ea71d953b46221f90b38d75a2999056f044471f
Author: Bharat Viswanadham 
AuthorDate: Mon Jul 22 15:11:10 2019 -0700

HDDS-1686. Remove check to get from openKeyTable in acl implementatio… (#966)
---
 .../org/apache/hadoop/ozone/om/KeyManagerImpl.java | 43 --
 1 file changed, 8 insertions(+), 35 deletions(-)

diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
index 24af013..c7182c2 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
@@ -1396,17 +1396,10 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
+
       List<OzoneAclInfo> newAcls = new ArrayList<>(keyInfo.getAcls());
   OzoneAclInfo newAcl = null;
   for(OzoneAclInfo a: keyInfo.getAcls()) {
@@ -1442,7 +1435,7 @@ public class KeyManagerImpl implements KeyManager {
   .setDataSize(keyInfo.getDataSize())
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Add acl operation failed for key:{}/{}/{}", volume,
@@ -1475,16 +1468,8 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
 
       List<OzoneAclInfo> newAcls = new ArrayList<>(keyInfo.getAcls());
@@ -1529,7 +1514,7 @@ public class KeyManagerImpl implements KeyManager {
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
 
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Remove acl operation failed for key:{}/{}/{}", volume,
@@ -1562,16 +1547,8 @@ public class KeyManagerImpl implements KeyManager {
   validateBucket(volume, bucket);
   String objectKey = metadataManager.getOzoneKey(volume, bucket, keyName);
   OmKeyInfo keyInfo = metadataManager.getKeyTable().get(objectKey);
-      Table<String, OmKeyInfo> keyTable;
   if (keyInfo == null) {
-keyInfo = metadataManager.getOpenKeyTable().get(objectKey);
-if (keyInfo == null) {
-  throw new OMException("Key not found. Key:" +
-  objectKey, KEY_NOT_FOUND);
-}
-keyTable = metadataManager.getOpenKeyTable();
-  } else {
-keyTable = metadataManager.getKeyTable();
+        throw new OMException("Key not found. Key:" + objectKey,
+            KEY_NOT_FOUND);
   }
 
       List<OzoneAclInfo> newAcls = new ArrayList<>();
@@ -1592,7 +1569,7 @@ public class KeyManagerImpl implements KeyManager {
   .setFileEncryptionInfo(keyInfo.getFileEncryptionInfo())
   .build();
 
-  keyTable.put(objectKey, newObj);
+  metadataManager.getKeyTable().put(objectKey, newObj);
 } catch (IOException ex) {
   if (!(ex instanceof OMException)) {
 LOG.error("Set acl operation failed for key:{}/{}/{}", volume,
@@ -1624,11 +1601,7 @@ public class KeyManagerImpl implements KeyManager {
   String objectKey = 

[hadoop] branch ozone-0.4.1 updated: HDDS-1811. Prometheus metrics are broken.

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch ozone-0.4.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/ozone-0.4.1 by this push:
 new 2224072  HDDS-1811. Prometheus metrics are broken.
2224072 is described below

commit 22240727e5cd8ee169e8a2796015540124cd1593
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 17 23:29:56 2019 +0200

HDDS-1811. Prometheus metrics are broken.

Signed-off-by: Anu Engineer 
(cherry picked from commit c958eddcf4b1f1cd9f6e5c2368a77a5962532435)
---
 .../common/transport/server/ratis/CSMMetrics.java  |  2 +-
 .../hadoop/hdds/server/PrometheusMetricsSink.java  | 34 +-
 .../hdds/server/TestPrometheusMetricsSink.java | 30 +++
 3 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
index def0d7f..ebbec4d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
@@ -62,7 +62,7 @@ public class CSMMetrics {
   public CSMMetrics() {
 int numCmdTypes = ContainerProtos.Type.values().length;
 this.opsLatency = new MutableRate[numCmdTypes];
-this.registry = new MetricsRegistry(CSMMetrics.class.getName());
+this.registry = new MetricsRegistry(CSMMetrics.class.getSimpleName());
 for (int i = 0; i < numCmdTypes; i++) {
   opsLatency[i] = registry.newRate(
   ContainerProtos.Type.forNumber(i + 1).toString(),
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
index 52532f1..df25cfc 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
@@ -23,7 +23,6 @@ import java.io.IOException;
 import java.io.Writer;
 import java.util.HashMap;
 import java.util.Map;
-import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 import org.apache.commons.lang3.StringUtils;
@@ -47,8 +46,8 @@ public class PrometheusMetricsSink implements MetricsSink {
*/
   private Map<String, String> metricLines = new HashMap<>();
 
-  private static final Pattern UPPER_CASE_SEQ =
-      Pattern.compile("([A-Z]*)([A-Z])");
+  private static final Pattern SPLIT_PATTERN =
+      Pattern.compile("(?<!(^|[A-Z]))(?=[A-Z])|(?<!^)(?=[A-Z][a-z])");
-    Matcher m = UPPER_CASE_SEQ.matcher(baseName);
-    StringBuffer sb = new StringBuffer();
-    while (m.find()) {
-      String replacement = "_" + m.group(2).toLowerCase();
-      if (m.group(1).length() > 0) {
-        replacement = "_" + m.group(1).toLowerCase() + replacement;
-      }
-      m.appendReplacement(sb, replacement);
-    }
-    m.appendTail(sb);
-
-    //always prefixed with "_"
-    return sb.toString().substring(1);
-  }
-
-  private String upperFirst(String name) {
-    if (Character.isLowerCase(name.charAt(0))) {
-      return Character.toUpperCase(name.charAt(0)) + name.substring(1);
-    } else {
-      return name;
-    }
-  }
 
+    String baseName = StringUtils.capitalize(recordName)
+        + StringUtils.capitalize(metricName);
+    baseName = baseName.replace('-', '_');
+    String[] parts = SPLIT_PATTERN.split(baseName);
+    return String.join("_", parts).toLowerCase();
   }
 
   @Override
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
index 0a8eb67..a1a9a55 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
@@ -30,7 +30,7 @@ import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.junit.Assert;
 import org.junit.Test;
 
-import static org.apache.commons.codec.CharEncoding.UTF_8;
+import static java.nio.charset.StandardCharsets.UTF_8;
 
 /**
  * Test prometheus Sink.
@@ -59,10 +59,11 @@ public class TestPrometheusMetricsSink {
 writer.flush();
 
 //THEN
-System.out.println(stream.toString(UTF_8));
+String writtenMetrics = stream.toString(UTF_8.name());
+System.out.println(writtenMetrics);
 Assert.assertTrue(
 "The expected metric line is missing from prometheus metrics output",
-stream.toString(UTF_8).contains(
+writtenMetrics.contains(
 "test_metrics_num_bucket_create_fails{context=\"dfs\"")
 );
 
@@ -71,7 +72,7 @@ public class TestPrometheusMetricsSink {
   }
 
   @Test
-  public void testNaming() throws IOException {
+  public void testNamingCamelCase() {

[hadoop] branch trunk updated: HDDS-1811. Prometheus metrics are broken.

2019-07-22 Thread aengineer
This is an automated email from the ASF dual-hosted git repository.

aengineer pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new c958edd  HDDS-1811. Prometheus metrics are broken.
c958edd is described below

commit c958eddcf4b1f1cd9f6e5c2368a77a5962532435
Author: Doroszlai, Attila 
AuthorDate: Wed Jul 17 23:29:56 2019 +0200

HDDS-1811. Prometheus metrics are broken.

Signed-off-by: Anu Engineer 
---
 .../common/transport/server/ratis/CSMMetrics.java  |  2 +-
 .../hadoop/hdds/server/PrometheusMetricsSink.java  | 34 +-
 .../hdds/server/TestPrometheusMetricsSink.java | 30 +++
 3 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
index def0d7f..ebbec4d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/CSMMetrics.java
@@ -62,7 +62,7 @@ public class CSMMetrics {
   public CSMMetrics() {
 int numCmdTypes = ContainerProtos.Type.values().length;
 this.opsLatency = new MutableRate[numCmdTypes];
-this.registry = new MetricsRegistry(CSMMetrics.class.getName());
+this.registry = new MetricsRegistry(CSMMetrics.class.getSimpleName());
 for (int i = 0; i < numCmdTypes; i++) {
   opsLatency[i] = registry.newRate(
   ContainerProtos.Type.forNumber(i + 1).toString(),
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
index 52532f1..df25cfc 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
@@ -23,7 +23,6 @@ import java.io.IOException;
 import java.io.Writer;
 import java.util.HashMap;
 import java.util.Map;
-import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
 import org.apache.commons.lang3.StringUtils;
@@ -47,8 +46,8 @@ public class PrometheusMetricsSink implements MetricsSink {
*/
   private Map<String, String> metricLines = new HashMap<>();
 
-  private static final Pattern UPPER_CASE_SEQ =
-      Pattern.compile("([A-Z]*)([A-Z])");
+  private static final Pattern SPLIT_PATTERN =
+      Pattern.compile("(?<!(^|[A-Z]))(?=[A-Z])|(?<!^)(?=[A-Z][a-z])");
-    Matcher m = UPPER_CASE_SEQ.matcher(baseName);
-    StringBuffer sb = new StringBuffer();
-    while (m.find()) {
-      String replacement = "_" + m.group(2).toLowerCase();
-      if (m.group(1).length() > 0) {
-        replacement = "_" + m.group(1).toLowerCase() + replacement;
-      }
-      m.appendReplacement(sb, replacement);
-    }
-    m.appendTail(sb);
-
-    //always prefixed with "_"
-    return sb.toString().substring(1);
-  }
-
-  private String upperFirst(String name) {
-    if (Character.isLowerCase(name.charAt(0))) {
-      return Character.toUpperCase(name.charAt(0)) + name.substring(1);
-    } else {
-      return name;
-    }
-  }
 
+    String baseName = StringUtils.capitalize(recordName)
+        + StringUtils.capitalize(metricName);
+    baseName = baseName.replace('-', '_');
+    String[] parts = SPLIT_PATTERN.split(baseName);
+    return String.join("_", parts).toLowerCase();
   }
 
   @Override
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
index 0a8eb67..a1a9a55 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
@@ -30,7 +30,7 @@ import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.junit.Assert;
 import org.junit.Test;
 
-import static org.apache.commons.codec.CharEncoding.UTF_8;
+import static java.nio.charset.StandardCharsets.UTF_8;
 
 /**
  * Test prometheus Sink.
@@ -59,10 +59,11 @@ public class TestPrometheusMetricsSink {
 writer.flush();
 
 //THEN
-System.out.println(stream.toString(UTF_8));
+String writtenMetrics = stream.toString(UTF_8.name());
+System.out.println(writtenMetrics);
 Assert.assertTrue(
 "The expected metric line is missing from prometheus metrics output",
-stream.toString(UTF_8).contains(
+writtenMetrics.contains(
 "test_metrics_num_bucket_create_fails{context=\"dfs\"")
 );
 
@@ -71,7 +72,7 @@ public class TestPrometheusMetricsSink {
   }
 
   @Test
-  public void testNaming() throws IOException {
+  public void testNamingCamelCase() {
 PrometheusMetricsSink sink = new PrometheusMetricsSink();
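
The rewritten prometheusName converts Hadoop's CamelCase record and metric names to Prometheus snake_case by splitting at case boundaries rather than running the old append/replace loop. A self-contained sketch of the same technique, reusing the SPLIT_PATTERN reconstruction shown above (the demo class and pre-capitalized inputs are illustrative):

  import java.util.regex.Pattern;

  public class PrometheusNameDemo {
    // Split before an upper-case letter that starts a new word, including
    // the boundary at the end of an acronym (e.g. "RpcTLSName").
    private static final Pattern SPLIT_PATTERN =
        Pattern.compile("(?<!(^|[A-Z]))(?=[A-Z])|(?<!^)(?=[A-Z][a-z])");

    static String prometheusName(String recordName, String metricName) {
      String baseName = recordName + metricName; // assume already capitalized
      baseName = baseName.replace('-', '_');
      return String.join("_", SPLIT_PATTERN.split(baseName)).toLowerCase();
    }

    public static void main(String[] args) {
      // Matches the assertion in TestPrometheusMetricsSink.
      System.out.println(prometheusName("TestMetrics", "NumBucketCreateFails"));
      // -> test_metrics_num_bucket_create_fails
    }
  }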
 
 

[hadoop] branch trunk updated: HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state (#948)

2019-07-22 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new cdc36fe  HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state (#948)
cdc36fe is described below

commit cdc36fe286708b5ff12675599da8c7650744f064
Author: Hanisha Koneru 
AuthorDate: Mon Jul 22 12:06:55 2019 -0700

HDDS-1649. On installSnapshot notification from OM leader, download checkpoint and reload OM state (#948)
---
 .../java/org/apache/hadoop/ozone/OzoneConsts.java  |   1 +
 .../common/src/main/resources/ozone-default.xml|   8 +
 .../org/apache/hadoop/ozone/om/OMConfigKeys.java   |   3 +
 .../hadoop/ozone/om/exceptions/OMException.java|   3 +-
 .../ozone/om/protocol/OzoneManagerHAProtocol.java  |   3 +-
 .../src/main/proto/OzoneManagerProtocol.proto  |   2 +
 .../org/apache/hadoop/ozone/MiniOzoneCluster.java  |   6 +
 .../hadoop/ozone/MiniOzoneHAClusterImpl.java   |  49 ++-
 .../hadoop/ozone/om/TestOMRatisSnapshots.java  | 189 +++
 .../apache/hadoop/ozone/om/TestOzoneManagerHA.java |   7 +-
 .../hadoop/ozone/om/OMDBCheckpointServlet.java |   2 +-
 .../java/org/apache/hadoop/ozone/om/OMMetrics.java |   9 +-
 .../org/apache/hadoop/ozone/om/OzoneManager.java   | 359 -
 .../ozone/om/ratis/OzoneManagerRatisServer.java|  15 +-
 .../ozone/om/ratis/OzoneManagerStateMachine.java   |  81 -
 .../om/snapshot/OzoneManagerSnapshotProvider.java  |   2 +-
 16 files changed, 637 insertions(+), 102 deletions(-)

diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index d28e477..67bd22d 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -119,6 +119,7 @@ public final class OzoneConsts {
   public static final String DN_CONTAINER_DB = "-dn-"+ CONTAINER_DB_SUFFIX;
   public static final String DELETED_BLOCK_DB = "deletedBlock.db";
   public static final String OM_DB_NAME = "om.db";
+  public static final String OM_DB_BACKUP_PREFIX = "om.db.backup.";
   public static final String OM_DB_CHECKPOINTS_DIR_NAME = "om.db.checkpoints";
   public static final String OZONE_MANAGER_TOKEN_DB_NAME = "om-token.db";
   public static final String SCM_DB_NAME = "scm.db";
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 30cf386..b2f820b 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1630,6 +1630,14 @@
 Byte limit for Raft's Log Worker queue.
 
   
+  <property>
+    <name>ozone.om.ratis.log.purge.gap</name>
+    <value>100</value>
+    <tag>OZONE, OM, RATIS</tag>
+    <description>The minimum gap between log indices for Raft server to purge
+      its log segments after taking snapshot.
+    </description>
+  </property>
 
   <property>
     <name>ozone.om.ratis.snapshot.auto.trigger.threshold</name>
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
index 14b6783..35431fa 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
@@ -123,6 +123,9 @@ public final class OMConfigKeys {
   "ozone.om.ratis.log.appender.queue.byte-limit";
   public static final String
   OZONE_OM_RATIS_LOG_APPENDER_QUEUE_BYTE_LIMIT_DEFAULT = "32MB";
+  public static final String OZONE_OM_RATIS_LOG_PURGE_GAP =
+  "ozone.om.ratis.log.purge.gap";
+  public static final int OZONE_OM_RATIS_LOG_PURGE_GAP_DEFAULT = 100;
 
   // OM Snapshot configurations
   public static final String OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY
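
For reference, a hedged sketch of how the new key would typically be read when configuring the OM's Ratis server (the consuming code is not shown in this excerpt; PurgeGapDemo is illustrative):

  import org.apache.hadoop.hdds.conf.OzoneConfiguration;
  import org.apache.hadoop.ozone.om.OMConfigKeys;

  public class PurgeGapDemo {
    public static void main(String[] args) {
      OzoneConfiguration conf = new OzoneConfiguration();
      // Read the new key with its default, using the constants added above.
      int logPurgeGap = conf.getInt(
          OMConfigKeys.OZONE_OM_RATIS_LOG_PURGE_GAP,
          OMConfigKeys.OZONE_OM_RATIS_LOG_PURGE_GAP_DEFAULT);
      // Raft log segments behind the snapshot index become eligible for
      // purging once the index gap reaches this value.
      System.out.println(logPurgeGap); // 100 unless overridden
    }
  }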
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
index 66ce1cc..78bdb21 100644
--- a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
+++ b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
@@ -203,7 +203,8 @@ public class OMException extends IOException {
 
 PREFIX_NOT_FOUND,
 
-S3_BUCKET_INVALID_LENGTH
+S3_BUCKET_INVALID_LENGTH,
 
+RATIS_ERROR // Error in Ratis server
   }
 }
diff --git a/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java b/hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
index 1434dca..675c814 100644
--- 

[hadoop] branch branch-3.0 updated: YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)

2019-07-22 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 3b346f5  YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
3b346f5 is described below

commit 3b346f50de9f6a3a109dbea599d7944e1edcbfb2
Author: Jonathan Hung 
AuthorDate: Mon Jul 22 10:46:45 2019 -0700

YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
---
 .../yarn/server/nodemanager/NodeManager.java   |  1 +
 .../yarn/server/nodemanager/TestNodeManager.java   | 28 ++++++++++++++++++++++++++++
 .../server/resourcemanager/ResourceManager.java|  1 +
 .../yarn/server/resourcemanager/TestRMRestart.java |  5 ++--
 .../resourcemanager/TestResourceManager.java   | 30 +-
 5 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index 44133df..780821b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -334,6 +334,7 @@ public class NodeManager extends CompositeService
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
+UserGroupInformation.setConfiguration(conf);
 rmWorkPreservingRestartEnabled = conf.getBoolean(YarnConfiguration
 .RM_WORK_PRESERVING_RECOVERY_ENABLED,
 YarnConfiguration.DEFAULT_RM_WORK_PRESERVING_RECOVERY_ENABLED);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
index 9279711..b09071d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent;
@@ -30,7 +31,9 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Cont
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState;
 import org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider;
 import org.junit.Assert;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
 public class TestNodeManager {
 
@@ -42,6 +45,9 @@ public class TestNodeManager {
 }
   }
 
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
   @Test
   public void testContainerExecutorInitCall() {
 NodeManager nm = new NodeManager();
@@ -170,4 +176,26 @@ public class TestNodeManager {
   e.printStackTrace();
 }
   }
+
+  /**
+   * Test whether NodeManager passes user-provided conf to
+   * UserGroupInformation class. If it reads this (incorrect)
+   * AuthenticationMethod enum an exception is thrown.
+   */
+  @Test
+  public void testUserProvidedUGIConf() throws Exception {
+thrown.expect(IllegalArgumentException.class);
+thrown.expectMessage("Invalid attribute value for "
++ CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION
++ " of DUMMYAUTH");
+Configuration dummyConf = new YarnConfiguration();
+dummyConf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION,
+"DUMMYAUTH");
+NodeManager dummyNodeManager = new NodeManager();
+try {
+  dummyNodeManager.init(dummyConf);
+} finally {
+  dummyNodeManager.stop();
+}
+  }
 }
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java

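The fix itself is the single UserGroupInformation.setConfiguration(conf) line added at the top of serviceInit (per the diffstat, ResourceManager gets the same one-line change in the hunk truncated above). It is needed because UserGroupInformation keeps its Configuration in static state and, until something pushes a conf in explicitly, lazily initializes from a fresh Configuration(), so user-overridden values on the daemon conf were silently ignored. A minimal sketch of the behavior, assuming only hadoop-common's public API (the demo class name is invented for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiConfDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // A user override, e.g. from a *-site.xml or a -D daemon argument:
    conf.set("hadoop.security.authentication", "kerberos");

    // Without this call, UGI initializes from a fresh Configuration() on
    // first use, the override stays invisible, and isSecurityEnabled()
    // reports the default (false).
    UserGroupInformation.setConfiguration(conf);
    System.out.println(UserGroupInformation.isSecurityEnabled()); // true
  }
}

This is also why the new test only needs to set the authentication key to a bogus value: if NodeManager.init(..) really forwards the user conf to UGI, parsing the invalid AuthenticationMethod fails fast with IllegalArgumentException.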
[hadoop] branch branch-3.1 updated: YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)

2019-07-22 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 3ff2148  YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
3ff2148 is described below

commit 3ff2148482794f41a2d4d09f92bf4135a8c4cb7e
Author: Jonathan Hung 
AuthorDate: Mon Jul 22 10:46:45 2019 -0700

YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
---
 .../yarn/server/nodemanager/NodeManager.java   |  1 +
 .../yarn/server/nodemanager/TestNodeManager.java   | 28 
 .../server/resourcemanager/ResourceManager.java|  1 +
 .../yarn/server/resourcemanager/TestRMRestart.java |  5 ++--
 .../resourcemanager/TestResourceManager.java   | 30 +-
 5 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index cd4171a..da4fda2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -350,6 +350,7 @@ public class NodeManager extends CompositeService
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
+    UserGroupInformation.setConfiguration(conf);
     rmWorkPreservingRestartEnabled = conf.getBoolean(YarnConfiguration
         .RM_WORK_PRESERVING_RECOVERY_ENABLED,
         YarnConfiguration.DEFAULT_RM_WORK_PRESERVING_RECOVERY_ENABLED);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
index b31215b..ece0086 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent;
@@ -30,7 +31,9 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Cont
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState;
 import org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider;
 import org.junit.Assert;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
 public class TestNodeManager {
 
@@ -42,6 +45,9 @@ public class TestNodeManager {
     }
   }
 
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
   @Test
   public void testContainerExecutorInitCall() {
     NodeManager nm = new NodeManager();
@@ -170,4 +176,26 @@ public class TestNodeManager {
       e.printStackTrace();
     }
   }
+
+  /**
+   * Test whether NodeManager passes user-provided conf to
+   * UserGroupInformation class. If it reads this (incorrect)
+   * AuthenticationMethod enum an exception is thrown.
+   */
+  @Test
+  public void testUserProvidedUGIConf() throws Exception {
+    thrown.expect(IllegalArgumentException.class);
+    thrown.expectMessage("Invalid attribute value for "
+        + CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION
+        + " of DUMMYAUTH");
+    Configuration dummyConf = new YarnConfiguration();
+    dummyConf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION,
+        "DUMMYAUTH");
+    NodeManager dummyNodeManager = new NodeManager();
+    try {
+      dummyNodeManager.init(dummyConf);
+    } finally {
+      dummyNodeManager.stop();
+    }
+  }
 }
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java

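The branch backports (branch-3.1 above, branch-3.2 below) carry the identical test, which uses JUnit 4's ExpectedException rule rather than @Test(expected = ...) so it can assert on the exception message as well as the type. A generic sketch of that pattern (class name and message invented for illustration):

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class ExpectedExceptionPatternTest {
  @Rule
  public ExpectedException thrown = ExpectedException.none();

  @Test
  public void failsWithDiagnosticMessage() {
    // Expectations must be declared before the code under test runs;
    // expectMessage(..) matches a substring of the thrown message.
    thrown.expect(IllegalArgumentException.class);
    thrown.expectMessage("bad value");
    throw new IllegalArgumentException("bad value for option x");
  }
}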
[hadoop] branch branch-3.2 updated: YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)

2019-07-22 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 1534400  YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
1534400 is described below

commit 15344006bcda264b9258cf93d3e809fdc7dfd129
Author: Jonathan Hung 
AuthorDate: Mon Jul 22 10:46:45 2019 -0700

YARN-9668. UGI conf doesn't read user overridden configurations on RM and NM startup. (Contributed by Jonathan Hung)
---
 .../yarn/server/nodemanager/NodeManager.java   |  1 +
 .../yarn/server/nodemanager/TestNodeManager.java   | 28 
 .../server/resourcemanager/ResourceManager.java|  1 +
 .../yarn/server/resourcemanager/TestRMRestart.java |  5 ++--
 .../resourcemanager/TestResourceManager.java   | 30 +-
 5 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index 1dff937..6edde64 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -385,6 +385,7 @@ public class NodeManager extends CompositeService
 
   @Override
   protected void serviceInit(Configuration conf) throws Exception {
+    UserGroupInformation.setConfiguration(conf);
     rmWorkPreservingRestartEnabled = conf.getBoolean(YarnConfiguration
         .RM_WORK_PRESERVING_RECOVERY_ENABLED,
         YarnConfiguration.DEFAULT_RM_WORK_PRESERVING_RECOVERY_ENABLED);
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
index b2c2f6ee..cf87490 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManager.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEvent;
@@ -30,7 +31,9 @@ import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Cont
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState;
 import org.apache.hadoop.yarn.server.nodemanager.nodelabels.NodeLabelsProvider;
 import org.junit.Assert;
+import org.junit.Rule;
 import org.junit.Test;
+import org.junit.rules.ExpectedException;
 
 public class TestNodeManager {
 
@@ -42,6 +45,9 @@ public class TestNodeManager {
     }
   }
 
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
   @Test
   public void testContainerExecutorInitCall() {
     NodeManager nm = new NodeManager();
@@ -170,4 +176,26 @@ public class TestNodeManager {
       e.printStackTrace();
     }
   }
+
+  /**
+   * Test whether NodeManager passes user-provided conf to
+   * UserGroupInformation class. If it reads this (incorrect)
+   * AuthenticationMethod enum an exception is thrown.
+   */
+  @Test
+  public void testUserProvidedUGIConf() throws Exception {
+    thrown.expect(IllegalArgumentException.class);
+    thrown.expectMessage("Invalid attribute value for "
+        + CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION
+        + " of DUMMYAUTH");
+    Configuration dummyConf = new YarnConfiguration();
+    dummyConf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION,
+        "DUMMYAUTH");
+    NodeManager dummyNodeManager = new NodeManager();
+    try {
+      dummyNodeManager.init(dummyConf);
+    } finally {
+      dummyNodeManager.stop();
+    }
+  }
 }
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java

[hadoop] branch trunk updated: HDDS-1840. Fix TestSecureOzoneContainer. (#1135)

2019-07-22 Thread arp
This is an automated email from the ASF dual-hosted git repository.

arp pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 340bbaf  HDDS-1840. Fix TestSecureOzoneContainer. (#1135)
340bbaf is described below

commit 340bbaf8bfba1368e45dbcefd64937ef8afe7a9c
Author: Bharat Viswanadham 
AuthorDate: Mon Jul 22 10:23:48 2019 -0700

HDDS-1840. Fix TestSecureOzoneContainer. (#1135)
---
 .../hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java  | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
index c086f31..fca449b 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestSecureOzoneContainer.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.hdds.protocol.proto.HddsProtos.BlockTokenSecretProto.Ac
 import org.apache.hadoop.hdds.security.exception.SCMSecurityException;
 import org.apache.hadoop.hdds.security.token.OzoneBlockTokenIdentifier;
 import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.ozone.OzoneConfigKeys;
 import org.apache.hadoop.hdds.conf.OzoneConfiguration;
 import org.apache.hadoop.ozone.client.CertificateClientTestImpl;
@@ -110,6 +111,7 @@ public class TestSecureOzoneContainer {
 
   @Before
   public void setup() throws Exception {
+    DefaultMetricsSystem.setMiniClusterMode(true);
     conf = new OzoneConfiguration();
     String ozoneMetaPath =
         GenericTestUtils.getTempPath("ozoneMeta");


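For context on why a single setup line fixes this test: metrics2 keeps a JVM-wide registry of sources keyed by name, so when several container or datanode stacks start inside one test JVM, the second registration of a name fails with a MetricsException ("Metrics source X already exists!"). Mini-cluster mode makes DefaultMetricsSystem uniquify duplicate source names instead. A minimal sketch, assuming hadoop-common's metrics2 API (the prefix string is invented for illustration):

import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class MiniClusterMetricsDemo {
  public static void main(String[] args) {
    // Must run before any component registers a metrics source; duplicate
    // source names then get a numeric suffix instead of failing registration.
    DefaultMetricsSystem.setMiniClusterMode(true);
    DefaultMetricsSystem.initialize("MiniClusterDemo");
  }
}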
-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org