[hadoop] branch branch-3.1 updated: YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 73956d5  YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.
73956d5 is described below

commit 73956d5de98aadd9f9aa981e188bdad42f89afdf
Author: Sunil G 
AuthorDate: Fri Feb 8 11:22:44 2019 +0530

YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.

(cherry picked from commit fbc08145cfb6a81395448c4b3463bf6d28a6272b)
---
 .../yarn/applications/distributedshell/Client.java   |  6 ++++++
 .../distributedshell/TestDistributedShell.java       | 16 ++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index 1ba1860..27bbac5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -648,6 +648,12 @@ public class Client {
 }
 
 QueueInfo queueInfo = yarnClient.getQueueInfo(this.amQueue);
+if (queueInfo == null) {
+  throw new IllegalArgumentException(String
+  .format("Queue %s not present in scheduler configuration.",
+  this.amQueue));
+}
+
 LOG.info("Queue info"
 + ", queueName=" + queueInfo.getQueueName()
 + ", queueCurrentCapacity=" + queueInfo.getCurrentCapacity()
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
index 49d8f3d..b41fea6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
@@ -1630,4 +1630,20 @@ public class TestDistributedShell {
 client.init(args);
 client.run();
   }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testDistributedShellNonExistentQueue() throws Exception {
+String[] args =  {
+"--jar",
+APPMASTER_JAR,
+"--num_containers",
+"1",
+"--shell_command",
+Shell.WINDOWS ? "dir" : "ls",
+"--queue",
+"non-existent-queue" };
+Client client = new Client(new Configuration(yarnCluster.getConfig()));
+client.init(args);
+client.run();
+  }
 }
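
For reference, a minimal standalone sketch (not part of the patch) of the pattern the fix enforces: YarnClient.getQueueInfo returns null for a queue that is not defined in the scheduler configuration, so callers must check the result before dereferencing it. The queue name and configuration wiring below are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.QueueInfo;
import org.apache.hadoop.yarn.client.api.YarnClient;

public class QueueCheckExample {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new Configuration());
    yarnClient.start();
    try {
      // Returns null rather than throwing when the queue is unknown.
      QueueInfo queueInfo = yarnClient.getQueueInfo("non-existent-queue");
      if (queueInfo == null) {
        throw new IllegalArgumentException(
            "Queue non-existent-queue not present in scheduler configuration.");
      }
      System.out.println("Queue capacity: " + queueInfo.getCapacity());
    } finally {
      yarnClient.stop();
    }
  }
}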


[hadoop] branch branch-3.2 updated: YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new fbc0814  YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.
fbc0814 is described below

commit fbc08145cfb6a81395448c4b3463bf6d28a6272b
Author: Sunil G 
AuthorDate: Fri Feb 8 11:22:44 2019 +0530

YARN-9257. Distributed Shell client throws a NPE for a non-existent queue. Contributed by Charan Hebri.
---
 .../yarn/applications/distributedshell/Client.java   |  6 ++++++
 .../distributedshell/TestDistributedShell.java       | 16 ++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index e8b69fe..ecbe288 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -651,6 +651,12 @@ public class Client {
 }
 
 QueueInfo queueInfo = yarnClient.getQueueInfo(this.amQueue);
+if (queueInfo == null) {
+  throw new IllegalArgumentException(String
+  .format("Queue %s not present in scheduler configuration.",
+  this.amQueue));
+}
+
 LOG.info("Queue info"
 + ", queueName=" + queueInfo.getQueueName()
 + ", queueCurrentCapacity=" + queueInfo.getCurrentCapacity()
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
index 49d8f3d..b41fea6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
@@ -1630,4 +1630,20 @@ public class TestDistributedShell {
 client.init(args);
 client.run();
   }
+
+  @Test(expected = IllegalArgumentException.class)
+  public void testDistributedShellNonExistentQueue() throws Exception {
+String[] args =  {
+"--jar",
+APPMASTER_JAR,
+"--num_containers",
+"1",
+"--shell_command",
+Shell.WINDOWS ? "dir" : "ls",
+"--queue",
+"non-existent-queue" };
+Client client = new Client(new Configuration(yarnCluster.getConfig()));
+client.init(args);
+client.run();
+  }
 }


[hadoop] branch trunk updated: HDDS-1069. Temporarily disable the security acceptance tests by default in Ozone. Contributed by Marton Elek.

2019-02-07 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a140a89  HDDS-1069. Temporarily disable the security acceptance tests by default in Ozone. Contributed by Marton Elek.
a140a89 is described below

commit a140a890c6a53015e9a86183bc0b6c591a0267fb
Author: Bharat Viswanadham 
AuthorDate: Thu Feb 7 18:05:05 2019 -0800

HDDS-1069. Temporarily disable the security acceptance tests by default in Ozone. Contributed by Marton Elek.
---
 hadoop-ozone/dist/src/main/smoketest/test.sh | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hadoop-ozone/dist/src/main/smoketest/test.sh b/hadoop-ozone/dist/src/main/smoketest/test.sh
index b447481..5e7462a 100755
--- a/hadoop-ozone/dist/src/main/smoketest/test.sh
+++ b/hadoop-ozone/dist/src/main/smoketest/test.sh
@@ -140,8 +140,6 @@ if [ "$RUN_ALL" = true ]; then
 #
 # We select the test suites and execute them on multiple type of clusters
 #
-   DEFAULT_TESTS=("security")
-   execute_tests ozonesecure "${DEFAULT_TESTS[@]}"
DEFAULT_TESTS=("basic")
execute_tests ozone "${DEFAULT_TESTS[@]}"
TESTS=("ozonefs")


[hadoop] branch branch-3.1 updated: HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

2019-02-07 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 1f4be45  HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.
1f4be45 is described below

commit 1f4be45ef6ea5e9b81847e7abcea90e6ef20ff53
Author: Surendra Singh Lilhore 
AuthorDate: Thu Feb 7 16:43:55 2019 -0800

HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 4be87353e35a30d95d8847b09a1890b014bfc6bb)
(cherry picked from commit 2501fcd26bd7bef2738a8f6660dc63862c755ce3)
---
 .../hdfs/qjournal/server/JournalNodeSyncer.java    | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 7b3d970..d2b2c9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hdfs.util.DataTransferThrottler;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Daemon;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -439,15 +440,23 @@ public class JournalNodeSyncer {
 File tmpEditsFile = jnStorage.getTemporaryEditsFile(
 log.getStartTxId(), log.getEndTxId());
 
-try {
-  Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
-  logSegmentTransferTimeout, throttler);
-} catch (IOException e) {
-  LOG.error("Download of Edit Log file for Syncing failed. Deleting temp " 
+
-  "file: " + tmpEditsFile);
-  if (!tmpEditsFile.delete()) {
-LOG.warn("Deleting " + tmpEditsFile + " has failed");
+if (!SecurityUtil.doAsLoginUser(() -> {
+  if (UserGroupInformation.isSecurityEnabled()) {
+UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
+  }
+  try {
+Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
+logSegmentTransferTimeout, throttler);
+  } catch (IOException e) {
+LOG.error("Download of Edit Log file for Syncing failed. Deleting temp 
"
++ "file: " + tmpEditsFile, e);
+if (!tmpEditsFile.delete()) {
+  LOG.warn("Deleting " + tmpEditsFile + " has failed");
+}
+return false;
   }
+  return true;
+})) {
   return false;
 }
 LOG.info("Downloaded file " + tmpEditsFile.getName() + " of size " +


[hadoop] branch branch-3.2 updated: HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

2019-02-07 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 2501fcd  HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.
2501fcd is described below

commit 2501fcd26bd7bef2738a8f6660dc63862c755ce3
Author: Surendra Singh Lilhore 
AuthorDate: Thu Feb 7 16:43:55 2019 -0800

HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

Signed-off-by: Wei-Chiu Chuang 
(cherry picked from commit 4be87353e35a30d95d8847b09a1890b014bfc6bb)
---
 .../hdfs/qjournal/server/JournalNodeSyncer.java    | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 7b3d970..d2b2c9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hdfs.util.DataTransferThrottler;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Daemon;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -439,15 +440,23 @@ public class JournalNodeSyncer {
 File tmpEditsFile = jnStorage.getTemporaryEditsFile(
 log.getStartTxId(), log.getEndTxId());
 
-try {
-  Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
-  logSegmentTransferTimeout, throttler);
-} catch (IOException e) {
-  LOG.error("Download of Edit Log file for Syncing failed. Deleting temp " 
+
-  "file: " + tmpEditsFile);
-  if (!tmpEditsFile.delete()) {
-LOG.warn("Deleting " + tmpEditsFile + " has failed");
+if (!SecurityUtil.doAsLoginUser(() -> {
+  if (UserGroupInformation.isSecurityEnabled()) {
+UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
+  }
+  try {
+Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
+logSegmentTransferTimeout, throttler);
+  } catch (IOException e) {
+LOG.error("Download of Edit Log file for Syncing failed. Deleting temp 
"
++ "file: " + tmpEditsFile, e);
+if (!tmpEditsFile.delete()) {
+  LOG.warn("Deleting " + tmpEditsFile + " has failed");
+}
+return false;
   }
+  return true;
+})) {
   return false;
 }
 LOG.info("Downloaded file " + tmpEditsFile.getName() + " of size " +


[hadoop] branch trunk updated: HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

2019-02-07 Thread weichiu
This is an automated email from the ASF dual-hosted git repository.

weichiu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4be8735  HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.
4be8735 is described below

commit 4be87353e35a30d95d8847b09a1890b014bfc6bb
Author: Surendra Singh Lilhore 
AuthorDate: Thu Feb 7 16:43:55 2019 -0800

HDFS-14140. JournalNodeSyncer authentication is failing in secure cluster. Contributed by Surendra Singh Lilhore.

Signed-off-by: Wei-Chiu Chuang 
---
 .../hdfs/qjournal/server/JournalNodeSyncer.java    | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
index 7b3d970..d2b2c9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeSyncer.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hdfs.util.DataTransferThrottler;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
+import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Daemon;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -439,15 +440,23 @@ public class JournalNodeSyncer {
 File tmpEditsFile = jnStorage.getTemporaryEditsFile(
 log.getStartTxId(), log.getEndTxId());
 
-try {
-  Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
-  logSegmentTransferTimeout, throttler);
-} catch (IOException e) {
-  LOG.error("Download of Edit Log file for Syncing failed. Deleting temp " 
+
-  "file: " + tmpEditsFile);
-  if (!tmpEditsFile.delete()) {
-LOG.warn("Deleting " + tmpEditsFile + " has failed");
+if (!SecurityUtil.doAsLoginUser(() -> {
+  if (UserGroupInformation.isSecurityEnabled()) {
+UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab();
+  }
+  try {
+Util.doGetUrl(url, ImmutableList.of(tmpEditsFile), jnStorage, false,
+logSegmentTransferTimeout, throttler);
+  } catch (IOException e) {
+LOG.error("Download of Edit Log file for Syncing failed. Deleting temp 
"
++ "file: " + tmpEditsFile, e);
+if (!tmpEditsFile.delete()) {
+  LOG.warn("Deleting " + tmpEditsFile + " has failed");
+}
+return false;
   }
+  return true;
+})) {
   return false;
 }
 LOG.info("Downloaded file " + tmpEditsFile.getName() + " of size " +


[hadoop] branch branch-3.1 updated: HADOOP-15281. Distcp to add no-rename copy option. Contributed by Andrew Olson.

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 49d5463  HADOOP-15281. Distcp to add no-rename copy option. Contributed by Andrew Olson.
49d5463 is described below

commit 49d54633e0f4bd388c00d591e90666dbb7633c9f
Author: Eric E Payne 
AuthorDate: Thu Feb 7 23:15:18 2019 +

HADOOP-15281. Distcp to add no-rename copy option.
Contributed by Andrew Olson.
---
 .../fs/contract/s3a/ITestS3AContractDistCp.java| 33 +++
 .../org/apache/hadoop/tools/DistCpConstants.java   |  3 +-
 .../org/apache/hadoop/tools/DistCpContext.java |  4 ++
 .../apache/hadoop/tools/DistCpOptionSwitch.java| 14 -
 .../org/apache/hadoop/tools/DistCpOptions.java | 19 ++
 .../org/apache/hadoop/tools/OptionsParser.java |  4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  6 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 53 -
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |  6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |  5 +-
 .../tools/contract/AbstractContractDistCpTest.java | 68 +-
 11 files changed, 192 insertions(+), 23 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
index b3d511e..740f256 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 
 import static org.apache.hadoop.fs.s3a.Constants.*;
@@ -26,6 +27,7 @@ import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageStatistics;
 import org.apache.hadoop.fs.s3a.FailureInjectionPolicy;
 import org.apache.hadoop.tools.contract.AbstractContractDistCpTest;
 
@@ -74,4 +76,35 @@ public class ITestS3AContractDistCp extends AbstractContractDistCpTest {
 Path path = super.path(filepath);
 return new Path(path, FailureInjectionPolicy.DEFAULT_DELAY_KEY_SUBSTRING);
   }
+
+  @Override
+  public void testDirectWrite() throws Exception {
+resetStorageStatistics();
+super.testDirectWrite();
+assertEquals("Expected no renames for a direct write distcp", 0L,
+getRenameOperationCount());
+  }
+
+  @Override
+  public void testNonDirectWrite() throws Exception {
+resetStorageStatistics();
+try {
+  super.testNonDirectWrite();
+} catch (FileNotFoundException e) {
+  // We may get this exception when data is written to a DELAY_LISTING_ME
+  // directory causing verification of the distcp success to fail if
+  // S3Guard is not enabled
+}
+assertEquals("Expected 2 renames for a non-direct write distcp", 2L,
+getRenameOperationCount());
+  }
+
+  private void resetStorageStatistics() {
+getFileSystem().getStorageStatistics().reset();
+  }
+
+  private long getRenameOperationCount() {
+return getFileSystem().getStorageStatistics()
+.getLong(StorageStatistics.CommonStatisticNames.OP_RENAME);
+  }
 }
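
The two overridden tests above hinge on counting rename operations through the FileSystem's StorageStatistics. A small sketch of that counting idiom, applicable to any Hadoop FileSystem (the copy job itself is elided):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.StorageStatistics;

public class RenameCountExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    fs.getStorageStatistics().reset();
    // ... run a copy against fs here ...
    Long renames = fs.getStorageStatistics()
        .getLong(StorageStatistics.CommonStatisticNames.OP_RENAME);
    // A direct-write copy should finish with zero renames; the classic
    // write-to-temp-then-rename path shows up as a nonzero count.
    System.out.println("renames: " + (renames == null ? 0L : renames));
  }
}
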
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
index 4946091..e20f206 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
@@ -85,7 +85,8 @@ public final class DistCpConstants {
   "distcp.dynamic.min.records_per_chunk";
   public static final String CONF_LABEL_SPLIT_RATIO =
   "distcp.dynamic.split.ratio";
-  
+  public static final String CONF_LABEL_DIRECT_WRITE = "distcp.direct.write";
+
   /* Total bytes to be copied. Updated by copylisting. Unfiltered count */
   public static final String CONF_LABEL_TOTAL_BYTES_TO_BE_COPIED = 
"mapred.total.bytes.expected";
 
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
index fc047ca..1e63d80 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
@@ -179,6 +179,10 @@ public class DistCpContext {
 return options.getCopyBufferSize();
   }
 
+  public boolean shouldDirectWrite() {
+return options.shouldDirectWrite();
+  }

[hadoop] branch branch-2.8 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new a53beec  YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
a53beec is described below

commit a53beec79a981b202a38b50c1455eddf6239290f
Author: Eric E Payne 
AuthorDate: Thu Feb 7 22:24:24 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
---
 .../resources/webapps/static/yarn.dt.plugins.js    | 22 ++++++++++++++++++++++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  1 +
 2 files changed, 23 insertions(+)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..92f2ae9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -73,6 +73,28 @@ jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay )
   return this;
 }
 
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  var x = parseFloat(a);
+  var y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  var x = parseFloat(a);
+  var y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 function renderHadoopDate(data, type, full) {
   if (type === 'display' || type === 'filter') {
 if(data === '0'|| data === '-1') {
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index a07baa2..7586046 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -53,6 +53,7 @@ public class WebPageUtils {
   .append(", 'mRender': parseHadoopID }")
   .append("\n, {'sType':'numeric', 'aTargets': [6, 7]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [10, 11, 12] }")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[13]");


[hadoop] branch branch-2.9 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 3d8da16  YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
3d8da16 is described below

commit 3d8da16c82fa13c16f5069fc627c71aaf9e7c023
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

(cherry picked from commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b)
---
 .../resources/webapps/static/yarn.dt.plugins.js    | 23 +++++++++++++++++++++++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay ) {
   var
   _that = this,
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");


[hadoop] 01/02: Revert "HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth."

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 668817a6cefa6025ddfe082ed71d7d317d811381
Author: Steve Loughran 
AuthorDate: Thu Feb 7 21:56:43 2019 +

Revert "HADOOP-15954. ABFS: Enable owner and group conversion for MSI and 
login user using OAuth."

(accidentally mixed in two patches)

This reverts commit fa8cd1bf28f5b81849ba351a2d7225fbc580350d.
---
 .../main/java/org/apache/hadoop/fs/shell/Ls.java   |  35 +--
 .../src/site/markdown/FileSystemShell.md   |   5 +-
 .../java/org/apache/hadoop/fs/shell/TestLs.java|  35 +--
 .../hadoop-common/src/test/resources/testConf.xml  |   6 +-
 .../hdfs/tools/TestStoragePolicyCommands.java  |  17 --
 .../src/test/resources/testErasureCodingConf.xml   |  19 --
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  10 -
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  37 ++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 179 ++--
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  23 +-
 .../fs/azurebfs/oauth2/IdentityTransformer.java| 275 ---
 .../hadoop/fs/azurebfs/oauth2/package-info.java|   8 +-
 .../src/site/markdown/testing_azure.md |  55 
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  | 301 -
 14 files changed, 117 insertions(+), 888 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
index b2bdc84..efc541c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
@@ -33,7 +33,6 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.BlockStoragePolicySpi;
 import org.apache.hadoop.fs.ContentSummary;
 
 /**
@@ -58,15 +57,13 @@ class Ls extends FsCommand {
   private static final String OPTION_ATIME = "u";
   private static final String OPTION_SIZE = "S";
   private static final String OPTION_ECPOLICY = "e";
-  private static final String OPTION_SPOLICY = "sp";
 
   public static final String NAME = "ls";
   public static final String USAGE = "[-" + OPTION_PATHONLY + "] [-" +
   OPTION_DIRECTORY + "] [-" + OPTION_HUMAN + "] [-" +
   OPTION_HIDENONPRINTABLE + "] [-" + OPTION_RECURSIVE + "] [-" +
   OPTION_MTIME + "] [-" + OPTION_SIZE + "] [-" + OPTION_REVERSE + "] [-" +
-  OPTION_ATIME + "] [-" + OPTION_ECPOLICY + "] [-" + OPTION_SPOLICY
-  + "] [<path> ...]";
+  OPTION_ATIME + "] [-" + OPTION_ECPOLICY +"] [<path> ...]";
 
   public static final String DESCRIPTION =
   "List the contents that match the specified file pattern. If " +
@@ -99,9 +96,7 @@ class Ls extends FsCommand {
   "  Use time of last access instead of modification for\n" +
   "  display and sorting.\n"+
   "  -" + OPTION_ECPOLICY +
-  "  Display the erasure coding policy of files and directories.\n" +
-  "  -" + OPTION_SPOLICY +
-  "  Display the storage policy of files and directories.\n";
+  "  Display the erasure coding policy of files and directories.\n";
 
   protected final SimpleDateFormat dateFormat =
 new SimpleDateFormat("-MM-dd HH:mm");
@@ -115,7 +110,6 @@ class Ls extends FsCommand {
   private boolean orderSize;
   private boolean useAtime;
   private boolean displayECPolicy;
-  private boolean displaySPolicy;
   private Comparator orderComparator;
 
   protected boolean humanReadable = false;
@@ -141,8 +135,7 @@ class Ls extends FsCommand {
 CommandFormat cf = new CommandFormat(0, Integer.MAX_VALUE,
 OPTION_PATHONLY, OPTION_DIRECTORY, OPTION_HUMAN,
 OPTION_HIDENONPRINTABLE, OPTION_RECURSIVE, OPTION_REVERSE,
-OPTION_MTIME, OPTION_SIZE, OPTION_ATIME, OPTION_ECPOLICY,
-OPTION_SPOLICY);
+OPTION_MTIME, OPTION_SIZE, OPTION_ATIME, OPTION_ECPOLICY);
 cf.parse(args);
 pathOnly = cf.getOpt(OPTION_PATHONLY);
 dirRecurse = !cf.getOpt(OPTION_DIRECTORY);
@@ -154,7 +147,6 @@ class Ls extends FsCommand {
 orderSize = !orderTime && cf.getOpt(OPTION_SIZE);
 useAtime = cf.getOpt(OPTION_ATIME);
 displayECPolicy = cf.getOpt(OPTION_ECPOLICY);
-displaySPolicy = cf.getOpt(OPTION_SPOLICY);
 if (args.isEmpty()) args.add(Path.CUR_DIR);
 
 initialiseOrderComparator();
@@ -237,16 +229,6 @@ class Ls extends FsCommand {
 return this.displayECPolicy;
   }
 
-  /**
-   * Should storage policies be displayed.
-   * @return true display storage policies, false doesn't display storage
-   * policies
-   */
-  @VisibleForTesting
-  boolean 

[hadoop] branch trunk updated (546c5d7 -> 1f16550)

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 546c5d7  HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.
 new 668817a  Revert "HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth."
 new 1f16550  HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../main/java/org/apache/hadoop/fs/shell/Ls.java   | 35 ++
 .../src/site/markdown/FileSystemShell.md   |  5 +---
 .../java/org/apache/hadoop/fs/shell/TestLs.java| 35 +-
 .../hadoop-common/src/test/resources/testConf.xml  |  6 +---
 .../hdfs/tools/TestStoragePolicyCommands.java  | 17 ---
 .../src/test/resources/testErasureCodingConf.xml   | 19 
 .../fs/azurebfs/oauth2/IdentityTransformer.java|  7 +++--
 .../hadoop/fs/azurebfs/oauth2/package-info.java|  8 +
 8 files changed, 12 insertions(+), 120 deletions(-)


[hadoop] 02/02: HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 1f1655028eede24197705a594b6ef19e6737db35
Author: Da Zhou 
AuthorDate: Thu Feb 7 21:58:21 2019 +

HADOOP-15954. ABFS: Enable owner and group conversion for MSI and login user using OAuth.

Contributed by Da Zhou and Junhua Gu.
---
 .../hadoop/fs/azurebfs/AbfsConfiguration.java  |  10 +
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java|  37 +--
 .../fs/azurebfs/AzureBlobFileSystemStore.java  | 179 ++--
 .../fs/azurebfs/constants/ConfigurationKeys.java   |  23 +-
 .../fs/azurebfs/oauth2/IdentityTransformer.java| 278 +++
 .../src/site/markdown/testing_azure.md |  55 
 .../fs/azurebfs/ITestAbfsIdentityTransformer.java  | 301 +
 7 files changed, 773 insertions(+), 110 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index b9bc7f2..67055c5 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -219,6 +219,16 @@ public class AbfsConfiguration{
 
   /**
* Returns the account-specific value if it exists, then looks for an
+   * account-agnostic value.
+   * @param key Account-agnostic configuration key
+   * @return value if one exists, else the default value
+   */
+  public String getString(String key, String defaultValue) {
+return rawConfig.get(accountConf(key), rawConfig.get(key, defaultValue));
+  }
+
+  /**
+   * Returns the account-specific value if it exists, then looks for an
* account-agnostic value, and finally tries the default value.
* @param key Account-agnostic configuration key
* @param defaultValue Value returned if none is configured
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index 4c24ac8..e321e9e 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -83,9 +83,6 @@ public class AzureBlobFileSystem extends FileSystem {
   public static final Logger LOG = 
LoggerFactory.getLogger(AzureBlobFileSystem.class);
   private URI uri;
   private Path workingDir;
-  private UserGroupInformation userGroupInformation;
-  private String user;
-  private String primaryUserGroup;
   private AzureBlobFileSystemStore abfsStore;
   private boolean isClosed;
 
@@ -103,9 +100,7 @@ public class AzureBlobFileSystem extends FileSystem {
 LOG.debug("Initializing AzureBlobFileSystem for {}", uri);
 
 this.uri = URI.create(uri.getScheme() + "://" + uri.getAuthority());
-this.userGroupInformation = UserGroupInformation.getCurrentUser();
-this.user = userGroupInformation.getUserName();
-this.abfsStore = new AzureBlobFileSystemStore(uri, this.isSecureScheme(), configuration, userGroupInformation);
+this.abfsStore = new AzureBlobFileSystemStore(uri, this.isSecureScheme(), configuration);
 final AbfsConfiguration abfsConfiguration = 
abfsStore.getAbfsConfiguration();
 
 this.setWorkingDirectory(this.getHomeDirectory());
@@ -120,18 +115,6 @@ public class AzureBlobFileSystem extends FileSystem {
   }
 }
 
-if (!abfsConfiguration.getSkipUserGroupMetadataDuringInitialization()) {
-  try {
-this.primaryUserGroup = userGroupInformation.getPrimaryGroupName();
-  } catch (IOException ex) {
-LOG.error("Failed to get primary group for {}, using user name as 
primary group name", user);
-this.primaryUserGroup = this.user;
-  }
-} else {
-  //Provide a default group name
-  this.primaryUserGroup = this.user;
-}
-
 if (UserGroupInformation.isSecurityEnabled()) {
   this.delegationTokenEnabled = 
abfsConfiguration.isDelegationTokenManagerEnabled();
 
@@ -153,8 +136,8 @@ public class AzureBlobFileSystem extends FileSystem {
 final StringBuilder sb = new StringBuilder(
 "AzureBlobFileSystem{");
 sb.append("uri=").append(uri);
-sb.append(", user='").append(user).append('\'');
-sb.append(", primaryUserGroup='").append(primaryUserGroup).append('\'');
+sb.append(", user='").append(abfsStore.getUser()).append('\'');
+sb.append(", 
primaryUserGroup='").append(abfsStore.getPrimaryGroup()).append('\'');
 sb.append('}');
 return sb.toString();
   }
@@ -503,7 +486,7 @@ public class AzureBlobFileSystem extends FileSystem {
   public Path getHomeDirectory() {
 return makeQualified(new 
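
The new AbfsConfiguration.getString above resolves an account-qualified key before falling back to the account-agnostic key and then the default. A sketch of that lookup order using a plain Configuration; the key and account names here are hypothetical:

import org.apache.hadoop.conf.Configuration;

public class AccountOverrideLookup {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    String key = "fs.azure.account.oauth2.client.id";
    String accountKey = key + ".myaccount.dfs.core.windows.net";
    conf.set(accountKey, "per-account-id");
    // Mirrors rawConfig.get(accountConf(key), rawConfig.get(key, defaultValue)):
    // the account-specific value wins over the generic one.
    String value = conf.get(accountKey, conf.get(key, "default-id"));
    System.out.println(value); // prints "per-account-id"
  }
}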

[hadoop] branch branch-3.2 updated: HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new c5eca3f  HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.
c5eca3f is described below

commit c5eca3f7ee095d6a261eb411ad97aba654d67d13
Author: Ranith Sardar 
AuthorDate: Thu Feb 7 21:49:18 2019 +

HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.

Contributed by Ranith Sardar.

(cherry picked from commit 546c5d70efebb828389f609a89b123c4ee51f867)
---
 .../org/apache/hadoop/tools/util/DistCpUtils.java  |  1 +
 .../apache/hadoop/tools/util/TestDistCpUtils.java  | 88 +-
 2 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
index 2da5251..96a7c5d 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
@@ -211,6 +211,7 @@ public class DistCpUtils {
   List<AclEntry> srcAcl = srcFileStatus.getAclEntries();
   List<AclEntry> targetAcl = getAcl(targetFS, targetFileStatus);
   if (!srcAcl.equals(targetAcl)) {
+targetFS.removeAcl(path);
 targetFS.setAcl(path, srcAcl);
   }
   // setAcl doesn't preserve sticky bit, so also call setPermission if needed.
diff --git a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
index 54804eb..304e41c 100644
--- a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
@@ -25,7 +25,9 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.tools.ECAdmin;
@@ -39,12 +41,26 @@ import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
+import com.google.common.collect.Lists;
+
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.EnumSet;
+import java.util.List;
 import java.util.Random;
 import java.util.Stack;
 
+import static org.apache.hadoop.fs.permission.AclEntryScope.ACCESS;
+import static org.apache.hadoop.fs.permission.AclEntryScope.DEFAULT;
+import static org.apache.hadoop.fs.permission.AclEntryType.GROUP;
+import static org.apache.hadoop.fs.permission.AclEntryType.OTHER;
+import static org.apache.hadoop.fs.permission.AclEntryType.USER;
+import static org.apache.hadoop.fs.permission.FsAction.ALL;
+import static org.apache.hadoop.fs.permission.FsAction.EXECUTE;
+import static org.apache.hadoop.fs.permission.FsAction.READ;
+import static org.apache.hadoop.fs.permission.FsAction.READ_EXECUTE;
+import static org.apache.hadoop.fs.permission.FsAction.READ_WRITE;
+import static org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.aclEntry;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -60,6 +76,7 @@ public class TestDistCpUtils {
   
   @BeforeClass
   public static void create() throws IOException {
+config.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
 cluster = new MiniDFSCluster.Builder(config)
 .numDataNodes(2)
 .format(true)
@@ -180,7 +197,76 @@ public class TestDistCpUtils {
 Assert.assertTrue(srcStatus.getModificationTime() == 
dstStatus.getModificationTime());
 Assert.assertTrue(srcStatus.getReplication() == 
dstStatus.getReplication());
   }
-  
+
+  @Test
+  public void testPreserveAclsforDefaultACL() throws IOException {
+FileSystem fs = FileSystem.get(config);
+
+EnumSet<FileAttribute> attributes = EnumSet.of(FileAttribute.ACL,
+FileAttribute.PERMISSION, FileAttribute.XATTR, FileAttribute.GROUP,
+FileAttribute.USER, FileAttribute.REPLICATION, FileAttribute.XATTR,
+FileAttribute.TIMES);
+
+Path dest = new Path("/tmpdest");
+Path src = new Path("/testsrc");
+
+fs.mkdirs(src);
+fs.mkdirs(dest);
+
+List<AclEntry> acls = Lists.newArrayList(
+aclEntry(DEFAULT, USER, "foo", READ_EXECUTE),
+aclEntry(ACCESS, USER, READ_WRITE), aclEntry(ACCESS, GROUP, READ),
+aclEntry(ACCESS, 
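
The one-line change in DistCpUtils turns ACL preservation into a clear-then-set sequence: HDFS's setAcl can retain the target's existing default entries when the supplied spec carries only access entries, so clearing first leaves the target with exactly the source's ACL rather than a merge with leftovers. A minimal sketch of the sequence; the helper and paths are illustrative, not from the patch:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;

public class PreserveAclExample {
  static void copyAcl(FileSystem srcFs, Path src, FileSystem dstFs, Path dst)
      throws IOException {
    List<AclEntry> srcAcl = srcFs.getAclStatus(src).getEntries();
    // removeAcl drops all extended (including default) entries, keeping only
    // the base user/group/other entries derived from the permission bits.
    dstFs.removeAcl(dst);
    dstFs.setAcl(dst, srcAcl);
  }
}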

[hadoop] branch trunk updated: HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 546c5d7  HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.
546c5d7 is described below

commit 546c5d70efebb828389f609a89b123c4ee51f867
Author: Ranith Sardar 
AuthorDate: Thu Feb 7 21:48:07 2019 +

HADOOP-16032. Distcp It should clear sub directory ACL before applying new ACL on.
---
 .../org/apache/hadoop/tools/util/DistCpUtils.java  |  1 +
 .../apache/hadoop/tools/util/TestDistCpUtils.java  | 88 +-
 2 files changed, 88 insertions(+), 1 deletion(-)

diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
index 2da5251..96a7c5d 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
@@ -211,6 +211,7 @@ public class DistCpUtils {
   List<AclEntry> srcAcl = srcFileStatus.getAclEntries();
   List<AclEntry> targetAcl = getAcl(targetFS, targetFileStatus);
   if (!srcAcl.equals(targetAcl)) {
+targetFS.removeAcl(path);
 targetFS.setAcl(path, srcAcl);
   }
   // setAcl doesn't preserve sticky bit, so also call setPermission if needed.
diff --git a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
index 54804eb..304e41c 100644
--- a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
@@ -25,7 +25,9 @@ import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.server.namenode.INodeFile;
 import org.apache.hadoop.hdfs.tools.ECAdmin;
@@ -39,12 +41,26 @@ import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
+import com.google.common.collect.Lists;
+
 import java.io.IOException;
 import java.io.OutputStream;
 import java.util.EnumSet;
+import java.util.List;
 import java.util.Random;
 import java.util.Stack;
 
+import static org.apache.hadoop.fs.permission.AclEntryScope.ACCESS;
+import static org.apache.hadoop.fs.permission.AclEntryScope.DEFAULT;
+import static org.apache.hadoop.fs.permission.AclEntryType.GROUP;
+import static org.apache.hadoop.fs.permission.AclEntryType.OTHER;
+import static org.apache.hadoop.fs.permission.AclEntryType.USER;
+import static org.apache.hadoop.fs.permission.FsAction.ALL;
+import static org.apache.hadoop.fs.permission.FsAction.EXECUTE;
+import static org.apache.hadoop.fs.permission.FsAction.READ;
+import static org.apache.hadoop.fs.permission.FsAction.READ_EXECUTE;
+import static org.apache.hadoop.fs.permission.FsAction.READ_WRITE;
+import static org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.aclEntry;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -60,6 +76,7 @@ public class TestDistCpUtils {
   
   @BeforeClass
   public static void create() throws IOException {
+config.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
 cluster = new MiniDFSCluster.Builder(config)
 .numDataNodes(2)
 .format(true)
@@ -180,7 +197,76 @@ public class TestDistCpUtils {
 Assert.assertTrue(srcStatus.getModificationTime() == 
dstStatus.getModificationTime());
 Assert.assertTrue(srcStatus.getReplication() == 
dstStatus.getReplication());
   }
-  
+
+  @Test
+  public void testPreserveAclsforDefaultACL() throws IOException {
+FileSystem fs = FileSystem.get(config);
+
+EnumSet<FileAttribute> attributes = EnumSet.of(FileAttribute.ACL,
+FileAttribute.PERMISSION, FileAttribute.XATTR, FileAttribute.GROUP,
+FileAttribute.USER, FileAttribute.REPLICATION, FileAttribute.XATTR,
+FileAttribute.TIMES);
+
+Path dest = new Path("/tmpdest");
+Path src = new Path("/testsrc");
+
+fs.mkdirs(src);
+fs.mkdirs(dest);
+
+List<AclEntry> acls = Lists.newArrayList(
+aclEntry(DEFAULT, USER, "foo", READ_EXECUTE),
+aclEntry(ACCESS, USER, READ_WRITE), aclEntry(ACCESS, GROUP, READ),
+aclEntry(ACCESS, OTHER, READ), aclEntry(ACCESS, USER, "bar", ALL));
+final List<AclEntry> acls1 = Lists.newArrayList(aclEntry(ACCESS, USER, ALL),
+  

[hadoop] branch branch-2 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 2a7dcc5  YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
2a7dcc5 is described below

commit 2a7dcc509bc9ce5c043410ed5a547b9f9e510d6e
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

(cherry picked from commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b)
---
 .../resources/webapps/static/yarn.dt.plugins.js    | 23 +++++++++++++++++++++++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay ) {
   var
   _that = this,
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");


[hadoop] branch branch-3.0 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new f3ff6d3  YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein
f3ff6d3 is described below

commit f3ff6d3aa7614493cbe1c66b941adee78a6fe869
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

(cherry picked from commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b)
---
 .../resources/webapps/static/yarn.dt.plugins.js    | 23 +++++++++++++++++++++++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay 
) {
   var
   _that = this,
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");





[hadoop] branch branch-3.1 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 834a862  YARN-7171: RM UI should sort memory / cores numerically. 
Contributed by Ahmed Hussein
834a862 is described below

commit 834a862bd0ac67696e5b482825fb0088618a97e4
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by 
Ahmed Hussein

(cherry picked from commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b)
---
 .../resources/webapps/static/yarn.dt.plugins.js| 23 ++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = 
function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay 
) {
   var
   _that = this,
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");





[hadoop] branch branch-3.1 updated: Revert "HADOOP-15281. Distcp to add no-rename copy option."

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 352ebc6  Revert "HADOOP-15281. Distcp to add no-rename copy option."
352ebc6 is described below

commit 352ebc6ed988029be804d08883ffb8b5c4872221
Author: Eric E Payne 
AuthorDate: Thu Feb 7 20:18:32 2019 +

Revert "HADOOP-15281. Distcp to add no-rename copy option."

Revert "HADOOP-15281. Distcp to add no-rename copy option. Contributed 
by Andrew Olson."
This reverts commit d2765ffc2e3f6ce144bb0ca6066801d79cd7217d.
---
 .../fs/contract/s3a/ITestS3AContractDistCp.java| 33 ---
 .../org/apache/hadoop/tools/DistCpConstants.java   |  3 +-
 .../org/apache/hadoop/tools/DistCpContext.java |  4 --
 .../apache/hadoop/tools/DistCpOptionSwitch.java| 14 +
 .../org/apache/hadoop/tools/DistCpOptions.java | 19 --
 .../org/apache/hadoop/tools/OptionsParser.java |  4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  6 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 52 +
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |  6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |  5 +-
 .../tools/contract/AbstractContractDistCpTest.java | 68 +-
 11 files changed, 23 insertions(+), 191 deletions(-)

diff --git 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
index 740f256..b3d511e 100644
--- 
a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
+++ 
b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
@@ -18,7 +18,6 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
-import java.io.FileNotFoundException;
 import java.io.IOException;
 
 import static org.apache.hadoop.fs.s3a.Constants.*;
@@ -27,7 +26,6 @@ import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.StorageStatistics;
 import org.apache.hadoop.fs.s3a.FailureInjectionPolicy;
 import org.apache.hadoop.tools.contract.AbstractContractDistCpTest;
 
@@ -76,35 +74,4 @@ public class ITestS3AContractDistCp extends 
AbstractContractDistCpTest {
 Path path = super.path(filepath);
 return new Path(path, FailureInjectionPolicy.DEFAULT_DELAY_KEY_SUBSTRING);
   }
-
-  @Override
-  public void testDirectWrite() throws Exception {
-resetStorageStatistics();
-super.testDirectWrite();
-assertEquals("Expected no renames for a direct write distcp", 0L,
-getRenameOperationCount());
-  }
-
-  @Override
-  public void testNonDirectWrite() throws Exception {
-resetStorageStatistics();
-try {
-  super.testNonDirectWrite();
-} catch (FileNotFoundException e) {
-  // We may get this exception when data is written to a DELAY_LISTING_ME
-  // directory causing verification of the distcp success to fail if
-  // S3Guard is not enabled
-}
-assertEquals("Expected 2 renames for a non-direct write distcp", 2L,
-getRenameOperationCount());
-  }
-
-  private void resetStorageStatistics() {
-getFileSystem().getStorageStatistics().reset();
-  }
-
-  private long getRenameOperationCount() {
-return getFileSystem().getStorageStatistics()
-.getLong(StorageStatistics.CommonStatisticNames.OP_RENAME);
-  }
 }
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
index e20f206..4946091 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
@@ -85,8 +85,7 @@ public final class DistCpConstants {
   "distcp.dynamic.min.records_per_chunk";
   public static final String CONF_LABEL_SPLIT_RATIO =
   "distcp.dynamic.split.ratio";
-  public static final String CONF_LABEL_DIRECT_WRITE = "distcp.direct.write";
-
+  
   /* Total bytes to be copied. Updated by copylisting. Unfiltered count */
   public static final String CONF_LABEL_TOTAL_BYTES_TO_BE_COPIED = 
"mapred.total.bytes.expected";
 
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
index 1e63d80..fc047ca 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
@@ -179,10 +179,6 @@ public 
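
For context on the feature being reverted here: HADOOP-15281 let distcp
write straight to the final destination path instead of a temporary file
that is renamed into place on commit, which matters on object stores such as
S3 where a rename is a full server-side copy (the removed contract test
above expects zero renames for a direct-write distcp). A minimal sketch of
that decision, with simplified, hypothetical names -- the real logic lives in
RetriableFileCopyCommand from the file list above:

import org.apache.hadoop.fs.Path;

// Hypothetical helper: pick where the copy actually writes.
final class CopyTargetSketch {
  static Path chooseWorkPath(Path finalTarget, boolean directWrite) {
    if (directWrite) {
      // write in place: no rename cost, but a failed copy can leave a
      // partial file visible at the destination
      return finalTarget;
    }
    // default: temp file next to the target, renamed into place on success
    return new Path(finalTarget.getParent(),
        ".distcp.tmp." + finalTarget.getName());
  }
}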

[Hadoop Wiki] Update of "HowToCommit" by MartonElek

2019-02-07 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HowToCommit" page has been changed by MartonElek:
https://wiki.apache.org/hadoop/HowToCommit?action=diff&rev1=42&rev2=43

Comment:
fix man doc generation (from ant to mvn)

  
  The end user documentation is maintained in the main repository (hadoop.git) 
and the results are committed to the hadoop-site during each release. The 
website itself is managed in the hadoop-site.git repository (both the source 
and the rendered form).
  
- To commit end-user documentation changes to trunk or a branch, ask the user 
to submit only changes made to the *.xml files in {{{src/docs}}}. Apply that 
patch, run {{{ant docs}}} to generate the html, and then commit. End-user 
documentation is only published to the web when releases are made, as described 
in HowToRelease.
+ To commit end-user documentation create a patch as usual and modify the 
content of src/site directory of any hadoop project (eg. 
./hadoop-common-project/hadoop-auth/src/site).  You can regenerate the docs 
with {{mvn site}}. End-user documentation is only published to the web when 
releases are made, as described in HowToRelease.
  
- To commit changes to the website and re-publish them: {{{
+ To commit changes to the website and re-publish them: 
  
+ {{{
  git clone https://gitbox.apache.org/repos/asf/hadoop-site.git -b asf-site
  #edit site under ./src
  hugo
@@ -101, +102 @@

  
  The commit will be reflected on Apache Hadoop site automatically.
  
- Note: you can check the rendering locally: with 'hugo serve && firefox 
http://localhost:1313' 
+ Note: you can check the rendering locally: with {{hugo serve && firefox 
http://localhost:1313}} 
  
  == Patches that break HDFS, YARN and MapReduce ==
  




[Hadoop Wiki] Update of "HowToCommit" by MartonElek

2019-02-07 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HowToCommit" page has been changed by MartonElek:
https://wiki.apache.org/hadoop/HowToCommit?action=diff&rev1=41&rev2=42

Comment:
Update site generation

   * [[http://www.apache.org/dev/new-committers-guide.html|Apache New Committer 
Guide]]
   * [[http://www.apache.org/dev/committers.html|Apache Committer FAQ]]
  
- The first act of a new core committer is typically to add their name to the 
[[http://hadoop.apache.org/common/credits.html|credits]] page.  This requires 
changing the XML source in 
http://svn.apache.org/repos/asf/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml.
 Once done, update the Hadoop website as described [[#Documentation|here]].
+ The first act of a new core committer is typically to add their name to the 
[[http://hadoop.apache.org/common/credits.html|credits]] page.  This requires 
changing the site source in 
https://github.com/apache/hadoop-site/blob/asf-site/src/who.md. Once done, 
update the Hadoop website as described [[#Documentation|here]] (TLDR; don't 
forget to regenerate the site with hugo, and commit the generated results, too).
  
  
  == Review ==
@@ -79, +79 @@

  <>
   Committing Documentation 
  
- Hadoop's official documentation is authored using 
[[http://forrest.apache.org/|Forrest]].  To commit documentation changes you 
must have Apache Forrest installed, and set the forrest directory on your 
{{{$FORREST_HOME}}}. Note that the current version ([[wget 
http://archive.apache.org/dist/forrest/0.9/apache-forrest-0.9.tar.gz|0.9]]) 
work properly with Java 8. Documentation is of two types:
+ Hadoop's official documentation is authored using 
[[https://gohugo.io/|hugo]].  To commit documentation changes you must have 
Hugo installed (single binary available for all the platforms, part of the 
package repositories, brew/pacman/yum...). Documentation is of two types:
+ 
   1. End-user documentation, versioned with releases; and,
-  1. The website.  This is maintained separately in subversion, republished as 
it is changed.
+  1. The website.  
+ 
+ The end user documentation is maintained in the main repository (hadoop.git) 
and the results are committed to the hadoop-site during each release. The 
website itself is managed in the hadoop-site.git repository (both the source 
and the rendered form).
  
  To commit end-user documentation changes to trunk or a branch, ask the user 
to submit only changes made to the *.xml files in {{{src/docs}}}. Apply that 
patch, run {{{ant docs}}} to generate the html, and then commit. End-user 
documentation is only published to the web when releases are made, as described 
in HowToRelease.
  
  To commit changes to the website and re-publish them: {{{
- svn co https://svn.apache.org/repos/asf/hadoop/common/site
- cd site/main
- $FORREST_HOME/tools/ant/bin/ant -Dforrest.home=$FORREST_HOME # Newer version 
of Ant does not work. Use the Ant bundled with forrest.
- firefox publish/index.html   # preview the changes
- svn stat # check for new pages
- svn add  # add any new pages
- svn commit
+ 
+ git clone https://gitbox.apache.org/repos/asf/hadoop-site.git -b asf-site
+ #edit site under ./src
+ hugo
+ # add both the ./src and ./content directories (source and rendered version)
+ git add .
+ git commit
+ git push 
  }}}
  
  The commit will be reflected on Apache Hadoop site automatically.
+ 
+ Note: you can check the rendering locally: with 'hugo serve && firefox 
http://localhost:1313' 
  
  == Patches that break HDFS, YARN and MapReduce ==
  




[hadoop] branch branch-3.2 updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 55dde82  YARN-7171: RM UI should sort memory / cores numerically. 
Contributed by Ahmed Hussein
55dde82 is described below

commit 55dde827e60ef6455114b0bb28ecc07b6e457f37
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by 
Ahmed Hussein

(cherry picked from commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b)
---
 .../resources/webapps/static/yarn.dt.plugins.js| 23 ++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = 
function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay 
) {
   var
   _that = this,
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");





[hadoop] branch trunk updated: HDDS-1071. Make Ozone s3 acceptance test suite centos compatible. Contributed by Elek Marton.

2019-02-07 Thread bharat
This is an automated email from the ASF dual-hosted git repository.

bharat pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 75e8441  HDDS-1071. Make Ozone s3 acceptance test suite centos 
compatible. Contributed by Elek Marton.
75e8441 is described below

commit 75e8441c616deb2de2390e9375259ffdc6417935
Author: Bharat Viswanadham 
AuthorDate: Thu Feb 7 08:53:11 2019 -0800

HDDS-1071. Make Ozone s3 acceptance test suite centos compatible. 
Contributed by Elek Marton.
---
 hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot | 5 +
 1 file changed, 5 insertions(+)

diff --git a/hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot 
b/hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
index f426145..75f396c 100644
--- a/hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
+++ b/hadoop-ozone/dist/src/main/smoketest/s3/commonawslib.robot
@@ -39,7 +39,12 @@ Execute AWSS3Cli
 Install aws cli
 ${rc}  ${output} = Run And Return Rc And 
Output   which apt-get
 Run Keyword if '${rc}' == '0'  Install aws cli s3 debian
+${rc}  ${output} = Run And Return Rc And 
Output   yum --help
+Run Keyword if '${rc}' == '0'  Install aws cli s3 centos
 
+
+Install aws cli s3 centos
+Executesudo yum install -y awscli
 Install aws cli s3 debian
 Executesudo apt-get install -y awscli
 





[hadoop] branch trunk updated: YARN-7171: RM UI should sort memory / cores numerically. Contributed by Ahmed Hussein

2019-02-07 Thread epayne
This is an automated email from the ASF dual-hosted git repository.

epayne pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d1ca943  YARN-7171: RM UI should sort memory / cores numerically. 
Contributed by Ahmed Hussein
d1ca943 is described below

commit d1ca9432dd0b3e9b46b4903e8c9d33f5c28fcc1b
Author: Eric E Payne 
AuthorDate: Thu Feb 7 16:38:11 2019 +

YARN-7171: RM UI should sort memory / cores numerically. Contributed by 
Ahmed Hussein
---
 .../resources/webapps/static/yarn.dt.plugins.js| 23 ++
 .../hadoop/yarn/server/webapp/WebPageUtils.java|  3 ++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
index c003272..51cb630 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/yarn.dt.plugins.js
@@ -41,6 +41,29 @@ jQuery.fn.dataTableExt.oSort['title-numeric-desc'] = 
function(a,b) {
   return ((x < y) ?  1 : ((x > y) ? -1 : 0));
 };
 
+// 'numeric-ignore-strings' sort type
+jQuery.fn.dataTableExt.oSort['num-ignore-str-asc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? -1 : ((x > y) ? 1 : 0));
+};
+
+jQuery.fn.dataTableExt.oSort['num-ignore-str-desc'] = function(a, b) {
+  if (isNaN(a) && isNaN(b)) return ((a < b) ? 1 : ((a > b) ? -1 : 0));
+
+  if (isNaN(a)) return 1;
+  if (isNaN(b)) return -1;
+
+  x = parseFloat(a);
+  y = parseFloat(b);
+  return ((x < y) ? 1 : ((x > y) ? -1 : 0));
+};
+
 jQuery.fn.dataTableExt.oApi.fnSetFilteringDelay = function ( oSettings, iDelay 
) {
   var
   _that = this,
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index b2f65a8..1cd7d87 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -51,8 +51,9 @@ public class WebPageUtils {
 sb.append("[\n")
   .append("{'sType':'natural', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
-  .append("\n, {'sType':'numeric', 'aTargets': [6, 7, 8]")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [6, 7, 8]")
   .append(", 'mRender': renderHadoopDate }")
+  .append("\n, {'sType':'num-ignore-str', 'aTargets': [11, 12, 13, 14, 15] 
}")
   .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
 if (isFairSchedulerPage) {
   sb.append("[15]");





[hadoop] branch trunk updated: HDDS-922. Create isolated classloader to use ozonefs with any older hadoop versions. Contributed by Elek, Marton.

2019-02-07 Thread elek
This is an automated email from the ASF dual-hosted git repository.

elek pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a65aca2  HDDS-922. Create isolated classloader to use ozonefs with any 
older hadoop versions. Contributed by Elek, Marton.
a65aca2 is described below

commit a65aca2feff90b28fd5d52cf2da7e7248967bbaf
Author: Márton Elek 
AuthorDate: Thu Feb 7 17:02:03 2019 +0100

HDDS-922. Create isolated classloader to use ozonefs with any older hadoop 
versions. Contributed by Elek, Marton.
---
 .../hadoop/hdds/conf/OzoneConfiguration.java   |   2 +
 .../org/apache/hadoop/ozone/OzoneConfigKeys.java   |   7 +
 .../common/src/main/resources/ozone-default.xml|  15 ++
 hadoop-hdds/docs/content/OzoneFS.md|  19 +-
 .../dist/dev-support/bin/dist-layout-stitching |   4 -
 hadoop-ozone/dist/pom.xml  |  15 ++
 .../src/main/compose/ozonefs/docker-compose.yaml   |  29 ++-
 hadoop-ozone/ozonefs-lib-legacy/pom.xml| 104 +
 .../src/main/resources/ozonefs.txt |  21 ++
 hadoop-ozone/ozonefs-lib/pom.xml   |  89 
 hadoop-ozone/ozonefs/pom.xml   |  60 ++
 .../org/apache/hadoop/fs/ozone/BasicKeyInfo.java   |  53 +
 .../hadoop/fs/ozone/FilteredClassLoader.java   |  84 
 .../apache/hadoop/fs/ozone/OzoneClientAdapter.java |  55 +
 .../hadoop/fs/ozone/OzoneClientAdapterFactory.java | 119 +++
 .../hadoop/fs/ozone/OzoneClientAdapterImpl.java| 235 +
 .../apache/hadoop/fs/ozone/OzoneFSInputStream.java |  16 +-
 .../hadoop/fs/ozone/OzoneFSOutputStream.java   |   5 +-
 .../apache/hadoop/fs/ozone/OzoneFileSystem.java| 225 
 hadoop-ozone/pom.xml   |  23 +-
 20 files changed, 963 insertions(+), 217 deletions(-)

diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
index 36d953c..43e6fe7 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
@@ -47,6 +47,8 @@ public class OzoneConfiguration extends Configuration {
 
   public OzoneConfiguration(Configuration conf) {
 super(conf);
+//load the configuration from the classloader of the original conf.
+setClassLoader(conf.getClassLoader());
   }
 
   public List readPropertyFromXml(URL url) throws JAXBException {
diff --git 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
index e9a52f8aae..91f53f3 100644
--- 
a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
+++ 
b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
@@ -371,6 +371,13 @@ public final class OzoneConfigKeys {
   public static final boolean OZONE_ACL_ENABLED_DEFAULT =
   false;
 
+  //For technical reasons this is unused and hardcoded to the
+  // OzoneFileSystem.initialize.
+  public static final String OZONE_FS_ISOLATED_CLASSLOADER =
+  "ozone.fs.isolated-classloader";
+
+
+
   /**
* There is no need to instantiate this class.
*/
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml 
b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 5489936..7ba15f3 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1806,4 +1806,19 @@
   not be renewed.
 
   
+
+  
+ozone.fs.isolated-classloader
+
+OZONE, OZONEFS
+
+  Enable it for older hadoops to separate the classloading of all the
+  Ozone classes. With 'true' value, ozonefs can be used with older
+  hadoop versions as the hadoop3/ozone related classes are loaded by
+  an isolated classloader.
+
+  Default depends from the used jar. true for ozone-filesystem-lib-legacy
+  jar and false for the ozone-filesystem-lib.jar
+
+  
 
\ No newline at end of file
diff --git a/hadoop-hdds/docs/content/OzoneFS.md 
b/hadoop-hdds/docs/content/OzoneFS.md
index 92c83d8..b7f8a74 100644
--- a/hadoop-hdds/docs/content/OzoneFS.md
+++ b/hadoop-hdds/docs/content/OzoneFS.md
@@ -56,12 +56,11 @@ This will make this bucket to be the default file system 
for HDFS dfs commands a
 You also need to add the ozone-filesystem.jar file to the classpath:
 
 {{< highlight bash >}}
-export 
HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozonefs/hadoop-ozone-filesystem.jar:$HADOOP_CLASSPATH
+export 
HADOOP_CLASSPATH=/opt/ozone/share/ozonefs/lib/hadoop-ozone-filesystem-lib-.*.jar:$HADOOP_CLASSPATH
 {{< /highlight >}}
 
 
 
-
 Once the default 
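
The core of the classloading trick, reduced to a sketch (hypothetical names;
the patch's real implementation is FilteredClassLoader together with
OzoneClientAdapterFactory from the file list above): the bundled
hadoop3/ozone classes are loaded by a parent-less URLClassLoader so they
cannot clash with whatever Hadoop version the host application runs, and
only a small whitelist of shared API types is delegated back to the
application classloader.

import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;

class IsolatedLoaderSketch extends URLClassLoader {
  private final ClassLoader app;
  private final String[] delegatedPrefixes;

  IsolatedLoaderSketch(URL[] bundledJars, ClassLoader app,
      String... delegatedPrefixes) {
    // null parent: only bootstrap classes (java.*) resolve automatically
    super(bundledJars, null);
    this.app = app;
    this.delegatedPrefixes = delegatedPrefixes;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    if (Arrays.stream(delegatedPrefixes).anyMatch(name::startsWith)) {
      return app.loadClass(name);            // shared API comes from the host
    }
    return super.loadClass(name, resolve);   // everything else stays isolated
  }
}

This is also why the OzoneConfiguration constructor above now calls
setClassLoader(conf.getClassLoader()): resources have to resolve against the
classloader of the configuration that created it, not the default one.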

[hadoop] branch branch-2.8 updated: YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-2.8
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.8 by this push:
 new 4c063fa  YARN-9206. RMServerUtils does not count SHUTDOWN as an 
accepted state. Contributed by Kuhu Shukla.
4c063fa is described below

commit 4c063fa293c4f0bddcb47d157a7195c479d127bf
Author: Sunil G 
AuthorDate: Thu Feb 7 19:15:48 2019 +0530

YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. 
Contributed by Kuhu Shukla.
---
 .../apache/hadoop/yarn/api/records/NodeState.java  | 12 
 .../yarn/server/resourcemanager/RMServerUtils.java | 22 +++---
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
index d0344fb..2700cf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
@@ -55,4 +55,16 @@ public enum NodeState {
 return (this == UNHEALTHY || this == DECOMMISSIONED
 || this == LOST || this == SHUTDOWN);
   }
+
+  public boolean isInactiveState() {
+return this == NodeState.DECOMMISSIONED ||
+  this == NodeState.LOST || this == NodeState.REBOOTED ||
+  this == NodeState.SHUTDOWN;
+  }
+
+  public boolean isActiveState() {
+return this == NodeState.NEW ||
+this == NodeState.RUNNING || this == NodeState.UNHEALTHY ||
+this == NodeState.DECOMMISSIONING;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index 14416c7..d355837 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -91,10 +91,20 @@ public class RMServerUtils {
  EnumSet<NodeState> acceptedStates) {
 // nodes contains nodes that are NEW, RUNNING OR UNHEALTHY
ArrayList<RMNode> results = new ArrayList<RMNode>();
-if (acceptedStates.contains(NodeState.NEW) ||
-acceptedStates.contains(NodeState.RUNNING) ||
-acceptedStates.contains(NodeState.DECOMMISSIONING) ||
-acceptedStates.contains(NodeState.UNHEALTHY)) {
+boolean hasActive = false;
+boolean hasInactive = false;
+for (NodeState nodeState : acceptedStates) {
+  if (!hasInactive && nodeState.isInactiveState()) {
+hasInactive = true;
+  }
+  if (!hasActive && nodeState.isActiveState()) {
+hasActive = true;
+  }
+  if (hasActive && hasInactive) {
+break;
+  }
+}
+if (hasActive) {
   for (RMNode rmNode : context.getRMNodes().values()) {
 if (acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
@@ -103,9 +113,7 @@ public class RMServerUtils {
 }
 
 // inactiveNodes contains nodes that are DECOMMISSIONED, LOST, OR REBOOTED
-if (acceptedStates.contains(NodeState.DECOMMISSIONED) ||
-acceptedStates.contains(NodeState.LOST) ||
-acceptedStates.contains(NodeState.REBOOTED)) {
+if (hasInactive) {
   for (RMNode rmNode : context.getInactiveRMNodes().values()) {
 if ((rmNode != null) && acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
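
The shape of the fix: instead of enumerating the accepted states twice with
a hand-maintained contains() chain (which is how SHUTDOWN got missed), the
states now classify themselves and the method derives two booleans in one
pass. A self-contained sketch of the pattern, with the YARN types trimmed
down to the essentials:

import java.util.EnumSet;

final class StatePartitionSketch {
  enum NodeState {
    NEW, RUNNING, UNHEALTHY, DECOMMISSIONING,   // active side
    DECOMMISSIONED, LOST, REBOOTED, SHUTDOWN;   // inactive side

    boolean isActiveState() {
      return this == NEW || this == RUNNING
          || this == UNHEALTHY || this == DECOMMISSIONING;
    }

    // every state here is one or the other; the real patch lists the
    // inactive states explicitly
    boolean isInactiveState() {
      return !isActiveState();
    }
  }

  public static void main(String[] args) {
    EnumSet<NodeState> accepted =
        EnumSet.of(NodeState.RUNNING, NodeState.SHUTDOWN);
    boolean hasActive = accepted.stream().anyMatch(NodeState::isActiveState);
    boolean hasInactive =
        accepted.stream().anyMatch(NodeState::isInactiveState);
    // SHUTDOWN now counts as inactive, so the inactive-node scan runs too --
    // the behaviour the old contains() chain silently dropped.
    System.out.println("active=" + hasActive + " inactive=" + hasInactive);
  }
}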





[hadoop] branch branch-2.9 updated: YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 1d35c47  YARN-9206. RMServerUtils does not count SHUTDOWN as an 
accepted state. Contributed by Kuhu Shukla.
1d35c47 is described below

commit 1d35c476339f27bd9d937c0b6faac4b701bb482a
Author: Sunil G 
AuthorDate: Thu Feb 7 19:08:41 2019 +0530

YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. 
Contributed by Kuhu Shukla.

(cherry picked from commit 6ffe6ea8999c1216b6e510e350535754431bac2d)
---
 .../apache/hadoop/yarn/api/records/NodeState.java  | 12 
 .../yarn/server/resourcemanager/RMServerUtils.java | 22 -
 .../resourcemanager/webapp/RMWebServices.java  |  4 +--
 .../server/resourcemanager/TestRMServerUtils.java  | 36 ++
 4 files changed, 64 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
index d0344fb..2700cf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
@@ -55,4 +55,16 @@ public enum NodeState {
 return (this == UNHEALTHY || this == DECOMMISSIONED
 || this == LOST || this == SHUTDOWN);
   }
+
+  public boolean isInactiveState() {
+return this == NodeState.DECOMMISSIONED ||
+  this == NodeState.LOST || this == NodeState.REBOOTED ||
+  this == NodeState.SHUTDOWN;
+  }
+
+  public boolean isActiveState() {
+return this == NodeState.NEW ||
+this == NodeState.RUNNING || this == NodeState.UNHEALTHY ||
+this == NodeState.DECOMMISSIONING;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index 35b0c98..5b07448 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -106,10 +106,20 @@ public class RMServerUtils {
  EnumSet<NodeState> acceptedStates) {
 // nodes contains nodes that are NEW, RUNNING, UNHEALTHY or 
DECOMMISSIONING.
ArrayList<RMNode> results = new ArrayList<RMNode>();
-if (acceptedStates.contains(NodeState.NEW) ||
-acceptedStates.contains(NodeState.RUNNING) ||
-acceptedStates.contains(NodeState.DECOMMISSIONING) ||
-acceptedStates.contains(NodeState.UNHEALTHY)) {
+boolean hasActive = false;
+boolean hasInactive = false;
+for (NodeState nodeState : acceptedStates) {
+  if (!hasInactive && nodeState.isInactiveState()) {
+hasInactive = true;
+  }
+  if (!hasActive && nodeState.isActiveState()) {
+hasActive = true;
+  }
+  if (hasActive && hasInactive) {
+break;
+  }
+}
+if (hasActive) {
   for (RMNode rmNode : context.getRMNodes().values()) {
 if (acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
@@ -118,9 +128,7 @@ public class RMServerUtils {
 }
 
 // inactiveNodes contains nodes that are DECOMMISSIONED, LOST, OR REBOOTED
-if (acceptedStates.contains(NodeState.DECOMMISSIONED) ||
-acceptedStates.contains(NodeState.LOST) ||
-acceptedStates.contains(NodeState.REBOOTED)) {
+if (hasInactive) {
   for (RMNode rmNode : context.getInactiveRMNodes().values()) {
 if ((rmNode != null) && acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 5266581..33533b4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ 

[hadoop] branch branch-2 updated: YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new a919336  YARN-9206. RMServerUtils does not count SHUTDOWN as an 
accepted state. Contributed by Kuhu Shukla.
a919336 is described below

commit a91933620d8755e80ad4bdf900b506dd73d26786
Author: Sunil G 
AuthorDate: Thu Feb 7 19:08:41 2019 +0530

YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. 
Contributed by Kuhu Shukla.

(cherry picked from commit 6ffe6ea8999c1216b6e510e350535754431bac2d)
---
 .../apache/hadoop/yarn/api/records/NodeState.java  | 12 
 .../yarn/server/resourcemanager/RMServerUtils.java | 22 -
 .../resourcemanager/webapp/RMWebServices.java  |  4 +--
 .../server/resourcemanager/TestRMServerUtils.java  | 36 ++
 4 files changed, 64 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
index d0344fb..2700cf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
@@ -55,4 +55,16 @@ public enum NodeState {
 return (this == UNHEALTHY || this == DECOMMISSIONED
 || this == LOST || this == SHUTDOWN);
   }
+
+  public boolean isInactiveState() {
+return this == NodeState.DECOMMISSIONED ||
+  this == NodeState.LOST || this == NodeState.REBOOTED ||
+  this == NodeState.SHUTDOWN;
+  }
+
+  public boolean isActiveState() {
+return this == NodeState.NEW ||
+this == NodeState.RUNNING || this == NodeState.UNHEALTHY ||
+this == NodeState.DECOMMISSIONING;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index 35b0c98..5b07448 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -106,10 +106,20 @@ public class RMServerUtils {
  EnumSet<NodeState> acceptedStates) {
 // nodes contains nodes that are NEW, RUNNING, UNHEALTHY or 
DECOMMISSIONING.
ArrayList<RMNode> results = new ArrayList<RMNode>();
-if (acceptedStates.contains(NodeState.NEW) ||
-acceptedStates.contains(NodeState.RUNNING) ||
-acceptedStates.contains(NodeState.DECOMMISSIONING) ||
-acceptedStates.contains(NodeState.UNHEALTHY)) {
+boolean hasActive = false;
+boolean hasInactive = false;
+for (NodeState nodeState : acceptedStates) {
+  if (!hasInactive && nodeState.isInactiveState()) {
+hasInactive = true;
+  }
+  if (!hasActive && nodeState.isActiveState()) {
+hasActive = true;
+  }
+  if (hasActive && hasInactive) {
+break;
+  }
+}
+if (hasActive) {
   for (RMNode rmNode : context.getRMNodes().values()) {
 if (acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
@@ -118,9 +128,7 @@ public class RMServerUtils {
 }
 
 // inactiveNodes contains nodes that are DECOMMISSIONED, LOST, OR REBOOTED
-if (acceptedStates.contains(NodeState.DECOMMISSIONED) ||
-acceptedStates.contains(NodeState.LOST) ||
-acceptedStates.contains(NodeState.REBOOTED)) {
+if (hasInactive) {
   for (RMNode rmNode : context.getInactiveRMNodes().values()) {
 if ((rmNode != null) && acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index d76a412..cded2ec 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ 

[hadoop] branch branch-3.0 updated: YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new ec0bed1  YARN-9206. RMServerUtils does not count SHUTDOWN as an 
accepted state. Contributed by Kuhu Shukla.
ec0bed1 is described below

commit ec0bed1008d7bd5dbd81ed3bdc847348d791fff5
Author: Sunil G 
AuthorDate: Thu Feb 7 19:08:41 2019 +0530

YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. 
Contributed by Kuhu Shukla.

(cherry picked from commit 6ffe6ea8999c1216b6e510e350535754431bac2d)
---
 .../apache/hadoop/yarn/api/records/NodeState.java  | 12 
 .../yarn/server/resourcemanager/RMServerUtils.java | 22 -
 .../resourcemanager/webapp/RMWebServices.java  |  4 +--
 .../server/resourcemanager/TestRMServerUtils.java  | 36 ++
 4 files changed, 64 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
index d0344fb..2700cf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
@@ -55,4 +55,16 @@ public enum NodeState {
 return (this == UNHEALTHY || this == DECOMMISSIONED
 || this == LOST || this == SHUTDOWN);
   }
+
+  public boolean isInactiveState() {
+return this == NodeState.DECOMMISSIONED ||
+  this == NodeState.LOST || this == NodeState.REBOOTED ||
+  this == NodeState.SHUTDOWN;
+  }
+
+  public boolean isActiveState() {
+return this == NodeState.NEW ||
+this == NodeState.RUNNING || this == NodeState.UNHEALTHY ||
+this == NodeState.DECOMMISSIONING;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index e045f9a..ea47d2f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -106,10 +106,20 @@ public class RMServerUtils {
  EnumSet<NodeState> acceptedStates) {
 // nodes contains nodes that are NEW, RUNNING, UNHEALTHY or 
DECOMMISSIONING.
ArrayList<RMNode> results = new ArrayList<RMNode>();
-if (acceptedStates.contains(NodeState.NEW) ||
-acceptedStates.contains(NodeState.RUNNING) ||
-acceptedStates.contains(NodeState.DECOMMISSIONING) ||
-acceptedStates.contains(NodeState.UNHEALTHY)) {
+boolean hasActive = false;
+boolean hasInactive = false;
+for (NodeState nodeState : acceptedStates) {
+  if (!hasInactive && nodeState.isInactiveState()) {
+hasInactive = true;
+  }
+  if (!hasActive && nodeState.isActiveState()) {
+hasActive = true;
+  }
+  if (hasActive && hasInactive) {
+break;
+  }
+}
+if (hasActive) {
   for (RMNode rmNode : context.getRMNodes().values()) {
 if (acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
@@ -118,9 +128,7 @@ public class RMServerUtils {
 }
 
 // inactiveNodes contains nodes that are DECOMMISSIONED, LOST, OR REBOOTED
-if (acceptedStates.contains(NodeState.DECOMMISSIONED) ||
-acceptedStates.contains(NodeState.LOST) ||
-acceptedStates.contains(NodeState.REBOOTED)) {
+if (hasInactive) {
   for (RMNode rmNode : context.getInactiveRMNodes().values()) {
 if ((rmNode != null) && acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 7f6b367..42294b0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ 

[hadoop] branch branch-3.1 updated: YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. Contributed by Kuhu Shukla.

2019-02-07 Thread sunilg
This is an automated email from the ASF dual-hosted git repository.

sunilg pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6ffe6ea  YARN-9206. RMServerUtils does not count SHUTDOWN as an 
accepted state. Contributed by Kuhu Shukla.
6ffe6ea is described below

commit 6ffe6ea8999c1216b6e510e350535754431bac2d
Author: Sunil G 
AuthorDate: Thu Feb 7 19:08:41 2019 +0530

YARN-9206. RMServerUtils does not count SHUTDOWN as an accepted state. 
Contributed by Kuhu Shukla.
---
 .../apache/hadoop/yarn/api/records/NodeState.java  | 12 
 .../yarn/server/resourcemanager/RMServerUtils.java | 22 -
 .../resourcemanager/webapp/RMWebServices.java  |  4 +--
 .../server/resourcemanager/TestRMServerUtils.java  | 36 ++
 4 files changed, 64 insertions(+), 10 deletions(-)

diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
index d0344fb..2700cf2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeState.java
@@ -55,4 +55,16 @@ public enum NodeState {
 return (this == UNHEALTHY || this == DECOMMISSIONED
 || this == LOST || this == SHUTDOWN);
   }
+
+  public boolean isInactiveState() {
+return this == NodeState.DECOMMISSIONED ||
+  this == NodeState.LOST || this == NodeState.REBOOTED ||
+  this == NodeState.SHUTDOWN;
+  }
+
+  public boolean isActiveState() {
+return this == NodeState.NEW ||
+this == NodeState.RUNNING || this == NodeState.UNHEALTHY ||
+this == NodeState.DECOMMISSIONING;
+  }
 }
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
index ab6bbcf..8282b85 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMServerUtils.java
@@ -109,10 +109,20 @@ public class RMServerUtils {
  EnumSet<NodeState> acceptedStates) {
 // nodes contains nodes that are NEW, RUNNING, UNHEALTHY or 
DECOMMISSIONING.
ArrayList<RMNode> results = new ArrayList<RMNode>();
-if (acceptedStates.contains(NodeState.NEW) ||
-acceptedStates.contains(NodeState.RUNNING) ||
-acceptedStates.contains(NodeState.DECOMMISSIONING) ||
-acceptedStates.contains(NodeState.UNHEALTHY)) {
+boolean hasActive = false;
+boolean hasInactive = false;
+for (NodeState nodeState : acceptedStates) {
+  if (!hasInactive && nodeState.isInactiveState()) {
+hasInactive = true;
+  }
+  if (!hasActive && nodeState.isActiveState()) {
+hasActive = true;
+  }
+  if (hasActive && hasInactive) {
+break;
+  }
+}
+if (hasActive) {
   for (RMNode rmNode : context.getRMNodes().values()) {
 if (acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
@@ -121,9 +131,7 @@ public class RMServerUtils {
 }
 
 // inactiveNodes contains nodes that are DECOMMISSIONED, LOST, OR REBOOTED
-if (acceptedStates.contains(NodeState.DECOMMISSIONED) ||
-acceptedStates.contains(NodeState.LOST) ||
-acceptedStates.contains(NodeState.REBOOTED)) {
+if (hasInactive) {
   for (RMNode rmNode : context.getInactiveRMNodes().values()) {
 if ((rmNode != null) && acceptedStates.contains(rmNode.getState())) {
   results.add(rmNode);
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index e1c8f0a..f9e115e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -441,9 +441,7 @@ public class 

[hadoop] branch trunk updated: HDDS-1010. ContainerSet#getContainerMap should be renamed. Contributed by Supratim Deka.

2019-02-07 Thread msingh
This is an automated email from the ASF dual-hosted git repository.

msingh pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 214112b  HDDS-1010. ContainerSet#getContainerMap should be renamed. 
Contributed by Supratim Deka.
214112b is described below

commit 214112b2d706b301d18c4b270280cd7eee70e81e
Author: Mukul Kumar Singh 
AuthorDate: Thu Feb 7 18:06:23 2019 +0530

HDDS-1010. ContainerSet#getContainerMap should be renamed. Contributed by 
Supratim Deka.
---
 .../hadoop/ozone/container/common/impl/ContainerSet.java   |  3 ++-
 .../ozone/container/common/TestBlockDeletingService.java   |  2 +-
 .../common/impl/TestContainerDeletionChoosingPolicy.java   |  4 ++--
 .../container/common/impl/TestContainerPersistence.java| 14 +++---
 4 files changed, 12 insertions(+), 11 deletions(-)

diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
index 3c09f02..aff2275 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
@@ -135,7 +135,8 @@ public class ContainerSet {
* Return a copy of the containerMap.
* @return containerMap
*/
-  public Map<Long, Container> getContainerMap() {
+  @VisibleForTesting
+  public Map<Long, Container> getContainerMapCopy() {
 return ImmutableMap.copyOf(containerMap);
   }
 
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
index e8ae266..27fe4ff 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
@@ -204,7 +204,7 @@ public class TestBlockDeletingService {
 
 MetadataStore meta = BlockUtils.getDB(
 (KeyValueContainerData) containerData.get(0), conf);
-Map<Long, Container> containerMap = containerSet.getContainerMap();
+Map<Long, Container> containerMap = containerSet.getContainerMapCopy();
 // NOTE: this test assumes that all the container is KetValueContainer and
 // have DeleteTransactionId in KetValueContainerData. If other
 // types is going to be added, this test should be checked.
diff --git 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
index 745f730..f4b089b 100644
--- 
a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
+++ 
b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
@@ -78,7 +78,7 @@ public class TestContainerDeletionChoosingPolicy {
       KeyValueContainer container = new KeyValueContainer(data, conf);
       containerSet.addContainer(container);
       Assert.assertTrue(
-          containerSet.getContainerMap().containsKey(data.getContainerID()));
+          containerSet.getContainerMapCopy().containsKey(data.getContainerID()));
     }
 
     ContainerDeletionChoosingPolicy deletionPolicy =
@@ -138,7 +138,7 @@ public class TestContainerDeletionChoosingPolicy {
       KeyValueContainer container = new KeyValueContainer(data, conf);
       containerSet.addContainer(container);
       Assert.assertTrue(
-          containerSet.getContainerMap().containsKey(containerId));
+          containerSet.getContainerMapCopy().containsKey(containerId));
     }
 
     ContainerDeletionChoosingPolicy deletionPolicy =
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
index 8bc73d4..f2e44cb 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
@@ -177,7 +177,7 @@ public class TestContainerPersistence {
   public void testCreateContainer() throws Exception {
     long testContainerID = getTestContainerID();
     addContainer(containerSet, testContainerID);
-    Assert.assertTrue(containerSet.getContainerMap()
+

[hadoop] branch branch-3.2 updated: HADOOP-15281. Distcp to add no-rename copy option.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 36f3e77  HADOOP-15281. Distcp to add no-rename copy option.
36f3e77 is described below

commit 36f3e775d476eb848f33569bfbbab4872b11d9df
Author: Andrew Olson 
AuthorDate: Thu Feb 7 10:09:13 2019 +

HADOOP-15281. Distcp to add no-rename copy option.

Contributed by Andrew Olson.

(cherry picked from commit de804e53b9d20a2df75a4c7252bf83ed52011488)
---
 .../fs/contract/s3a/ITestS3AContractDistCp.java| 33 +++
 .../org/apache/hadoop/tools/DistCpConstants.java   |  3 +-
 .../org/apache/hadoop/tools/DistCpContext.java |  4 ++
 .../apache/hadoop/tools/DistCpOptionSwitch.java| 14 -
 .../org/apache/hadoop/tools/DistCpOptions.java | 19 ++
 .../org/apache/hadoop/tools/OptionsParser.java |  4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  6 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 52 -
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |  6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |  5 +-
 .../tools/contract/AbstractContractDistCpTest.java | 68 +-
 11 files changed, 191 insertions(+), 23 deletions(-)


[hadoop] branch branch-3.1 updated: HADOOP-15281. Distcp to add no-rename copy option.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new d2765ff  HADOOP-15281. Distcp to add no-rename copy option.
d2765ff is described below

commit d2765ffc2e3f6ce144bb0ca6066801d79cd7217d
Author: Andrew Olson 
AuthorDate: Thu Feb 7 10:09:55 2019 +

HADOOP-15281. Distcp to add no-rename copy option.

Contributed by Andrew Olson.

(cherry picked from commit de804e53b9d20a2df75a4c7252bf83ed52011488)
---
 .../fs/contract/s3a/ITestS3AContractDistCp.java| 33 +++
 .../org/apache/hadoop/tools/DistCpConstants.java   |  3 +-
 .../org/apache/hadoop/tools/DistCpContext.java |  4 ++
 .../apache/hadoop/tools/DistCpOptionSwitch.java| 14 -
 .../org/apache/hadoop/tools/DistCpOptions.java | 19 ++
 .../org/apache/hadoop/tools/OptionsParser.java |  4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  6 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 52 -
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |  6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |  5 +-
 .../tools/contract/AbstractContractDistCpTest.java | 68 +-
 11 files changed, 191 insertions(+), 23 deletions(-)


[hadoop] branch trunk updated: HADOOP-15281. Distcp to add no-rename copy option.

2019-02-07 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new de804e5  HADOOP-15281. Distcp to add no-rename copy option.
de804e5 is described below

commit de804e53b9d20a2df75a4c7252bf83ed52011488
Author: Andrew Olson 
AuthorDate: Thu Feb 7 10:05:58 2019 +

HADOOP-15281. Distcp to add no-rename copy option.

Contributed by Andrew Olson.
---
 .../fs/contract/s3a/ITestS3AContractDistCp.java| 33 +++
 .../org/apache/hadoop/tools/DistCpConstants.java   |  3 +-
 .../org/apache/hadoop/tools/DistCpContext.java |  4 ++
 .../apache/hadoop/tools/DistCpOptionSwitch.java| 14 -
 .../org/apache/hadoop/tools/DistCpOptions.java | 19 ++
 .../org/apache/hadoop/tools/OptionsParser.java |  4 +-
 .../org/apache/hadoop/tools/mapred/CopyMapper.java |  6 +-
 .../tools/mapred/RetriableFileCopyCommand.java | 52 -
 .../hadoop-distcp/src/site/markdown/DistCp.md.vm   |  6 +-
 .../org/apache/hadoop/tools/TestDistCpOptions.java |  5 +-
 .../tools/contract/AbstractContractDistCpTest.java | 68 +-
 11 files changed, 191 insertions(+), 23 deletions(-)

diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
index b3d511e..740f256 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.fs.contract.s3a;
 
+import java.io.FileNotFoundException;
 import java.io.IOException;
 
 import static org.apache.hadoop.fs.s3a.Constants.*;
@@ -26,6 +27,7 @@ import static org.apache.hadoop.fs.s3a.S3ATestUtils.maybeEnableS3Guard;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageStatistics;
 import org.apache.hadoop.fs.s3a.FailureInjectionPolicy;
 import org.apache.hadoop.tools.contract.AbstractContractDistCpTest;
 
@@ -74,4 +76,35 @@ public class ITestS3AContractDistCp extends AbstractContractDistCpTest {
     Path path = super.path(filepath);
     return new Path(path, FailureInjectionPolicy.DEFAULT_DELAY_KEY_SUBSTRING);
   }
+
+  @Override
+  public void testDirectWrite() throws Exception {
+    resetStorageStatistics();
+    super.testDirectWrite();
+    assertEquals("Expected no renames for a direct write distcp", 0L,
+        getRenameOperationCount());
+  }
+
+  @Override
+  public void testNonDirectWrite() throws Exception {
+    resetStorageStatistics();
+    try {
+      super.testNonDirectWrite();
+    } catch (FileNotFoundException e) {
+      // We may get this exception when data is written to a DELAY_LISTING_ME
+      // directory causing verification of the distcp success to fail if
+      // S3Guard is not enabled
+    }
+    assertEquals("Expected 2 renames for a non-direct write distcp", 2L,
+        getRenameOperationCount());
+  }
+
+  private void resetStorageStatistics() {
+    getFileSystem().getStorageStatistics().reset();
+  }
+
+  private long getRenameOperationCount() {
+    return getFileSystem().getStorageStatistics()
+        .getLong(StorageStatistics.CommonStatisticNames.OP_RENAME);
+  }
 }
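The two overridden tests above assert on the S3A rename counter. A hedged sketch of reading such a counter through the public StorageStatistics API; the null guard is an extra precaution added here, since getLong returns a Long that may be absent for schemes that do not track the statistic:

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.StorageStatistics;

    // Reads the rename counter the same way getRenameOperationCount() does;
    // fs is assumed to be an already-initialized FileSystem instance.
    final class RenameCounter {
      private RenameCounter() {
      }

      static long renameCount(FileSystem fs) {
        StorageStatistics stats = fs.getStorageStatistics();
        Long renames = stats.getLong(
            StorageStatistics.CommonStatisticNames.OP_RENAME);
        return renames == null ? 0L : renames;
      }
    }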
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
index 4946091..e20f206 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
@@ -85,7 +85,8 @@ public final class DistCpConstants {
       "distcp.dynamic.min.records_per_chunk";
   public static final String CONF_LABEL_SPLIT_RATIO =
       "distcp.dynamic.split.ratio";
-
+  public static final String CONF_LABEL_DIRECT_WRITE = "distcp.direct.write";
+
   /* Total bytes to be copied. Updated by copylisting. Unfiltered count */
   public static final String CONF_LABEL_TOTAL_BYTES_TO_BE_COPIED =
       "mapred.total.bytes.expected";
 
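The new distcp.direct.write label is how the option travels from the client to the copy tasks through the job configuration. A minimal sketch of reading it; the false default is an assumption for illustration, not taken from the diff above:

    import org.apache.hadoop.conf.Configuration;

    // Sketch only: mirrors how a task-side class could read the flag stored
    // under CONF_LABEL_DIRECT_WRITE; the default value here is assumed.
    public class DirectWriteFlag {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        boolean directWrite = conf.getBoolean("distcp.direct.write", false);
        System.out.println("direct write enabled: " + directWrite);
      }
    }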
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
index fc047ca..1e63d80 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpContext.java
@@ -179,6 +179,10 @@ public class DistCpContext {
     return options.getCopyBufferSize();
   }
 
+  public boolean shouldDirectWrite() {
+    return options.shouldDirectWrite();
+  }
+
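Taken together, the plumbing above exposes a no-rename copy mode. A hedged usage sketch via the Java API; the builder method name withDirectWrite and the command-line equivalent (hadoop distcp -direct <src> <dst>) are inferred from the changed files in the diffstat rather than shown in this excerpt, and the paths are placeholders:

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.tools.DistCp;
    import org.apache.hadoop.tools.DistCpOptions;

    // Copies from HDFS to an S3A bucket, writing files directly to their
    // final keys and skipping the temporary-file-then-rename commit step
    // that is expensive on object stores.
    public class DirectWriteDistCp {
      public static void main(String[] args) throws Exception {
        DistCpOptions options = new DistCpOptions.Builder(
            Collections.singletonList(new Path("hdfs://namenode:8020/data")),
            new Path("s3a://example-bucket/data"))
            .withDirectWrite(true)
            .build();
        new DistCp(new Configuration(), options).execute();
      }
    }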