[hadoop] branch branch-3.3 updated: HADOOP-18592. Sasl connection failure should log remote address. (#5294)

2023-02-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new f3fa4af5dc3 HADOOP-18592. Sasl connection failure should log remote address. (#5294)
f3fa4af5dc3 is described below

commit f3fa4af5dc30f30784f507d1122b75ebeea50b46
Author: Viraj Jasani 
AuthorDate: Wed Feb 1 10:15:20 2023 -0800

HADOOP-18592. Sasl connection failure should log remote address. (#5294)

Contributed by Viraj Jasani 

Signed-off-by: Chris Nauroth 
Signed-off-by: Steve Loughran 
Signed-off-by: Mingliang Liu 
---
 .../src/main/java/org/apache/hadoop/ipc/Client.java   | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
index 57f0b7c2149..7327d884bc7 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
@@ -687,7 +687,7 @@ public class Client implements AutoCloseable {
  * handle that, a relogin is attempted.
  */
 private synchronized void handleSaslConnectionFailure(
-final int currRetries, final int maxRetries, final Exception ex,
+final int currRetries, final int maxRetries, final IOException ex,
 final Random rand, final UserGroupInformation ugi) throws IOException,
 InterruptedException {
   ugi.doAs(new PrivilegedExceptionAction() {
@@ -698,10 +698,7 @@ public class Client implements AutoCloseable {
   disposeSasl();
   if (shouldAuthenticateOverKrb()) {
 if (currRetries < maxRetries) {
-  if(LOG.isDebugEnabled()) {
-LOG.debug("Exception encountered while connecting to "
-+ "the server : " + ex);
-  }
+  LOG.debug("Exception encountered while connecting to the server {}", remoteId, ex);
   // try re-login
   if (UserGroupInformation.isLoginKeytabBased()) {
 UserGroupInformation.getLoginUser().reloginFromKeytab();
@@ -719,7 +716,11 @@ public class Client implements AutoCloseable {
   + UserGroupInformation.getLoginUser().getUserName() + " to "
   + remoteId;
   LOG.warn(msg, ex);
-  throw (IOException) new IOException(msg).initCause(ex);
+  throw NetUtils.wrapException(remoteId.getAddress().getHostName(),
+  remoteId.getAddress().getPort(),
+  NetUtils.getHostname(),
+  0,
+  ex);
 }
   } else {
 // With RequestHedgingProxyProvider, one rpc call will send multiple
@@ -727,11 +728,9 @@ public class Client implements AutoCloseable {
 // all other requests will be interrupted. It's not a big problem,
 // and should not print a warning log.
 if (ex instanceof InterruptedIOException) {
-  LOG.debug("Exception encountered while connecting to the server",
-  ex);
+  LOG.debug("Exception encountered while connecting to the server {}", remoteId, ex);
 } else {
-  LOG.warn("Exception encountered while connecting to the server ",
-  ex);
+  LOG.warn("Exception encountered while connecting to the server {}", remoteId, ex);
 }
   }
   if (ex instanceof RemoteException)


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org
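For context on the logging change above: the patch drops the explicit `isDebugEnabled()` guard and switches to SLF4J's parameterized style, where `{}` placeholders are filled from the argument list and a trailing `Throwable` is not consumed by a placeholder but rendered as the stack trace. The toy formatter below mimics that contract for illustration only (real code uses `org.slf4j.Logger`; the sample remoteId is made up):

```java
// Simplified illustration of SLF4J's parameterized-logging contract:
// "{}" placeholders are filled left to right, but a lone trailing Throwable
// is reserved for the stack trace rather than substituted into the message.
public class SaslLogDemo {
    public static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        // Substitute each "{}" with the next argument, unless the only
        // remaining argument is a Throwable (kept for the stack trace).
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length
                && !(argIdx == args.length - 1 && args[argIdx] instanceof Throwable)) {
            sb.append(template, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        sb.append(template.substring(from));
        if (argIdx < args.length && args[argIdx] instanceof Throwable) {
            sb.append(' ').append(args[argIdx]);  // real SLF4J prints the full trace
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Exception ex = new java.io.IOException("GSS initiate failed");
        // Hypothetical remoteId value, as logged by the patched code path.
        System.out.println(format(
            "Exception encountered while connecting to the server {}",
            "nn1.example.com/10.0.0.1:8020", ex));
    }
}
```

Because construction of the message is deferred until the logger decides the level is enabled, the old `if (LOG.isDebugEnabled())` wrapper becomes unnecessary.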



[hadoop] branch trunk updated (6d325d9d09c -> ad0cff2f973)

2023-02-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


from 6d325d9d09c HADOOP-18598. maven site generation doesn't include javadocs. (#5319)
 add ad0cff2f973 HADOOP-18592. Sasl connection failure should log remote address. (#5294)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/ipc/Client.java   | 19 +--
 1 file changed, 9 insertions(+), 10 deletions(-)





[hadoop] branch HADOOP-17800 updated: HADOOP-18209. In Namenode UI Links are not working proper and port were displaying wrong in UI IPv6 (#4184)

2022-04-18 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch HADOOP-17800
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HADOOP-17800 by this push:
 new f5e9b6861a0 HADOOP-18209. In Namenode UI Links are not working proper and port were displaying wrong in UI IPv6 (#4184)
f5e9b6861a0 is described below

commit f5e9b6861a0bb5012dd865839581415e485cb84f
Author: Renukaprasad C <48682981+prasad-a...@users.noreply.github.com>
AuthorDate: Tue Apr 19 03:24:54 2022 +0530

HADOOP-18209. In Namenode UI Links are not working proper and port were displaying wrong in UI IPv6 (#4184)

Contributed by  Renukaprasad C.

Signed-off-by: Mingliang Liu 
---
 hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
index 9be19fefca9..9a6bf6f5a3a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.js
@@ -227,8 +227,10 @@
   var n = nodes[i];
   n.usedPercentage = Math.round((n.used + n.nonDfsUsedSpace) * 1.0 / n.capacity * 100);
 
-  var port = n.infoAddr.split(":")[1];
-  var securePort = n.infoSecureAddr.split(":")[1];
+  var array = n.infoAddr.split(":");
+  var port = array[array.length-1];
+  array = n.infoSecureAddr.split(":");
+  var securePort = array[array.length-1];
   var dnHost = n.name.split(":")[0];
   n.dnWebAddress = "http://" + dnHost + ":" + port;
   if (securePort != 0) {


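The fix above works because an IPv6 `host:port` string contains many colons, so `split(":")[1]` returns an address segment rather than the port; only the last `:`-separated segment is reliably the port. A sketch of the same rule, transliterated from the JavaScript to Java (addresses below are made-up examples):

```java
// IPv6-safe port extraction, mirroring the dfshealth.js change:
// split on ":" and take the LAST element, never a fixed index.
public class PortExtractor {
    public static String extractPort(String infoAddr) {
        String[] parts = infoAddr.split(":");
        // For "2001:db8::1:9864", parts[1] would be "db8" (the old bug);
        // the final segment is always the port.
        return parts[parts.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(extractPort("dn1.example.com:9864"));  // 9864
        System.out.println(extractPort("2001:db8::1:9864"));      // 9864
    }
}
```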



[hadoop] branch trunk updated: HDFS-16143. Add Timer in EditLogTailer and de-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits (#3235)

2021-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new aa9cdf2  HDFS-16143. Add Timer in EditLogTailer and de-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits (#3235)
aa9cdf2 is described below

commit aa9cdf2af6fd84aa24ec5a19da4f955472a8d5bd
Author: Viraj Jasani 
AuthorDate: Thu Aug 26 13:07:38 2021 +0530

HDFS-16143. Add Timer in EditLogTailer and de-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits (#3235)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Takanobu Asanuma 
Signed-off-by: Wei-Chiu Chuang 
---
 .../java/org/apache/hadoop/util/FakeTimer.java | 10 
 .../hdfs/server/namenode/ha/EditLogTailer.java | 59 +++---
 .../hdfs/server/namenode/ha/TestEditLogTailer.java | 52 +++
 3 files changed, 82 insertions(+), 39 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java
index 05d66d3..17d20ea 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/FakeTimer.java
@@ -39,6 +39,16 @@ public class FakeTimer extends Timer {
 nowNanos = TimeUnit.MILLISECONDS.toNanos(1000);
   }
 
+  /**
+   * FakeTimer constructor with milliseconds to keep as initial value.
+   *
+   * @param time time in millis.
+   */
+  public FakeTimer(long time) {
+now = time;
+nowNanos = TimeUnit.MILLISECONDS.toNanos(time);
+  }
+
   @Override
   public long now() {
 return now;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
index b82fb5b..c4934bd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
@@ -36,6 +36,7 @@ import java.util.concurrent.TimeoutException;
 
 import org.apache.hadoop.thirdparty.com.google.common.collect.Iterators;
 import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.hadoop.util.Timer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -55,12 +56,10 @@ import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.security.SecurityUtil;
 
-import static org.apache.hadoop.util.Time.monotonicNow;
 import static org.apache.hadoop.util.ExitUtil.terminate;
 
 import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
-import org.apache.hadoop.util.Time;
 
 
 /**
@@ -172,14 +171,21 @@ public class EditLogTailer {
*/
   private final long maxTxnsPerLock;
 
+  /**
+   * Timer instance to be set only using constructor.
+   * Only tests can reassign this by using setTimerForTests().
+   * For source code, this timer instance should be treated as final.
+   */
+  private Timer timer;
+
   public EditLogTailer(FSNamesystem namesystem, Configuration conf) {
 this.tailerThread = new EditLogTailerThread();
 this.conf = conf;
 this.namesystem = namesystem;
+this.timer = new Timer();
 this.editLog = namesystem.getEditLog();
-
-lastLoadTimeMs = monotonicNow();
-lastRollTimeMs = monotonicNow();
+this.lastLoadTimeMs = timer.monotonicNow();
+this.lastRollTimeMs = timer.monotonicNow();
 
 logRollPeriodMs = conf.getTimeDuration(
 DFSConfigKeys.DFS_HA_LOGROLL_PERIOD_KEY,
@@ -301,7 +307,7 @@ public class EditLogTailer {
 long editsTailed = 0;
 // Fully tail the journal to the end
 do {
-  long startTime = Time.monotonicNow();
+  long startTime = timer.monotonicNow();
   try {
 NameNode.getNameNodeMetrics().addEditLogTailInterval(
 startTime - lastLoadTimeMs);
@@ -312,7 +318,7 @@ public class EditLogTailer {
 throw new IOException(e);
   } finally {
 NameNode.getNameNodeMetrics().addEditLogTailTime(
-Time.monotonicNow() - startTime);
+timer.monotonicNow() - startTime);
   }
 } while(editsTailed > 0);
 return null;
@@ -336,7 +342,7 @@ public class EditLogTailer {
 LOG.debug("lastTxnId: " + lastTxnId);
   }
   Colle
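The pattern this commit applies is clock injection: all time reads go through an injectable `Timer` so a test can advance the clock deterministically instead of sleeping, which is what de-flakes the test. A minimal sketch of the idea follows; class and method names here are illustrative, not the actual EditLogTailer API:

```java
// Minimal clock-injection sketch in the spirit of HDFS-16143.
public class TimerInjectionDemo {
    /** Production clock: monotonic milliseconds. */
    public static class Timer {
        public long monotonicNow() { return System.nanoTime() / 1_000_000; }
    }

    /** Test double analogous to Hadoop's FakeTimer: time moves only when told to. */
    public static class FakeTimer extends Timer {
        private long nowMs;
        public FakeTimer(long startMs) { this.nowMs = startMs; }
        @Override public long monotonicNow() { return nowMs; }
        public void advance(long ms) { nowMs += ms; }
    }

    private final Timer timer;
    private final long lastRollMs;
    private final long rollPeriodMs;

    public TimerInjectionDemo(Timer timer, long rollPeriodMs) {
        this.timer = timer;
        this.rollPeriodMs = rollPeriodMs;
        this.lastRollMs = timer.monotonicNow();  // like lastRollTimeMs in the patch
    }

    /** Has the roll period elapsed according to the injected clock? */
    public boolean shouldRoll() {
        return timer.monotonicNow() - lastRollMs >= rollPeriodMs;
    }

    public static void main(String[] args) {
        FakeTimer clock = new FakeTimer(1_000);
        TimerInjectionDemo tailer = new TimerInjectionDemo(clock, 5_000);
        System.out.println(tailer.shouldRoll());  // false: no time has passed
        clock.advance(5_000);                     // no real sleeping needed
        System.out.println(tailer.shouldRoll());  // true: roll period elapsed
    }
}
```

The design choice mirrored from the patch: the timer is set in the constructor and treated as final in production code, with tests swapping in the fake.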

[hadoop] branch branch-3.2 updated: Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

2021-06-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new d92af85  Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"
d92af85 is described below

commit d92af850456bd80f61741e41ee9bc3329cfc00cd
Author: Mingliang Liu 
AuthorDate: Fri Jun 11 00:38:36 2021 -0700

Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

This reverts commit 20a4cb0c67b413e7b9bc2b3213b5b592bfaa99d5.
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 +
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 458bd34..ff5091c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -3831,19 +3831,12 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
-  /**
-   * Represents the timeout period expires for remove reference objects from
-   * the STATS_DATA_REF_QUEUE when the queue is empty.
-   */
-  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
-
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.
-remove(REF_QUEUE_POLL_TIMEOUT);
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);
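The behavior difference at the heart of HDFS-16033 (and of this revert) is in `java.lang.ref.ReferenceQueue`: `remove()` blocks until a reference is enqueued, while `remove(timeoutMs)` returns `null` after the timeout, giving the cleaner loop a chance to re-check `Thread.interrupted()`. A standalone sketch, with illustrative names:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class CleanerDemo {
    static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();

    // remove(timeout) returns null when nothing was enqueued in time,
    // instead of blocking the cleaner thread indefinitely like remove().
    public static Reference<?> pollOnce(long timeoutMs) {
        try {
            return QUEUE.remove(timeoutMs);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        Object referent = new Object();
        WeakReference<Object> ref = new WeakReference<>(referent, QUEUE);
        System.out.println(pollOnce(10));  // null: referent is still strongly reachable
        referent = null;  // drop the strong reference; GC *may* enqueue ref now
        System.gc();      // only a hint -- enqueueing is not guaranteed promptly
        System.out.println(pollOnce(200) == ref);
    }
}
```

Note the digest order: these revert commits appear first; the original HDFS-16033 commits that introduced the timeout follow further below.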




[hadoop] branch branch-3.3 updated: Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

2021-06-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 91bcfbd  Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"
91bcfbd is described below

commit 91bcfbd72e615867e0958224d2de377c8bff6eb7
Author: Mingliang Liu 
AuthorDate: Fri Jun 11 00:35:41 2021 -0700

Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

This reverts commit 8c0f9480549a4e7fa7de02c9bf73bccb0381f22a.
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 +
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index e3c8d0d..528f6c2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -4013,19 +4013,12 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
-  /**
-   * Represents the timeout period expires for remove reference objects from
-   * the STATS_DATA_REF_QUEUE when the queue is empty.
-   */
-  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
-
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.
-remove(REF_QUEUE_POLL_TIMEOUT);
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);




[hadoop] branch trunk updated: Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

2021-06-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6e5692e  Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"
6e5692e is described below

commit 6e5692e7e221a22a86257ba5f85746102528e96d
Author: Mingliang Liu 
AuthorDate: Fri Jun 11 00:34:24 2021 -0700

Revert "HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)"

This reverts commit 4a26a61ecd54bd36b6d089f999359da5fca16723.
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 +
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 057382b..c6cf941 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -4004,19 +4004,12 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
-  /**
-   * Represents the timeout period expires for remove reference objects from
-   * the STATS_DATA_REF_QUEUE when the queue is empty.
-   */
-  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
-
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.
-remove(REF_QUEUE_POLL_TIMEOUT);
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);




[hadoop] branch branch-3.2 updated: HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

2021-06-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 20a4cb0  HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)
20a4cb0 is described below

commit 20a4cb0c67b413e7b9bc2b3213b5b592bfaa99d5
Author: July <51110188+y...@users.noreply.github.com>
AuthorDate: Sat Jun 5 04:36:09 2021 +0800

HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

Contributed by kaifeiYi (yikf).

Signed-off-by: Mingliang Liu 
Reviewed-by: Steve Loughran 
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index ff5091c..458bd34 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -3831,12 +3831,19 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
+  /**
+   * Represents the timeout period expires for remove reference objects from
+   * the STATS_DATA_REF_QUEUE when the queue is empty.
+   */
+  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
+
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.
+remove(REF_QUEUE_POLL_TIMEOUT);
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);




[hadoop] branch branch-3.3 updated: HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

2021-06-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 8c0f948  HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)
8c0f948 is described below

commit 8c0f9480549a4e7fa7de02c9bf73bccb0381f22a
Author: July <51110188+y...@users.noreply.github.com>
AuthorDate: Sat Jun 5 04:36:09 2021 +0800

HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

Contributed by kaifeiYi (yikf).

Signed-off-by: Mingliang Liu 
Reviewed-by: Steve Loughran 
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index 528f6c2..e3c8d0d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -4013,12 +4013,19 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
+  /**
+   * Represents the timeout period expires for remove reference objects from
+   * the STATS_DATA_REF_QUEUE when the queue is empty.
+   */
+  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
+
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.
+remove(REF_QUEUE_POLL_TIMEOUT);
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);




[hadoop] branch trunk updated: HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

2021-06-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4a26a61  HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)
4a26a61 is described below

commit 4a26a61ecd54bd36b6d089f999359da5fca16723
Author: July <51110188+y...@users.noreply.github.com>
AuthorDate: Sat Jun 5 04:36:09 2021 +0800

HDFS-16033 Fix issue of the StatisticsDataReferenceCleaner cleanUp (#3042)

Contributed by kaifeiYi (yikf).

Signed-off-by: Mingliang Liu 
Reviewed-by: Steve Loughran 
---
 .../src/main/java/org/apache/hadoop/fs/FileSystem.java   | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index c6cf941..057382b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -4004,12 +4004,19 @@ public abstract class FileSystem extends Configured
  * Background action to act on references being removed.
  */
 private static class StatisticsDataReferenceCleaner implements Runnable {
+  /**
+   * Represents the timeout period expires for remove reference objects from
+   * the STATS_DATA_REF_QUEUE when the queue is empty.
+   */
+  private static final int REF_QUEUE_POLL_TIMEOUT = 1;
+
   @Override
   public void run() {
 while (!Thread.interrupted()) {
   try {
 StatisticsDataReference ref =
-(StatisticsDataReference)STATS_DATA_REF_QUEUE.remove();
+(StatisticsDataReference)STATS_DATA_REF_QUEUE.
+remove(REF_QUEUE_POLL_TIMEOUT);
 ref.cleanUp();
   } catch (InterruptedException ie) {
 LOGGER.warn("Cleaner thread interrupted, will stop", ie);




[hadoop] branch trunk updated (6040e86 -> 46a5979)

2021-04-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6040e86  HADOOP-17625. Update to Jetty 9.4.39. (#2870)
 add 46a5979  MAPREDUCE-7270. TestHistoryViewerPrinter could be failed when the locale isn't English. (#1942)

No new revisions were added by this update.

Summary of changes:
 .../mapreduce/jobhistory/TestHistoryViewerPrinter.java| 15 +++
 1 file changed, 15 insertions(+)
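The class of bug MAPREDUCE-7270 guards against: output formatted with the JVM's default locale differs across machines, so a test that string-matches formatted output must pin the locale (or format locale-independently). A minimal illustration (the helper name is made up):

```java
import java.util.Locale;

public class LocaleDemo {
    // Format a percentage with an explicit locale so the result is stable.
    public static String pct(Locale locale, double value) {
        return String.format(locale, "%.1f%%", value);
    }

    public static void main(String[] args) {
        System.out.println(pct(Locale.ENGLISH, 12.5));  // 12.5%
        System.out.println(pct(Locale.GERMANY, 12.5));  // 12,5% -- the decimal comma
                                                        // breaks naive string matching
    }
}
```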




[hadoop] branch trunk updated: HDFS-15946. Fix java doc in FSPermissionChecker (#2855). Contributed by tomscut.

2021-04-02 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3cb7644  HDFS-15946. Fix java doc in FSPermissionChecker (#2855). Contributed by tomscut.
3cb7644 is described below

commit 3cb76447f501100f9d6368f38b0cd4d51c700b1e
Author: litao 
AuthorDate: Sat Apr 3 01:37:04 2021 +0800

HDFS-15946. Fix java doc in FSPermissionChecker (#2855). Contributed by tomscut.

Signed-off-by: Mingliang Liu 
---
 .../java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
index a83ec51..3f80952 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
@@ -549,7 +549,6 @@ public class FSPermissionChecker implements AccessControlEnforcer {
* - Default entries may be present, but they are ignored during enforcement.
*
* @param inode INodeAttributes accessed inode
-   * @param snapshotId int snapshot ID
* @param access FsAction requested permission
* @param mode FsPermission mode from inode
* @param aclFeature AclFeature of inode




[hadoop] branch trunk updated: HDFS-15938. Fix java doc in FSEditLog (#2837). Contributed by tomscut.

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7c83f14  HDFS-15938. Fix java doc in FSEditLog (#2837). Contributed by tomscut.
7c83f14 is described below

commit 7c83f140dc8f55efce2da66eeb3cf28af1eb2b40
Author: litao 
AuthorDate: Fri Apr 2 10:28:17 2021 +0800

HDFS-15938. Fix java doc in FSEditLog (#2837). Contributed by tomscut.

Signed-off-by: Akira Ajisaka 
Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java   | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
index 2ef3a02..6b73bbd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
@@ -25,6 +25,7 @@ import java.lang.reflect.Constructor;
 import java.net.URI;
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.EnumSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicLong;
@@ -1180,7 +1181,8 @@ public class FSEditLog implements LogsPurgeable {
 
   /**
* Log a CacheDirectiveInfo returned from
-   * {@link CacheManager#addDirective(CacheDirectiveInfo, FSPermissionChecker)}
+   * {@link CacheManager#addDirective(CacheDirectiveInfo, FSPermissionChecker,
+   * EnumSet)}
*/
   void logAddCacheDirectiveInfo(CacheDirectiveInfo directive,
   boolean toLogRpcIds) {




[hadoop] branch branch-2.10 updated: HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 3c1c1b4  HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani
3c1c1b4 is described below

commit 3c1c1b40d858b9e9822f3dca5ee0484dd4ee29d8
Author: Viraj Jasani 
AuthorDate: Fri Apr 2 05:04:31 2021 +0530

HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

Signed-off-by: Mingliang Liu 
---
 .../hdfs/server/federation/MiniRouterDFSCluster.java |  2 +-
 .../fsdataset/impl/RamDiskReplicaLruTracker.java |  4 ++--
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java | 20 ++--
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index e34713d..6761dc2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -140,7 +140,7 @@ public class MiniRouterDFSCluster {
   /**
* Router context.
*/
-  public class RouterContext {
+  public static class RouterContext {
 private Router router;
 private FileContext fileContext;
 private String nameserviceId;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
index b940736..0f0f598 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
@@ -35,7 +35,7 @@ import java.util.*;
 @InterfaceStability.Unstable
 public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 
-  private class RamDiskReplicaLru extends RamDiskReplica {
+  private static class RamDiskReplicaLru extends RamDiskReplica {
 long lastUsedTime;
 
 private RamDiskReplicaLru(String bpid, long blockId,
@@ -88,7 +88,7 @@ public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 }
 RamDiskReplicaLru ramDiskReplicaLru =
 new RamDiskReplicaLru(bpid, blockId, transientVolume,
-  lockedBytesReserved);
+lockedBytesReserved);
 map.put(blockId, ramDiskReplicaLru);
 replicasNotPersisted.add(ramDiskReplicaLru);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
index 2c327f4..642bfab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
@@ -64,7 +64,7 @@ public class DebugAdmin extends Configured implements Tool {
   /**
* All the debug commands we can run.
*/
-  private DebugCommand DEBUG_COMMANDS[] = {
+  private final DebugCommand[] DEBUG_COMMANDS = {
   new VerifyMetaCommand(),
   new ComputeMetaCommand(),
   new RecoverLeaseCommand(),
@@ -74,7 +74,7 @@ public class DebugAdmin extends Configured implements Tool {
   /**
* The base class for debug commands.
*/
-  private abstract class DebugCommand {
+  private abstract static class DebugCommand {
 final String name;
 final String usageText;
 final String helpText;
@@ -93,15 +93,15 @@ public class DebugAdmin extends Configured implements Tool {
   /**
* The command for verifying a block metadata file and possibly block file.
*/
-  private class VerifyMetaCommand extends DebugCommand {
+  private static class VerifyMetaCommand extends DebugCommand {
 VerifyMetaCommand() {
   super("verifyMeta",
-"verifyMeta -meta  [-block ]",
-"  Verify HDFS metadata and block files.  If a block file is specified, we" +
-System.lineSeparator() +
-"  will verify that the checksums in the metadata file match the block" +
-System.lineSeparator() +
-"  file.");
+  "verifyMeta -meta  [-block ]",
+  "  Verify HDFS metadata and block files.  If a block file is 
specified, we" +

[hadoop] branch branch-3.1 updated: HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 739f70b  HDFS-15931 : Fix non-static inner classes for better memory 
management (#2830). Contributed by Viraj Jasani
739f70b is described below

commit 739f70b5527645cfe65aafaf8c54bc945847bf23
Author: Viraj Jasani 
AuthorDate: Fri Apr 2 05:04:31 2021 +0530

HDFS-15931 : Fix non-static inner classes for better memory management 
(#2830). Contributed by Viraj Jasani

Signed-off-by: Mingliang Liu 
---
 .../hdfs/server/federation/MiniRouterDFSCluster.java |  2 +-
 .../fsdataset/impl/RamDiskReplicaLruTracker.java |  4 ++--
 .../hdfs/server/namenode/ReencryptionHandler.java|  2 +-
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java | 20 ++--
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index e34713d..6761dc2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -140,7 +140,7 @@ public class MiniRouterDFSCluster {
   /**
* Router context.
*/
-  public class RouterContext {
+  public static class RouterContext {
 private Router router;
 private FileContext fileContext;
 private String nameserviceId;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
index b940736..0f0f598 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
@@ -35,7 +35,7 @@ import java.util.*;
 @InterfaceStability.Unstable
 public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 
-  private class RamDiskReplicaLru extends RamDiskReplica {
+  private static class RamDiskReplicaLru extends RamDiskReplica {
 long lastUsedTime;
 
 private RamDiskReplicaLru(String bpid, long blockId,
@@ -88,7 +88,7 @@ public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 }
 RamDiskReplicaLru ramDiskReplicaLru =
 new RamDiskReplicaLru(bpid, blockId, transientVolume,
-  lockedBytesReserved);
+lockedBytesReserved);
 map.put(blockId, ramDiskReplicaLru);
 replicasNotPersisted.add(ramDiskReplicaLru);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index d430352..655b20b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -834,7 +834,7 @@ public class ReencryptionHandler implements Runnable {
 }
   }
 
-  private class ZoneTraverseInfo extends TraverseInfo {
+  private static class ZoneTraverseInfo extends TraverseInfo {
 private String ezKeyVerName;
 
 ZoneTraverseInfo(String ezKeyVerName) {
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
index 2c327f4..642bfab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DebugAdmin.java
@@ -64,7 +64,7 @@ public class DebugAdmin extends Configured implements Tool {
   /**
* All the debug commands we can run.
*/
-  private DebugCommand DEBUG_COMMANDS[] = {
+  private final DebugCommand[] DEBUG_COMMANDS = {
   new VerifyMetaCommand(),
   new ComputeMetaCommand(),
   new RecoverLeaseCommand(),
@@ -74,7 +74,7 @@ public class DebugAdmin extends Configured implements Tool {
   /**
* The base class for debug commands.
*/
-  private abstract class DebugCommand {
+  private abstract static class DebugCommand {
 final String name;
 final String usageText

[hadoop] branch branch-3.2 updated: HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 5458ebf  HDFS-15931 : Fix non-static inner classes for better memory 
management (#2830). Contributed by Viraj Jasani
5458ebf is described below

commit 5458ebf6073233ca7065206e3e477073b1d16491
Author: Viraj Jasani 
AuthorDate: Fri Apr 2 05:04:31 2021 +0530

HDFS-15931 : Fix non-static inner classes for better memory management 
(#2830). Contributed by Viraj Jasani

Signed-off-by: Mingliang Liu 
---
 .../hdfs/server/federation/MiniRouterDFSCluster.java |  2 +-
 .../impl/InMemoryLevelDBAliasMapClient.java  |  2 +-
 .../fsdataset/impl/RamDiskReplicaLruTracker.java |  4 ++--
 .../hdfs/server/namenode/ReencryptionHandler.java|  2 +-
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java | 20 ++--
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 6 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index 7b59e3c..9229390 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -140,7 +140,7 @@ public class MiniRouterDFSCluster {
   /**
* Router context.
*/
-  public class RouterContext {
+  public static class RouterContext {
 private Router router;
 private FileContext fileContext;
 private String nameserviceId;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
index fb5ee93..bfd540b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
@@ -130,7 +130,7 @@ public class InMemoryLevelDBAliasMapClient extends BlockAliasMap
 }
   }
 
-  class LevelDbWriter extends BlockAliasMap.Writer {
+  static class LevelDbWriter extends BlockAliasMap.Writer {
 
 private InMemoryAliasMapProtocol aliasMap;
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
index b940736..0f0f598 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
@@ -35,7 +35,7 @@ import java.util.*;
 @InterfaceStability.Unstable
 public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 
-  private class RamDiskReplicaLru extends RamDiskReplica {
+  private static class RamDiskReplicaLru extends RamDiskReplica {
 long lastUsedTime;
 
 private RamDiskReplicaLru(String bpid, long blockId,
@@ -88,7 +88,7 @@ public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 }
 RamDiskReplicaLru ramDiskReplicaLru =
 new RamDiskReplicaLru(bpid, blockId, transientVolume,
-  lockedBytesReserved);
+lockedBytesReserved);
 map.put(blockId, ramDiskReplicaLru);
 replicasNotPersisted.add(ramDiskReplicaLru);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index fa4de38..e0bc0ff 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -835,7 +835,7 @@ public class ReencryptionHandler implements Runnable {
 }
   }
 
-  private class ZoneTraverseInfo extends TraverseInfo {
+  private static class ZoneTraverseInfo extends TraverseInfo {
 private String ezKeyVerName;
 
 ZoneTraverseInfo(String ezKeyVerName) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools

[hadoop] branch branch-3.3 updated: HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new f707d84  HDFS-15931 : Fix non-static inner classes for better memory 
management (#2830). Contributed by Viraj Jasani
f707d84 is described below

commit f707d8407a2e879845465d6f656859358436e322
Author: Viraj Jasani 
AuthorDate: Fri Apr 2 05:04:31 2021 +0530

HDFS-15931 : Fix non-static inner classes for better memory management 
(#2830). Contributed by Viraj Jasani

Signed-off-by: Mingliang Liu 
---
 .../hdfs/server/federation/MiniRouterDFSCluster.java |  2 +-
 .../impl/InMemoryLevelDBAliasMapClient.java  |  2 +-
 .../fsdataset/impl/RamDiskReplicaLruTracker.java |  4 ++--
 .../hdfs/server/namenode/ReencryptionHandler.java|  2 +-
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java | 20 ++--
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 6 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index 0c9a2e0..896d08f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -152,7 +152,7 @@ public class MiniRouterDFSCluster {
   /**
* Router context.
*/
-  public class RouterContext {
+  public static class RouterContext {
 private Router router;
 private FileContext fileContext;
 private String nameserviceId;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
index cacf8f1..6cac72a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
@@ -129,7 +129,7 @@ public class InMemoryLevelDBAliasMapClient extends BlockAliasMap
 }
   }
 
-  class LevelDbWriter extends BlockAliasMap.Writer {
+  static class LevelDbWriter extends BlockAliasMap.Writer {
 
 private InMemoryAliasMapProtocol aliasMap;
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
index 31e9ebe..aebedaa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
@@ -35,7 +35,7 @@ import java.util.*;
 @InterfaceStability.Unstable
 public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 
-  private class RamDiskReplicaLru extends RamDiskReplica {
+  private static class RamDiskReplicaLru extends RamDiskReplica {
 long lastUsedTime;
 
 private RamDiskReplicaLru(String bpid, long blockId,
@@ -88,7 +88,7 @@ public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 }
 RamDiskReplicaLru ramDiskReplicaLru =
 new RamDiskReplicaLru(bpid, blockId, transientVolume,
-  lockedBytesReserved);
+lockedBytesReserved);
 map.put(blockId, ramDiskReplicaLru);
 replicasNotPersisted.add(ramDiskReplicaLru);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index ea38da6..b1c5928 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -835,7 +835,7 @@ public class ReencryptionHandler implements Runnable {
 }
   }
 
-  private class ZoneTraverseInfo extends TraverseInfo {
+  private static class ZoneTraverseInfo extends TraverseInfo {
 private String ezKeyVerName;
 
 ZoneTraverseInfo(String ezKeyVerName) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools

[hadoop] branch trunk updated: HDFS-15931 : Fix non-static inner classes for better memory management (#2830). Contributed by Viraj Jasani

2021-04-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4f28738  HDFS-15931 : Fix non-static inner classes for better memory 
management (#2830). Contributed by Viraj Jasani
4f28738 is described below

commit 4f2873801073dc44a5d35dd6a33451c5c9a6cb7e
Author: Viraj Jasani 
AuthorDate: Fri Apr 2 05:04:31 2021 +0530

HDFS-15931 : Fix non-static inner classes for better memory management 
(#2830). Contributed by Viraj Jasani

Signed-off-by: Mingliang Liu 
---
 .../hdfs/server/federation/MiniRouterDFSCluster.java |  2 +-
 .../impl/InMemoryLevelDBAliasMapClient.java  |  2 +-
 .../fsdataset/impl/RamDiskReplicaLruTracker.java |  4 ++--
 .../hdfs/server/namenode/ReencryptionHandler.java|  2 +-
 .../org/apache/hadoop/hdfs/tools/DebugAdmin.java | 20 ++--
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 6 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
index 0c9a2e0..896d08f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MiniRouterDFSCluster.java
@@ -152,7 +152,7 @@ public class MiniRouterDFSCluster {
   /**
* Router context.
*/
-  public class RouterContext {
+  public static class RouterContext {
 private Router router;
 private FileContext fileContext;
 private String nameserviceId;
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
index cacf8f1..6cac72a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
@@ -129,7 +129,7 @@ public class InMemoryLevelDBAliasMapClient extends BlockAliasMap
 }
   }
 
-  class LevelDbWriter extends BlockAliasMap.Writer {
+  static class LevelDbWriter extends BlockAliasMap.Writer {
 
 private InMemoryAliasMapProtocol aliasMap;
 
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
index 31e9ebe..aebedaa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/RamDiskReplicaLruTracker.java
@@ -35,7 +35,7 @@ import java.util.*;
 @InterfaceStability.Unstable
 public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 
-  private class RamDiskReplicaLru extends RamDiskReplica {
+  private static class RamDiskReplicaLru extends RamDiskReplica {
 long lastUsedTime;
 
 private RamDiskReplicaLru(String bpid, long blockId,
@@ -88,7 +88,7 @@ public class RamDiskReplicaLruTracker extends RamDiskReplicaTracker {
 }
 RamDiskReplicaLru ramDiskReplicaLru =
 new RamDiskReplicaLru(bpid, blockId, transientVolume,
-  lockedBytesReserved);
+lockedBytesReserved);
 map.put(blockId, ramDiskReplicaLru);
 replicasNotPersisted.add(ramDiskReplicaLru);
   }
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index ea38da6..b1c5928 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -835,7 +835,7 @@ public class ReencryptionHandler implements Runnable {
 }
   }
 
-  private class ZoneTraverseInfo extends TraverseInfo {
+  private static class ZoneTraverseInfo extends TraverseInfo {
 private String ezKeyVerName;
 
 ZoneTraverseInfo(String ezKeyVerName) {
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools

[hadoop] branch branch-3.1 updated: HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2799)

2021-03-24 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new a0dd4e0  HDFS-15911 : Provide blocks moved count in Balancer iteration 
result (#2799)
a0dd4e0 is described below

commit a0dd4e07221ff13fa5b9189af5d5388b3e998994
Author: Viraj Jasani 
AuthorDate: Wed Mar 24 22:49:56 2021 +0530

HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2799)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Ayush Saxena 
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 40 +-
 .../hadoop/hdfs/server/balancer/TestBalancer.java  | 11 +++---
 2 files changed, 38 insertions(+), 13 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index a1b7105..798ab77 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -578,36 +578,60 @@ public class Balancer {
   }
 
   static class Result {
-final ExitStatus exitStatus;
-final long bytesLeftToMove;
-final long bytesBeingMoved;
-final long bytesAlreadyMoved;
+private final ExitStatus exitStatus;
+private final long bytesLeftToMove;
+private final long bytesBeingMoved;
+private final long bytesAlreadyMoved;
+private final long blocksMoved;
 
 Result(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved,
-long bytesAlreadyMoved) {
+   long bytesAlreadyMoved, long blocksMoved) {
   this.exitStatus = exitStatus;
   this.bytesLeftToMove = bytesLeftToMove;
   this.bytesBeingMoved = bytesBeingMoved;
   this.bytesAlreadyMoved = bytesAlreadyMoved;
+  this.blocksMoved = blocksMoved;
+}
+
+public ExitStatus getExitStatus() {
+  return exitStatus;
+}
+
+public long getBytesLeftToMove() {
+  return bytesLeftToMove;
+}
+
+public long getBytesBeingMoved() {
+  return bytesBeingMoved;
+}
+
+public long getBytesAlreadyMoved() {
+  return bytesAlreadyMoved;
+}
+
+public long getBlocksMoved() {
+  return blocksMoved;
 }
 
 void print(int iteration, NameNodeConnector nnc, PrintStream out) {
-  out.printf("%-24s %10d  %19s  %18s  %17s  %s%n",
+  out.printf("%-24s %10d  %19s  %18s  %17s  %17s  %s%n",
   DateFormat.getDateTimeInstance().format(new Date()), iteration,
   StringUtils.byteDesc(bytesAlreadyMoved),
   StringUtils.byteDesc(bytesLeftToMove),
   StringUtils.byteDesc(bytesBeingMoved),
+  blocksMoved,
   nnc.getNameNodeUri());
 }
   }
 
  Result newResult(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved) {
 return new Result(exitStatus, bytesLeftToMove, bytesBeingMoved,
-dispatcher.getBytesMoved());
+dispatcher.getBytesMoved(), dispatcher.getBblocksMoved());
   }
 
   Result newResult(ExitStatus exitStatus) {
-return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved());
+return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved(),
+dispatcher.getBblocksMoved());
   }
 
   /** Run an iteration for all datanodes. */
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
index 6ef26f2..e381475 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
@@ -1022,14 +1022,14 @@ public class TestBalancer {
 
   // clean all lists
   b.resetData(conf);
-  if (r.exitStatus == ExitStatus.IN_PROGRESS) {
+  if (r.getExitStatus() == ExitStatus.IN_PROGRESS) {
 done = false;
-  } else if (r.exitStatus != ExitStatus.SUCCESS) {
+  } else if (r.getExitStatus() != ExitStatus.SUCCESS) {
 //must be an error statue, return.
-return r.exitStatus.getExitCode();
+return r.getExitStatus().getExitCode();
   } else {
 if (iteration > 0) {
-  assertTrue(r.bytesAlreadyMoved > 0);
+  assertTrue(r.getBytesAlreadyMoved() > 0);
 }
   }
 }
@@ -1655,7 +1655,8 @@ public class TestBalancer {
   // When a block move is not canceled in 2 seconds properly and then
   // a block is mo
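Across the HDFS-15911 commits the shape of the change is the same: `Balancer.Result` becomes an immutable value class (private final fields plus getters) and gains a `blocksMoved` column in the per-iteration report. The sketch below is a simplified, hypothetical stand-in for that pattern, not the actual Hadoop class; the field and getter names follow the diff, but `IterationResult` and its `print` format are invented for illustration.

```java
import java.io.PrintStream;

// Simplified sketch of the post-HDFS-15911 Result pattern: immutable
// per-iteration balancer stats exposed only through getters.
final class IterationResult {
  private final long bytesLeftToMove;
  private final long bytesBeingMoved;
  private final long bytesAlreadyMoved;
  private final long blocksMoved; // the count added by HDFS-15911

  IterationResult(long bytesLeftToMove, long bytesBeingMoved,
                  long bytesAlreadyMoved, long blocksMoved) {
    this.bytesLeftToMove = bytesLeftToMove;
    this.bytesBeingMoved = bytesBeingMoved;
    this.bytesAlreadyMoved = bytesAlreadyMoved;
    this.blocksMoved = blocksMoved;
  }

  long getBytesLeftToMove() { return bytesLeftToMove; }
  long getBytesBeingMoved() { return bytesBeingMoved; }
  long getBytesAlreadyMoved() { return bytesAlreadyMoved; }
  long getBlocksMoved() { return blocksMoved; }

  // Mirrors the widened printf in the diff: one extra column
  // carries the blocks-moved count for the iteration.
  void print(int iteration, PrintStream out) {
    out.printf("%10d  %18d  %17d  %17d  %17d%n", iteration,
        bytesAlreadyMoved, bytesLeftToMove, bytesBeingMoved, blocksMoved);
  }

  public static void main(String[] args) {
    IterationResult r = new IterationResult(1024, 512, 2048, 42);
    r.print(1, System.out);
    System.out.println(r.getBlocksMoved()); // 42
  }
}
```

Encapsulating the fields is what forces the `TestBalancer` change above from `r.exitStatus`/`r.bytesAlreadyMoved` to the getter calls.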

[hadoop] branch branch-3.2 updated (46b2e96 -> 97b8992)

2021-03-24 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 46b2e96  HDFS-15902. Improve the log for HTTPFS server operation. 
Contributed by Bhavik Patel.
 add 97b8992  HDFS-15911 : Provide blocks moved count in Balancer iteration 
result (#2797)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hdfs/server/balancer/Balancer.java  | 40 +-
 .../hadoop/hdfs/server/balancer/TestBalancer.java  | 11 +++---
 2 files changed, 38 insertions(+), 13 deletions(-)




[hadoop] branch branch-3.3 updated: HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2796)

2021-03-23 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 6d3f5e8  HDFS-15911 : Provide blocks moved count in Balancer iteration 
result (#2796)
6d3f5e8 is described below

commit 6d3f5e844b1ad25b5e357e7fa01fd64f637a48b2
Author: Viraj Jasani 
AuthorDate: Wed Mar 24 11:21:21 2021 +0530

HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2796)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Ayush Saxena 
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 40 +-
 .../hadoop/hdfs/server/balancer/TestBalancer.java  | 11 +++---
 2 files changed, 38 insertions(+), 13 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index 33b5fa4..8d97d2e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -596,36 +596,60 @@ public class Balancer {
   }
 
   static class Result {
-final ExitStatus exitStatus;
-final long bytesLeftToMove;
-final long bytesBeingMoved;
-final long bytesAlreadyMoved;
+private final ExitStatus exitStatus;
+private final long bytesLeftToMove;
+private final long bytesBeingMoved;
+private final long bytesAlreadyMoved;
+private final long blocksMoved;
 
 Result(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved,
-long bytesAlreadyMoved) {
+   long bytesAlreadyMoved, long blocksMoved) {
   this.exitStatus = exitStatus;
   this.bytesLeftToMove = bytesLeftToMove;
   this.bytesBeingMoved = bytesBeingMoved;
   this.bytesAlreadyMoved = bytesAlreadyMoved;
+  this.blocksMoved = blocksMoved;
+}
+
+public ExitStatus getExitStatus() {
+  return exitStatus;
+}
+
+public long getBytesLeftToMove() {
+  return bytesLeftToMove;
+}
+
+public long getBytesBeingMoved() {
+  return bytesBeingMoved;
+}
+
+public long getBytesAlreadyMoved() {
+  return bytesAlreadyMoved;
+}
+
+public long getBlocksMoved() {
+  return blocksMoved;
 }
 
 void print(int iteration, NameNodeConnector nnc, PrintStream out) {
-  out.printf("%-24s %10d  %19s  %18s  %17s  %s%n",
+  out.printf("%-24s %10d  %19s  %18s  %17s  %17s  %s%n",
   DateFormat.getDateTimeInstance().format(new Date()), iteration,
   StringUtils.byteDesc(bytesAlreadyMoved),
   StringUtils.byteDesc(bytesLeftToMove),
   StringUtils.byteDesc(bytesBeingMoved),
+  blocksMoved,
   nnc.getNameNodeUri());
 }
   }
 
  Result newResult(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved) {
 return new Result(exitStatus, bytesLeftToMove, bytesBeingMoved,
-dispatcher.getBytesMoved());
+dispatcher.getBytesMoved(), dispatcher.getBblocksMoved());
   }
 
   Result newResult(ExitStatus exitStatus) {
-return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved());
+return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved(),
+dispatcher.getBblocksMoved());
   }
 
   /** Run an iteration for all datanodes. */
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
index bb3ad65..f44bbb2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
@@ -1022,14 +1022,14 @@ public class TestBalancer {
 
   // clean all lists
   b.resetData(conf);
-  if (r.exitStatus == ExitStatus.IN_PROGRESS) {
+  if (r.getExitStatus() == ExitStatus.IN_PROGRESS) {
 done = false;
-  } else if (r.exitStatus != ExitStatus.SUCCESS) {
+  } else if (r.getExitStatus() != ExitStatus.SUCCESS) {
 //must be an error statue, return.
-return r.exitStatus.getExitCode();
+return r.getExitStatus().getExitCode();
   } else {
 if (iteration > 0) {
-  assertTrue(r.bytesAlreadyMoved > 0);
+  assertTrue(r.getBytesAlreadyMoved() > 0);
 }
   }
 }
@@ -1655,7 +1655,8 @@ public class TestBalancer {
   // When a block move is not canceled in 2 seconds properly and then
  // a block is moved unexpectedly, IN_PROGRESS will be reported.

[hadoop] branch trunk updated: HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2794)

2021-03-23 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4b4ccce  HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2794)
4b4ccce is described below

commit 4b4ccce02f591f63dff7db346de39c8d996e8f1d
Author: Viraj Jasani 
AuthorDate: Wed Mar 24 11:17:45 2021 +0530

HDFS-15911 : Provide blocks moved count in Balancer iteration result (#2794)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Ayush Saxena 
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 28 ++
 .../hadoop/hdfs/server/balancer/TestBalancer.java  |  8 +--
 2 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index 0024ba5..33650ea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -38,6 +38,7 @@ import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.TimeUnit;
 
+import org.apache.commons.lang3.builder.ToStringBuilder;
 import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hdfs.DFSUtilClient;
 import org.slf4j.Logger;
@@ -638,13 +639,15 @@ public class Balancer {
 private final long bytesLeftToMove;
 private final long bytesBeingMoved;
 private final long bytesAlreadyMoved;
+private final long blocksMoved;
 
 Result(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved,
-long bytesAlreadyMoved) {
+   long bytesAlreadyMoved, long blocksMoved) {
   this.exitStatus = exitStatus;
   this.bytesLeftToMove = bytesLeftToMove;
   this.bytesBeingMoved = bytesBeingMoved;
   this.bytesAlreadyMoved = bytesAlreadyMoved;
+  this.blocksMoved = blocksMoved;
 }
 
 public ExitStatus getExitStatus() {
@@ -663,23 +666,40 @@ public class Balancer {
   return bytesAlreadyMoved;
 }
 
+public long getBlocksMoved() {
+  return blocksMoved;
+}
+
 void print(int iteration, NameNodeConnector nnc, PrintStream out) {
-  out.printf("%-24s %10d  %19s  %18s  %17s  %s%n",
+  out.printf("%-24s %10d  %19s  %18s  %17s  %17s  %s%n",
   DateFormat.getDateTimeInstance().format(new Date()), iteration,
   StringUtils.byteDesc(bytesAlreadyMoved),
   StringUtils.byteDesc(bytesLeftToMove),
   StringUtils.byteDesc(bytesBeingMoved),
+  blocksMoved,
   nnc.getNameNodeUri());
 }
+
+@Override
+public String toString() {
+  return new ToStringBuilder(this)
+  .append("exitStatus", exitStatus)
+  .append("bytesLeftToMove", bytesLeftToMove)
+  .append("bytesBeingMoved", bytesBeingMoved)
+  .append("bytesAlreadyMoved", bytesAlreadyMoved)
+  .append("blocksMoved", blocksMoved)
+  .toString();
+}
   }
 
  Result newResult(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved) {
 return new Result(exitStatus, bytesLeftToMove, bytesBeingMoved,
-dispatcher.getBytesMoved());
+dispatcher.getBytesMoved(), dispatcher.getBblocksMoved());
   }
 
   Result newResult(ExitStatus exitStatus) {
-return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved());
+return new Result(exitStatus, -1, -1, dispatcher.getBytesMoved(),
+dispatcher.getBblocksMoved());
   }
 
   /** Run an iteration for all datanodes. */
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
index 343faf6..f59743f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
@@ -1658,6 +1658,7 @@ public class TestBalancer {
   // a block is moved unexpectedly, IN_PROGRESS will be reported.
   assertEquals("We expect ExitStatus.NO_MOVE_PROGRESS to be reported.",
   ExitStatus.NO_MOVE_PROGRESS, r.getExitStatus());
+  assertEquals(0, r.getBlocksMoved());
 }
   } finally {
 for (NameNodeConnector nnc : connectors) {
@@ -2309,8 +2310,11 @@ public class TestBalancer {
 // Hence, overall total blocks moved by HDFS balancer would be either of these 2 options:
 /
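The HDFS-15911 and HDFS-15904 diffs above make the `Result` fields private, expose them through getters, and add a `blocksMoved` count. A minimal standalone sketch of that accessor pattern (hypothetical `BalancerResult` class; names mirror but are not Hadoop's, and `toString` is plain Java rather than commons-lang3 `ToStringBuilder`):

```java
// Hypothetical stand-in for Balancer.Result after these commits:
// immutable, private fields, read-only getters, blocksMoved included.
final class BalancerResult {
    private final int exitCode;
    private final long bytesAlreadyMoved;
    private final long blocksMoved;

    BalancerResult(int exitCode, long bytesAlreadyMoved, long blocksMoved) {
        this.exitCode = exitCode;
        this.bytesAlreadyMoved = bytesAlreadyMoved;
        this.blocksMoved = blocksMoved;
    }

    int getExitCode() { return exitCode; }
    long getBytesAlreadyMoved() { return bytesAlreadyMoved; }
    long getBlocksMoved() { return blocksMoved; }

    @Override
    public String toString() {
        // Plain-Java analogue of the ToStringBuilder output added in the commit.
        return "BalancerResult[exitCode=" + exitCode
            + ",bytesAlreadyMoved=" + bytesAlreadyMoved
            + ",blocksMoved=" + blocksMoved + "]";
    }

    public static void main(String[] args) {
        BalancerResult r = new BalancerResult(0, 1024L, 8L);
        // Callers now read results through getters instead of package-private fields.
        System.out.println(r.getBlocksMoved());
        System.out.println(r);
    }
}
```

Tests such as `assertTrue(r.getBytesAlreadyMoved() > 0)` in the TestBalancer hunks above follow the same getter-based access.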

[hadoop] branch branch-3.1 updated: YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

2021-03-22 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new f2de8cc  YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)
f2de8cc is described below

commit f2de8cc5cfd4e8fc41f7e554502cef97ef7d2bcf
Author: Mingliang Liu 
AuthorDate: Sun Mar 21 21:12:27 2021 -0700

YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

Contributed by Mingliang Liu.

Signed-off-by: Ayush Saxena 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index ef4d69f..b29dc5c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -141,7 +141,7 @@
 900
 1.11.271
 2.3.4
-1.5
+1.11.2
 
 ${project.version}

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

2021-03-22 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 001e097  YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)
001e097 is described below

commit 001e09753c04390e032f60a365adea09882ef95e
Author: Mingliang Liu 
AuthorDate: Sun Mar 21 21:12:27 2021 -0700

YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

Contributed by Mingliang Liu.

Signed-off-by: Ayush Saxena 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index e617e6f..6b6b50b 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -174,7 +174,7 @@
 900
 1.11.901
 2.3.4
-1.5
+1.11.2
 
 ${hadoop.version}




[hadoop] branch branch-3.3 updated: YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

2021-03-21 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 23082ac  YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)
23082ac is described below

commit 23082ac6c76c20e6068997f52965332a21f2479e
Author: Mingliang Liu 
AuthorDate: Sun Mar 21 21:12:27 2021 -0700

YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

Contributed by Mingliang Liu.

Signed-off-by: Ayush Saxena 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 12a7ab6..eb42719 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -187,7 +187,7 @@
 900
 1.11.901
 2.3.4
-1.6
+1.11.2
 2.1
 0.7
 
1.5.1




[hadoop] branch trunk updated: YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

2021-03-21 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 648bbbd  YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)
648bbbd is described below

commit 648bbbdad64aa79179447bdb656129fff7636bee
Author: Mingliang Liu 
AuthorDate: Sun Mar 21 21:12:27 2021 -0700

YARN-10706. Upgrade com.github.eirslett:frontend-maven-plugin to 1.11.2 (#2791)

Contributed by Mingliang Liu.

Signed-off-by: Ayush Saxena 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 6a0813f..90a7c44 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -185,7 +185,7 @@
 900
 1.11.901
 2.3.4
-1.6
+1.11.2
 2.1
 0.7
 
1.5.1




[hadoop] branch trunk updated: HDFS-15904 : De-flake TestBalancer#testBalancerWithSortTopNodes() (#2785)

2021-03-19 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 261191c  HDFS-15904 : De-flake TestBalancer#testBalancerWithSortTopNodes() (#2785)
261191c is described below

commit 261191cbc06cf28e656085e7e6633e80fc1f17a9
Author: Viraj Jasani 
AuthorDate: Sat Mar 20 09:07:44 2021 +0530

HDFS-15904 : De-flake TestBalancer#testBalancerWithSortTopNodes() (#2785)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Ayush Saxena 
---
 .../hadoop/hdfs/server/balancer/Balancer.java  | 24 +
 .../hadoop/hdfs/server/balancer/Dispatcher.java|  7 +-
 .../hadoop/hdfs/server/balancer/TestBalancer.java  | 25 --
 3 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
index 6734c97..0024ba5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
@@ -634,10 +634,10 @@ public class Balancer {
   }
 
   static class Result {
-final ExitStatus exitStatus;
-final long bytesLeftToMove;
-final long bytesBeingMoved;
-final long bytesAlreadyMoved;
+private final ExitStatus exitStatus;
+private final long bytesLeftToMove;
+private final long bytesBeingMoved;
+private final long bytesAlreadyMoved;
 
 Result(ExitStatus exitStatus, long bytesLeftToMove, long bytesBeingMoved,
 long bytesAlreadyMoved) {
@@ -647,6 +647,22 @@ public class Balancer {
   this.bytesAlreadyMoved = bytesAlreadyMoved;
 }
 
+public ExitStatus getExitStatus() {
+  return exitStatus;
+}
+
+public long getBytesLeftToMove() {
+  return bytesLeftToMove;
+}
+
+public long getBytesBeingMoved() {
+  return bytesBeingMoved;
+}
+
+public long getBytesAlreadyMoved() {
+  return bytesAlreadyMoved;
+}
+
 void print(int iteration, NameNodeConnector nnc, PrintStream out) {
   out.printf("%-24s %10d  %19s  %18s  %17s  %s%n",
   DateFormat.getDateTimeInstance().format(new Date()), iteration,
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
index c34e6a3..17f0d8f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
@@ -1158,12 +1158,7 @@ public class Dispatcher {
   p.proxySource.removePendingBlock(p);
   return;
 }
-moveExecutor.execute(new Runnable() {
-  @Override
-  public void run() {
-p.dispatch();
-  }
-});
+moveExecutor.execute(p::dispatch);
   }
 
   public boolean dispatchAndCheckContinue() throws InterruptedException {
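The Dispatcher hunk above replaces an anonymous `Runnable` with the method reference `p::dispatch`. A minimal standalone sketch of that equivalence (hypothetical `PendingMove` stand-in, not the Hadoop class):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the anonymous-Runnable -> method-reference refactor.
class PendingMove {
    static final AtomicInteger dispatched = new AtomicInteger();

    void dispatch() { dispatched.incrementAndGet(); }

    public static void main(String[] args) throws InterruptedException {
        PendingMove p = new PendingMove();
        ExecutorService moveExecutor = Executors.newSingleThreadExecutor();
        // Before: moveExecutor.execute(new Runnable() {
        //   @Override public void run() { p.dispatch(); }
        // });
        moveExecutor.execute(p::dispatch);  // equivalent, more concise
        moveExecutor.shutdown();
        moveExecutor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(dispatched.get());
    }
}
```

Both forms submit the same task; the method reference simply avoids the anonymous-class boilerplate with no behavioral change.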
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
index b94cebc..343faf6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
@@ -1024,14 +1024,14 @@ public class TestBalancer {
 
   // clean all lists
   b.resetData(conf);
-  if (r.exitStatus == ExitStatus.IN_PROGRESS) {
+  if (r.getExitStatus() == ExitStatus.IN_PROGRESS) {
 done = false;
-  } else if (r.exitStatus != ExitStatus.SUCCESS) {
+  } else if (r.getExitStatus() != ExitStatus.SUCCESS) {
 //must be an error statue, return.
-return r.exitStatus.getExitCode();
+return r.getExitStatus().getExitCode();
   } else {
 if (iteration > 0) {
-  assertTrue(r.bytesAlreadyMoved > 0);
+  assertTrue(r.getBytesAlreadyMoved() > 0);
 }
   }
 }
@@ -1657,7 +1657,7 @@ public class TestBalancer {
   // When a block move is not canceled in 2 seconds properly and then
   // a block is moved unexpectedly, IN_PROGRESS will be reported.
   assertEquals("We expect ExitStatus.NO_MOVE_PROGRESS to be reported.",
-  ExitStatus.NO_MOVE_PROGRESS, r.exitStatus);
+  ExitStatus.NO_MOVE_PROGRESS, r.getExitStatus());

[hadoop] branch branch-2.10 updated: HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

2021-03-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 3bb40d2  HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)
3bb40d2 is described below

commit 3bb40d2a568e498a89e3243ffa545959103192f7
Author: Viraj Jasani 
AuthorDate: Fri Mar 12 01:21:24 2021 +0530

HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 840e1dd..85769e42 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -143,6 +143,7 @@
 
 ${project.version}
+    <woodstox.version>5.3.0</woodstox.version>
   
 
   
@@ -495,7 +496,6 @@
 hadoop-openstack
 ${project.version}
   
-  
   
 org.apache.hadoop
 hadoop-azure
@@ -676,7 +676,6 @@
 guice
 3.0
   
-  
   
 cglib
 cglib
@@ -887,7 +886,7 @@
   
         <groupId>com.fasterxml.woodstox</groupId>
         <artifactId>woodstox-core</artifactId>
-        <version>5.0.3</version>
+        <version>${woodstox.version}</version>
   
   
 org.codehaus.jackson
@@ -1189,7 +1188,6 @@
1.46
test
  
-  
  
 joda-time
 joda-time
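The hunks above centralize the woodstox version in a Maven property instead of a hard-coded `<version>`. Roughly, the resulting pom layout looks like this (a sketch: only the `woodstox.version` property name and the woodstox coordinates come from the commit; placement and indentation are illustrative):

```xml
<properties>
  <!-- Single place to bump the dependency for security fixes. -->
  <woodstox.version>5.3.0</woodstox.version>
</properties>
...
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.woodstox</groupId>
      <artifactId>woodstox-core</artifactId>
      <!-- Was a literal 5.0.3; now resolved from the property above. -->
      <version>${woodstox.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Declaring the version once in `<properties>` lets every branch bump woodstox with a one-line change, which is exactly what these backport commits do.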





[hadoop] branch branch-3.1 updated: HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

2021-03-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6ecf829  HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)
6ecf829 is described below

commit 6ecf829e79f6227df195d8b175ee6c3ee552e21a
Author: Viraj Jasani 
AuthorDate: Fri Mar 12 01:21:24 2021 +0530

HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 755ef53..657e0a0 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -150,6 +150,7 @@
 1.16
 1.2.6
 2.0.0-beta-1
+    <woodstox.version>5.3.0</woodstox.version>
   
 
   
@@ -952,7 +953,7 @@
   
         <groupId>com.fasterxml.woodstox</groupId>
         <artifactId>woodstox-core</artifactId>
-        <version>5.0.3</version>
+        <version>${woodstox.version}</version>
   
   
 org.codehaus.jackson





[hadoop] branch branch-3.2 updated: HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

2021-03-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 2d245c9  HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)
2d245c9 is described below

commit 2d245c97e9015780d5e629b838e7d2d5132d6909
Author: Viraj Jasani 
AuthorDate: Fri Mar 12 01:21:24 2021 +0530

HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 132aeaf..f8ad24a 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -183,6 +183,7 @@
 1.16
 1.2.6
 2.0.0-beta-1
+    <woodstox.version>5.3.0</woodstox.version>
   
 
   
@@ -1028,7 +1029,7 @@
   
         <groupId>com.fasterxml.woodstox</groupId>
         <artifactId>woodstox-core</artifactId>
-        <version>5.0.3</version>
+        <version>${woodstox.version}</version>
   
   
 org.codehaus.jackson





[hadoop] branch branch-3.3 updated: HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

2021-03-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 36313b3  HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)
36313b3 is described below

commit 36313b38cbcd9b06c6bffbf58a24610f3bff1868
Author: Viraj Jasani 
AuthorDate: Fri Mar 12 01:21:24 2021 +0530

HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5d8ceb2..7ac05aa 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -210,6 +210,7 @@
 1.5.6
 7.7.0
 1.0.7.Final
+    <woodstox.version>5.3.0</woodstox.version>
   
 
   
@@ -1109,7 +1110,7 @@
   
         <groupId>com.fasterxml.woodstox</groupId>
         <artifactId>woodstox-core</artifactId>
-        <version>5.0.3</version>
+        <version>${woodstox.version}</version>
   
   
 org.codehaus.jackson





[hadoop] branch trunk updated: HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

2021-03-11 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 54ae6bc  HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)
54ae6bc is described below

commit 54ae6bcfc380d37165f734297dabc1b565f130e7
Author: Viraj Jasani 
AuthorDate: Fri Mar 12 01:21:24 2021 +0530

HADOOP-17571 : Bump up woodstox-core to 5.3.0 due to security concerns (#2757)

Contributed by Viraj Jasani.

Signed-off-by: Mingliang Liu 
Signed-off-by: Akira Ajisaka 
---
 hadoop-project/pom.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 9e728e1..c0e8e2c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -209,6 +209,7 @@
 7.7.0
 1.0.7.Final
 1.0.2
+    <woodstox.version>5.3.0</woodstox.version>
   
 
   
@@ -1131,7 +1132,7 @@
   
         <groupId>com.fasterxml.woodstox</groupId>
         <artifactId>woodstox-core</artifactId>
-        <version>5.0.3</version>
+        <version>${woodstox.version}</version>
   
   
 org.codehaus.jackson





[hadoop] branch trunk updated (6fc26ad -> 394b9f7)

2021-02-02 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


from 6fc26ad  YARN-10352 Skip schedule on not heartbeated nodes in Multi Node Placement. Contributed by Prabhu Joseph and Qi Zhu
 add 394b9f7  HDFS-15624. fix the function of setting quota by storage type (#2377)

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/hadoop/fs/StorageType.java|  7 +++
 .../test/java/org/apache/hadoop/fs/shell/TestCount.java|  4 ++--
 .../hdfs/server/federation/router/TestRouterQuota.java | 14 +++---
 .../apache/hadoop/hdfs/server/namenode/FSNamesystem.java   |  7 +++
 .../hadoop/hdfs/server/namenode/NameNodeLayoutVersion.java |  3 ++-
 .../org/apache/hadoop/hdfs/TestBlockStoragePolicy.java |  6 +++---
 .../org/apache/hadoop/hdfs/protocol/TestLayoutVersion.java |  3 ++-
 7 files changed, 26 insertions(+), 18 deletions(-)





[hadoop] branch branch-2.10 updated: MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

2020-10-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 893fcea  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
893fcea is described below

commit 893fceadee8bac3878b0e77ed493af72bdb388fe
Author: Swaroopa Kadam 
AuthorDate: Thu Oct 8 14:56:27 2020 -0700

MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

Signed-off-by: Mingliang Liu 
---
 .../src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java | 4 
 .../test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java  | 4 
 2 files changed, 8 insertions(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
index 2e144414..4226ebc 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
@@ -270,4 +270,8 @@ public class MiniMRCluster {
 }
   }
 
+  public MiniMRClientCluster getMrClientCluster() {
+return mrClientCluster;
+  }
+
 }
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
index 94d6ff3..f8342b5 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
@@ -73,4 +73,8 @@ public class MiniMRYarnClusterAdapter implements MiniMRClientCluster {
 miniMRYarnCluster.start();
   }
 
+  public MiniMRYarnCluster getMiniMRYarnCluster() {
+return miniMRYarnCluster;
+  }
+
 }
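The MAPREDUCE-7301 change above is a small test-visibility pattern: expose a wrapped collaborator through a getter so tests can inspect it without reflection. A minimal standalone sketch (hypothetical `MiniClusterAdapter` class, not the Hadoop one):

```java
// Sketch of the getter-for-testing pattern from MAPREDUCE-7301:
// the adapter wraps a cluster object; the added getter lets test code
// reach the wrapped instance directly instead of via reflection.
class MiniClusterAdapter {
    private final String wrappedCluster;  // stands in for MiniMRYarnCluster

    MiniClusterAdapter(String wrappedCluster) {
        this.wrappedCluster = wrappedCluster;
    }

    // Mirrors the added MiniMRYarnClusterAdapter#getMiniMRYarnCluster().
    public String getWrappedCluster() {
        return wrappedCluster;
    }

    public static void main(String[] args) {
        MiniClusterAdapter adapter = new MiniClusterAdapter("miniMRYarnCluster");
        System.out.println(adapter.getWrappedCluster());
    }
}
```

Tests can then assert directly on the wrapped cluster's state, which is the attribute exposure this commit provides.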





[hadoop] branch branch-3.2 updated: MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

2020-10-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 55f01bd  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
55f01bd is described below

commit 55f01bda8ea2383a6a40f097e783e481de55ac2f
Author: Swaroopa Kadam 
AuthorDate: Thu Oct 8 14:56:27 2020 -0700

MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

Signed-off-by: Mingliang Liu 
---
 .../src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java | 4 
 .../test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java  | 4 
 2 files changed, 8 insertions(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
index e7df5b3..2de885f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
@@ -271,4 +271,8 @@ public class MiniMRCluster {
 }
   }
 
+  public MiniMRClientCluster getMrClientCluster() {
+return mrClientCluster;
+  }
+
 }
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
index 4f89840..684587d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
@@ -74,4 +74,8 @@ public class MiniMRYarnClusterAdapter implements MiniMRClientCluster {
 miniMRYarnCluster.start();
   }
 
+  public MiniMRYarnCluster getMiniMRYarnCluster() {
+return miniMRYarnCluster;
+  }
+
 }





[hadoop] branch branch-3.1 updated: MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

2020-10-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 0d55344  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
0d55344 is described below

commit 0d5534430c203620417c501c0fbfbd783ae75680
Author: Swaroopa Kadam 
AuthorDate: Thu Oct 8 14:56:27 2020 -0700

MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

Signed-off-by: Mingliang Liu 
---
 .../src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java | 4 
 .../test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java  | 4 
 2 files changed, 8 insertions(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
index e7df5b3..2de885f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
@@ -271,4 +271,8 @@ public class MiniMRCluster {
 }
   }
 
+  public MiniMRClientCluster getMrClientCluster() {
+return mrClientCluster;
+  }
+
 }
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
index 4f89840..684587d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
@@ -74,4 +74,8 @@ public class MiniMRYarnClusterAdapter implements MiniMRClientCluster {
 miniMRYarnCluster.start();
   }
 
+  public MiniMRYarnCluster getMiniMRYarnCluster() {
+return miniMRYarnCluster;
+  }
+
 }





[hadoop] branch trunk updated: MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

2020-10-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2e46ef9  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
2e46ef9 is described below

commit 2e46ef9417e31e38c632d8a966a07c45496755b2
Author: Swaroopa Kadam 
AuthorDate: Thu Oct 8 14:56:27 2020 -0700

MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

Signed-off-by: Mingliang Liu 
---
 .../src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java | 4 
 .../test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java  | 4 
 2 files changed, 8 insertions(+)

diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
index e7df5b3..2de885f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
@@ -271,4 +271,8 @@ public class MiniMRCluster {
 }
   }
 
+  public MiniMRClientCluster getMrClientCluster() {
+return mrClientCluster;
+  }
+
 }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
index 4f89840..684587d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
@@ -74,4 +74,8 @@ public class MiniMRYarnClusterAdapter implements 
MiniMRClientCluster {
 miniMRYarnCluster.start();
   }
 
+  public MiniMRYarnCluster getMiniMRYarnCluster() {
+return miniMRYarnCluster;
+  }
+
 }
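The MAPREDUCE-7301 hunks above add plain getters so tests can reach the wrapped cluster objects instead of going through the adapter facade. A minimal, self-contained sketch of that pattern (class names are stand-ins, not the real Hadoop types):

```java
// Stand-in model of the MAPREDUCE-7301 change: the adapter exposes its
// wrapped cluster so a test can inspect YARN-level state directly.
class MiniYarnCluster {
    private boolean started;
    void start() { started = true; }
    boolean isStarted() { return started; }
}

class MiniClusterAdapter {
    private final MiniYarnCluster cluster = new MiniYarnCluster();
    void start() { cluster.start(); }
    // New accessor, mirroring getMiniMRYarnCluster(): lets tests reach
    // the wrapped cluster without widening the adapter's interface.
    MiniYarnCluster getMiniYarnCluster() { return cluster; }
}

public class GetterPatternDemo {
    public static void main(String[] args) {
        MiniClusterAdapter adapter = new MiniClusterAdapter();
        adapter.start();
        // A test can now assert against the underlying cluster's state.
        System.out.println(adapter.getMiniYarnCluster().isStarted()); // prints "true"
    }
}
```

The real getters return `MiniMRYarnCluster` and `MiniMRClientCluster`; this sketch only illustrates why exposing the attribute makes cluster state testable.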





[hadoop] branch branch-3.3 updated: MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

2020-10-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 05a73de  MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing
05a73de is described below

commit 05a73ded9391b2621ed564f17b206779d8b32883
Author: Swaroopa Kadam 
AuthorDate: Thu Oct 8 14:56:27 2020 -0700

MAPREDUCE-7301: Expose Mini MR Cluster attribute for testing

Signed-off-by: Mingliang Liu 
---
 .../src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java | 4 
 .../test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java  | 4 
 2 files changed, 8 insertions(+)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
index e7df5b3..2de885f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRCluster.java
@@ -271,4 +271,8 @@ public class MiniMRCluster {
 }
   }
 
+  public MiniMRClientCluster getMrClientCluster() {
+return mrClientCluster;
+  }
+
 }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
index 4f89840..684587d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/MiniMRYarnClusterAdapter.java
@@ -74,4 +74,8 @@ public class MiniMRYarnClusterAdapter implements 
MiniMRClientCluster {
 miniMRYarnCluster.start();
   }
 
+  public MiniMRYarnCluster getMiniMRYarnCluster() {
+return miniMRYarnCluster;
+  }
+
 }





[hadoop] branch branch-2.10 updated: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache. (#2319)

2020-09-22 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new be42149  MAPREDUCE-7294. Only application master should upload 
resource to Yarn Shared Cache. (#2319)
be42149 is described below

commit be421490dadd61c4bcbd23bdb23bed408607ff22
Author: zz 
AuthorDate: Tue Sep 22 11:57:36 2020 -0700

MAPREDUCE-7294. Only application master should upload resource to Yarn 
Shared Cache. (#2319)

Contributed by Zhenzhao Wang 

Signed-off-by: Mingliang Liu 
---
 .../hadoop/mapreduce/v2/app/job/impl/JobImpl.java  |  3 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java | 23 
 .../main/java/org/apache/hadoop/mapreduce/Job.java | 41 --
 3 files changed, 47 insertions(+), 20 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 4995120..b688f4d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1421,7 +1421,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
* be set up to false. In that way, the NMs that host the task containers
* won't try to upload the resources to shared cache.
*/
-  private static void cleanupSharedCacheUploadPolicies(Configuration conf) {
+  @VisibleForTesting
+  static void cleanupSharedCacheUploadPolicies(Configuration conf) {
  Map<String, Boolean> emap = Collections.emptyMap();
 Job.setArchiveSharedCacheUploadPolicies(conf, emap);
 Job.setFileSharedCacheUploadPolicies(conf, emap);
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 1827ce4..d342a3f 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -39,6 +39,7 @@ import java.util.concurrent.CyclicBarrier;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.JobID;
@@ -989,6 +990,28 @@ public class TestJobImpl {
 Assert.assertEquals(updatedPriority, jobPriority);
   }
 
+  @Test
+  public void testCleanupSharedCacheUploadPolicies() {
+Configuration config = new Configuration();
+Map<String, Boolean> archivePolicies = new HashMap<>();
+archivePolicies.put("archive1", true);
+archivePolicies.put("archive2", true);
+Job.setArchiveSharedCacheUploadPolicies(config, archivePolicies);
+Map<String, Boolean> filePolicies = new HashMap<>();
+filePolicies.put("file1", true);
+filePolicies.put("jar1", true);
+Job.setFileSharedCacheUploadPolicies(config, filePolicies);
+Assert.assertEquals(
+2, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+2, Job.getFileSharedCacheUploadPolicies(config).size());
+JobImpl.cleanupSharedCacheUploadPolicies(config);
+Assert.assertEquals(
+0, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+0, Job.getFileSharedCacheUploadPolicies(config).size());
+  }
+
   private static CommitterEventHandler createCommitterEventHandler(
   Dispatcher dispatcher, OutputCommitter committer) {
 final SystemClock clock = SystemClock.getInstance();
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index 493a221..c276ec0 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/map
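The MAPREDUCE-7294 patch relaxes `cleanupSharedCacheUploadPolicies` from `private` to package-private and marks it `@VisibleForTesting` so `TestJobImpl` can call it directly. A self-contained sketch of that testability pattern (the annotation is a local stand-in for Guava's, and the plain map stands in for the config-backed policy maps):

```java
import java.util.HashMap;
import java.util.Map;

// Local stand-in for com.google.common.annotations.VisibleForTesting,
// which the real patch uses; defined here so the sketch compiles alone.
@interface VisibleForTesting {}

public class UploadPolicyDemo {
    // Package-private instead of private, as in the patch, so a test in
    // the same package can invoke it without reflection.
    @VisibleForTesting
    static void cleanupPolicies(Map<String, Boolean> policies) {
        // The real method resets the archive and file upload policies
        // stored in the Configuration to empty maps; here we just clear.
        policies.clear();
    }

    public static void main(String[] args) {
        Map<String, Boolean> policies = new HashMap<>();
        policies.put("archive1", true);
        cleanupPolicies(policies);
        System.out.println(policies.isEmpty()); // prints "true"
    }
}
```

The annotation documents that the wider visibility exists only for tests, not for production callers.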

[hadoop] branch branch-3.1 updated: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache (#2223)

2020-09-20 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 06ff4d1  MAPREDUCE-7294. Only application master should upload 
resource to Yarn Shared Cache (#2223)
06ff4d1 is described below

commit 06ff4d141670f8bd306da2bdfd3df773e8f5866f
Author: zz 
AuthorDate: Sat Sep 19 23:10:05 2020 -0700

MAPREDUCE-7294. Only application master should upload resource to Yarn 
Shared Cache (#2223)

Contributed by Zhenzhao Wang 

Signed-off-by: Mingliang Liu 
---
 .../hadoop/mapreduce/v2/app/job/impl/JobImpl.java  |  3 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java | 23 +++
 .../main/java/org/apache/hadoop/mapreduce/Job.java | 33 +-
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index d2e2492..59320b2 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1423,7 +1423,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
* be set up to false. In that way, the NMs that host the task containers
* won't try to upload the resources to shared cache.
*/
-  private static void cleanupSharedCacheUploadPolicies(Configuration conf) {
+  @VisibleForTesting
+  static void cleanupSharedCacheUploadPolicies(Configuration conf) {
 Job.setArchiveSharedCacheUploadPolicies(conf, Collections.emptyMap());
 Job.setFileSharedCacheUploadPolicies(conf, Collections.emptyMap());
   }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 1367ff6..013f74a 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -39,6 +39,7 @@ import java.util.concurrent.CyclicBarrier;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.JobID;
@@ -991,6 +992,28 @@ public class TestJobImpl {
 Assert.assertEquals(updatedPriority, jobPriority);
   }
 
+  @Test
+  public void testCleanupSharedCacheUploadPolicies() {
+Configuration config = new Configuration();
+Map<String, Boolean> archivePolicies = new HashMap<>();
+archivePolicies.put("archive1", true);
+archivePolicies.put("archive2", true);
+Job.setArchiveSharedCacheUploadPolicies(config, archivePolicies);
+Map<String, Boolean> filePolicies = new HashMap<>();
+filePolicies.put("file1", true);
+filePolicies.put("jar1", true);
+Job.setFileSharedCacheUploadPolicies(config, filePolicies);
+Assert.assertEquals(
+2, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+2, Job.getFileSharedCacheUploadPolicies(config).size());
+JobImpl.cleanupSharedCacheUploadPolicies(config);
+Assert.assertEquals(
+0, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+0, Job.getFileSharedCacheUploadPolicies(config).size());
+  }
+
   private static CommitterEventHandler createCommitterEventHandler(
   Dispatcher dispatcher, OutputCommitter committer) {
 final SystemClock clock = SystemClock.getInstance();
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index f164b62..d7fa75d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/map

[hadoop] branch branch-3.2 updated: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache (#2223)

2020-09-20 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0873555  MAPREDUCE-7294. Only application master should upload 
resource to Yarn Shared Cache (#2223)
0873555 is described below

commit 0873555f040e632228a476168047ae99f433d3ba
Author: zz 
AuthorDate: Sat Sep 19 23:10:05 2020 -0700

MAPREDUCE-7294. Only application master should upload resource to Yarn 
Shared Cache (#2223)

Contributed by Zhenzhao Wang 

Signed-off-by: Mingliang Liu 
---
 .../hadoop/mapreduce/v2/app/job/impl/JobImpl.java  |  3 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java | 23 +++
 .../main/java/org/apache/hadoop/mapreduce/Job.java | 33 +-
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index d2e2492..59320b2 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1423,7 +1423,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
* be set up to false. In that way, the NMs that host the task containers
* won't try to upload the resources to shared cache.
*/
-  private static void cleanupSharedCacheUploadPolicies(Configuration conf) {
+  @VisibleForTesting
+  static void cleanupSharedCacheUploadPolicies(Configuration conf) {
 Job.setArchiveSharedCacheUploadPolicies(conf, Collections.emptyMap());
 Job.setFileSharedCacheUploadPolicies(conf, Collections.emptyMap());
   }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 1367ff6..013f74a 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -39,6 +39,7 @@ import java.util.concurrent.CyclicBarrier;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.JobID;
@@ -991,6 +992,28 @@ public class TestJobImpl {
 Assert.assertEquals(updatedPriority, jobPriority);
   }
 
+  @Test
+  public void testCleanupSharedCacheUploadPolicies() {
+Configuration config = new Configuration();
+Map<String, Boolean> archivePolicies = new HashMap<>();
+archivePolicies.put("archive1", true);
+archivePolicies.put("archive2", true);
+Job.setArchiveSharedCacheUploadPolicies(config, archivePolicies);
+Map<String, Boolean> filePolicies = new HashMap<>();
+filePolicies.put("file1", true);
+filePolicies.put("jar1", true);
+Job.setFileSharedCacheUploadPolicies(config, filePolicies);
+Assert.assertEquals(
+2, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+2, Job.getFileSharedCacheUploadPolicies(config).size());
+JobImpl.cleanupSharedCacheUploadPolicies(config);
+Assert.assertEquals(
+0, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+0, Job.getFileSharedCacheUploadPolicies(config).size());
+  }
+
   private static CommitterEventHandler createCommitterEventHandler(
   Dispatcher dispatcher, OutputCommitter committer) {
 final SystemClock clock = SystemClock.getInstance();
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index f164b62..d7fa75d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/map

[hadoop] branch branch-3.3 updated: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache (#2223)

2020-09-20 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new e5e9139  MAPREDUCE-7294. Only application master should upload 
resource to Yarn Shared Cache (#2223)
e5e9139 is described below

commit e5e91397de906bef9091ae4625dc71de301a7d25
Author: zz 
AuthorDate: Sat Sep 19 23:10:05 2020 -0700

MAPREDUCE-7294. Only application master should upload resource to Yarn 
Shared Cache (#2223)

Contributed by Zhenzhao Wang 

Signed-off-by: Mingliang Liu 
---
 .../hadoop/mapreduce/v2/app/job/impl/JobImpl.java  |  3 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java | 23 +++
 .../main/java/org/apache/hadoop/mapreduce/Job.java | 33 +-
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index d2e2492..59320b2 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1423,7 +1423,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
* be set up to false. In that way, the NMs that host the task containers
* won't try to upload the resources to shared cache.
*/
-  private static void cleanupSharedCacheUploadPolicies(Configuration conf) {
+  @VisibleForTesting
+  static void cleanupSharedCacheUploadPolicies(Configuration conf) {
 Job.setArchiveSharedCacheUploadPolicies(conf, Collections.emptyMap());
 Job.setFileSharedCacheUploadPolicies(conf, Collections.emptyMap());
   }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 945b254..43e59a7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -39,6 +39,7 @@ import java.util.concurrent.CyclicBarrier;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.JobID;
@@ -991,6 +992,28 @@ public class TestJobImpl {
 Assert.assertEquals(updatedPriority, jobPriority);
   }
 
+  @Test
+  public void testCleanupSharedCacheUploadPolicies() {
+Configuration config = new Configuration();
+Map<String, Boolean> archivePolicies = new HashMap<>();
+archivePolicies.put("archive1", true);
+archivePolicies.put("archive2", true);
+Job.setArchiveSharedCacheUploadPolicies(config, archivePolicies);
+Map<String, Boolean> filePolicies = new HashMap<>();
+filePolicies.put("file1", true);
+filePolicies.put("jar1", true);
+Job.setFileSharedCacheUploadPolicies(config, filePolicies);
+Assert.assertEquals(
+2, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+2, Job.getFileSharedCacheUploadPolicies(config).size());
+JobImpl.cleanupSharedCacheUploadPolicies(config);
+Assert.assertEquals(
+0, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+0, Job.getFileSharedCacheUploadPolicies(config).size());
+  }
+
   private static CommitterEventHandler createCommitterEventHandler(
   Dispatcher dispatcher, OutputCommitter committer) {
 final SystemClock clock = SystemClock.getInstance();
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index 31e2057..9a998da 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/map

[hadoop] branch trunk updated: MAPREDUCE-7294. Only application master should upload resource to Yarn Shared Cache (#2223)

2020-09-20 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 95dfc87  MAPREDUCE-7294. Only application master should upload 
resource to Yarn Shared Cache (#2223)
95dfc87 is described below

commit 95dfc875d32414e81df73a6e57e6767d7cce90c3
Author: zz 
AuthorDate: Sat Sep 19 23:10:05 2020 -0700

MAPREDUCE-7294. Only application master should upload resource to Yarn 
Shared Cache (#2223)

Contributed by Zhenzhao Wang 

Signed-off-by: Mingliang Liu 
---
 .../hadoop/mapreduce/v2/app/job/impl/JobImpl.java  |  3 +-
 .../mapreduce/v2/app/job/impl/TestJobImpl.java | 23 +++
 .../main/java/org/apache/hadoop/mapreduce/Job.java | 33 +-
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 8ee097f..0e26046 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1425,7 +1425,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
* be set up to false. In that way, the NMs that host the task containers
* won't try to upload the resources to shared cache.
*/
-  private static void cleanupSharedCacheUploadPolicies(Configuration conf) {
+  @VisibleForTesting
+  static void cleanupSharedCacheUploadPolicies(Configuration conf) {
 Job.setArchiveSharedCacheUploadPolicies(conf, Collections.emptyMap());
 Job.setFileSharedCacheUploadPolicies(conf, Collections.emptyMap());
   }
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
index 122fb9b..5f378e4 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
@@ -39,6 +39,7 @@ import java.util.concurrent.CyclicBarrier;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.JobACL;
 import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.JobID;
@@ -1001,6 +1002,28 @@ public class TestJobImpl {
 Assert.assertEquals(updatedPriority, jobPriority);
   }
 
+  @Test
+  public void testCleanupSharedCacheUploadPolicies() {
+Configuration config = new Configuration();
+Map<String, Boolean> archivePolicies = new HashMap<>();
+archivePolicies.put("archive1", true);
+archivePolicies.put("archive2", true);
+Job.setArchiveSharedCacheUploadPolicies(config, archivePolicies);
+Map<String, Boolean> filePolicies = new HashMap<>();
+filePolicies.put("file1", true);
+filePolicies.put("jar1", true);
+Job.setFileSharedCacheUploadPolicies(config, filePolicies);
+Assert.assertEquals(
+2, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+2, Job.getFileSharedCacheUploadPolicies(config).size());
+JobImpl.cleanupSharedCacheUploadPolicies(config);
+Assert.assertEquals(
+0, Job.getArchiveSharedCacheUploadPolicies(config).size());
+Assert.assertEquals(
+0, Job.getFileSharedCacheUploadPolicies(config).size());
+  }
+
   private static CommitterEventHandler createCommitterEventHandler(
   Dispatcher dispatcher, OutputCommitter committer) {
 final SystemClock clock = SystemClock.getInstance();
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index 31e2057..9a998da 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapredu

[hadoop] branch branch-3.3 updated: HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell

2020-09-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 4eccdd9  HDFS-15573. Only log warning if considerLoad and 
considerStorageType are both true. Contributed by Stephen O'Donnell
4eccdd9 is described below

commit 4eccdd950fe9eed4909e8602ddd86b5dcecc06cd
Author: Mingliang Liu 
AuthorDate: Sat Sep 12 01:41:38 2020 -0700

HDFS-15573. Only log warning if considerLoad and considerStorageType are 
both true. Contributed by Stephen O'Donnell
---
 .../hdfs/server/blockmanagement/DatanodeManager.java   | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index fbe132a..1b474fd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -326,12 +326,14 @@ public class DatanodeManager {
 this.readConsiderStorageType = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_DEFAULT);
-LOG.warn(
-"{} and {} are incompatible and only one can be enabled. "
-+ "Both are currently enabled.",
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
-
+if (readConsiderLoad && readConsiderStorageType) {
+  LOG.warn(
+  "{} and {} are incompatible and only one can be enabled. "
+  + "Both are currently enabled. {} will be ignored.",
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
+}
 this.avoidStaleDataNodesForWrite = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY,
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_DEFAULT);





[hadoop] branch trunk updated: HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell

2020-09-12 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f59f7f2  HDFS-15573. Only log warning if considerLoad and 
considerStorageType are both true. Contributed by Stephen O'Donnell
f59f7f2 is described below

commit f59f7f21758fb7a391f2f5198a4c3eaba445
Author: Mingliang Liu 
AuthorDate: Sat Sep 12 01:41:38 2020 -0700

HDFS-15573. Only log warning if considerLoad and considerStorageType are both true. Contributed by Stephen O'Donnell
---
 .../hdfs/server/blockmanagement/DatanodeManager.java   | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index fbe132a..1b474fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -326,12 +326,14 @@ public class DatanodeManager {
 this.readConsiderStorageType = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
 DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_DEFAULT);
-LOG.warn(
-"{} and {} are incompatible and only one can be enabled. "
-+ "Both are currently enabled.",
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
-DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
-
+if (readConsiderLoad && readConsiderStorageType) {
+  LOG.warn(
+  "{} and {} are incompatible and only one can be enabled. "
+  + "Both are currently enabled. {} will be ignored.",
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERLOAD_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY,
+  DFSConfigKeys.DFS_NAMENODE_READ_CONSIDERSTORAGETYPE_KEY);
+}
 this.avoidStaleDataNodesForWrite = conf.getBoolean(
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_KEY,
 DFSConfigKeys.DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_WRITE_DEFAULT);
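A detail of the fix above: the warning is now emitted only when both read-placement flags are enabled, and it names the flag that will be ignored (considerStorageType). A standalone sketch of that check, with the key strings inlined for illustration and `java.util.logging` standing in for the SLF4J logger the real code uses:

```java
import java.util.logging.Logger;

public class ReadPlacementConfigCheck {
    private static final Logger LOG = Logger.getLogger("DatanodeManager");

    /**
     * Returns the warning message when both flags are set, else null.
     * Mirrors the HDFS-15573 guard: warn only on a real conflict.
     */
    static String check(boolean readConsiderLoad, boolean readConsiderStorageType) {
        if (readConsiderLoad && readConsiderStorageType) {
            String msg = "dfs.namenode.read.considerLoad and "
                + "dfs.namenode.read.considerStorageType are incompatible and "
                + "only one can be enabled. Both are currently enabled. "
                + "dfs.namenode.read.considerStorageType will be ignored.";
            LOG.warning(msg);
            return msg;
        }
        // No conflict, no warning (the pre-patch code warned unconditionally).
        return null;
    }

    public static void main(String[] args) {
        System.out.println(check(true, true) != null);   // true: both set, warn
        System.out.println(check(true, false) == null);  // true: no warning
    }
}
```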





[hadoop] branch trunk updated: HADOOP-17222. Create socket address leveraging URI cache (#2241)

2020-09-10 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 56ebabd  HADOOP-17222. Create socket address leveraging URI cache (#2241)
56ebabd is described below

commit 56ebabd426757dd95c778535548abb8c01fbc1fb
Author: 1996fanrui <1996fan...@gmail.com>
AuthorDate: Fri Sep 11 13:30:52 2020 +0800

HADOOP-17222. Create socket address leveraging URI cache (#2241)

Contributed by fanrui.

Signed-off-by: Mingliang Liu 
Signed-off-by: He Xiaoqiao 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java  | 78 ++
 .../java/org/apache/hadoop/net/TestNetUtils.java   | 45 -
 .../apache/hadoop/security/TestSecurityUtil.java   | 10 +++
 .../org/apache/hadoop/hdfs/DFSInputStream.java |  4 +-
 .../hadoop/hdfs/client/HdfsClientConfigKeys.java   |  3 +
 .../hadoop/hdfs/client/impl/DfsClientConf.java | 52 ++-
 .../src/main/resources/hdfs-default.xml|  9 +++
 .../apache/hadoop/tools/TestHdfsConfigFields.java  |  1 +
 8 files changed, 169 insertions(+), 33 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index c5a5b11..004fa1c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -39,12 +39,16 @@ import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
 import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
+import java.util.concurrent.TimeUnit;
 import java.util.regex.Pattern;
 import java.util.*;
 import java.util.concurrent.ConcurrentHashMap;
 
 import javax.net.SocketFactory;
 
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+
 import org.apache.commons.net.util.SubnetUtils;
 import org.apache.commons.net.util.SubnetUtils.SubnetInfo;
 import org.apache.hadoop.classification.InterfaceAudience;
@@ -177,11 +181,33 @@ public class NetUtils {
*include a port number
* @param configName the name of the configuration from which
*   target was loaded. This is used in the
-   *   exception message in the case that parsing fails. 
+   *   exception message in the case that parsing fails.
*/
   public static InetSocketAddress createSocketAddr(String target,
int defaultPort,
String configName) {
+return createSocketAddr(target, defaultPort, configName, false);
+  }
+
+  /**
+   * Create an InetSocketAddress from the given target string and
+   * default port. If the string cannot be parsed correctly, the
+   * configName parameter is used as part of the
+   * exception message, allowing the user to better diagnose
+   * the misconfiguration.
+   *
+   * @param target a string of either "host" or "host:port"
+   * @param defaultPort the default port if target does not
+   *include a port number
+   * @param configName the name of the configuration from which
+   *   target was loaded. This is used in the
+   *   exception message in the case that parsing fails.
+   * @param useCacheIfPresent Whether use cache when create URI
+   */
+  public static InetSocketAddress createSocketAddr(String target,
+   int defaultPort,
+   String configName,
+   boolean useCacheIfPresent) {
 String helpText = "";
 if (configName != null) {
   helpText = " (configuration property '" + configName + "')";
@@ -191,15 +217,8 @@ public class NetUtils {
   helpText);
 }
 target = target.trim();
-boolean hasScheme = target.contains("://");
-URI uri = null;
-try {
-  uri = hasScheme ? URI.create(target) : URI.create("dummyscheme://"+target);
-} catch (IllegalArgumentException e) {
-  throw new IllegalArgumentException(
-  "Does not contain a valid host:port authority: " + target + helpText
-  );
-}
+boolean hasScheme = target.contains("://");
+URI uri = createURI(target, hasScheme, helpText, useCacheIfPresent);
 
 String host = uri.getHost();
 int port = uri.getPort();
@@ -207,10 +226,9 @@ public class NetUtils {
   port = defaultPort;
 }
 String path = uri.getPath();
-
+
 if ((host == null) || (port < 0) ||
-(!hasSchem
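The actual patch caches parsed URIs behind the new `createURI(...)` helper using a Guava `Cache` with expiry (hence the `CacheBuilder` import added above), keyed by the target string. The sketch below substitutes a plain `ConcurrentHashMap`, so entries never expire; that simplification is an assumption for illustration, not the patch's behavior:

```java
import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;

public class UriCacheSketch {
    // Stand-in cache: the real patch uses a bounded Guava Cache with
    // time-based expiry; this map never evicts.
    private static final ConcurrentHashMap<String, URI> CACHE = new ConcurrentHashMap<>();

    static URI createUri(String target, boolean useCacheIfPresent) {
        // Same normalization as NetUtils: prepend a dummy scheme so that
        // bare "host:port" strings parse as an authority.
        String key = target.contains("://") ? target : "dummyscheme://" + target;
        if (useCacheIfPresent) {
            return CACHE.computeIfAbsent(key, URI::create);
        }
        return URI.create(key);
    }

    public static void main(String[] args) {
        URI first = createUri("namenode.example.com:8020", true);
        URI second = createUri("namenode.example.com:8020", true);
        // With caching enabled, the same URI instance is returned,
        // skipping a second parse.
        System.out.println(first == second);   // true
        System.out.println(first.getHost());   // namenode.example.com
        System.out.println(first.getPort());   // 8020
    }
}
```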

[hadoop] branch branch-2.10 updated: HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2245)

2020-08-27 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new e645204  HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2245)
e645204 is described below

commit e645204733d152c33f9487a77ffe85c9a7d4675e
Author: sguggilam 
AuthorDate: Thu Aug 27 15:21:20 2020 -0700

HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2245)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 33 ---
 .../hadoop/security/TestUGILoginFromKeytab.java| 37 ++
 2 files changed, 65 insertions(+), 5 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 07cd314..bc8b47a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1215,15 +1215,37 @@ public class UserGroupInformation {
* Re-Login a user in from a keytab file. Loads a user identity from a keytab
* file and logs them in. They become the currently logged-in user. This
* method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already.
-   * The Subject field of this UserGroupInformation object is updated to have
-   * the new credentials.
+   * happened already. The Subject field of this UserGroupInformation object is
+   * updated to have the new credentials.
+   *
* @throws IOException
* @throws KerberosAuthException on a failure
*/
   @InterfaceAudience.Public
   @InterfaceStability.Evolving
-  public synchronized void reloginFromKeytab() throws IOException {
+  public void reloginFromKeytab() throws IOException {
+reloginFromKeytab(false);
+  }
+
+  /**
+   * Force re-Login a user in from a keytab file irrespective of the last login
+   * time. Loads a user identity from a keytab file and logs them in. They
+   * become the currently logged-in user. This method assumes that
+   * {@link #loginUserFromKeytab(String, String)} had happened already. The
+   * Subject field of this UserGroupInformation object is updated to have the
+   * new credentials.
+   *
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void forceReloginFromKeytab() throws IOException {
+reloginFromKeytab(true);
+  }
+
+  private synchronized void reloginFromKeytab(boolean ignoreTimeElapsed)
+  throws IOException {
 if (!isSecurityEnabled()
 || user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
 || !isKeytab) {
@@ -1231,7 +1253,8 @@ public class UserGroupInformation {
 }
 
 long now = Time.now();
-if (!shouldRenewImmediatelyForTests && !hasSufficientTimeElapsed(now)) {
+if (!shouldRenewImmediatelyForTests && !ignoreTimeElapsed
+&& !hasSufficientTimeElapsed(now)) {
   return;
 }
 
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 66c2af4..0ada333 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -31,6 +31,8 @@ import org.junit.rules.TemporaryFolder;
 
 import java.io.File;
 
+import javax.security.auth.login.LoginContext;
+
 /**
  * Verify UGI login from keytab. Check that the UGI is
  * configured to use keytab to catch regressions like
@@ -115,4 +117,39 @@ public class TestUGILoginFromKeytab {
 secondLogin > firstLogin);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+UserGroupInformation.setShouldRenewImmediatelyForTests(true);
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+

[hadoop] branch branch-3.1 updated: HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

2020-08-27 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e1ac832  HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)
e1ac832 is described below

commit e1ac832d5dce283c659ccfc47534c71cef10d921
Author: sguggilam 
AuthorDate: Wed Aug 26 23:45:21 2020 -0700

HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 36 ++
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 0e4168c..ea22f53 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1115,7 +1115,29 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
+  /**
+   * Force re-Login a user in from a keytab file irrespective of the last login
+   * time. Loads a user identity from a keytab file and logs them in. They
+   * become the currently logged-in user. This method assumes that
+   * {@link #loginUserFromKeytab(String, String)} had happened already. The
+   * Subject field of this UserGroupInformation object is updated to have the
+   * new credentials.
+   *
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void forceReloginFromKeytab() throws IOException {
+reloginFromKeytab(false, true);
+  }
+
   private void reloginFromKeytab(boolean checkTGT) throws IOException {
+reloginFromKeytab(checkTGT, false);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreLastLoginTime)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1130,7 +1152,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreLastLoginTime);
   }
 
   /**
@@ -1151,25 +1173,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreLastLoginTime)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreLastLoginTime);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreLastLoginTime) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreLastLoginTime) {
   return;
 }
 // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4a2cc..47084ce 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,6 +154,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

[hadoop] branch branch-3.2 updated: HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

2020-08-27 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 970b9a2  HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)
970b9a2 is described below

commit 970b9a283b52ff257c2ae431266ad4b133a2e675
Author: sguggilam 
AuthorDate: Wed Aug 26 23:45:21 2020 -0700

HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 36 ++
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 11f91f2..23f3ae9 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1116,7 +1116,29 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
+  /**
+   * Force re-Login a user in from a keytab file irrespective of the last login
+   * time. Loads a user identity from a keytab file and logs them in. They
+   * become the currently logged-in user. This method assumes that
+   * {@link #loginUserFromKeytab(String, String)} had happened already. The
+   * Subject field of this UserGroupInformation object is updated to have the
+   * new credentials.
+   *
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void forceReloginFromKeytab() throws IOException {
+reloginFromKeytab(false, true);
+  }
+
   private void reloginFromKeytab(boolean checkTGT) throws IOException {
+reloginFromKeytab(checkTGT, false);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreLastLoginTime)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1131,7 +1153,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreLastLoginTime);
   }
 
   /**
@@ -1152,25 +1174,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreLastLoginTime)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreLastLoginTime);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreLastLoginTime) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreLastLoginTime) {
   return;
 }
 // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4a2cc..47084ce 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,6 +154,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

[hadoop] branch branch-3.3 updated: HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

2020-08-27 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new fcb80c1  HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)
fcb80c1 is described below

commit fcb80c1ade5162b323b7138984f19af673a29ebd
Author: sguggilam 
AuthorDate: Wed Aug 26 23:45:21 2020 -0700

HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 36 ++
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 66 insertions(+), 6 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d37da72..dcee9f4 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1233,7 +1233,29 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
+  /**
+   * Force re-Login a user in from a keytab file irrespective of the last login
+   * time. Loads a user identity from a keytab file and logs them in. They
+   * become the currently logged-in user. This method assumes that
+   * {@link #loginUserFromKeytab(String, String)} had happened already. The
+   * Subject field of this UserGroupInformation object is updated to have the
+   * new credentials.
+   *
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void forceReloginFromKeytab() throws IOException {
+reloginFromKeytab(false, true);
+  }
+
   private void reloginFromKeytab(boolean checkTGT) throws IOException {
+reloginFromKeytab(checkTGT, false);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreLastLoginTime)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1248,7 +1270,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreLastLoginTime);
   }
 
   /**
@@ -1269,25 +1291,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreLastLoginTime)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreLastLoginTime);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreLastLoginTime) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreLastLoginTime) {
   return;
 }
 // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index d233234..db0095f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,6 +158,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

[hadoop] branch trunk updated (2ffe00f -> d8aaa8c)

2020-08-27 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 from 2ffe00f  HDFS-15540. Directories protected from delete can still be moved to the trash. Contributed by Stephen O'Donnell.
 add d8aaa8c  HADOOP-17159. Make UGI support forceful relogin from keytab ignoring the last login time (#2249)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/security/UserGroupInformation.java  | 36 ++
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 66 insertions(+), 6 deletions(-)
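The branch commits above all thread the same boolean through the private relogin chain: `forceReloginFromKeytab()` sets `ignoreLastLoginTime` (called `ignoreTimeElapsed` in the branch-2.10 variant), which lets `unprotectedRelogin()` bypass the `hasSufficientTimeElapsed(now)` throttle. A self-contained sketch of that throttle-bypass pattern, with class and field names that are illustrative rather than Hadoop's:

```java
public class ReloginThrottleSketch {
    private final long minMillisBetweenLogins;
    private long lastLoginMillis;

    ReloginThrottleSketch(long minMillisBetweenLogins) {
        this.minMillisBetweenLogins = minMillisBetweenLogins;
        // Seed so that the very first login is never throttled.
        this.lastLoginMillis = -minMillisBetweenLogins;
    }

    private boolean hasSufficientTimeElapsed(long now) {
        return now - lastLoginMillis >= minMillisBetweenLogins;
    }

    /** Returns true if a (re)login was performed, false if throttled. */
    synchronized boolean relogin(long now, boolean ignoreLastLoginTime) {
        // Mirrors unprotectedRelogin(): the forced path skips the time check.
        if (!hasSufficientTimeElapsed(now) && !ignoreLastLoginTime) {
            return false;
        }
        lastLoginMillis = now;  // register most recent relogin attempt
        return true;
    }

    public static void main(String[] args) {
        ReloginThrottleSketch ugi = new ReloginThrottleSketch(60_000);
        System.out.println(ugi.relogin(1_000, false));  // true: first login
        System.out.println(ugi.relogin(2_000, false));  // false: throttled
        System.out.println(ugi.relogin(2_000, true));   // true: forced relogin
    }
}
```

The design keeps the public `reloginFromKeytab()` contract unchanged while exposing the bypass only through the new `forceReloginFromKeytab()` entry point.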





[hadoop] branch branch-3.1 updated: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 9f94c9e  Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"
9f94c9e is described below

commit 9f94c9e60dc5e663774c6bd3ef601b4d38039377
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 11:24:03 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

This reverts commit 12fb9e0600f665aca3e7ebe0be9b95ff232d520f.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 2471e0a..0e4168c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1115,26 +1115,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1149,7 +1130,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1170,27 +1151,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 7e2c250d..bf4a2cc 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,42 +154,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, &

[hadoop] branch branch-3.2 updated: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new acec431  Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"
acec431 is described below

commit acec4313777d4c13f151ecd286cf2e88c5d44d9e
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 11:23:26 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

This reverts commit d06f0de3affbd5e8232a6fcdb9a3c396934b6a05.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index c91cf73..11f91f2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1116,26 +1116,7 @@ public class UserGroupInformation {
     reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *                          login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-    reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
     if (!shouldRelogin() || !isFromKeytab()) {
       return;
     }
@@ -1150,7 +1131,7 @@ public class UserGroupInformation {
         return;
       }
     }
-    relogin(login, ignoreTimeElapsed);
+    relogin(login);
   }
 
   /**
@@ -1171,27 +1152,25 @@ public class UserGroupInformation {
     if (login == null) {
       throw new KerberosAuthException(MUST_FIRST_LOGIN);
     }
-    relogin(login, false);
+    relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
     // ensure the relogin is atomic to avoid leaving credentials in an
     // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
     // from accessing or altering credentials during the relogin.
     synchronized(login.getSubjectLock()) {
       // another racing thread may have beat us to the relogin.
       if (login == getLogin()) {
-        unprotectedRelogin(login, ignoreTimeElapsed);
+        unprotectedRelogin(login);
       }
     }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-      boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
     assert Thread.holdsLock(login.getSubjectLock());
     long now = Time.now();
-    if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+    if (!hasSufficientTimeElapsed(now)) {
       return;
     }
     // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 7e2c250d..bf4a2cc 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,42 +154,6 @@ public class TestUGILoginFromKeytab {
     Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-    // Set this to false as we are testing force re-login anyways
-    UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-    String principal = "foo";
-    File keytab = new File(workDir, &
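The revert above removes the `ignoreTimeElapsed` bypass, leaving only the `hasSufficientTimeElapsed(now)` throttle on relogin attempts. A minimal standalone sketch of that throttle pattern follows; the class name, constructor parameter, and `tryRelogin` helper are hypothetical stand-ins (the real code derives the limit from the `hadoop.kerberos.min.seconds.before.relogin` setting and lives inside `UserGroupInformation`), so this illustrates the guard only, not Hadoop's API.

```java
// Hypothetical sketch of the relogin throttle restored by the revert above.
public class ReloginGuard {
    private final long minMillisBeforeRelogin; // stand-in for the configured minimum
    private long lastReloginTime;              // most recent relogin attempt, millis

    public ReloginGuard(long minMillisBeforeRelogin) {
        this.minMillisBeforeRelogin = minMillisBeforeRelogin;
    }

    /** True when enough time has passed since the last relogin attempt. */
    public boolean hasSufficientTimeElapsed(long now) {
        return now - lastReloginTime >= minMillisBeforeRelogin;
    }

    /** Skips when called too soon; otherwise records the attempt, as in the diff. */
    public boolean tryRelogin(long now) {
        if (!hasSufficientTimeElapsed(now)) {
            return false; // throttled: the post-revert code path simply returns
        }
        lastReloginTime = now; // register most recent relogin attempt
        return true;
    }

    public static void main(String[] args) {
        ReloginGuard guard = new ReloginGuard(60_000L);
        if (!guard.tryRelogin(100_000L)) {   // first attempt proceeds
            throw new AssertionError("first relogin should proceed");
        }
        if (guard.tryRelogin(130_000L)) {    // 30s later: throttled
            throw new AssertionError("early relogin should be throttled");
        }
        if (!guard.tryRelogin(160_001L)) {   // 60s after the last attempt: allowed
            throw new AssertionError("relogin should be allowed again");
        }
    }
}
```

With the public `reloginFromKeytab(boolean)` overload reverted, callers cannot bypass this guard; only the elapsed-time check decides whether a relogin attempt runs.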

[hadoop] branch branch-3.3 updated: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new ee7d214  Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"
ee7d214 is described below

commit ee7d21411869ec18f620615b9e62caa5add72a1d
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 11:22:46 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

This reverts commit da129a67bb4a169d3efcfc7cf298af68bad5fb73.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 57a4c74..d37da72 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1233,26 +1233,7 @@ public class UserGroupInformation {
     reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *                          login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-    reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
     if (!shouldRelogin() || !isFromKeytab()) {
       return;
     }
@@ -1267,7 +1248,7 @@
         return;
       }
     }
-    relogin(login, ignoreTimeElapsed);
+    relogin(login);
   }
 
   /**
@@ -1288,27 +1269,25 @@
     if (login == null) {
       throw new KerberosAuthException(MUST_FIRST_LOGIN);
     }
-    relogin(login, false);
+    relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
     // ensure the relogin is atomic to avoid leaving credentials in an
     // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
     // from accessing or altering credentials during the relogin.
     synchronized(login.getSubjectLock()) {
       // another racing thread may have beat us to the relogin.
       if (login == getLogin()) {
-        unprotectedRelogin(login, ignoreTimeElapsed);
+        unprotectedRelogin(login);
      }
     }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-      boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
     assert Thread.holdsLock(login.getSubjectLock());
     long now = Time.now();
-    if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+    if (!hasSufficientTimeElapsed(now)) {
      return;
     }
     // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
     Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-    // Set this to false as we are testing force re-login anyways
-    UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-    String principal = "foo";
-    File keytab = new File(workDir, &
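The `relogin(HadoopLoginContext)` body in the diff above combines a lock with an identity re-check so that only one of several racing threads actually performs the relogin. A minimal standalone sketch of that double-checked pattern follows; `ReloginRace` and its fields are hypothetical stand-ins (plain `Object` replaces `HadoopLoginContext`, and the real `unprotectedRelogin` also disposes SASL state and refreshes Kerberos credentials).

```java
// Hypothetical sketch of the lock-plus-recheck relogin pattern shown in the diff.
public class ReloginRace {
    private final Object subjectLock = new Object(); // stand-in for login.getSubjectLock()
    private Object currentLogin = new Object();      // stand-in for getLogin()
    int actualRelogins = 0;                          // counts real relogin work performed

    Object getLogin() {
        synchronized (subjectLock) {
            return currentLogin;
        }
    }

    void relogin(Object observedLogin) {
        // ensure the relogin is atomic: hold the subject lock so nothing
        // observes half-updated credentials.
        synchronized (subjectLock) {
            // another racing thread may have beat us to the relogin; if so,
            // currentLogin already changed and we must not relogin again.
            if (observedLogin == currentLogin) {
                currentLogin = new Object(); // stands in for unprotectedRelogin()
                actualRelogins++;
            }
        }
    }

    public static void main(String[] args) {
        ReloginRace ugi = new ReloginRace();
        Object stale = ugi.getLogin();
        ugi.relogin(stale); // first caller wins and swaps the login
        ugi.relogin(stale); // second caller holds a stale reference: no-op
        if (ugi.actualRelogins != 1) {
            throw new AssertionError("expected exactly one relogin");
        }
    }
}
```

The identity comparison (`==`, not `equals`) is what makes the second call a no-op: a successful relogin installs a fresh login object, so any thread still holding the old reference fails the check and skips the redundant work.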

[hadoop] branch revert-2197-trunk created (now b7745b0)

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

This branch includes the following new commits:

 new b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/01: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b7745b00b2810fd405e19971ac8da27ef1668b01
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:00 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
     reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *                          login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-    reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
     if (!shouldRelogin() || !isFromKeytab()) {
       return;
     }
@@ -1266,7 +1247,7 @@
         return;
       }
     }
-    relogin(login, ignoreTimeElapsed);
+    relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@
     if (login == null) {
       throw new KerberosAuthException(MUST_FIRST_LOGIN);
     }
-    relogin(login, false);
+    relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-      throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
     // ensure the relogin is atomic to avoid leaving credentials in an
     // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
     // from accessing or altering credentials during the relogin.
     synchronized(login.getSubjectLock()) {
       // another racing thread may have beat us to the relogin.
       if (login == getLogin()) {
-        unprotectedRelogin(login, ignoreTimeElapsed);
+        unprotectedRelogin(login);
      }
     }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-      boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
     assert Thread.holdsLock(login.getSubjectLock());
     long now = Time.now();
-    if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+    if (!hasSufficientTimeElapsed(now)) {
      return;
     }
     // register most recent relogin attempt
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
     Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-    // Set this to false as we are testing force re-login anyways
-    UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-    String principal = "foo";
-    File keytab = new File(workDir, "foo.keytab");
-    kdc.createPrincipal(keytab, principal);
-
-    UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
-    UserGroupInformation ugi = UserGroupInformation.getLoginUser

[hadoop] 01/01: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b7745b00b2810fd405e19971ac8da27ef1668b01
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:00 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation 
class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1266,7 +1247,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException 
{
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, "foo.keytab");
-kdc.createPrincipal(keytab, principal);
-
-UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
-UserGroupInformation ugi = UserGroupInformation.getLoginUser

[hadoop] branch revert-2197-trunk created (now b7745b0)

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

This branch includes the following new commits:

 new b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/01: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b7745b00b2810fd405e19971ac8da27ef1668b01
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:00 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation 
class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1266,7 +1247,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException 
{
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, "foo.keytab");
-kdc.createPrincipal(keytab, principal);
-
-UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
-UserGroupInformation ugi = UserGroupInformation.getLoginUser

[hadoop] branch revert-2197-trunk created (now b7745b0)

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

This branch includes the following new commits:

 new b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] 01/01: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b7745b00b2810fd405e19971ac8da27ef1668b01
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:00 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation 
class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1266,7 +1247,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException 
{
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, "foo.keytab");
-kdc.createPrincipal(keytab, principal);
-
-UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
-UserGroupInformation ugi = UserGroupInformation.getLoginUser

[hadoop] branch revert-2197-trunk created (now b7745b0)

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

This branch includes the following new commits:

 new b7745b0  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org




[hadoop] 01/01: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch revert-2197-trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit b7745b00b2810fd405e19971ac8da27ef1668b01
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:00 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation 
class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1266,7 +1247,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, "foo.keytab");
-kdc.createPrincipal(keytab, principal);
-
-UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
-UserGroupInformation ugi = UserGroupInformation.getLoginUser

[hadoop] branch trunk updated: Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)"

2020-08-26 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 5e52955  Revert "HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)"
5e52955 is described below

commit 5e52955112a3151bb608e092f31fc5084de78705
Author: Mingliang Liu 
AuthorDate: Wed Aug 26 10:41:10 2020 -0700

Revert "HADOOP-17159 Ability for forceful relogin in UserGroupInformation 
class (#2197)"

This reverts commit a932796d0cad3d84df0003782e4247cbc2dcca93.
---
 .../hadoop/security/UserGroupInformation.java  | 35 +
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 --
 2 files changed, 7 insertions(+), 64 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d1ab436..91b64ad 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,26 +1232,7 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  /**
-   * Force re-Login a user in from a keytab file. Loads a user identity from a
-   * keytab file and logs them in. They become the currently logged-in user.
-   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
-   * happened already. The Subject field of this UserGroupInformation object is
-   * updated to have the new credentials.
-   *
-   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
-   *  login
-   * @throws IOException
-   * @throws KerberosAuthException on a failure
-   */
-  @InterfaceAudience.Public
-  @InterfaceStability.Evolving
-  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
-reloginFromKeytab(false, ignoreTimeElapsed);
-  }
-
-  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void reloginFromKeytab(boolean checkTGT) throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1266,7 +1247,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login, ignoreTimeElapsed);
+relogin(login);
   }
 
   /**
@@ -1287,27 +1268,25 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login, false);
+relogin(login);
   }
 
-  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
-  throws IOException {
+  private void relogin(HadoopLoginContext login) throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login, ignoreTimeElapsed);
+unprotectedRelogin(login);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login,
-  boolean ignoreTimeElapsed) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
+if (!hasSufficientTimeElapsed(now)) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4cf75..d233234 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,42 +158,6 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
-  /**
-   * Force re-login from keytab using the MiniKDC and verify the UGI can
-   * successfully relogin from keytab as well.
-   */
-  @Test
-  public void testUGIForceReLoginFromKeytab() throws Exception {
-// Set this to false as we are testing force re-login anyways
-UserGroupInformation.setShouldRenewImmediatelyForTests(false);
-String principal = "foo";
-File keytab = new File(workDir, "fo

[hadoop] branch branch-3.1 updated: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)

2020-08-25 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 12fb9e0  HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)
12fb9e0 is described below

commit 12fb9e0600f665aca3e7ebe0be9b95ff232d520f
Author: sguggilam 
AuthorDate: Mon Aug 24 23:39:57 2020 -0700

HADOOP-17159 Ability for forceful relogin in UserGroupInformation class 
(#2197)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 35 -
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 64 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 0e4168c..2471e0a 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1115,7 +1115,26 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  private void reloginFromKeytab(boolean checkTGT) throws IOException {
+  /**
+   * Force re-Login a user in from a keytab file. Loads a user identity from a
+   * keytab file and logs them in. They become the currently logged-in user.
+   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
+   * happened already. The Subject field of this UserGroupInformation object is
+   * updated to have the new credentials.
+   *
+   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
+   *  login
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
+reloginFromKeytab(false, ignoreTimeElapsed);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1130,7 +1149,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreTimeElapsed);
   }
 
   /**
@@ -1151,25 +1170,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreTimeElapsed);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreTimeElapsed) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4a2cc..7e2c250d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,6 +154,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

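The relogin(HadoopLoginContext, boolean) hunk above takes the subject lock and then re-checks `login == getLogin()` before calling unprotectedRelogin, so that only one of several racing threads actually performs the relogin. Below is a minimal, self-contained sketch of that double-check pattern; the class, field, and method names (ReloginRace, snapshot, count) are hypothetical stand-ins, not Hadoop's real UserGroupInformation API:

```java
/**
 * Sketch of the lock-then-recheck pattern used by relogin() above:
 * acquire the subject lock, then verify no racing thread has already
 * swapped the login context. Names here are hypothetical, not Hadoop's.
 */
public class ReloginRace {
  static class LoginContext {}

  private final Object subjectLock = new Object();
  private LoginContext current = new LoginContext();
  private int reloginCount = 0;

  /** The login context a caller observed before deciding to relogin. */
  LoginContext snapshot() {
    synchronized (subjectLock) {
      return current;
    }
  }

  /** Relogin only if our observation is still the live login context. */
  void relogin(LoginContext observed) {
    synchronized (subjectLock) {
      // another racing thread may have beaten us to the relogin
      if (observed == current) {
        current = new LoginContext(); // swap in fresh credentials
        reloginCount++;
      }
    }
  }

  int count() { return reloginCount; }

  public static void main(String[] args) {
    ReloginRace r = new ReloginRace();
    LoginContext seen = r.snapshot();
    r.relogin(seen); // first caller performs the relogin
    r.relogin(seen); // stale observation: skipped, no double relogin
    System.out.println(r.count()); // prints 1
  }
}
```

The point of the re-check is that a thread which observed a stale login context becomes a no-op instead of replacing credentials a second time.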
[hadoop] branch branch-3.2 updated: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)

2020-08-25 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new d06f0de  HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)
d06f0de is described below

commit d06f0de3affbd5e8232a6fcdb9a3c396934b6a05
Author: sguggilam 
AuthorDate: Mon Aug 24 23:39:57 2020 -0700

HADOOP-17159 Ability for forceful relogin in UserGroupInformation class 
(#2197)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 35 -
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 64 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 11f91f2..c91cf73 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1116,7 +1116,26 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  private void reloginFromKeytab(boolean checkTGT) throws IOException {
+  /**
+   * Force re-Login a user in from a keytab file. Loads a user identity from a
+   * keytab file and logs them in. They become the currently logged-in user.
+   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
+   * happened already. The Subject field of this UserGroupInformation object is
+   * updated to have the new credentials.
+   *
+   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
+   *  login
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
+reloginFromKeytab(false, ignoreTimeElapsed);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1131,7 +1150,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreTimeElapsed);
   }
 
   /**
@@ -1152,25 +1171,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreTimeElapsed);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreTimeElapsed) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index bf4a2cc..7e2c250d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -154,6 +154,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

[hadoop] branch branch-3.3 updated: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)

2020-08-25 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new da129a6  HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)
da129a6 is described below

commit da129a67bb4a169d3efcfc7cf298af68bad5fb73
Author: sguggilam 
AuthorDate: Mon Aug 24 23:39:57 2020 -0700

HADOOP-17159 Ability for forceful relogin in UserGroupInformation class 
(#2197)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 35 -
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 64 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index d37da72..57a4c74 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1233,7 +1233,26 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  private void reloginFromKeytab(boolean checkTGT) throws IOException {
+  /**
+   * Force re-Login a user in from a keytab file. Loads a user identity from a
+   * keytab file and logs them in. They become the currently logged-in user.
+   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
+   * happened already. The Subject field of this UserGroupInformation object is
+   * updated to have the new credentials.
+   *
+   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
+   *  login
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
+reloginFromKeytab(false, ignoreTimeElapsed);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1248,7 +1267,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreTimeElapsed);
   }
 
   /**
@@ -1269,25 +1288,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreTimeElapsed);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreTimeElapsed) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index d233234..bf4cf75 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,6 +158,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

[hadoop] branch trunk updated: HADOOP-17159 Ability for forceful relogin in UserGroupInformation class (#2197)

2020-08-25 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new a932796  HADOOP-17159 Ability for forceful relogin in 
UserGroupInformation class (#2197)
a932796 is described below

commit a932796d0cad3d84df0003782e4247cbc2dcca93
Author: sguggilam 
AuthorDate: Mon Aug 24 23:39:57 2020 -0700

HADOOP-17159 Ability for forceful relogin in UserGroupInformation class 
(#2197)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 35 -
 .../hadoop/security/TestUGILoginFromKeytab.java| 36 ++
 2 files changed, 64 insertions(+), 7 deletions(-)

diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 91b64ad..d1ab436 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -1232,7 +1232,26 @@ public class UserGroupInformation {
 reloginFromKeytab(false);
   }
 
-  private void reloginFromKeytab(boolean checkTGT) throws IOException {
+  /**
+   * Force re-Login a user in from a keytab file. Loads a user identity from a
+   * keytab file and logs them in. They become the currently logged-in user.
+   * This method assumes that {@link #loginUserFromKeytab(String, String)} had
+   * happened already. The Subject field of this UserGroupInformation object is
+   * updated to have the new credentials.
+   *
+   * @param ignoreTimeElapsed Force re-login irrespective of the time of last
+   *  login
+   * @throws IOException
+   * @throws KerberosAuthException on a failure
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Evolving
+  public void reloginFromKeytab(boolean ignoreTimeElapsed) throws IOException {
+reloginFromKeytab(false, ignoreTimeElapsed);
+  }
+
+  private void reloginFromKeytab(boolean checkTGT, boolean ignoreTimeElapsed)
+  throws IOException {
 if (!shouldRelogin() || !isFromKeytab()) {
   return;
 }
@@ -1247,7 +1266,7 @@ public class UserGroupInformation {
 return;
   }
 }
-relogin(login);
+relogin(login, ignoreTimeElapsed);
   }
 
   /**
@@ -1268,25 +1287,27 @@ public class UserGroupInformation {
 if (login == null) {
   throw new KerberosAuthException(MUST_FIRST_LOGIN);
 }
-relogin(login);
+relogin(login, false);
   }
 
-  private void relogin(HadoopLoginContext login) throws IOException {
+  private void relogin(HadoopLoginContext login, boolean ignoreTimeElapsed)
+  throws IOException {
 // ensure the relogin is atomic to avoid leaving credentials in an
 // inconsistent state.  prevents other ugi instances, SASL, and SPNEGO
 // from accessing or altering credentials during the relogin.
 synchronized(login.getSubjectLock()) {
   // another racing thread may have beat us to the relogin.
   if (login == getLogin()) {
-unprotectedRelogin(login);
+unprotectedRelogin(login, ignoreTimeElapsed);
   }
 }
   }
 
-  private void unprotectedRelogin(HadoopLoginContext login) throws IOException {
+  private void unprotectedRelogin(HadoopLoginContext login,
+  boolean ignoreTimeElapsed) throws IOException {
 assert Thread.holdsLock(login.getSubjectLock());
 long now = Time.now();
-if (!hasSufficientTimeElapsed(now)) {
+if (!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed) {
   return;
 }
 // register most recent relogin attempt
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index d233234..bf4cf75 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -158,6 +158,42 @@ public class TestUGILoginFromKeytab {
 Assert.assertNotSame(login1, login2);
   }
 
+  /**
+   * Force re-login from keytab using the MiniKDC and verify the UGI can
+   * successfully relogin from keytab as well.
+   */
+  @Test
+  public void testUGIForceReLoginFromKeytab() throws Exception {
+// Set this to false as we are testing force re-login anyways
+UserGroupInformation.setShouldRenewImmediatelyForTests(false);
+String principal = "foo";
+File keytab = new File(wo

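The unprotectedRelogin hunk above changes the early return to `!hasSufficientTimeElapsed(now) && !ignoreTimeElapsed`, so a forced relogin bypasses the minimum-interval check while an ordinary one still respects it. A minimal, self-contained sketch of that gate follows; the class and method names (ReloginGate, tryRelogin) are hypothetical, not Hadoop's actual UserGroupInformation API:

```java
import java.util.concurrent.TimeUnit;

/**
 * Sketch of the relogin time gate from the hunk above: a forced relogin
 * (ignoreTimeElapsed) bypasses the minimum-interval check. Names are
 * hypothetical stand-ins, not Hadoop's real API.
 */
public class ReloginGate {
  private final long minMillisBetweenRelogins;
  private long lastReloginAttemptMillis; // 0 until the first attempt

  public ReloginGate(long minMillisBetweenRelogins) {
    this.minMillisBetweenRelogins = minMillisBetweenRelogins;
  }

  /** Returns true if a relogin should proceed at time nowMillis. */
  public synchronized boolean tryRelogin(long nowMillis, boolean ignoreTimeElapsed) {
    boolean enoughTimeElapsed =
        nowMillis - lastReloginAttemptMillis >= minMillisBetweenRelogins;
    if (!enoughTimeElapsed && !ignoreTimeElapsed) {
      return false; // too soon, and not forced: skip the relogin
    }
    lastReloginAttemptMillis = nowMillis; // register most recent attempt
    return true;
  }

  public static void main(String[] args) {
    ReloginGate gate = new ReloginGate(TimeUnit.MINUTES.toMillis(10));
    System.out.println(gate.tryRelogin(TimeUnit.HOURS.toMillis(1), false)); // true
    System.out.println(gate.tryRelogin(TimeUnit.HOURS.toMillis(1) + 1_000, false)); // false
    System.out.println(gate.tryRelogin(TimeUnit.HOURS.toMillis(1) + 2_000, true)); // true: forced
  }
}
```

This is why the new public reloginFromKeytab(boolean) overload threads ignoreTimeElapsed all the way down to this check: callers that know the ticket is bad can relogin immediately instead of waiting out the interval.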
[hadoop] branch trunk updated: HADOOP-17182. Remove breadcrumbs from web site (#2190)

2020-08-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 4054202  HADOOP-17182. Remove breadcrumbs from web site (#2190)
4054202 is described below

commit 40542024df3b9b02b89004dee28c26bae53811f0
Author: Akira Ajisaka 
AuthorDate: Sat Aug 8 15:29:52 2020 +0900

HADOOP-17182. Remove breadcrumbs from web site (#2190)

Signed-off-by: Mingliang Liu 
Signed-off-by: Ayush Saxena 
---
 hadoop-project/src/site/site.xml | 5 -
 1 file changed, 5 deletions(-)

diff --git a/hadoop-project/src/site/site.xml b/hadoop-project/src/site/site.xml
index 4c9d356..86949b0 100644
--- a/hadoop-project/src/site/site.xml
+++ b/hadoop-project/src/site/site.xml
@@ -41,11 +41,6 @@
   https://gitbox.apache.org/repos/asf/hadoop.git; />
 
 
-    <breadcrumbs>
-      <item name="Apache" href="http://www.apache.org/" />
-      <item name="Hadoop" href="http://hadoop.apache.org/"/>
-    </breadcrumbs>
-
 
   
   


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch release created (now 5edd8b9)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a change to branch release
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at 5edd8b9  YARN-4575. ApplicationResourceUsageReport should return ALL 
reserved resource. Contributed by Bibin Chundatt and Eric Payne.

No new revisions were added by this update.


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-2.10 updated: HADOOP-17164 UGI loginUserFromKeytab doesn't set the last login time (#2194)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 4eb1f81  HADOOP-17164 UGI loginUserFromKeytab doesn't set the last login time (#2194)
4eb1f81 is described below

commit 4eb1f818763c619112e0422cf6968ffcd53d22ee
Author: sguggilam 
AuthorDate: Wed Aug 5 09:04:01 2020 -0700

HADOOP-17164 UGI loginUserFromKeytab doesn't set the last login time (#2194)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  | 10 
 .../hadoop/security/TestUGILoginFromKeytab.java| 29 +-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index b2cd395..07cd314 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -644,6 +644,15 @@ public class UserGroupInformation {
   }
 
   /**
+   * Set the last login time for logged in user
+   *
+   * @param loginTime the number of milliseconds since the beginning of time
+   */
+  private void setLastLogin(long loginTime) {
+user.setLastLogin(loginTime);
+  }
+
+  /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
* @param subject the user's subject
@@ -1096,6 +1105,7 @@ public class UserGroupInformation {
   metrics.loginSuccess.add(Time.now() - start);
   loginUser = new UserGroupInformation(subject);
   loginUser.setLogin(login);
+  loginUser.setLastLogin(start);
   loginUser.setAuthenticationMethod(AuthenticationMethod.KERBEROS);
 } catch (LoginException le) {
   if (start > 0) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 61fbf89..66c2af4 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.security;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -64,11 +65,35 @@ public class TestUGILoginFromKeytab {
   }
 
   /**
+   * Login from keytab using the MiniKDC.
+   */
+  @Test
+  public void testUGILoginFromKeytab() throws Exception {
+long beforeLogin = Time.now();
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+ugi.isFromKeytab());
+
+User user = ugi.getSubject().getPrincipals(User.class).iterator().next();
+Assert.assertNotNull(user.getLogin());
+
+Assert.assertTrue(
+"User login time is less than before login time, " + "beforeLoginTime:"
++ beforeLogin + " userLoginTime:" + user.getLastLogin(),
+user.getLastLogin() > beforeLogin);
+  }
+
+  /**
* Login from keytab using the MiniKDC and verify the UGI can successfully
   * relogin from keytab as well. This will catch regressions like HADOOP-10786.
*/
   @Test
-  public void testUGILoginFromKeytab() throws Exception {
+  public void testUGIReloginFromKeytab() throws Exception {
 UserGroupInformation.setShouldRenewImmediatelyForTests(true);
 String principal = "foo";
 File keytab = new File(workDir, "foo.keytab");
@@ -82,6 +107,8 @@ public class TestUGILoginFromKeytab {
 // Verify relogin from keytab.
 User user = ugi.getSubject().getPrincipals(User.class).iterator().next();
 final long firstLogin = user.getLastLogin();
+// Sleep for 2 secs to have a difference between first and second login
+Thread.sleep(2000);
 ugi.reloginFromKeytab();
 final long secondLogin = user.getLastLogin();
 Assert.assertTrue("User should have been able to relogin from keytab",

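The `Thread.sleep(2000)` added to the relogin test above exists because two logins recorded back-to-back can read the same millisecond clock value, so the test waits between them to guarantee a strictly increasing last-login timestamp. A minimal standalone sketch of that timing assumption (simulated here with `System.currentTimeMillis()`; the real test logs in twice via MiniKdc):

```java
// Sketch only: demonstrates why a pause between two timestamp reads
// guarantees the second "login" time is strictly later than the first.
public class LoginTimestampDemo {

    /** Record two "login" timestamps separated by a pause; true iff the second is later. */
    static boolean secondLoginIsLater(long pauseMillis) {
        long firstLogin = System.currentTimeMillis();
        try {
            Thread.sleep(pauseMillis);   // the real test sleeps 2000 ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        long secondLogin = System.currentTimeMillis();
        return secondLogin > firstLogin;
    }

    public static void main(String[] args) {
        System.out.println("second login later: " + secondLoginIsLater(50));
    }
}
```

Without the pause, `firstLogin` and `secondLogin` could legitimately be equal, making an assertion on strict ordering flaky.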


[hadoop] branch branch-2.10 updated: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 8d5821b  HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)
8d5821b is described below

commit 8d5821b9c3bafd990a63c75890fb8f4b7f73104e
Author: Mingliang Liu 
AuthorDate: Tue Aug 4 20:48:45 2020 -0700

HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

Signed-off-by: Akira Ajisaka 
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 8 
 1 file changed, 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index cfd8c6f..108e238 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -131,10 +131,6 @@
   jets3t
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 
@@ -174,10 +170,6 @@
   jets3t
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 





[hadoop] branch branch-3.1 updated: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 1a53bc6  HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)
1a53bc6 is described below

commit 1a53bc6ecf912c635e41b9df8dc708d0d5ea5e01
Author: Mingliang Liu 
AuthorDate: Tue Aug 4 20:48:45 2020 -0700

HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

Signed-off-by: Akira Ajisaka 
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 8 
 1 file changed, 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index b02b03f..5adda3f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -119,10 +119,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 
@@ -154,10 +150,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 





[hadoop] branch branch-3.2 updated: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 41c211d  HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)
41c211d is described below

commit 41c211d762e03047fde4e7f697083bf402e7cf00
Author: Mingliang Liu 
AuthorDate: Tue Aug 4 20:48:45 2020 -0700

HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

Signed-off-by: Akira Ajisaka 
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 8 
 1 file changed, 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index ec47e1c..c7e2ee9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -119,10 +119,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 
@@ -154,10 +150,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 





[hadoop] branch branch-3.3 updated: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

2020-08-05 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 1cd1f97  HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)
1cd1f97 is described below

commit 1cd1f978da9c000914945ade589ee77276f48445
Author: Mingliang Liu 
AuthorDate: Tue Aug 4 20:48:45 2020 -0700

HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

Signed-off-by: Akira Ajisaka 
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 8 
 1 file changed, 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 47edc99..3c62b16 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -119,10 +119,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 
@@ -154,10 +150,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 





[hadoop] branch trunk updated: HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 58def7c  HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)
58def7c is described below

commit 58def7cecbed74512798248b89db86aa3e4ca746
Author: Mingliang Liu 
AuthorDate: Tue Aug 4 20:48:45 2020 -0700

HDFS-15499. Clean up httpfs/pom.xml to remove aws-java-sdk-s3 exclusion (#2188)

Signed-off-by: Akira Ajisaka 
---
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml | 8 
 1 file changed, 8 deletions(-)

diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index b0d7c4b..f1d172d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -119,10 +119,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 
@@ -154,10 +150,6 @@
   servlet-api-2.5
 
 
-  com.amazonaws
-  aws-java-sdk-s3
-
-
   org.eclipse.jdt
   core
 





[hadoop] branch trunk updated: HADOOP-17179. [JDK 11] Fix javadoc error while detecting Java API link (#2186)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new ed3ab4b  HADOOP-17179. [JDK 11] Fix javadoc error while detecting Java API link (#2186)
ed3ab4b is described below

commit ed3ab4b87d90450e68f510c158517fc186d2e9e1
Author: Akira Ajisaka 
AuthorDate: Wed Aug 5 04:09:14 2020 +0900

HADOOP-17179. [JDK 11] Fix javadoc error while detecting Java API link (#2186)

Signed-off-by: Mingliang Liu 
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 06fcee3..373450c 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -2401,7 +2401,7 @@
 maven-javadoc-plugin
 
   ${javadoc.skip.jdk11}
-  8
+  false
   
 
 -html4





[hadoop] branch branch-3.1 updated: HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 31c61f4  HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)
31c61f4 is described below

commit 31c61f4363cabdaca503ddc3c009025e254fd994
Author: sguggilam 
AuthorDate: Tue Aug 4 10:30:06 2020 -0700

HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  |  9 +++
 .../hadoop/security/TestUGILoginFromKeytab.java| 29 +-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index c44ef72..0e4168c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -531,6 +531,14 @@ public class UserGroupInformation {
   }
 
   /**
+   * Set the last login time for logged in user
+   * @param loginTime the number of milliseconds since the beginning of time
+   */
+  private void setLastLogin(long loginTime) {
+user.setLastLogin(loginTime);
+  }
+
+  /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
*
@@ -1840,6 +1848,7 @@ public class UserGroupInformation {
   if (subject == null) {
 params.put(LoginParam.PRINCIPAL, ugi.getUserName());
 ugi.setLogin(login);
+ugi.setLastLogin(Time.now());
   }
   return ugi;
 } catch (LoginException le) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 826e4b2..bf4a2cc 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
+import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -98,11 +99,34 @@ public class TestUGILoginFromKeytab {
   }
 
   /**
+   * Login from keytab using the MiniKDC.
+   */
+  @Test
+  public void testUGILoginFromKeytab() throws Exception {
+long beforeLogin = Time.now();
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+ugi.isFromKeytab());
+
+User user = getUser(ugi.getSubject());
+Assert.assertNotNull(user.getLogin());
+ 
+Assert.assertTrue("User login time is less than before login time, "
++ "beforeLoginTime:" + beforeLogin + " userLoginTime:" + user.getLastLogin(),
+user.getLastLogin() > beforeLogin);
+  }
+
+  /**
* Login from keytab using the MiniKDC and verify the UGI can successfully
   * relogin from keytab as well. This will catch regressions like HADOOP-10786.
*/
   @Test
-  public void testUGILoginFromKeytab() throws Exception {
+  public void testUGIReLoginFromKeytab() throws Exception {
 String principal = "foo";
 File keytab = new File(workDir, "foo.keytab");
 kdc.createPrincipal(keytab, principal);
@@ -118,6 +142,9 @@ public class TestUGILoginFromKeytab {
 final LoginContext login1 = user.getLogin();
 Assert.assertNotNull(login1);
 
+// Sleep for 2 secs to have a difference between first and second login
+Thread.sleep(2000);
+
 ugi.reloginFromKeytab();
 final long secondLogin = user.getLastLogin();
 final LoginContext login2 = user.getLogin();





[hadoop] branch branch-3.2 updated: HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new e484f55  HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)
e484f55 is described below

commit e484f5529cc64a598d3510fe90ce0cbfe825144c
Author: sguggilam 
AuthorDate: Tue Aug 4 10:30:06 2020 -0700

HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  |  9 +++
 .../hadoop/security/TestUGILoginFromKeytab.java| 29 +-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 6ce72edb..11f91f2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -532,6 +532,14 @@ public class UserGroupInformation {
   }
 
   /**
+   * Set the last login time for logged in user
+   * @param loginTime the number of milliseconds since the beginning of time
+   */
+  private void setLastLogin(long loginTime) {
+user.setLastLogin(loginTime);
+  }
+
+  /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
*
@@ -1841,6 +1849,7 @@ public class UserGroupInformation {
   if (subject == null) {
 params.put(LoginParam.PRINCIPAL, ugi.getUserName());
 ugi.setLogin(login);
+ugi.setLastLogin(Time.now());
   }
   return ugi;
 } catch (LoginException le) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 826e4b2..bf4a2cc 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
+import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -98,11 +99,34 @@ public class TestUGILoginFromKeytab {
   }
 
   /**
+   * Login from keytab using the MiniKDC.
+   */
+  @Test
+  public void testUGILoginFromKeytab() throws Exception {
+long beforeLogin = Time.now();
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+ugi.isFromKeytab());
+
+User user = getUser(ugi.getSubject());
+Assert.assertNotNull(user.getLogin());
+ 
+Assert.assertTrue("User login time is less than before login time, "
++ "beforeLoginTime:" + beforeLogin + " userLoginTime:" + user.getLastLogin(),
+user.getLastLogin() > beforeLogin);
+  }
+
+  /**
* Login from keytab using the MiniKDC and verify the UGI can successfully
   * relogin from keytab as well. This will catch regressions like HADOOP-10786.
*/
   @Test
-  public void testUGILoginFromKeytab() throws Exception {
+  public void testUGIReLoginFromKeytab() throws Exception {
 String principal = "foo";
 File keytab = new File(workDir, "foo.keytab");
 kdc.createPrincipal(keytab, principal);
@@ -118,6 +142,9 @@ public class TestUGILoginFromKeytab {
 final LoginContext login1 = user.getLogin();
 Assert.assertNotNull(login1);
 
+// Sleep for 2 secs to have a difference between first and second login
+Thread.sleep(2000);
+
 ugi.reloginFromKeytab();
 final long secondLogin = user.getLastLogin();
 final LoginContext login2 = user.getLogin();





[hadoop] branch branch-3.3 updated: HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 97dd1cb  HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)
97dd1cb is described below

commit 97dd1cb57e3754c50d39ffa323c9a55e696c4b16
Author: sguggilam 
AuthorDate: Tue Aug 4 10:30:06 2020 -0700

HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  |  9 +++
 .../hadoop/security/TestUGILoginFromKeytab.java| 29 +-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 8c84a8d..d37da72 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -531,6 +531,14 @@ public class UserGroupInformation {
   }
 
   /**
+   * Set the last login time for logged in user
+   * @param loginTime the number of milliseconds since the beginning of time
+   */
+  private void setLastLogin(long loginTime) {
+user.setLastLogin(loginTime);
+  }
+
+  /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
*
@@ -1946,6 +1954,7 @@ public class UserGroupInformation {
   if (subject == null) {
 params.put(LoginParam.PRINCIPAL, ugi.getUserName());
 ugi.setLogin(login);
+ugi.setLastLogin(Time.now());
   }
   return ugi;
 } catch (LoginException le) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 8ede451..d233234 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -102,11 +103,34 @@ public class TestUGILoginFromKeytab {
   }
 
   /**
+   * Login from keytab using the MiniKDC.
+   */
+  @Test
+  public void testUGILoginFromKeytab() throws Exception {
+long beforeLogin = Time.now();
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+ugi.isFromKeytab());
+
+User user = getUser(ugi.getSubject());
+Assert.assertNotNull(user.getLogin());
+ 
+Assert.assertTrue("User login time is less than before login time, "
++ "beforeLoginTime:" + beforeLogin + " userLoginTime:" + user.getLastLogin(),
+user.getLastLogin() > beforeLogin);
+  }
+
+  /**
* Login from keytab using the MiniKDC and verify the UGI can successfully
   * relogin from keytab as well. This will catch regressions like HADOOP-10786.
*/
   @Test
-  public void testUGILoginFromKeytab() throws Exception {
+  public void testUGIReLoginFromKeytab() throws Exception {
 String principal = "foo";
 File keytab = new File(workDir, "foo.keytab");
 kdc.createPrincipal(keytab, principal);
@@ -122,6 +146,9 @@ public class TestUGILoginFromKeytab {
 final LoginContext login1 = user.getLogin();
 Assert.assertNotNull(login1);
 
+// Sleep for 2 secs to have a difference between first and second login
+Thread.sleep(2000);
+
 ugi.reloginFromKeytab();
 final long secondLogin = user.getLastLogin();
 final LoginContext login2 = user.getLogin();





[hadoop] branch trunk updated: HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

2020-08-04 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 2986058  HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)
2986058 is described below

commit 2986058e7f6fa1b5aab259c64a745b2eedb2febe
Author: sguggilam 
AuthorDate: Tue Aug 4 10:30:06 2020 -0700

HADOOP-17164. UGI loginUserFromKeytab doesn't set the last login time (#2178)

Contributed by Sandeep Guggilam.

Signed-off-by: Mingliang Liu 
Signed-off-by: Steve Loughran 
---
 .../hadoop/security/UserGroupInformation.java  |  9 +++
 .../hadoop/security/TestUGILoginFromKeytab.java| 29 +-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
index 5269e5a..91b64ad 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
@@ -530,6 +530,14 @@ public class UserGroupInformation {
   }
 
   /**
+   * Set the last login time for logged in user
+   * @param loginTime the number of milliseconds since the beginning of time
+   */
+  private void setLastLogin(long loginTime) {
+user.setLastLogin(loginTime);
+  }
+
+  /**
* Create a UserGroupInformation for the given subject.
* This does not change the subject or acquire new credentials.
*
@@ -1968,6 +1976,7 @@ public class UserGroupInformation {
   if (subject == null) {
 params.put(LoginParam.PRINCIPAL, ugi.getUserName());
 ugi.setLogin(login);
+ugi.setLastLogin(Time.now());
   }
   return ugi;
 } catch (LoginException le) {
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
index 8ede451..d233234 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.Time;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -102,11 +103,34 @@ public class TestUGILoginFromKeytab {
   }
 
   /**
+   * Login from keytab using the MiniKDC.
+   */
+  @Test
+  public void testUGILoginFromKeytab() throws Exception {
+long beforeLogin = Time.now();
+String principal = "foo";
+File keytab = new File(workDir, "foo.keytab");
+kdc.createPrincipal(keytab, principal);
+
+UserGroupInformation.loginUserFromKeytab(principal, keytab.getPath());
+UserGroupInformation ugi = UserGroupInformation.getLoginUser();
+Assert.assertTrue("UGI should be configured to login from keytab",
+ugi.isFromKeytab());
+
+User user = getUser(ugi.getSubject());
+Assert.assertNotNull(user.getLogin());
+ 
+Assert.assertTrue("User login time is less than before login time, "
++ "beforeLoginTime:" + beforeLogin + " userLoginTime:" + user.getLastLogin(),
+user.getLastLogin() > beforeLogin);
+  }
+
+  /**
* Login from keytab using the MiniKDC and verify the UGI can successfully
   * relogin from keytab as well. This will catch regressions like HADOOP-10786.
*/
   @Test
-  public void testUGILoginFromKeytab() throws Exception {
+  public void testUGIReLoginFromKeytab() throws Exception {
 String principal = "foo";
 File keytab = new File(workDir, "foo.keytab");
 kdc.createPrincipal(keytab, principal);
@@ -122,6 +146,9 @@ public class TestUGILoginFromKeytab {
 final LoginContext login1 = user.getLogin();
 Assert.assertNotNull(login1);
 
+// Sleep for 2 secs to have a difference between first and second login
+Thread.sleep(2000);
+
 ugi.reloginFromKeytab();
 final long secondLogin = user.getLastLogin();
 final LoginContext login2 = user.getLogin();

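The HADOOP-17164 patches above record the moment a keytab login succeeds so that later code can see how long ago the user last authenticated. A minimal sketch of that bookkeeping, using hypothetical names rather than the Hadoop API (the real code stores the value on the `User` principal via `setLastLogin` and `Time.now()`):

```java
// Sketch only (hypothetical class, not UserGroupInformation): tracking the
// last successful login time so a caller can rate-limit relogin attempts.
public class ReloginClock {
    private long lastLoginMillis;  // 0 until the first login succeeds

    /** Record a successful login at the given wall-clock time. */
    public void recordLogin(long nowMillis) {
        lastLoginMillis = nowMillis;
    }

    public long getLastLogin() {
        return lastLoginMillis;
    }

    /** True when at least minIntervalMillis have passed since the last login. */
    public boolean shouldRelogin(long nowMillis, long minIntervalMillis) {
        return nowMillis - lastLoginMillis >= minIntervalMillis;
    }

    public static void main(String[] args) {
        ReloginClock clock = new ReloginClock();
        clock.recordLogin(1_000L);
        // Only 500 ms have elapsed, below the 600 ms minimum interval.
        System.out.println(clock.shouldRelogin(1_500L, 600L));  // prints false
        // 1000 ms have elapsed, so a relogin is allowed.
        System.out.println(clock.shouldRelogin(2_000L, 600L));  // prints true
    }
}
```

Before the fix, a keytab login left this timestamp unset (zero), which made "time since last login" checks behave as if the login were arbitrarily old.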




[hadoop] branch branch-2.9 updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-10 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new ab458b0  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
ab458b0 is described below

commit ab458b01081d884bd0a17f19f300fdbde98f2824
Author: Mingliang Liu 
AuthorDate: Wed Jun 10 09:03:57 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 24 ++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 4c73eae..f83afa0 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -929,7 +929,7 @@ public class ViewFileSystem extends FileSystem {
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getGroupNames()[0],
+ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
 }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index 73e43e1..62d3117 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -1003,4 +1004,27 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+final UserGroupInformation userUgi = UserGroupInformation
+.createUserForTesting("u...@hadoop.com", new String[] {});
+userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+  @Override
+  public Object run() throws Exception {
+String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+URI viewFsUri =
+new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+FileSystem vfs = FileSystem.get(viewFsUri, conf);
+try {
+  vfs.listStatus(new Path(viewFsUri.toString() + "internalDir"));
+  Assert.fail("Exception should be thrown.");
+} catch (IOException e) {
+  GenericTestUtils
+  .assertExceptionContains("There is no primary group for UGI", e);
+}
+return null;
+  }
+});
+  }
 }
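The one-line fix above replaces a bare `ugi.getGroupNames()[0]`, which throws `ArrayIndexOutOfBoundsException` when the user has no groups, with `ugi.getPrimaryGroupName()`, which instead fails with a descriptive `IOException`. A minimal stand-alone illustration of the two failure modes (these are simplified stand-ins, not the real UGI methods):

```java
import java.io.IOException;

public class PrimaryGroupSketch {
    // What the old ViewFileSystem code effectively did: an unchecked
    // array access that blows up for a group-less user.
    static String firstGroupUnchecked(String[] groups) {
        return groups[0];
    }

    // A guarded accessor in the spirit of UGI#getPrimaryGroupName,
    // which the fix switches to (simplified stand-in, not the real
    // Hadoop implementation).
    static String primaryGroup(String[] groups) throws IOException {
        if (groups.length == 0) {
            throw new IOException("There is no primary group for UGI");
        }
        return groups[0];
    }

    public static void main(String[] args) {
        String[] noGroups = {};
        try {
            firstGroupUnchecked(noGroups);
            throw new AssertionError("expected ArrayIndexOutOfBoundsException");
        } catch (ArrayIndexOutOfBoundsException expected) {
            // the pre-fix failure mode: an unhelpful runtime exception
        }
        try {
            primaryGroup(noGroups);
            throw new AssertionError("expected IOException");
        } catch (IOException expected) {
            // the post-fix failure mode: a descriptive, checked error
            System.out.println(expected.getMessage());
        }
    }
}
```

This is also exactly what `testListStatusWithNoGroups` asserts: a user created with an empty group array should surface the "There is no primary group for UGI" message rather than an array-index crash.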





[hadoop] branch branch-2.10 updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-10 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new 8e01bd3  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
8e01bd3 is described below

commit 8e01bd317b7aede1f0bf6c9a55f8ee435da0fb84
Author: Mingliang Liu 
AuthorDate: Wed Jun 10 09:03:57 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 24 ++
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index c248a27..7960048 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1018,7 +1018,7 @@ public class ViewFileSystem extends FileSystem {
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getGroupNames()[0],
+ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
 }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index 73e43e1..62d3117 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -1003,4 +1004,27 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+final UserGroupInformation userUgi = UserGroupInformation
+.createUserForTesting("u...@hadoop.com", new String[] {});
+userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+  @Override
+  public Object run() throws Exception {
+String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+URI viewFsUri =
+new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+FileSystem vfs = FileSystem.get(viewFsUri, conf);
+try {
+  vfs.listStatus(new Path(viewFsUri.toString() + "internalDir"));
+  Assert.fail("Exception should be thrown.");
+} catch (IOException e) {
+  GenericTestUtils
+  .assertExceptionContains("There is no primary group for UGI", e);
+}
+return null;
+  }
+});
+  }
 }





[hadoop] branch branch-2.9 updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 5805a76  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
5805a76 is described below

commit 5805a766f09d843f0c6bb129fb888f3d1bc9a0af
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 7edef14..12b39bb 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -59,6 +59,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -508,10 +509,9 @@ public class FileContext {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }
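The change above swaps a prose comment ("This method is needed for tests") for Guava's `@VisibleForTesting` marker, which documents the same intent in a machine-readable way but has no runtime effect. A minimal local stand-in showing the pattern (this annotation is a hypothetical sketch, not the real `com.google.common.annotations.VisibleForTesting`):

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class VisibleForTestingSketch {
    // Local stand-in for Guava's @VisibleForTesting: a pure marker,
    // used only to signal that a member's visibility is wider than the
    // production code needs, for the benefit of tests and code review.
    @Documented
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.FIELD,
             ElementType.TYPE, ElementType.CONSTRUCTOR})
    @interface VisibleForTesting {}

    private final String defaultFs = "file:///";

    // Package-private accessor exposed only so tests can reach it,
    // mirroring the role of FileContext#getDefaultFileSystem above.
    @VisibleForTesting
    String getDefaultFs() {
        return defaultFs;
    }

    public static void main(String[] args) {
        System.out.println(new VisibleForTestingSketch().getDefaultFs());
    }
}
```

Static-analysis tools (e.g. Error Prone) can then flag production code that calls members carrying the marker, which a free-text comment cannot enable.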





[hadoop] branch branch-2.10 updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new dcc6d63  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
dcc6d63 is described below

commit dcc6d63828bd97314d7357e04acffd0f2fd62cc9
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 7edef14..12b39bb 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -59,6 +59,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -508,10 +509,9 @@ public class FileContext {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }





[hadoop] branch branch-3.1 updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new ac0e928  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
ac0e928 is described below

commit ac0e92856482f03e097937f958d46f5b3841f252
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 5b8dabe..2148360 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -60,6 +60,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -499,10 +500,9 @@ public class FileContext {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }





[hadoop] branch branch-3.2 updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 6110646  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
6110646 is described below

commit 61106467e38bd4489d5ba448d3ce3fa1cee6f39c
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 0b3889b..25b0037 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -63,6 +63,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -502,10 +503,9 @@ public class FileContext {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }





[hadoop] branch branch-3.3 updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new fa723aa  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
fa723aa is described below

commit fa723aa7f8bddc0c4a7300de9c2f0f7ce9ad9b42
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index df93e89..364777f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -66,6 +66,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -507,10 +508,9 @@ public class FileContext implements PathCapabilities {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }





[hadoop] branch trunk updated: HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 0c25131  HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
0c25131 is described below

commit 0c25131ca430fcd6bf0f2c77dc01f027b92a9f4f
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 11:28:36 2020 -0700

HADOOP-17047. TODO comment exist in trunk while related issue HADOOP-6223 is already fixed. Contributed by Rungroj Maipradit
---
 .../src/main/java/org/apache/hadoop/fs/FileContext.java | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index ba0064f..e9d8ea4 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -66,6 +66,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.ShutdownHookManager;
 
 import com.google.common.base.Preconditions;
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.htrace.core.Tracer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -507,10 +508,9 @@ public class FileContext implements PathCapabilities {
 return getFileContext(FsConstants.LOCAL_FS_URI, aConf);
   }
 
-  /* This method is needed for tests. */
+  @VisibleForTesting
   @InterfaceAudience.Private
-  @InterfaceStability.Unstable /* return type will change to AFS once
-  HADOOP-6223 is completed */
+  @InterfaceStability.Unstable
   public AbstractFileSystem getDefaultFileSystem() {
 return defaultFS;
   }





[hadoop] branch branch-3.1 updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 101ce83  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
101ce83 is described below

commit 101ce83f37b1e24feae99d1069d3369519c428de
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 10:11:30 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index f052743..a13b6ea 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1182,7 +1182,7 @@ public class ViewFileSystem extends FileSystem {
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getGroupNames()[0],
+ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
 }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index d72ab74..f876390 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -68,6 +69,7 @@ import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -1003,4 +1005,24 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+final UserGroupInformation userUgi = UserGroupInformation
+.createUserForTesting("u...@hadoop.com", new String[] {});
+userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+  @Override
+  public Object run() throws Exception {
+String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+URI viewFsUri =
+new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+FileSystem vfs = FileSystem.get(viewFsUri, conf);
+LambdaTestUtils.intercept(IOException.class,
+"There is no primary group for UGI", () -> vfs
+.listStatus(new Path(viewFsUri.toString() + "internalDir")));
+return null;
+  }
+});
+  }
+
 }
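The branch-3.x variant of this test uses `LambdaTestUtils.intercept` in place of the older try/`Assert.fail`/catch idiom. A minimal stand-in sketching how such an intercept helper works, with no Hadoop dependencies (this is a hypothetical helper, not the real `org.apache.hadoop.test.LambdaTestUtils`):

```java
import java.util.concurrent.Callable;

public class InterceptSketch {
    // Run the callable and assert it throws the expected exception type
    // whose message contains the given fragment; return the exception so
    // the caller can make further assertions on it.
    static <E extends Throwable> E intercept(Class<E> clazz, String contained,
            Callable<?> eval) throws Exception {
        try {
            eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t) && t.getMessage() != null
                    && t.getMessage().contains(contained)) {
                return clazz.cast(t);
            }
            // Wrong type or wrong message: surface it as a test failure.
            throw new AssertionError("unexpected exception: " + t, t);
        }
        throw new AssertionError("expected " + clazz.getSimpleName()
                + " was not thrown");
    }

    public static void main(String[] args) throws Exception {
        IllegalStateException e = intercept(IllegalStateException.class,
                "no primary group",
                () -> {
                    throw new IllegalStateException(
                        "There is no primary group for UGI");
                });
        System.out.println("caught: " + e.getMessage());
    }
}
```

Compared with try/fail/catch, the lambda form cannot accidentally pass when no exception is thrown inside a forgotten `fail()`, and it keeps the assertion on one expression.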





[hadoop] branch branch-3.2 updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0eee1c8  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
0eee1c8 is described below

commit 0eee1c88ecd7b59e2c4f3bbb56207cd6e90e26ca
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 10:11:30 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index f052743..a13b6ea 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1182,7 +1182,7 @@ public class ViewFileSystem extends FileSystem {
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getGroupNames()[0],
+ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
 }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index d72ab74..f876390 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -68,6 +69,7 @@ import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -1003,4 +1005,24 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+final UserGroupInformation userUgi = UserGroupInformation
+.createUserForTesting("u...@hadoop.com", new String[] {});
+userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+  @Override
+  public Object run() throws Exception {
+String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+URI viewFsUri =
+new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+FileSystem vfs = FileSystem.get(viewFsUri, conf);
+LambdaTestUtils.intercept(IOException.class,
+"There is no primary group for UGI", () -> vfs
+.listStatus(new Path(viewFsUri.toString() + "internalDir")));
+return null;
+  }
+});
+  }
+
 }





[hadoop] branch branch-3.3 updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 543075b  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
543075b is described below

commit 543075b84568ffd3b664d86843d3de9098caf448
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 10:11:30 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 115fc03..0acb04d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1221,7 +1221,7 @@ public class ViewFileSystem extends FileSystem {
 } else {
   result[i++] = new FileStatus(0, true, 0, 0,
 creationTime, creationTime, PERMISSION_555,
-ugi.getShortUserName(), ugi.getGroupNames()[0],
+ugi.getShortUserName(), ugi.getPrimaryGroupName(),
 new Path(inode.fullPath).makeQualified(
 myUri, null));
 }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index d96cdb1..90722aa 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -56,6 +56,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -69,6 +70,7 @@ import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -1001,4 +1003,24 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+final UserGroupInformation userUgi = UserGroupInformation
+.createUserForTesting("u...@hadoop.com", new String[] {});
+userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+  @Override
+  public Object run() throws Exception {
+String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+URI viewFsUri =
+new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+FileSystem vfs = FileSystem.get(viewFsUri, conf);
+LambdaTestUtils.intercept(IOException.class,
+"There is no primary group for UGI", () -> vfs
+.listStatus(new Path(viewFsUri.toString() + "internalDir")));
+return null;
+  }
+});
+  }
+
 }





[hadoop] branch trunk updated: HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina

2020-06-08 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9f242c2  HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
9f242c2 is described below

commit 9f242c215e1969ffec2fa2e24e65edc712097641
Author: Mingliang Liu 
AuthorDate: Mon Jun 8 10:11:30 2020 -0700

HADOOP-17059. ArrayIndexOfboundsException in ViewFileSystem#listStatus. Contributed by hemanthboyina
---
 .../apache/hadoop/fs/viewfs/ViewFileSystem.java|  2 +-
 .../apache/hadoop/fs/viewfs/ViewFsBaseTest.java| 22 ++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
index 2711bff..56d0fc5 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
@@ -1226,7 +1226,7 @@ public class ViewFileSystem extends FileSystem {
         } else {
           result[i++] = new FileStatus(0, true, 0, 0,
               creationTime, creationTime, PERMISSION_555,
-              ugi.getShortUserName(), ugi.getGroupNames()[0],
+              ugi.getShortUserName(), ugi.getPrimaryGroupName(),
               new Path(inode.fullPath).makeQualified(
                   myUri, null));
         }
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
index d96cdb1..90722aa 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
@@ -56,6 +56,7 @@ import org.apache.hadoop.fs.LocatedFileStatus;
 import org.apache.hadoop.fs.RemoteIterator;
 import org.apache.hadoop.fs.FileContextTestHelper.fileType;
 import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FsConstants;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.UnresolvedLinkException;
@@ -69,6 +70,7 @@ import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -1001,4 +1003,24 @@ abstract public class ViewFsBaseTest {
   return mockFs;
 }
   }
+
+  @Test
+  public void testListStatusWithNoGroups() throws Exception {
+    final UserGroupInformation userUgi = UserGroupInformation
+        .createUserForTesting("u...@hadoop.com", new String[] {});
+    userUgi.doAs(new PrivilegedExceptionAction<Object>() {
+      @Override
+      public Object run() throws Exception {
+        String clusterName = Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE;
+        URI viewFsUri =
+            new URI(FsConstants.VIEWFS_SCHEME, clusterName, "/", null, null);
+        FileSystem vfs = FileSystem.get(viewFsUri, conf);
+        LambdaTestUtils.intercept(IOException.class,
+            "There is no primary group for UGI", () -> vfs
+                .listStatus(new Path(viewFsUri.toString() + "internalDir")));
+        return null;
+      }
+    });
+  }
+
 }
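The one-line change in ViewFileSystem is the whole fix: `ugi.getGroupNames()[0]` indexes an array that is empty for a user with no groups, so `listStatus()` dies with an unchecked ArrayIndexOutOfBoundsException, while `getPrimaryGroupName()` reports the condition as a checked IOException. A toy stand-in for UserGroupInformation (names here are illustrative; only the exception behavior is the point) contrasts the two accessors:

```java
import java.io.IOException;

// Toy model contrasting the pre-fix and post-fix accessors from the hunk
// above. UgiModel is a hypothetical stand-in, not Hadoop's class.
public class PrimaryGroupDemo {

  static class UgiModel {
    private final String[] groups;

    UgiModel(String... groups) {
      this.groups = groups;
    }

    // Pre-fix shape: blind indexing, so an empty group list triggers an
    // unchecked ArrayIndexOutOfBoundsException deep inside listStatus().
    String firstGroupUnsafe() {
      return groups[0];
    }

    // Post-fix shape: the empty group list surfaces as a checked
    // IOException the caller can catch and turn into a clean error.
    String primaryGroupName() throws IOException {
      if (groups.length == 0) {
        throw new IOException("There is no primary group for UGI");
      }
      return groups[0];
    }
  }

  public static void main(String[] args) {
    UgiModel noGroups = new UgiModel();
    try {
      noGroups.firstGroupUnsafe();
    } catch (ArrayIndexOutOfBoundsException e) {
      System.out.println("unchecked AIOOBE escapes the caller");
    }
    try {
      noGroups.primaryGroupName();
    } catch (IOException e) {
      System.out.println("checked: " + e.getMessage());
    }
  }
}
```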





[hadoop] branch branch-2.9 updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.9
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.9 by this push:
 new 875e9a1  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
875e9a1 is described below

commit 875e9a1310beff755b4548c5c2f1befba12ff4a3
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 4697320..00b3f1a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -532,6 +533,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index 1c56c60..cbd0cb1 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);
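The motivation for the two-line NetUtils change: `SocketChannel.connect()` reports an address that never resolved with `UnresolvedAddressException`, which is unchecked (it extends IllegalArgumentException), so IOException-based retry and failover logic in clients never sees it and the client aborts. A minimal sketch of the same wrapping idea, outside Hadoop (the `connect` helper below is illustrative, not the actual NetUtils code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.nio.channels.SocketChannel;
import java.nio.channels.UnresolvedAddressException;

// Sketch of the NetUtils fix: translate the unchecked
// UnresolvedAddressException into a checked UnknownHostException so
// IOException-based error handling can deal with it.
public class UnresolvedDemo {

  static void connect(SocketChannel ch, InetSocketAddress addr)
      throws IOException {
    try {
      ch.connect(addr);
    } catch (UnresolvedAddressException uae) {
      // Surface a checked exception carrying the offending host name.
      throw new UnknownHostException(addr.getHostString());
    }
  }

  public static void main(String[] args) throws IOException {
    // Unchecked: compiles without any catch clause at call sites.
    System.out.println(RuntimeException.class
        .isAssignableFrom(UnresolvedAddressException.class));

    // createUnresolved() skips DNS entirely, so this is deterministic.
    InetSocketAddress bad =
        InetSocketAddress.createUnresolved("invalid-test-host", 8020);
    try (SocketChannel ch = SocketChannel.open()) {
      connect(ch, bad);
    } catch (UnknownHostException uhe) {
      System.out.println("caught checked UnknownHostException: "
          + uhe.getMessage());
    }
  }
}
```

Wrapping in `UnknownHostException` rather than a generic IOException also keeps the existing host-resolution error paths (and their retry policies) working unchanged.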





[hadoop] branch branch-2.10 updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-2.10
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2.10 by this push:
 new e495736  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
e495736 is described below

commit e49573631c6acd7b6a3d1300a600f22f4b349c8e
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 4697320..00b3f1a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -532,6 +533,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index 1c56c60..cbd0cb1 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);





[hadoop] branch branch-3.1 updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 2c40f20  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
2c40f20 is described below

commit 2c40f20debd3a4c02dab3a6866cb403913096096
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 0f9cfc3..1b54fce 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -532,6 +533,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index 30176f2..d80a249 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);





[hadoop] branch branch-3.2 updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 462a68a  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
462a68a is described below

commit 462a68a69f337d680732ccc9b89dce4fce294f2f
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 6fa0a38..198c26c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -534,6 +535,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index 62bd1b1..fb91ff6 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);





[hadoop] branch branch-3.3 updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
 new 910d88e  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
910d88e is described below

commit 910d88eeed185073d0c841001735d51021ccdcb9
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index d98254c..77cbf3b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -534,6 +535,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index b11b1e9..7628493 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);





[hadoop] branch trunk updated: HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

2020-06-01 Thread liuml07
This is an automated email from the ASF dual-hosted git repository.

liuml07 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9fe4c37  HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)
9fe4c37 is described below

commit 9fe4c37c25b256d31202854066eb7e15c6335b9f
Author: Dhiraj 
AuthorDate: Mon Jun 1 10:49:17 2020 -0700

HADOOP-17052. NetUtils.connect() throws unchecked exception (UnresolvedAddressException) causing clients to abort (#2036)

Contributed by Dhiraj Hegde.

Signed-off-by: Mingliang Liu 
---
 .../main/java/org/apache/hadoop/net/NetUtils.java|  3 +++
 .../java/org/apache/hadoop/net/TestNetUtils.java | 20 +++-
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
index 478bd41..c5a5b11 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
@@ -37,6 +37,7 @@ import java.net.URISyntaxException;
 import java.net.UnknownHostException;
 import java.net.ConnectException;
 import java.nio.channels.SocketChannel;
+import java.nio.channels.UnresolvedAddressException;
 import java.util.Map.Entry;
 import java.util.regex.Pattern;
 import java.util.*;
@@ -534,6 +535,8 @@ public class NetUtils {
       }
     } catch (SocketTimeoutException ste) {
       throw new ConnectTimeoutException(ste.getMessage());
+    } catch (UnresolvedAddressException uae) {
+      throw new UnknownHostException(uae.getMessage());
     }
 
 // There is a very rare case allowed by the TCP specification, such that
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
index b11b1e9..7628493 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java
@@ -95,7 +95,25 @@ public class TestNetUtils {
   assertInException(se, "Invalid argument");
 }
   }
-  
+
+  @Test
+  public void testInvalidAddress() throws Throwable {
+    Configuration conf = new Configuration();
+
+    Socket socket = NetUtils.getDefaultSocketFactory(conf)
+        .createSocket();
+    socket.bind(new InetSocketAddress("127.0.0.1", 0));
+    try {
+      NetUtils.connect(socket,
+          new InetSocketAddress("invalid-test-host",
+              0), 2);
+      socket.close();
+      fail("Should not have connected");
+    } catch (UnknownHostException uhe) {
+      LOG.info("Got exception: ", uhe);
+    }
+  }
+
   @Test
   public void testSocketReadTimeoutWithChannel() throws Exception {
 doSocketReadTimeoutTest(true);




