[hadoop] branch branch-3.1 updated: HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.

2019-01-11 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new 6c846cf  HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.
6c846cf is described below

commit 6c846cfbf23e64243fa774485f685aecbcbd782a
Author: Surendra Singh Lilhore 
AuthorDate: Fri Jan 11 14:36:55 2019 +0530

HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.

(cherry picked from commit 9aeaaa0479ea6b4c4135214722c8c7c39fb17d75)
---
 .../src/main/webapps/hdfs/explorer.html|  4 ++--
 .../hadoop-hdfs/src/main/webapps/hdfs/explorer.js  | 28 ++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
index 9ddb597..88cf183 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
@@ -184,11 +184,11 @@
   
 
+  title="Create Directory" id="btn-create-dir">
 
 
 
+  data-target="#modal-upload-file" title="Upload Files" id="btn-upload-files">
 
 
 0) {
+ $('#modal-upload-file-button').prop('disabled', false);
+}
+  else {
+$('#modal-upload-file-button').prop('disabled', true);
+}
+  });
+
+  $('#new_directory').on('keyup keypress blur change',function() {
+  if($('#new_directory').val() == '' ||  $('#new_directory').val() == null) {
+ $('#btn-create-directory-send').prop('disabled', true);
+}
+  else {
+ $('#btn-create-directory-send').prop('disabled', false);
+}
+  });
+
   $('#modal-upload-file-button').click(function() {
 $(this).prop('disabled', true);
 $(this).button('complete');


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch branch-3.2 updated: HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.

2019-01-11 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 5d2d0e7  HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.
5d2d0e7 is described below

commit 5d2d0e74d50a811bf6edc6e846414b9ea16cf290
Author: Surendra Singh Lilhore 
AuthorDate: Fri Jan 11 14:36:55 2019 +0530

HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.

(cherry picked from commit 9aeaaa0479ea6b4c4135214722c8c7c39fb17d75)
---
 .../src/main/webapps/hdfs/explorer.html|  4 ++--
 .../hadoop-hdfs/src/main/webapps/hdfs/explorer.js  | 28 ++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
index 9ddb597..88cf183 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
@@ -184,11 +184,11 @@
   
 
+  title="Create Directory" id="btn-create-dir">
 
 
 
+  data-target="#modal-upload-file" title="Upload Files" id="btn-upload-files">
 
 
 0) {
+ $('#modal-upload-file-button').prop('disabled', false);
+}
+  else {
+$('#modal-upload-file-button').prop('disabled', true);
+}
+  });
+
+  $('#new_directory').on('keyup keypress blur change',function() {
+  if($('#new_directory').val() == '' ||  $('#new_directory').val() == null) {
+ $('#btn-create-directory-send').prop('disabled', true);
+}
+  else {
+ $('#btn-create-directory-send').prop('disabled', false);
+}
+  });
+
   $('#modal-upload-file-button').click(function() {
 $(this).prop('disabled', true);
 $(this).button('complete');





[hadoop] branch trunk updated: HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.

2019-01-11 Thread surendralilhore
This is an automated email from the ASF dual-hosted git repository.

surendralilhore pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 9aeaaa0  HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.
9aeaaa0 is described below

commit 9aeaaa0479ea6b4c4135214722c8c7c39fb17d75
Author: Surendra Singh Lilhore 
AuthorDate: Fri Jan 11 14:36:55 2019 +0530

HDFS-14198. Upload and Create button doesn't get enabled after getting reset. Contributed by Ayush Saxena.
---
 .../src/main/webapps/hdfs/explorer.html|  4 ++--
 .../hadoop-hdfs/src/main/webapps/hdfs/explorer.js  | 28 ++
 2 files changed, 30 insertions(+), 2 deletions(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
index 9ddb597..88cf183 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html
@@ -184,11 +184,11 @@
   
 
+  title="Create Directory" id="btn-create-dir">
 
 
 
+  data-target="#modal-upload-file" title="Upload Files" id="btn-upload-files">
 
 
 0) {
+ $('#modal-upload-file-button').prop('disabled', false);
+}
+  else {
+$('#modal-upload-file-button').prop('disabled', true);
+}
+  });
+
+  $('#new_directory').on('keyup keypress blur change',function() {
+  if($('#new_directory').val() == '' ||  $('#new_directory').val() == null) {
+ $('#btn-create-directory-send').prop('disabled', true);
+}
+  else {
+ $('#btn-create-directory-send').prop('disabled', false);
+}
+  });
+
   $('#modal-upload-file-button').click(function() {
 $(this).prop('disabled', true);
 $(this).button('complete');
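
The fix above boils down to an emptiness check driving each button's disabled property. A framework-free restatement of that predicate, used here only for illustration (Java; the shipped fix is the jQuery shown in the diff):

```java
public class ButtonState {
    /**
     * Mirrors the explorer.js check: the "Create Directory" send button is
     * disabled exactly when the #new_directory value is null or empty.
     */
    static boolean createSendDisabled(String newDirectoryValue) {
        return newDirectoryValue == null || newDirectoryValue.isEmpty();
    }

    /** Mirrors the upload check: enabled once at least one file is chosen. */
    static boolean uploadDisabled(int selectedFileCount) {
        return selectedFileCount <= 0;
    }

    public static void main(String[] args) {
        System.out.println(createSendDisabled(""));       // true
        System.out.println(createSendDisabled("newdir")); // false
        System.out.println(uploadDisabled(2));            // false
    }
}
```

Because the handler runs on keyup, keypress, blur, and change, the predicate is re-evaluated on every edit, so the button state can never go stale after the dialog is reset.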





[hadoop] branch branch-3.2 updated: HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.

2019-01-11 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 8b5fbe7  HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.
8b5fbe7 is described below

commit 8b5fbe7a125f9d08cbb9f5e5ae28dc984e0d73d8
Author: Da Zhou 
AuthorDate: Fri Jan 11 11:12:39 2019 +

HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.

Contributed by Da Zhou.
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystemStore.java | 20 
 1 file changed, 20 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 85f..2a058c7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -62,7 +62,6 @@ import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidAbfsRestOperati
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidFileSystemPropertyException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriAuthorityException;
 import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
-import org.apache.hadoop.fs.azurebfs.contracts.exceptions.TimeoutException;
 import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
@@ -90,7 +89,6 @@ import org.slf4j.LoggerFactory;
 
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SUPER_USER;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
-import static org.apache.hadoop.util.Time.now;
 
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure 
Storage.
@@ -108,8 +106,6 @@ public class AzureBlobFileSystemStore {
   private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 'GMT'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
-  private static final int DELETE_DIRECTORY_TIMEOUT_MILISECONDS = 18;
-  private static final int RENAME_TIMEOUT_MILISECONDS = 18;
 
   private final AbfsConfiguration abfsConfiguration;
   private final Set azureAtomicRenameDirSet;
@@ -422,17 +418,8 @@ public class AzureBlobFileSystemStore {
 destination);
 
 String continuation = null;
-long deadline = now() + RENAME_TIMEOUT_MILISECONDS;
 
 do {
-  if (now() > deadline) {
-LOG.debug("Rename {} to {} timed out.",
-source,
-destination);
-
-throw new TimeoutException("Rename timed out.");
-  }
-
   AbfsRestOperation op = client.renamePath(AbfsHttpConstants.FORWARD_SLASH 
+ getRelativePath(source),
   AbfsHttpConstants.FORWARD_SLASH + getRelativePath(destination), 
continuation);
   continuation = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
@@ -448,15 +435,8 @@ public class AzureBlobFileSystemStore {
 String.valueOf(recursive));
 
 String continuation = null;
-long deadline = now() + DELETE_DIRECTORY_TIMEOUT_MILISECONDS;
 
 do {
-  if (now() > deadline) {
-LOG.debug("Delete directory {} timed out.", path);
-
-throw new TimeoutException("Delete directory timed out.");
-  }
-
   AbfsRestOperation op = client.deletePath(
   AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path), recursive, 
continuation);
   continuation = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);





[hadoop] branch trunk updated: HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.

2019-01-11 Thread stevel
This is an automated email from the ASF dual-hosted git repository.

stevel pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new dddad98  HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.
dddad98 is described below

commit dddad985d7ff54448c871515c1e8d1773dfbc3df
Author: Da Zhou 
AuthorDate: Fri Jan 11 11:13:41 2019 +

HADOOP-15975. ABFS: remove timeout check for DELETE and RENAME.

Contributed by Da Zhou.

(cherry picked from commit 8b5fbe7a125f9d08cbb9f5e5ae28dc984e0d73d8)
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystemStore.java | 20 
 1 file changed, 20 deletions(-)

diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 85f..2a058c7 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -62,7 +62,6 @@ import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidAbfsRestOperati
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidFileSystemPropertyException;
 import 
org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriAuthorityException;
 import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidUriException;
-import org.apache.hadoop.fs.azurebfs.contracts.exceptions.TimeoutException;
 import org.apache.hadoop.fs.azurebfs.contracts.services.AzureServiceErrorCode;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema;
 import org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema;
@@ -90,7 +89,6 @@ import org.slf4j.LoggerFactory;
 
 import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SUPER_USER;
 import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
-import static org.apache.hadoop.util.Time.now;
 
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure 
Storage.
@@ -108,8 +106,6 @@ public class AzureBlobFileSystemStore {
   private static final String DATE_TIME_PATTERN = "E, dd MMM  HH:mm:ss 'GMT'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
-  private static final int DELETE_DIRECTORY_TIMEOUT_MILISECONDS = 18;
-  private static final int RENAME_TIMEOUT_MILISECONDS = 18;
 
   private final AbfsConfiguration abfsConfiguration;
   private final Set azureAtomicRenameDirSet;
@@ -422,17 +418,8 @@ public class AzureBlobFileSystemStore {
 destination);
 
 String continuation = null;
-long deadline = now() + RENAME_TIMEOUT_MILISECONDS;
 
 do {
-  if (now() > deadline) {
-LOG.debug("Rename {} to {} timed out.",
-source,
-destination);
-
-throw new TimeoutException("Rename timed out.");
-  }
-
   AbfsRestOperation op = client.renamePath(AbfsHttpConstants.FORWARD_SLASH 
+ getRelativePath(source),
   AbfsHttpConstants.FORWARD_SLASH + getRelativePath(destination), 
continuation);
   continuation = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
@@ -448,15 +435,8 @@ public class AzureBlobFileSystemStore {
 String.valueOf(recursive));
 
 String continuation = null;
-long deadline = now() + DELETE_DIRECTORY_TIMEOUT_MILISECONDS;
 
 do {
-  if (now() > deadline) {
-LOG.debug("Delete directory {} timed out.", path);
-
-throw new TimeoutException("Delete directory timed out.");
-  }
-
   AbfsRestOperation op = client.deletePath(
   AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path), recursive, 
continuation);
   continuation = 
op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
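
With the client-side deadline gone, both rename and delete reduce to a plain continuation-token loop that runs until the service stops returning a token. A minimal sketch of that loop shape, with invented names (`Page`, `drain`, the fake fetch function) standing in for the ABFS client types rather than the real API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

/** Illustrative page of results plus an opaque continuation token. */
class Page {
    final List<String> entries;
    final String continuation; // null when the listing is complete
    Page(List<String> entries, String continuation) {
        this.entries = entries;
        this.continuation = continuation;
    }
}

public class ContinuationLoop {
    /**
     * Drains a paginated operation by following continuation tokens until
     * the service stops returning one -- no client-side deadline, matching
     * the behavior after HADOOP-15975 removed the timeout check.
     */
    static List<String> drain(Function<String, Page> fetch) {
        List<String> all = new ArrayList<>();
        String continuation = null;
        do {
            Page page = fetch.apply(continuation);
            all.addAll(page.entries);
            continuation = page.continuation;
        } while (continuation != null && !continuation.isEmpty());
        return all;
    }

    public static void main(String[] args) {
        // Fake two-page service: token "t1" links page one to page two.
        Function<String, Page> fake = token ->
            token == null ? new Page(Arrays.asList("a", "b"), "t1")
                          : new Page(Arrays.asList("c"), null);
        System.out.println(drain(fake)); // [a, b, c]
    }
}
```

The trade-off in the patch is that a very large directory rename or delete now runs to completion (or server-side failure) instead of being aborted by an arbitrary client-side clock.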





[hadoop] branch YARN-8200 updated (12568a2 -> 3e02695)

2019-01-11 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a change to branch YARN-8200
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard 12568a2  YARN-9175. Null resources check in ResourceInfo for branch-3.0
 discard 1edf39d  YARN-7137. [YARN-3926] Move newly added APIs to unstable in YARN-3926 branch. Contributed by Wangda Tan.
 discard 48570b6  YARN-9188. Port YARN-7136 to branch-2

This update removed existing revisions from the reference, leaving the
reference pointing at a previous point in the repository history.

 * -- * -- N   refs/heads/YARN-8200 (3e02695)
\
 O -- O -- O   (12568a2)

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../hadoop-yarn/dev-support/findbugs-exclude.xml   |   2 +-
 .../apache/hadoop/yarn/api/records/Resource.java   | 202 
 .../yarn/api/records/ResourceInformation.java  |  15 +-
 .../hadoop/yarn/api/records/ResourceRequest.java   |   1 -
 ...{LightWeightResource.java => BaseResource.java} | 104 +++-
 .../hadoop/yarn/util/resource/ResourceUtils.java   |  42 ++--
 .../hadoop/yarn/util/resource/package-info.java|   6 +-
 .../yarn/api/records/impl/pb/ResourcePBImpl.java   |  19 +-
 .../util/resource/DominantResourceCalculator.java  |  75 +++---
 .../hadoop/yarn/util/resource/Resources.java   |  30 +--
 .../yarn/util/resource/TestResourceUtils.java  |   2 -
 .../server/resourcemanager/webapp/dao/AppInfo.java |   2 +-
 .../resourcemanager/webapp/dao/ResourceInfo.java   |   5 +-
 .../hadoop/yarn/server/resourcemanager/MockRM.java |   6 +-
 .../scheduler/capacity/TestCapacityScheduler.java  | 137 +++
 .../capacity/TestCapacitySchedulerPerf.java| 265 -
 .../apache/hadoop/yarn/server/MiniYARNCluster.java |   7 +-
 17 files changed, 377 insertions(+), 543 deletions(-)
 rename 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/{LightWeightResource.java
 => BaseResource.java} (55%)
 delete mode 100644 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerPerf.java





[hadoop] branch HDFS-13891 updated: HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.

2019-01-11 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch HDFS-13891
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDFS-13891 by this push:
 new d730107  HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.
d730107 is described below

commit d730107648ee0b440292059234d28ff23674b03e
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 10:11:18 2019 -0800

HDFS-13856. RBF: RouterAdmin should support dfsrouteradmin -refreshRouterArgs command. Contributed by yanghuafeng.
---
 .../federation/router/RouterAdminServer.java   |  26 ++-
 .../hadoop/hdfs/tools/federation/RouterAdmin.java  |  72 ++
 .../src/site/markdown/HDFSRouterFederation.md  |   6 +
 .../router/TestRouterAdminGenericRefresh.java  | 252 +
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md  |   2 +
 5 files changed, 357 insertions(+), 1 deletion(-)

diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 18c19e0..027dd11 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -23,12 +23,14 @@ import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY;
 
 import java.io.IOException;
 import java.net.InetSocketAddress;
+import java.util.Collection;
 import java.util.Set;
 
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.HDFSPolicyProvider;
+import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import 
org.apache.hadoop.hdfs.protocol.proto.RouterProtocolProtos.RouterAdminProtocolService;
@@ -64,9 +66,15 @@ import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableE
 import 
org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryResponse;
 import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
 import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.ipc.GenericRefreshProtocol;
 import org.apache.hadoop.ipc.ProtobufRpcEngine;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.RPC.Server;
+import org.apache.hadoop.ipc.RefreshRegistry;
+import org.apache.hadoop.ipc.RefreshResponse;
+import org.apache.hadoop.ipc.proto.GenericRefreshProtocolProtos;
+import org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolPB;
+import 
org.apache.hadoop.ipc.protocolPB.GenericRefreshProtocolServerSideTranslatorPB;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
@@ -81,7 +89,8 @@ import com.google.protobuf.BlockingService;
  * router. It is created, started, and stopped by {@link Router}.
  */
 public class RouterAdminServer extends AbstractService
-implements MountTableManager, RouterStateManager, NameserviceManager {
+implements MountTableManager, RouterStateManager, NameserviceManager,
+GenericRefreshProtocol {
 
   private static final Logger LOG =
   LoggerFactory.getLogger(RouterAdminServer.class);
@@ -160,6 +169,15 @@ public class RouterAdminServer extends AbstractService
 router.setAdminServerAddress(this.adminAddress);
 iStateStoreCache =
 router.getSubclusterResolver() instanceof StateStoreCache;
+
+GenericRefreshProtocolServerSideTranslatorPB genericRefreshXlator =
+new GenericRefreshProtocolServerSideTranslatorPB(this);
+BlockingService genericRefreshService =
+GenericRefreshProtocolProtos.GenericRefreshProtocolService.
+newReflectiveBlockingService(genericRefreshXlator);
+
+DFSUtil.addPBProtocol(conf, GenericRefreshProtocolPB.class,
+genericRefreshService, adminServer);
   }
 
   /**
@@ -487,4 +505,10 @@ public class RouterAdminServer extends AbstractService
   public static String getSuperGroup(){
 return superGroup;
   }
+
+  @Override // GenericRefreshProtocol
+  public Collection refresh(String identifier, String[] args) {
+// Let the registry handle as needed
+return RefreshRegistry.defaultRegistry().dispatch(identifier, args);
+  }
 }
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
 
b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java
index 27c42cd..37aad88 100644
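
The server side above wires a GenericRefreshProtocol translator into the admin RPC server and forwards each refresh(identifier, args) call to a registry that fans it out to registered handlers. A toy registry illustrating that dispatch shape (`MiniRefreshRegistry` and `Handler` are invented names for this sketch, not Hadoop's RefreshRegistry):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal sketch of a generic refresh registry, modeled loosely on the
 *  RefreshRegistry.defaultRegistry().dispatch(...) call shown in the diff. */
public class MiniRefreshRegistry {
    public interface Handler {
        String handleRefresh(String identifier, String[] args);
    }

    private final Map<String, List<Handler>> handlers = new HashMap<>();

    /** Associate a handler with a refresh identifier (e.g. a cache name). */
    public void register(String identifier, Handler h) {
        handlers.computeIfAbsent(identifier, k -> new ArrayList<>()).add(h);
    }

    /** Fan a refresh request out to every handler for the identifier. */
    public List<String> dispatch(String identifier, String[] args) {
        List<String> responses = new ArrayList<>();
        for (Handler h : handlers.getOrDefault(identifier, Collections.emptyList())) {
            responses.add(h.handleRefresh(identifier, args));
        }
        return responses;
    }

    public static void main(String[] args) {
        MiniRefreshRegistry reg = new MiniRefreshRegistry();
        reg.register("reloadCaches", (id, a) -> "cache reloaded");
        System.out.println(reg.dispatch("reloadCaches", new String[0]));
    }
}
```

This is what makes `dfsrouteradmin -refreshRouterArgs` generic: the router never needs to know the refresh targets in advance, only how to route an identifier to whatever handlers registered for it.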

[hadoop] branch HDDS-4 updated: HDDS-597. Ratis: Support secure gRPC endpoint with mTLS for Ratis. Contributed by Ajay Kumar.

2019-01-11 Thread ajay
This is an automated email from the ASF dual-hosted git repository.

ajay pushed a commit to branch HDDS-4
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/HDDS-4 by this push:
 new 1675e97  HDDS-597. Ratis: Support secure gRPC endpoint with mTLS for Ratis. Contributed by Ajay Kumar.
1675e97 is described below

commit 1675e97e380a475a9dacbdb8e89077f175927513
Author: Ajay Kumar 
AuthorDate: Fri Jan 11 10:32:06 2019 -0800

HDDS-597. Ratis: Support secure gRPC endpoint with mTLS for Ratis. Contributed by Ajay Kumar.
---
 .../apache/hadoop/hdds/scm/XceiverClientGrpc.java  |  2 +-
 .../apache/hadoop/hdds/scm/XceiverClientRatis.java | 20 +--
 .../hadoop/hdds/security/x509/SecurityConfig.java  | 23 
 .../main/java/org/apache/ratis/RatisHelper.java| 67 ++
 .../common/transport/server/XceiverServerGrpc.java |  2 +-
 .../transport/server/ratis/XceiverServerRatis.java | 29 ++
 .../TestCloseContainerCommandHandler.java  |  5 +-
 hadoop-hdds/pom.xml|  2 +-
 .../hdds/scm/pipeline/RatisPipelineUtils.java  | 17 +-
 .../org/apache/hadoop/ozone/OzoneTestUtils.java|  2 +-
 .../org/apache/hadoop/ozone/RatisTestHelper.java   |  7 ++-
 .../transport/server/ratis/TestCSMMetrics.java |  2 +-
 .../ozoneimpl/TestOzoneContainerRatis.java |  2 +-
 .../container/server/TestContainerServer.java  |  2 +-
 hadoop-ozone/pom.xml   |  2 +-
 15 files changed, 131 insertions(+), 53 deletions(-)

diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
index 94798e0..a25a605 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
@@ -133,7 +133,7 @@ public class XceiverClientGrpc extends XceiverClientSpi {
 .getIpAddress(), port).usePlaintext()
 .maxInboundMessageSize(OzoneConsts.OZONE_SCM_CHUNK_MAX_SIZE)
.intercept(new ClientCredentialInterceptor(userName, encodedToken));
-if (SecurityConfig.isGrpcTlsEnabled(config)) {
+if (secConfig.isGrpcTlsEnabled()) {
   File trustCertCollectionFile = secConfig.getTrustStoreFile();
   File privateKeyFile = secConfig.getClientPrivateKeyFile();
   File clientCertChainFile = secConfig.getClientCertChainFile();
diff --git 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
index 01e76af..6d975ff 100644
--- 
a/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
+++ 
b/hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientRatis.java
@@ -19,6 +19,8 @@
 package org.apache.hadoop.hdds.scm;
 
 import org.apache.hadoop.hdds.HddsUtils;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.ratis.grpc.GrpcTlsConfig;
 import org.apache.ratis.proto.RaftProtos;
 import org.apache.ratis.protocol.RaftRetryFailureException;
 import org.apache.ratis.retry.RetryPolicy;
@@ -68,9 +70,11 @@ public final class XceiverClientRatis extends XceiverClientSpi {
 final int maxOutstandingRequests =
 HddsClientUtils.getMaxOutstandingRequests(ozoneConf);
 final RetryPolicy retryPolicy = RatisHelper.createRetryPolicy(ozoneConf);
+final GrpcTlsConfig tlsConfig = RatisHelper.createTlsClientConfig(new
+SecurityConfig(ozoneConf));
 return new XceiverClientRatis(pipeline,
 SupportedRpcType.valueOfIgnoreCase(rpcType), maxOutstandingRequests,
-retryPolicy);
+retryPolicy, tlsConfig);
   }
 
   private final Pipeline pipeline;
@@ -78,17 +82,20 @@ public final class XceiverClientRatis extends XceiverClientSpi {
   private final AtomicReference client = new AtomicReference<>();
   private final int maxOutstandingRequests;
   private final RetryPolicy retryPolicy;
+  private final GrpcTlsConfig tlsConfig;
 
   /**
* Constructs a client.
*/
   private XceiverClientRatis(Pipeline pipeline, RpcType rpcType,
-  int maxOutStandingChunks, RetryPolicy retryPolicy) {
+  int maxOutStandingChunks, RetryPolicy retryPolicy,
+  GrpcTlsConfig tlsConfig) {
 super();
 this.pipeline = pipeline;
 this.rpcType = rpcType;
 this.maxOutstandingRequests = maxOutStandingChunks;
 this.retryPolicy = retryPolicy;
+this.tlsConfig = tlsConfig;
   }
 
   /**
@@ -114,7 +121,8 @@ public final class XceiverClientRatis extends XceiverClientSpi {
 // maxOutstandingRequests so as to set the upper bound on max no of async
 // requests to be handled by raft client
 if (!client.compareAndSet(null,
-RatisHelper.newRaftClient(rpcType, getPipeline(), 

[hadoop] branch trunk updated: HADOOP-16029. Consecutive StringBuilder.append can be reused. Contributed by Ayush Saxena.

2019-01-11 Thread gifuma
This is an automated email from the ASF dual-hosted git repository.

gifuma pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new fb8932a  HADOOP-16029. Consecutive StringBuilder.append can be reused. Contributed by Ayush Saxena.
fb8932a is described below

commit fb8932a727f757b2e9c1c61a18145878d0eb77bd
Author: Giovanni Matteo Fumarola 
AuthorDate: Fri Jan 11 10:54:49 2019 -0800

HADOOP-16029. Consecutive StringBuilder.append can be reused. Contributed by Ayush Saxena.
---
 .../java/org/apache/hadoop/crypto/CipherSuite.java |   4 +-
 .../java/org/apache/hadoop/fs/BlockLocation.java   |   6 +-
 .../org/apache/hadoop/fs/FSDataOutputStream.java   |   4 +-
 .../org/apache/hadoop/fs/FileEncryptionInfo.java   |  32 +-
 .../main/java/org/apache/hadoop/fs/FileStatus.java |  34 +-
 .../main/java/org/apache/hadoop/fs/FileUtil.java   |  30 +-
 .../src/main/java/org/apache/hadoop/fs/Path.java   |  12 +-
 .../org/apache/hadoop/fs/permission/AclEntry.java  |   4 +-
 .../java/org/apache/hadoop/fs/shell/Count.java |   4 +-
 .../main/java/org/apache/hadoop/fs/shell/Ls.java   |   8 +-
 .../java/org/apache/hadoop/fs/shell/PathData.java  |   6 +-
 .../hadoop/fs/shell/find/BaseExpression.java   |   4 +-
 .../java/org/apache/hadoop/fs/shell/find/Find.java |   4 +-
 .../main/java/org/apache/hadoop/io/MD5Hash.java|   4 +-
 .../java/org/apache/hadoop/io/SequenceFile.java|   4 +-
 .../io/compress/CompressionCodecFactory.java   |  18 +-
 .../org/apache/hadoop/io/erasurecode/ECSchema.java |   8 +-
 .../org/apache/hadoop/ipc/WritableRpcEngine.java   |  12 +-
 .../apache/hadoop/metrics2/sink/GraphiteSink.java  |   8 +-
 .../apache/hadoop/metrics2/sink/StatsDSink.java|   6 +-
 .../hadoop/net/AbstractDNSToSwitchMapping.java |   4 +-
 .../main/java/org/apache/hadoop/net/NetUtils.java  |   4 +-
 .../org/apache/hadoop/net/NetworkTopology.java |  16 +-
 .../org/apache/hadoop/security/ProviderUtils.java  |   4 +-
 .../hadoop/security/alias/CredentialProvider.java  |   6 +-
 .../hadoop/security/alias/CredentialShell.java |  12 +-
 .../security/authorize/AccessControlList.java  |   6 +-
 .../hadoop/security/ssl/SSLHostnameVerifier.java   |   6 +-
 .../org/apache/hadoop/security/token/Token.java|  10 +-
 .../service/launcher/InterruptEscalator.java   |   6 +-
 .../org/apache/hadoop/tools/GetGroupsBase.java |   4 +-
 .../util/BlockingThreadPoolExecutorService.java|   6 +-
 .../org/apache/hadoop/util/CpuTimeTracker.java |  12 +-
 .../hadoop/util/SemaphoredDelegatingExecutor.java  |   8 +-
 .../main/java/org/apache/hadoop/util/Shell.java|  14 +-
 .../java/org/apache/hadoop/util/SignalLogger.java  |   4 +-
 .../hadoop/util/bloom/DynamicBloomFilter.java  |   4 +-
 .../org/apache/hadoop/hdfs/DFSInputStream.java |  12 +-
 .../java/org/apache/hadoop/hdfs/DFSUtilClient.java |   8 +-
 .../apache/hadoop/hdfs/protocol/DatanodeInfo.java  |  83 ++-
 .../hadoop/hdfs/protocol/HdfsPathHandle.java   |   4 +-
 .../hadoop/hdfs/protocol/ReencryptionStatus.java   |  12 +-
 .../apache/hadoop/hdfs/util/StripedBlockUtil.java  |  10 +-
 .../server/federation/resolver/PathLocation.java   |   6 +-
 .../federation/router/ConnectionContext.java   |  10 +-
 .../server/federation/router/RouterQuotaUsage.java |   4 +-
 .../hdfs/server/blockmanagement/BlockManager.java  |   3 +-
 .../blockmanagement/DatanodeAdminManager.java  |   3 +-
 .../hadoop/hdfs/server/datanode/DataNode.java  |   3 +-
 .../hadoop/hdfs/server/datanode/VolumeScanner.java |  18 +-
 .../server/diskbalancer/command/PlanCommand.java   |   3 +-
 .../server/namenode/EncryptionZoneManager.java |   7 +-
 .../hadoop/hdfs/server/namenode/FSEditLog.java |  20 +-
 .../hdfs/server/namenode/FSEditLogLoader.java  |   4 +-
 .../hadoop/hdfs/server/namenode/FSEditLogOp.java   | 734 ++---
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  |  28 +-
 .../hadoop/hdfs/server/namenode/JournalSet.java|   4 +-
 .../hadoop/hdfs/server/namenode/NamenodeFsck.java  |  10 +-
 .../server/namenode/QuotaByStorageTypeEntry.java   |   6 +-
 .../namenode/RedundantEditLogInputStream.java  |   4 +-
 .../hdfs/server/namenode/StoragePolicySummary.java |  13 +-
 .../hadoop/hdfs/server/protocol/ServerCommand.java |   6 +-
 .../hadoop/hdfs/tools/DFSZKFailoverController.java |   4 +-
 .../apache/hadoop/tools/CopyListingFileStatus.java |  14 +-
 64 files changed, 683 insertions(+), 688 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
index a811aa7..8221ba2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java

[hadoop] branch branch-2 updated: HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.

2019-01-11 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch branch-2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-2 by this push:
 new 1dbe6c9  HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.
1dbe6c9 is described below

commit 1dbe6c9780c3a83e8ef701da03f1a70557f97687
Author: Chen Liang 
AuthorDate: Fri Jan 11 12:57:30 2019 -0800

    HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.
---
 .../src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
index d9dbdbd..f8c8dd3 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
@@ -232,7 +232,7 @@ public class DecayRpcScheduler implements RpcScheduler,
         "the number of top users for scheduler metrics must be at least 1");
 
     // Setup delay timer
-    Timer timer = new Timer();
+    Timer timer = new Timer(true);
     DecayTask task = new DecayTask(this, timer);
     timer.scheduleAtFixedRate(task, decayPeriodMillis, decayPeriodMillis);
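The one-line change above can be illustrated with a self-contained sketch (the class name here is invented for illustration): a `Timer` constructed with `isDaemon=true` runs its worker on a daemon thread, so a still-scheduled decay task can no longer keep the JVM alive after the rest of the process has finished.

```java
import java.util.Timer;
import java.util.TimerTask;

public class DaemonTimerSketch {
    public static void main(String[] args) {
        // isDaemon=true: the timer's worker thread is a daemon thread,
        // so the JVM may exit even while a task is still scheduled.
        // With `new Timer()` (non-daemon) this program would never exit.
        Timer timer = new Timer(true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() {
                // stands in for the periodic DecayTask
            }
        }, 60_000L, 60_000L);
        System.out.println("main exiting");
        // JVM terminates here despite the pending timer task.
    }
}
```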
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[hadoop] branch trunk updated: HADOOP-15481. Emit FairCallQueue stats as metrics. Contributed by Christopher Gregorian.

2019-01-11 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new bf08f4a  HADOOP-15481. Emit FairCallQueue stats as metrics. Contributed by Christopher Gregorian.
bf08f4a is described below

commit bf08f4abae43d706a305af3f14e00f01c00dba7c
Author: Chen Liang 
AuthorDate: Fri Jan 11 14:01:23 2019 -0800

    HADOOP-15481. Emit FairCallQueue stats as metrics. Contributed by Christopher Gregorian.
---
 .../java/org/apache/hadoop/ipc/FairCallQueue.java  | 32 +--
 .../hadoop-common/src/site/markdown/Metrics.md | 10 ++
 .../org/apache/hadoop/ipc/TestFairCallQueue.java   | 36 ++
 3 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
index 3a8c83d..380426f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
@@ -35,6 +35,11 @@ import com.google.common.annotations.VisibleForTesting;
 import org.apache.commons.lang3.NotImplementedException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ipc.CallQueueManager.CallQueueOverflowException;
+import org.apache.hadoop.metrics2.MetricsCollector;
+import org.apache.hadoop.metrics2.MetricsRecordBuilder;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.Interns;
 import org.apache.hadoop.metrics2.util.MBeans;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -43,7 +48,7 @@ import org.slf4j.LoggerFactory;
  * A queue with multiple levels for each priority.
  */
 public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
-  implements BlockingQueue<E> {
+    implements BlockingQueue<E> {
   @Deprecated
   public static final int IPC_CALLQUEUE_PRIORITY_LEVELS_DEFAULT = 4;
   @Deprecated
@@ -335,7 +340,8 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
    * MetricsProxy is a singleton because we may init multiple
    * FairCallQueues, but the metrics system cannot unregister beans cleanly.
    */
-  private static final class MetricsProxy implements FairCallQueueMXBean {
+  private static final class MetricsProxy implements FairCallQueueMXBean,
+      MetricsSource {
     // One singleton per namespace
     private static final HashMap<String, MetricsProxy> INSTANCES =
         new HashMap<String, MetricsProxy>();
@@ -346,8 +352,13 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
     // Keep track of how many objects we registered
     private int revisionNumber = 0;
 
+    private String namespace;
+
     private MetricsProxy(String namespace) {
+      this.namespace = namespace;
       MBeans.register(namespace, "FairCallQueue", this);
+      final String name = namespace + ".FairCallQueue";
+      DefaultMetricsSystem.instance().register(name, name, this);
     }
 
     public static synchronized MetricsProxy getInstance(String namespace) {
@@ -389,6 +400,23 @@ public class FairCallQueue<E extends Schedulable> extends AbstractQueue<E>
     @Override public int getRevision() {
       return revisionNumber;
     }
+
+    @Override
+    public void getMetrics(MetricsCollector collector, boolean all) {
+      MetricsRecordBuilder rb = collector.addRecord("FairCallQueue")
+          .setContext("rpc")
+          .tag(Interns.info("namespace", "Namespace"), namespace);
+
+      final int[] currentQueueSizes = getQueueSizes();
+      final long[] currentOverflowedCalls = getOverflowedCalls();
+
+      for (int i = 0; i < currentQueueSizes.length; i++) {
+        rb.addGauge(Interns.info("FairCallQueueSize_p" + i, "FCQ Queue Size"),
+            currentQueueSizes[i]);
+        rb.addCounter(Interns.info("FairCallQueueOverflowedCalls_p" + i,
+            "FCQ Overflowed Calls"), currentOverflowedCalls[i]);
+      }
+    }
   }
 
   // FairCallQueueMXBean
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 1e21940..1ef2b44 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -104,6 +104,16 @@ RetryCache metrics is useful to monitor NameNode fail-over. Each metrics record
 | `CacheCleared` | Total number of RetryCache cleared |
 | `CacheUpdated` | Total number of RetryCache updated |
 
+FairCallQueue
+-------------
+
+FairCallQueue metrics will only exist if FairCallQueue is enabled. Each metric exists for each level of priority.
+
+| Name | Description |
+|:---- |:---- |
+| `FairCallQueueSize_p`*Priority* | Current number of calls in priority queue |
+| 

[hadoop] branch YARN-8200 updated (c77aac5 -> 3e02695)

2019-01-11 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a change to branch YARN-8200
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


 discard c77aac5  testing

This update removed existing revisions from the reference, leaving the
reference pointing at a previous point in the repository history.

 * -- * -- N   refs/heads/YARN-8200 (3e02695)
            \
             O -- O -- O   (c77aac5)

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)





[hadoop] branch YARN-8200 updated: testing

2019-01-11 Thread jhung
This is an automated email from the ASF dual-hosted git repository.

jhung pushed a commit to branch YARN-8200
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/YARN-8200 by this push:
 new c77aac5  testing
c77aac5 is described below

commit c77aac5246b22265f89b6dcc8e48f8de2da84b74
Author: Jonathan Hung 
AuthorDate: Fri Jan 11 17:24:05 2019 -0500

testing
---
 .../java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java
index 6c0938c..d168efd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/JavaProcess.java
@@ -48,7 +48,7 @@ public class JavaProcess {
     String className = clazz.getCanonicalName();
     ProcessBuilder builder =
         new ProcessBuilder(javaBin, "-cp", classpath, className);
-    builder.inheritIO();
+    //builder.inheritIO();
     process = builder.start();
   }
 
@@ -60,4 +60,4 @@ public class JavaProcess {
     }
   }
 
-}
\ No newline at end of file
+}





[hadoop] branch trunk updated: HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.

2019-01-11 Thread cliang
This is an automated email from the ASF dual-hosted git repository.

cliang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 01cb958  HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.
01cb958 is described below

commit 01cb958af44b2376bcf579cc65d90566530f733d
Author: Chen Liang 
AuthorDate: Fri Jan 11 12:51:07 2019 -0800

    HADOOP-16013. DecayRpcScheduler decay thread should run as a daemon. Contributed by Erik Krogen.
---
 .../src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
index 512b0b7..5410aeb 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
@@ -233,7 +233,7 @@ public class DecayRpcScheduler implements RpcScheduler,
         "the number of top users for scheduler metrics must be at least 1");
 
     // Setup delay timer
-    Timer timer = new Timer();
+    Timer timer = new Timer(true);
     DecayTask task = new DecayTask(this, timer);
     timer.scheduleAtFixedRate(task, decayPeriodMillis, decayPeriodMillis);
 





[Hadoop Wiki] Update of "HowToRelease" by MartonElek

2019-01-11 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HowToRelease" page has been changed by MartonElek:
https://wiki.apache.org/hadoop/HowToRelease?action=diff&rev1=100&rev2=101

Comment:
HADOOP-15205, dist profile is required to upload sources to the maven repo

   1. Push branch-X.Y.Z and the newly created tag to the remote repo.
   1. Deploy the maven artifacts, on your personal computer. Please be sure you have completed the prerequisite step of preparing the {{{settings.xml}}} file before the deployment. You might want to do this in private and clear your history file as your gpg-passphrase is in clear text.
   {{{
- mvn deploy -Psign -DskipTests -DskipShade
+ mvn deploy -Psign,dist -DskipTests -DskipShade
  }}}
   1. Copy release files to a public place and ensure they are readable. Note that {{{home.apache.org}}} only supports SFTP, so this may be easier with a graphical SFTP client like Nautilus, Konqueror, etc.
   {{{




[hadoop] annotated tag release-3.1.2-RC0 created (now ac9a81a)

2019-01-11 Thread wangda
This is an automated email from the ASF dual-hosted git repository.

wangda pushed a change to annotated tag release-3.1.2-RC0
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


  at ac9a81a  (tag)
 tagging 9f88af79029606b29a5d0ff9bea661a2a83a1dfb (commit)
  by Wangda Tan
  on Fri Jan 11 16:16:59 2019 -0800

- Log -
Release candidate - 3.1.2-RC0
-BEGIN PGP SIGNATURE-

iQIzBAABCAAdFiEETImYU83aTkDGAhK1s/plPVcwDUUFAlw5MfwACgkQs/plPVcw
DUWUwg/9HZdjZV9i5teFuavrSOyguJ3ZJYqLzngAVVqln/66F/TxV511ScpYZInF
nPmh2kFIGLuVMN4/FzAui04at0eonMroCrZ17wcu0zQh5evhy7LtS87ghPukhFOX
RiAduDGauRHbSLuImERMIXH39S1sW48sRr1nBuJjFZjPJDkOVcerR0oYnt8ZX3P9
vy2Tb+B7xpq+8bkUxUJn0ysiUzqy0l8+UVA4iYJ9o3yWJ6WOTgbnqGU4e7s4F9Kv
EdtBakiyvx8f0gS25LFs6LmS90pLtlxEQb+BY60cXxAWVfWf+sIHK2/Jsjq5A2LP
+1lpR6gRtXmwTbXulWdqbpaZO2+k0bcR0O/3LfKAEQvgGA5DI8/5K+iiurXvLDLD
CT5ZVUhMw9R//ZsPveKDPlm+wmDgqzK13SvfZZ/XSS0LouQ9/419oxenHR6Yp0WS
dpycHqHFPa3ufxCSgN207p8YwsZihdxmGILCbCuzDQwWKncJTs6eZjYRuYAAUKCS
ByWY1wHN6sDvhpvAWzQwwK9bOghH0yMJj+Shmm8prr4s0Hwh9hCi8S4BfIv3hz5v
3l9i6ohZA1Kwf9inRetLTkhFStIWLTUcFE78ycKJ9vc+VBK2y2RhCpSuzsHrj2in
wCRLEjGvYzjB0U0NTkKVpPtu8aVPin6rcdCotk28cneapsCzUuk=
=0WUH
-END PGP SIGNATURE-
---

No new revisions were added by this update.





[hadoop] branch trunk updated: HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

2019-01-11 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 35fa3bd  HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
35fa3bd is described below

commit 35fa3bd685605d8b3639e6c5cbe83cd9acd8cbe7
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 18:07:19 2019 -0800

HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
---
 .../hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java   | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
index a22b765..f340cc2 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
@@ -17,10 +17,12 @@
  */
 package org.apache.hadoop.fs;
 
+import org.apache.hadoop.util.Shell;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.*;
+import static org.junit.Assume.assumeFalse;
 
 import java.io.File;
 import java.io.IOException;
@@ -37,6 +39,7 @@ public class TestDU {
 
   @Before
   public void setUp() {
+    assumeFalse(Shell.WINDOWS);
     FileUtil.fullyDelete(DU_DIR);
     assertTrue(DU_DIR.mkdirs());
   }
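JUnit's `assumeFalse` turns a test into a skip rather than a failure when the condition is true, and `Shell.WINDOWS` is derived from the `os.name` system property. A dependency-free sketch of the same platform-gating idea (the class name is invented for illustration; real JUnit signals the skip by throwing `AssumptionViolatedException`):

```java
public class PlatformGateSketch {
    // Mirrors how a WINDOWS flag like Hadoop's Shell.WINDOWS is derived.
    static final boolean WINDOWS =
        System.getProperty("os.name").startsWith("Windows");

    public static void main(String[] args) {
        if (WINDOWS) {
            // assumeFalse(WINDOWS) in JUnit would abort here and the
            // runner would report the test as skipped, not failed.
            System.out.println("skipped");
            return;
        }
        System.out.println("ran");   // test body runs only off-Windows
    }
}
```

The test below assumes a non-Windows environment.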





[hadoop] branch branch-3.0 updated: HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

2019-01-11 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 174fa73  HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
174fa73 is described below

commit 174fa73f9937bcca30e21db8b73c020885e9bf04
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 18:07:19 2019 -0800

HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

(cherry picked from commit 35fa3bd685605d8b3639e6c5cbe83cd9acd8cbe7)
---
 .../hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java   | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
index a22b765..f340cc2 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
@@ -17,10 +17,12 @@
  */
 package org.apache.hadoop.fs;
 
+import org.apache.hadoop.util.Shell;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.*;
+import static org.junit.Assume.assumeFalse;
 
 import java.io.File;
 import java.io.IOException;
@@ -37,6 +39,7 @@ public class TestDU {
 
   @Before
   public void setUp() {
+    assumeFalse(Shell.WINDOWS);
     FileUtil.fullyDelete(DU_DIR);
     assertTrue(DU_DIR.mkdirs());
   }





[hadoop] branch branch-3.1 updated: HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

2019-01-11 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new b05d8de  HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
b05d8de is described below

commit b05d8de10ea230204aa741c6f25dc02339441779
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 18:07:19 2019 -0800

HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

(cherry picked from commit 35fa3bd685605d8b3639e6c5cbe83cd9acd8cbe7)
---
 .../hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java   | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
index a22b765..f340cc2 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
@@ -17,10 +17,12 @@
  */
 package org.apache.hadoop.fs;
 
+import org.apache.hadoop.util.Shell;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.*;
+import static org.junit.Assume.assumeFalse;
 
 import java.io.File;
 import java.io.IOException;
@@ -37,6 +39,7 @@ public class TestDU {
 
   @Before
   public void setUp() {
+    assumeFalse(Shell.WINDOWS);
     FileUtil.fullyDelete(DU_DIR);
     assertTrue(DU_DIR.mkdirs());
   }





[hadoop] branch branch-3.2 updated: HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

2019-01-11 Thread inigoiri
This is an automated email from the ASF dual-hosted git repository.

inigoiri pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new c1c15c1  HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.
c1c15c1 is described below

commit c1c15c1bd80482e86c9f8429820e08417580a684
Author: Inigo Goiri 
AuthorDate: Fri Jan 11 18:07:19 2019 -0800

HADOOP-16045. Don't run TestDU on Windows. Contributed by Lukas Majercak.

(cherry picked from commit 35fa3bd685605d8b3639e6c5cbe83cd9acd8cbe7)
---
 .../hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java   | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
index a22b765..f340cc2 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDU.java
@@ -17,10 +17,12 @@
  */
 package org.apache.hadoop.fs;
 
+import org.apache.hadoop.util.Shell;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import static org.junit.Assert.*;
+import static org.junit.Assume.assumeFalse;
 
 import java.io.File;
 import java.io.IOException;
@@ -37,6 +39,7 @@ public class TestDU {
 
   @Before
   public void setUp() {
+    assumeFalse(Shell.WINDOWS);
     FileUtil.fullyDelete(DU_DIR);
     assertTrue(DU_DIR.mkdirs());
   }





[hadoop] branch trunk updated: HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.

2019-01-11 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 3bb745d  HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.
3bb745d is described below

commit 3bb745df18669e2ae400dc0d1a37a81cdc270eb2
Author: Akira Ajisaka 
AuthorDate: Sat Jan 12 15:23:49 2019 +0900

HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 05a35bd..e2e288b 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -69,7 +69,7 @@
 
 
     <jackson.version>1.9.13</jackson.version>
-    <jackson2.version>2.9.5</jackson2.version>
+    <jackson2.version>2.9.8</jackson2.version>
 
 
     <httpclient.version>4.5.2</httpclient.version>





[hadoop] branch branch-3.2 updated: HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.

2019-01-11 Thread aajisaka
This is an automated email from the ASF dual-hosted git repository.

aajisaka pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
 new 0ec1802  HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.
0ec1802 is described below

commit 0ec180213cedcf03279581298c8968881f33d2a6
Author: Akira Ajisaka 
AuthorDate: Sat Jan 12 15:23:49 2019 +0900

HADOOP-15994. Upgrade Jackson2 to 2.9.8. Contributed by lqjacklee.

(cherry picked from commit 3bb745df18669e2ae400dc0d1a37a81cdc270eb2)
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 9e4da848..941f510 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -69,7 +69,7 @@
 
 
     <jackson.version>1.9.13</jackson.version>
-    <jackson2.version>2.9.5</jackson2.version>
+    <jackson2.version>2.9.8</jackson2.version>
 
 
     <httpclient.version>4.5.2</httpclient.version>

