hadoop git commit: YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. Contributed by Prabha Manepalli.

2018-08-16 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.9 055e11fcc -> 50ba2272e


YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. 
Contributed by Prabha Manepalli.

(cherry picked from commit 1697a0230696e1ed6d9c19471463b44a6d791dfa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50ba2272
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50ba2272
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50ba2272

Branch: refs/heads/branch-2.9
Commit: 50ba2272e8d93f86b28552615269ed91a46be336
Parents: 055e11f
Author: Rohith Sharma K S 
Authored: Fri Aug 17 11:11:56 2018 +0530
Committer: Rohith Sharma K S 
Committed: Fri Aug 17 11:27:18 2018 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50ba2272/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 001d02e..2a8185e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1036,7 +1036,7 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "collector-service.address";
   public static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;
   public static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS =
-  "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;
+  "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
 
   /** Interval in between cache cleanups.*/
   public static final String NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS =

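The one-character fix above matters because the default collector-service address string was being assembled from DEFAULT_NM_LOCALIZER_PORT (8040) instead of DEFAULT_NM_COLLECTOR_SERVICE_PORT (8048). A minimal, hypothetical snippet (not part of the patch) showing how the corrected default would be resolved through YarnConfiguration, assuming no override in yarn-site.xml:

import java.net.InetSocketAddress;

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CollectorAddressCheck {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // With the fix, an unset yarn.nodemanager.collector-service.address
    // falls back to 0.0.0.0:8048 (the collector service port) rather than
    // 0.0.0.0:8040 (the localizer port).
    InetSocketAddress addr = conf.getSocketAddr(
        YarnConfiguration.NM_COLLECTOR_SERVICE_ADDRESS,
        YarnConfiguration.DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS,
        YarnConfiguration.DEFAULT_NM_COLLECTOR_SERVICE_PORT);
    System.out.println("NM collector service address: " + addr);
  }
}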

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. Contributed by Prabha Manepalli.

2018-08-16 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 27508086d -> 90bf2d3b5


YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. 
Contributed by Prabha Manepalli.

(cherry picked from commit 1697a0230696e1ed6d9c19471463b44a6d791dfa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90bf2d3b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90bf2d3b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90bf2d3b

Branch: refs/heads/branch-3.0
Commit: 90bf2d3b52245242cc0213ea58b42cdc706498a4
Parents: 2750808
Author: Rohith Sharma K S 
Authored: Fri Aug 17 11:11:56 2018 +0530
Committer: Rohith Sharma K S 
Committed: Fri Aug 17 11:14:10 2018 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90bf2d3b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 4ea6085..6c65b19 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1086,7 +1086,7 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "collector-service.address";
   public static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;
   public static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS =
-  "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;
+  "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
 
   /** Interval in between cache cleanups.*/
   public static final String NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. Contributed by Prabha Manepalli.

2018-08-16 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 5237bdfb5 -> 3532bd5c8


YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. 
Contributed by Prabha Manepalli.

(cherry picked from commit 1697a0230696e1ed6d9c19471463b44a6d791dfa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3532bd5c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3532bd5c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3532bd5c

Branch: refs/heads/branch-3.1
Commit: 3532bd5c8b7a0593e67c0830b0d5ad10074309cf
Parents: 5237bdf
Author: Rohith Sharma K S 
Authored: Fri Aug 17 11:11:56 2018 +0530
Committer: Rohith Sharma K S 
Committed: Fri Aug 17 11:13:25 2018 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3532bd5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 5f2f985..affa76a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1196,7 +1196,7 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "collector-service.address";
   public static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;
   public static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS =
-  "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;
+  "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
 
   /** Interval in between cache cleanups.*/
   public static final String NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. Contributed by Prabha Manepalli.

2018-08-16 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 04c8b5dbb -> e2210a517


YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. 
Contributed by Prabha Manepalli.

(cherry picked from commit 1697a0230696e1ed6d9c19471463b44a6d791dfa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e2210a51
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e2210a51
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e2210a51

Branch: refs/heads/branch-2
Commit: e2210a5175cca12605cbbb402f0a42cf1d815efe
Parents: 04c8b5d
Author: Rohith Sharma K S 
Authored: Fri Aug 17 11:11:56 2018 +0530
Committer: Rohith Sharma K S 
Committed: Fri Aug 17 11:17:30 2018 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e2210a51/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 3f0c735..3b39e55 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1038,7 +1038,7 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "collector-service.address";
   public static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;
   public static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS =
-  "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;
+  "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
 
   /** Interval in between cache cleanups.*/
   public static final String NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. Contributed by Prabha Manepalli.

2018-08-16 Thread rohithsharmaks
Repository: hadoop
Updated Branches:
  refs/heads/trunk edeb2a356 -> 1697a0230


YARN-8612. Fix NM Collector Service Port issue in YarnConfiguration. 
Contributed by Prabha Manepalli.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1697a023
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1697a023
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1697a023

Branch: refs/heads/trunk
Commit: 1697a0230696e1ed6d9c19471463b44a6d791dfa
Parents: edeb2a3
Author: Rohith Sharma K S 
Authored: Fri Aug 17 11:11:56 2018 +0530
Committer: Rohith Sharma K S 
Committed: Fri Aug 17 11:12:10 2018 +0530

--
 .../main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1697a023/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 84bfb55..78e28f7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1216,7 +1216,7 @@ public class YarnConfiguration extends Configuration {
   NM_PREFIX + "collector-service.address";
   public static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;
   public static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS =
-  "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;
+  "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
 
   /** Interval in between cache cleanups.*/
   public static final String NM_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS =


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-15655. Enhance KMS client retry behavior. Contributed by Kitti Nanasi.

2018-08-16 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2d13e410d -> edeb2a356


HADOOP-15655. Enhance KMS client retry behavior. Contributed by Kitti Nanasi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edeb2a35
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edeb2a35
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edeb2a35

Branch: refs/heads/trunk
Commit: edeb2a356ad671d962764c6e2aee9f9e7d6f394c
Parents: 2d13e41
Author: Xiao Chen 
Authored: Thu Aug 16 22:32:32 2018 -0700
Committer: Xiao Chen 
Committed: Thu Aug 16 22:42:03 2018 -0700

--
 .../key/kms/LoadBalancingKMSClientProvider.java |  43 ++---
 .../kms/TestLoadBalancingKMSClientProvider.java | 181 ++-
 2 files changed, 193 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edeb2a35/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
index 23cdc50..e68e844 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
@@ -113,8 +113,8 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 return providers;
   }
 
-  private <T> T doOp(ProviderCallable<T> op, int currPos)
-  throws IOException {
+  private <T> T doOp(ProviderCallable<T> op, int currPos,
+  boolean isIdempotent) throws IOException {
 if (providers.length == 0) {
   throw new IOException("No providers configured !");
 }
@@ -143,7 +143,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 }
 RetryAction action = null;
 try {
-  action = retryPolicy.shouldRetry(ioe, 0, numFailovers, false);
+  action = retryPolicy.shouldRetry(ioe, 0, numFailovers, isIdempotent);
 } catch (Exception e) {
   if (e instanceof IOException) {
 throw (IOException)e;
@@ -201,7 +201,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
   public Token[] call(KMSClientProvider provider) throws IOException {
 return provider.addDelegationTokens(renewer, credentials);
   }
-}, nextIdx());
+}, nextIdx(), false);
   }
 
   @Override
@@ -211,7 +211,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
   public Long call(KMSClientProvider provider) throws IOException {
 return provider.renewDelegationToken(token);
   }
-}, nextIdx());
+}, nextIdx(), false);
   }
 
   @Override
@@ -222,7 +222,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 provider.cancelDelegationToken(token);
 return null;
   }
-}, nextIdx());
+}, nextIdx(), false);
   }
 
   // This request is sent to all providers in the load-balancing group
@@ -275,7 +275,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 throws IOException, GeneralSecurityException {
   return provider.generateEncryptedKey(encryptionKeyName);
 }
-  }, nextIdx());
+  }, nextIdx(), true);
 } catch (WrapperException we) {
   if (we.getCause() instanceof GeneralSecurityException) {
 throw (GeneralSecurityException) we.getCause();
@@ -295,7 +295,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 throws IOException, GeneralSecurityException {
   return provider.decryptEncryptedKey(encryptedKeyVersion);
 }
-  }, nextIdx());
+  }, nextIdx(), true);
 } catch (WrapperException we) {
   if (we.getCause() instanceof GeneralSecurityException) {
 throw (GeneralSecurityException) we.getCause();
@@ -315,7 +315,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 throws IOException, GeneralSecurityException {
   return provider.reencryptEncryptedKey(ekv);
 }
-  }, nextIdx());
+  }, nextIdx(), true);
 } catch (WrapperException we) {
   if (we.getCause() instanceof GeneralSecurityException) {
 throw (GeneralSecurityException) we.getCause();
@@ -335,7 +335,7 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
   provider.reencryptEncryptedKeys(ekvs);
   return null;
 
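The substance of the change is that doOp() now carries an isIdempotent flag into retryPolicy.shouldRetry(): delegation-token operations (addDelegationTokens, renew/cancelDelegationToken) pass false and are not replayed on failure, while generate/decrypt/re-encrypt EEK pass true and may fail over to another KMS instance. A minimal, generic sketch of that idempotency-aware retry decision (illustrative names only, not the Hadoop classes):

import java.io.IOException;
import java.util.concurrent.Callable;

/** Illustrative only: failover/retry is permitted solely for idempotent ops. */
class IdempotentRetrySketch {
  static <T> T run(Callable<T> op, int maxAttempts, boolean isIdempotent)
      throws Exception {
    int attempt = 0;
    while (true) {
      try {
        return op.call();
      } catch (IOException ioe) {
        attempt++;
        // Non-idempotent calls (e.g. renew/cancel of a delegation token) must
        // not be replayed blindly; idempotent ones (e.g. decryptEncryptedKey)
        // may retry against the next provider in the group.
        if (!isIdempotent || attempt >= maxAttempts) {
          throw ioe;
        }
      }
    }
  }
}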

hadoop git commit: HDDS-119. Skip Apache license header check for some ozone doc scripts. Contributed by Ajay Kumar.

2018-08-16 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1290e3c64 -> 2d13e410d


HDDS-119. Skip Apache license header check for some ozone doc scripts. 
Contributed by Ajay Kumar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2d13e410
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2d13e410
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2d13e410

Branch: refs/heads/trunk
Commit: 2d13e410d8a84b27e65fccff24bd8d86c3ab6b1d
Parents: 1290e3c
Author: Xiaoyu Yao 
Authored: Thu Aug 16 22:13:50 2018 -0700
Committer: Xiaoyu Yao 
Committed: Thu Aug 16 22:18:52 2018 -0700

--
 hadoop-ozone/docs/pom.xml  | 17 +
 hadoop-ozone/docs/static/OzoneOverview.svg | 13 +
 2 files changed, 30 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d13e410/hadoop-ozone/docs/pom.xml
--
diff --git a/hadoop-ozone/docs/pom.xml b/hadoop-ozone/docs/pom.xml
index e0f9a87..f5e6aaf 100644
--- a/hadoop-ozone/docs/pom.xml
+++ b/hadoop-ozone/docs/pom.xml
@@ -53,6 +53,23 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd;>
   
 
   
+  
+org.apache.rat
+apache-rat-plugin
+
+  
+themes/ozonedoc/static/js/bootstrap.min.js
+themes/ozonedoc/static/js/jquery.min.js
+
themes/ozonedoc/static/css/bootstrap-theme.min.css
+themes/ozonedoc/static/css/bootstrap.min.css.map
+themes/ozonedoc/static/css/bootstrap.min.css
+
themes/ozonedoc/static/css/bootstrap-theme.min.css.map
+
themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
+themes/ozonedoc/layouts/index.html
+themes/ozonedoc/theme.toml
+  
+
+  
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2d13e410/hadoop-ozone/docs/static/OzoneOverview.svg
--
diff --git a/hadoop-ozone/docs/static/OzoneOverview.svg 
b/hadoop-ozone/docs/static/OzoneOverview.svg
index 0120a5c..9d4660d 100644
--- a/hadoop-ozone/docs/static/OzoneOverview.svg
+++ b/hadoop-ozone/docs/static/OzoneOverview.svg
@@ -1,4 +1,17 @@
 
+
 http://www.w3.org/2000/svg; xmlns:xlink="http://www.w3.org/1999/xlink;>
 
 Desktop HD


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-10240. Race between close/recoverLease leads to missing block. Contributed by Jinglun, zhouyingchao and Wei-Chiu Chuang.

2018-08-16 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk d42806160 -> 1290e3c64


HDFS-10240. Race between close/recoverLease leads to missing block. Contributed 
by Jinglun, zhouyingchao and Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1290e3c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1290e3c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1290e3c6

Branch: refs/heads/trunk
Commit: 1290e3c647092f0bfbb250731a6805aba1be8e4b
Parents: d428061
Author: Wei-Chiu Chuang 
Authored: Thu Aug 16 16:29:38 2018 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Aug 16 16:29:38 2018 -0700

--
 .../hdfs/server/blockmanagement/BlockInfo.java  |  4 ++
 .../server/blockmanagement/BlockManager.java|  4 ++
 .../hdfs/server/datanode/BPServiceActor.java|  3 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   | 10 +++
 .../apache/hadoop/hdfs/TestLeaseRecovery2.java  | 65 
 .../hdfs/server/datanode/DataNodeTestUtils.java |  3 +
 6 files changed, 88 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1290e3c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index 111ade1..43f4f47 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -262,6 +262,10 @@ public abstract class BlockInfo extends Block
 return getBlockUCState().equals(BlockUCState.COMPLETE);
   }
 
+  public boolean isUnderRecovery() {
+return getBlockUCState().equals(BlockUCState.UNDER_RECOVERY);
+  }
+
   public final boolean isCompleteOrCommitted() {
 final BlockUCState state = getBlockUCState();
 return state.equals(BlockUCState.COMPLETE) ||

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1290e3c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index d8a3aa3..17f6f6e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -985,6 +985,10 @@ public class BlockManager implements BlockStatsMXBean {
   return false; // no blocks in file yet
 if(lastBlock.isComplete())
   return false; // already completed (e.g. by syncBlock)
+if(lastBlock.isUnderRecovery()) {
+  throw new IOException("Commit or complete block " + commitBlock +
+  ", whereas it is under recovery.");
+}
 
 final boolean committed = commitBlock(lastBlock, commitBlock);
 if (committed && lastBlock.isStriped()) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1290e3c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index f09ff66..8f7a186 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -684,7 +684,8 @@ class BPServiceActor implements Runnable {
 }
   }
 }
-if (ibrManager.sendImmediately() || sendHeartbeat) {
+if (!dn.areIBRDisabledForTests() &&
+(ibrManager.sendImmediately()|| sendHeartbeat)) {
   ibrManager.sendIBRs(bpNamenode, bpRegistration,
   bpos.getBlockPoolId());
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1290e3c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 

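The visible part of this change adds an isUnderRecovery() accessor to BlockInfo and makes the commit/complete path in BlockManager fail fast when lease recovery already owns the last block, which is the race the summary describes between close() and recoverLease(). A self-contained sketch of the guard pattern (an illustrative model, not the BlockManager code):

import java.io.IOException;

/**
 * Illustrative model: refuse to commit or complete a file's last block
 * while lease recovery is still rewriting it.
 */
class BlockCommitGuardSketch {
  enum UCState { UNDER_CONSTRUCTION, UNDER_RECOVERY, COMMITTED, COMPLETE }

  static boolean commitOrComplete(UCState lastBlockState) throws IOException {
    if (lastBlockState == UCState.COMPLETE) {
      return false; // already completed, e.g. by syncBlock
    }
    if (lastBlockState == UCState.UNDER_RECOVERY) {
      // recoverLease() won the race: fail the close() instead of committing
      // a block whose length and generation stamp are about to change.
      throw new IOException(
          "Cannot commit or complete a block that is under recovery.");
    }
    // ... commit the block as usual ...
    return true;
  }
}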
hadoop git commit: YARN-8667. Cleanup symlinks when container restarted by NM. Contributed by Chandni Singh

2018-08-16 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 fbedf8937 -> 5237bdfb5


YARN-8667. Cleanup symlinks when container restarted by NM.
   Contributed by Chandni Singh

(cherry picked from commit d42806160eb95594f08f38bb753cf0306a191a38)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5237bdfb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5237bdfb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5237bdfb

Branch: refs/heads/branch-3.1
Commit: 5237bdfb5a02638dd1a7207ed7f500910d97047f
Parents: fbedf89
Author: Eric Yang 
Authored: Thu Aug 16 18:41:58 2018 -0400
Committer: Eric Yang 
Committed: Thu Aug 16 18:44:47 2018 -0400

--
 .../server/nodemanager/ContainerExecutor.java   | 62 +++-
 .../launcher/ContainerLaunch.java   |  7 +++
 .../nodemanager/TestContainerExecutor.java  | 31 +-
 3 files changed, 85 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5237bdfb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 8e335350..9714731 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -26,6 +26,7 @@ import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.HashMap;
 import java.util.List;
 import java.util.LinkedHashSet;
 import java.util.Map;
@@ -34,6 +35,7 @@ import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -415,20 +417,9 @@ public abstract class ContainerExecutor implements 
Configurable {
 
 if (resources != null) {
   sb.echo("Setting up job resources");
-  for (Map.Entry> resourceEntry :
-  resources.entrySet()) {
-for (String linkName : resourceEntry.getValue()) {
-  if (new Path(linkName).getName().equals(WILDCARD)) {
-// If this is a wildcarded path, link to everything in the
-// directory from the working directory
-for (File wildLink : readDirAsUser(user, resourceEntry.getKey())) {
-  sb.symlink(new Path(wildLink.toString()),
-  new Path(wildLink.getName()));
-}
-  } else {
-sb.symlink(resourceEntry.getKey(), new Path(linkName));
-  }
-}
+  Map symLinks = resolveSymLinks(resources, user);
+  for (Map.Entry symLink : symLinks.entrySet()) {
+sb.symlink(symLink.getKey(), symLink.getValue());
   }
 }
 
@@ -790,6 +781,28 @@ public abstract class ContainerExecutor implements 
Configurable {
   }
 
   /**
+   * Perform any cleanup before the next launch of the container.
+   * @param container container
+   */
+  public void cleanupBeforeRelaunch(Container container)
+  throws IOException, InterruptedException {
+if (container.getLocalizedResources() != null) {
+
+  Map symLinks = resolveSymLinks(
+  container.getLocalizedResources(), container.getUser());
+
+  for (Map.Entry symLink : symLinks.entrySet()) {
+LOG.debug("{} deleting {}", container.getContainerId(),
+symLink.getValue());
+deleteAsUser(new DeletionAsUserContext.Builder()
+.setUser(container.getUser())
+.setSubDir(symLink.getValue())
+.build());
+  }
+}
+  }
+
+  /**
* Get the process-identifier for the container.
*
* @param containerID the container ID
@@ -868,4 +881,25 @@ public abstract class ContainerExecutor implements 
Configurable {
   }
 }
   }
+
+  private Map<Path, Path> resolveSymLinks(Map<Path, List<String>> resources, String user) {
+    Map<Path, Path> symLinks = new HashMap<>();
+    for (Map.Entry<Path, List<String>> resourceEntry :
+        resources.entrySet()) {
+      for (String linkName : resourceEntry.getValue()) {
+if (new 

hadoop git commit: YARN-8667. Cleanup symlinks when container restarted by NM. Contributed by Chandni Singh

2018-08-16 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8512e1a91 -> d42806160


YARN-8667. Cleanup symlinks when container restarted by NM.
   Contributed by Chandni Singh


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d4280616
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d4280616
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d4280616

Branch: refs/heads/trunk
Commit: d42806160eb95594f08f38bb753cf0306a191a38
Parents: 8512e1a
Author: Eric Yang 
Authored: Thu Aug 16 18:41:58 2018 -0400
Committer: Eric Yang 
Committed: Thu Aug 16 18:41:58 2018 -0400

--
 .../server/nodemanager/ContainerExecutor.java   | 62 +++-
 .../launcher/ContainerLaunch.java   |  7 +++
 .../nodemanager/TestContainerExecutor.java  | 31 +-
 3 files changed, 85 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4280616/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 9b604ce..ba272e2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -27,6 +27,7 @@ import java.net.UnknownHostException;
 import java.nio.charset.Charset;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.HashMap;
 import java.util.List;
 import java.util.LinkedHashSet;
 import java.util.Map;
@@ -35,6 +36,7 @@ import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
 import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -417,20 +419,9 @@ public abstract class ContainerExecutor implements 
Configurable {
 
 if (resources != null) {
   sb.echo("Setting up job resources");
-  for (Map.Entry> resourceEntry :
-  resources.entrySet()) {
-for (String linkName : resourceEntry.getValue()) {
-  if (new Path(linkName).getName().equals(WILDCARD)) {
-// If this is a wildcarded path, link to everything in the
-// directory from the working directory
-for (File wildLink : readDirAsUser(user, resourceEntry.getKey())) {
-  sb.symlink(new Path(wildLink.toString()),
-  new Path(wildLink.getName()));
-}
-  } else {
-sb.symlink(resourceEntry.getKey(), new Path(linkName));
-  }
-}
+  Map symLinks = resolveSymLinks(resources, user);
+  for (Map.Entry symLink : symLinks.entrySet()) {
+sb.symlink(symLink.getKey(), symLink.getValue());
   }
 }
 
@@ -792,6 +783,28 @@ public abstract class ContainerExecutor implements 
Configurable {
   }
 
   /**
+   * Perform any cleanup before the next launch of the container.
+   * @param container container
+   */
+  public void cleanupBeforeRelaunch(Container container)
+  throws IOException, InterruptedException {
+if (container.getLocalizedResources() != null) {
+
+  Map symLinks = resolveSymLinks(
+  container.getLocalizedResources(), container.getUser());
+
+  for (Map.Entry symLink : symLinks.entrySet()) {
+LOG.debug("{} deleting {}", container.getContainerId(),
+symLink.getValue());
+deleteAsUser(new DeletionAsUserContext.Builder()
+.setUser(container.getUser())
+.setSubDir(symLink.getValue())
+.build());
+  }
+}
+  }
+
+  /**
* Get the process-identifier for the container.
*
* @param containerID the container ID
@@ -870,4 +883,25 @@ public abstract class ContainerExecutor implements 
Configurable {
   }
 }
   }
+
+  private Map<Path, Path> resolveSymLinks(Map<Path, List<String>> resources, String user) {
+    Map<Path, Path> symLinks = new HashMap<>();
+    for (Map.Entry<Path, List<String>> resourceEntry :
+        resources.entrySet()) {
+      for (String linkName : resourceEntry.getValue()) {
+        if (new Path(linkName).getName().equals(WILDCARD)) {
+  // If this is a wildcarded 

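The refactor funnels link resolution through a single resolveSymLinks() map so the same resolution drives both symlink creation at launch and symlink deletion in cleanupBeforeRelaunch(). A rough, hypothetical sketch of the wildcard-aware resolution idea, using plain java.io.File in place of the NodeManager's Path and readDirAsUser helpers (all names here are illustrative):

import java.io.File;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch of wildcard-aware symlink planning. */
class SymlinkPlanSketch {
  private static final String WILDCARD = "*";

  /**
   * Returns link name to localized target for every link the container
   * working directory should contain; a "*" link expands to one link per
   * entry of the resource directory.
   */
  static Map<String, File> resolveSymLinks(Map<File, List<String>> resources) {
    Map<String, File> symLinks = new HashMap<>();
    for (Map.Entry<File, List<String>> resource : resources.entrySet()) {
      for (String linkName : resource.getValue()) {
        if (new File(linkName).getName().equals(WILDCARD)) {
          File[] children = resource.getKey().listFiles();
          if (children != null) {
            for (File child : children) {
              symLinks.put(child.getName(), child);
            }
          }
        } else {
          symLinks.put(linkName, resource.getKey());
        }
      }
    }
    return symLinks;
  }
}

With such a map in hand, the launch path can create each link and the relaunch path can delete each link as the container user before the container starts again, which is what the patch does with the NodeManager's real types.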
hadoop git commit: HDFS-13746. Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh (Contributed by Siyao Meng via Daniel Templeton)

2018-08-16 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.0 8c0cecb03 -> 27508086d


HDFS-13746. Still occasional "Should be different group" failure in 
TestRefreshUserMappings#testGroupMappingRefresh
(Contributed by Siyao Meng via Daniel Templeton)

Change-Id: I9fad1537ace38367a463d9fe67aaa28d3178fc69
(cherry picked from commit 8512e1a91be3e340d919c7cdc9c09dfb762a6a4e)
(cherry picked from commit fbedf89377e540fb10239a880fc2e01ef7021b93)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27508086
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27508086
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27508086

Branch: refs/heads/branch-3.0
Commit: 27508086de5c82f8b656601fc804ff5d55fed8ad
Parents: 8c0cecb
Author: Daniel Templeton 
Authored: Thu Aug 16 13:43:49 2018 -0700
Committer: Daniel Templeton 
Committed: Thu Aug 16 15:02:18 2018 -0700

--
 .../security/TestRefreshUserMappings.java   | 51 +++-
 1 file changed, 27 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27508086/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
index 8a6c21f..d18d2c7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
@@ -34,7 +34,6 @@ import java.io.UnsupportedEncodingException;
 import java.net.URL;
 import java.net.URLDecoder;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
@@ -46,6 +45,8 @@ import 
org.apache.hadoop.security.authorize.AuthorizationException;
 import org.apache.hadoop.security.authorize.DefaultImpersonationProvider;
 import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.slf4j.event.Level;
 import org.junit.After;
 import org.junit.Before;
@@ -53,6 +54,8 @@ import org.junit.Test;
 
 
 public class TestRefreshUserMappings {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestRefreshUserMappings.class);
   private MiniDFSCluster cluster;
   Configuration config;
   private static final long groupRefreshTimeoutSec = 1;
@@ -119,42 +122,42 @@ public class TestRefreshUserMappings {
 Groups groups = Groups.getUserToGroupsMappingService(config);
 String user = UserGroupInformation.getCurrentUser().getUserName();
 
-System.out.println("First attempt:");
+LOG.debug("First attempt:");
 List g1 = groups.getGroups(user);
-String [] str_groups = new String [g1.size()];
-g1.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-
-System.out.println("Second attempt, should be the same:");
+LOG.debug(g1.toString());
+
+LOG.debug("Second attempt, should be the same:");
 List g2 = groups.getGroups(user);
-g2.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g2.toString());
 for(int i=0; i g3 = groups.getGroups(user);
-g3.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g3.toString());
 for(int i=0; i g4 = groups.getGroups(user);
-g4.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-for(int i=0; i {
+  List g4;
+  try {
+g4 = groups.getGroups(user);
+  } catch (IOException e) {
+return false;
+  }
+  LOG.debug(g4.toString());
+  // if g4 is the same as g3, wait and retry
+  return !g3.equals(g4);
+}, 50, Math.toIntExact(groupRefreshTimeoutSec * 1000 * 30));
   }
-  
+
   @Test
   public void testRefreshSuperUserGroupsConfiguration() throws Exception {
 final String SUPER_USER = "super_user";


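The interesting part of the fix is the switch from fixed sleeps and one-shot assertions to GenericTestUtils.waitFor, which keeps polling until the refreshed mapping actually differs from the earlier snapshot. A small sketch of that polling pattern, assuming the waitFor(check, checkEveryMillis, waitForMillis) helper and hypothetical surrounding variables:

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.security.Groups;
import org.apache.hadoop.test.GenericTestUtils;

class GroupRefreshPollingSketch {
  /**
   * Poll until the group mapping for the user differs from the earlier
   * snapshot, checking every 50 ms for at most 30 seconds.
   */
  static void awaitRefreshedGroups(Groups groups, String user,
      List<String> before) throws Exception {
    GenericTestUtils.waitFor(() -> {
      List<String> now;
      try {
        now = groups.getGroups(user);
      } catch (IOException e) {
        return false; // transient lookup failure: keep polling
      }
      return !before.equals(now);
    }, 50, 30_000);
  }
}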
-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-13746. Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh (Contributed by Siyao Meng via Daniel Templeton)

2018-08-16 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 97c193424 -> fbedf8937


HDFS-13746. Still occasional "Should be different group" failure in 
TestRefreshUserMappings#testGroupMappingRefresh
(Contributed by Siyao Meng via Daniel Templeton)

Change-Id: I9fad1537ace38367a463d9fe67aaa28d3178fc69
(cherry picked from commit 8512e1a91be3e340d919c7cdc9c09dfb762a6a4e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fbedf893
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fbedf893
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fbedf893

Branch: refs/heads/branch-3.1
Commit: fbedf89377e540fb10239a880fc2e01ef7021b93
Parents: 97c1934
Author: Daniel Templeton 
Authored: Thu Aug 16 13:43:49 2018 -0700
Committer: Daniel Templeton 
Committed: Thu Aug 16 15:01:34 2018 -0700

--
 .../security/TestRefreshUserMappings.java   | 51 +++-
 1 file changed, 27 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fbedf893/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
index 0e7dfc3..2d7410a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
@@ -34,7 +34,6 @@ import java.io.UnsupportedEncodingException;
 import java.net.URL;
 import java.net.URLDecoder;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
@@ -46,6 +45,8 @@ import 
org.apache.hadoop.security.authorize.AuthorizationException;
 import org.apache.hadoop.security.authorize.DefaultImpersonationProvider;
 import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.slf4j.event.Level;
 import org.junit.After;
 import org.junit.Before;
@@ -53,6 +54,8 @@ import org.junit.Test;
 
 
 public class TestRefreshUserMappings {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestRefreshUserMappings.class);
   private MiniDFSCluster cluster;
   Configuration config;
   private static final long groupRefreshTimeoutSec = 1;
@@ -119,42 +122,42 @@ public class TestRefreshUserMappings {
 Groups groups = Groups.getUserToGroupsMappingService(config);
 String user = UserGroupInformation.getCurrentUser().getUserName();
 
-System.out.println("First attempt:");
+LOG.debug("First attempt:");
 List g1 = groups.getGroups(user);
-String [] str_groups = new String [g1.size()];
-g1.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-
-System.out.println("Second attempt, should be the same:");
+LOG.debug(g1.toString());
+
+LOG.debug("Second attempt, should be the same:");
 List g2 = groups.getGroups(user);
-g2.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g2.toString());
 for(int i=0; i g3 = groups.getGroups(user);
-g3.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g3.toString());
 for(int i=0; i g4 = groups.getGroups(user);
-g4.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-for(int i=0; i {
+  List g4;
+  try {
+g4 = groups.getGroups(user);
+  } catch (IOException e) {
+return false;
+  }
+  LOG.debug(g4.toString());
+  // if g4 is the same as g3, wait and retry
+  return !g3.equals(g4);
+}, 50, Math.toIntExact(groupRefreshTimeoutSec * 1000 * 30));
   }
-  
+
   @Test
   public void testRefreshSuperUserGroupsConfiguration() throws Exception {
 final String SUPER_USER = "super_user";


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-13746. Still occasional "Should be different group" failure in TestRefreshUserMappings#testGroupMappingRefresh (Contributed by Siyao Meng via Daniel Templeton)

2018-08-16 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5ef29087a -> 8512e1a91


HDFS-13746. Still occasional "Should be different group" failure in 
TestRefreshUserMappings#testGroupMappingRefresh
(Contributed by Siyao Meng via Daniel Templeton)

Change-Id: I9fad1537ace38367a463d9fe67aaa28d3178fc69


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8512e1a9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8512e1a9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8512e1a9

Branch: refs/heads/trunk
Commit: 8512e1a91be3e340d919c7cdc9c09dfb762a6a4e
Parents: 5ef2908
Author: Daniel Templeton 
Authored: Thu Aug 16 13:43:49 2018 -0700
Committer: Daniel Templeton 
Committed: Thu Aug 16 15:00:45 2018 -0700

--
 .../security/TestRefreshUserMappings.java   | 51 +++-
 1 file changed, 27 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8512e1a9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
index 0e7dfc3..2d7410a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
@@ -34,7 +34,6 @@ import java.io.UnsupportedEncodingException;
 import java.net.URL;
 import java.net.URLDecoder;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.List;
 
 import org.apache.hadoop.conf.Configuration;
@@ -46,6 +45,8 @@ import 
org.apache.hadoop.security.authorize.AuthorizationException;
 import org.apache.hadoop.security.authorize.DefaultImpersonationProvider;
 import org.apache.hadoop.security.authorize.ProxyUsers;
 import org.apache.hadoop.test.GenericTestUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 import org.slf4j.event.Level;
 import org.junit.After;
 import org.junit.Before;
@@ -53,6 +54,8 @@ import org.junit.Test;
 
 
 public class TestRefreshUserMappings {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestRefreshUserMappings.class);
   private MiniDFSCluster cluster;
   Configuration config;
   private static final long groupRefreshTimeoutSec = 1;
@@ -119,42 +122,42 @@ public class TestRefreshUserMappings {
 Groups groups = Groups.getUserToGroupsMappingService(config);
 String user = UserGroupInformation.getCurrentUser().getUserName();
 
-System.out.println("First attempt:");
+LOG.debug("First attempt:");
 List g1 = groups.getGroups(user);
-String [] str_groups = new String [g1.size()];
-g1.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-
-System.out.println("Second attempt, should be the same:");
+LOG.debug(g1.toString());
+
+LOG.debug("Second attempt, should be the same:");
 List g2 = groups.getGroups(user);
-g2.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g2.toString());
 for(int i=0; i g3 = groups.getGroups(user);
-g3.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
+LOG.debug(g3.toString());
 for(int i=0; i g4 = groups.getGroups(user);
-g4.toArray(str_groups);
-System.out.println(Arrays.toString(str_groups));
-for(int i=0; i {
+  List g4;
+  try {
+g4 = groups.getGroups(user);
+  } catch (IOException e) {
+return false;
+  }
+  LOG.debug(g4.toString());
+  // if g4 is the same as g3, wait and retry
+  return !g3.equals(g4);
+}, 50, Math.toIntExact(groupRefreshTimeoutSec * 1000 * 30));
   }
-  
+
   @Test
   public void testRefreshSuperUserGroupsConfiguration() throws Exception {
 final String SUPER_USER = "super_user";


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDDS-179. CloseContainer/PutKey command should be synchronized with write operations. Contributed by Shashikant Banerjee.

2018-08-16 Thread msingh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0e832e7a7 -> 5ef29087a


HDDS-179. CloseContainer/PutKey command should be synchronized with write 
operations. Contributed by Shashikant Banerjee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ef29087
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ef29087
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ef29087

Branch: refs/heads/trunk
Commit: 5ef29087ad27f4f6b815dbc08ea7427d14df58e1
Parents: 0e832e7
Author: Mukul Kumar Singh 
Authored: Thu Aug 16 23:35:19 2018 +0530
Committer: Mukul Kumar Singh 
Committed: Thu Aug 16 23:35:19 2018 +0530

--
 .../server/ratis/ContainerStateMachine.java | 323 +++
 .../server/TestContainerStateMachine.java   | 201 
 2 files changed, 467 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ef29087/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
--
diff --git 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
index 15e991a..52ea3aa 100644
--- 
a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
+++ 
b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.ozone.container.common.transport.server.ratis;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.hdds.HddsUtils;
 import org.apache.ratis.protocol.RaftGroupId;
@@ -52,6 +53,9 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -95,6 +99,13 @@ import java.util.concurrent.ThreadPoolExecutor;
  * {@link #applyTransaction} need to be enforced in the StateMachine
  * implementation. For example, synchronization between writeChunk and
  * createContainer in {@link ContainerStateMachine}.
+ *
+ * PutKey is synchronized with WriteChunk operations, PutKey for a block is
+ * executed only after all the WriteChunk preceding the PutKey have finished.
+ *
+ * CloseContainer is synchronized with WriteChunk and PutKey operations,
+ * CloseContainer for a container is processed after all the preceding write
+ * operations for the container have finished.
  * */
 public class ContainerStateMachine extends BaseStateMachine {
   static final Logger LOG = LoggerFactory.getLogger(
@@ -105,15 +116,14 @@ public class ContainerStateMachine extends 
BaseStateMachine {
   private ThreadPoolExecutor chunkExecutor;
   private final ConcurrentHashMap>
   writeChunkFutureMap;
-  private final ConcurrentHashMap>
-  createContainerFutureMap;
+  private final ConcurrentHashMap stateMachineMap;
 
-  ContainerStateMachine(ContainerDispatcher dispatcher,
+  public ContainerStateMachine(ContainerDispatcher dispatcher,
   ThreadPoolExecutor chunkExecutor) {
 this.dispatcher = dispatcher;
 this.chunkExecutor = chunkExecutor;
 this.writeChunkFutureMap = new ConcurrentHashMap<>();
-this.createContainerFutureMap = new ConcurrentHashMap<>();
+this.stateMachineMap = new ConcurrentHashMap<>();
   }
 
   @Override
@@ -203,32 +213,6 @@ public class ContainerStateMachine extends 
BaseStateMachine {
 return dispatchCommand(requestProto)::toByteString;
   }
 
-  private CompletableFuture handleWriteChunk(
-  ContainerCommandRequestProto requestProto, long entryIndex) {
-final WriteChunkRequestProto write = requestProto.getWriteChunk();
-long containerID = write.getBlockID().getContainerID();
-CompletableFuture future =
-createContainerFutureMap.get(containerID);
-CompletableFuture writeChunkFuture;
-if (future != null) {
-  writeChunkFuture = future.thenApplyAsync(
-  v -> runCommand(requestProto), chunkExecutor);
-} else {
-  writeChunkFuture = CompletableFuture.supplyAsync(
-  () -> runCommand(requestProto), chunkExecutor);
-}
-writeChunkFutureMap.put(entryIndex, writeChunkFuture);
-return writeChunkFuture;
-  }
-
-  private CompletableFuture handleCreateContainer(
-  

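The new class javadoc spells out the ordering contract: PutKey for a block runs only after all of that block's preceding WriteChunks, and CloseContainer runs only after every preceding write for the container. One generic way to get that kind of ordering is to chain per-container CompletableFutures; the sketch below is illustrative only and is not the ContainerStateMachine implementation:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Illustrative ordering: operations for a container are chained so that a
 * "close" only runs after every previously submitted write has completed.
 */
class OrderedContainerOpsSketch {
  private final Map<Long, CompletableFuture<Void>> lastOp =
      new ConcurrentHashMap<>();
  private final ExecutorService pool = Executors.newFixedThreadPool(4);

  CompletableFuture<Void> submitWrite(long containerId, Runnable write) {
    return chain(containerId, write);
  }

  CompletableFuture<Void> submitClose(long containerId, Runnable close) {
    // Runs only after all earlier writes queued for this container.
    return chain(containerId, close);
  }

  private CompletableFuture<Void> chain(long containerId, Runnable op) {
    return lastOp.compute(containerId, (id, prev) ->
        prev == null
            ? CompletableFuture.runAsync(op, pool)
            : prev.thenRunAsync(op, pool));
  }
}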
hadoop git commit: YARN-8474. Fixed ApiServiceClient kerberos negotiation. Contributed by Billie Rinaldi

2018-08-16 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 819a2a6f1 -> 97c193424


YARN-8474. Fixed ApiServiceClient kerberos negotiation.
   Contributed by Billie Rinaldi

(cherry picked from commit 8990eaf5925afa533fbd9c3641859a146dc5a22c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/97c19342
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/97c19342
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/97c19342

Branch: refs/heads/branch-3.1
Commit: 97c1934247b4863b60e6233c21cedeb6bc1e2a11
Parents: 819a2a6
Author: Eric Yang 
Authored: Thu Aug 16 12:46:37 2018 -0400
Committer: Eric Yang 
Committed: Thu Aug 16 12:50:52 2018 -0400

--
 .../hadoop-yarn-services-api/pom.xml| 57 +
 .../yarn/service/client/ApiServiceClient.java   | 85 ++--
 .../client/TestSecureApiServiceClient.java  | 83 +++
 3 files changed, 218 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/97c19342/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
index 2a2ee7f..646781f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
@@ -93,10 +93,18 @@
 
 
   org.apache.hadoop
+  hadoop-yarn-client
+
+
+  org.apache.hadoop
   hadoop-yarn-common
 
 
   org.apache.hadoop
+  hadoop-yarn-registry
+
+
+  org.apache.hadoop
   hadoop-yarn-server-common
 
 
@@ -104,6 +112,14 @@
   hadoop-common
 
 
+  org.apache.hadoop
+  hadoop-annotations
+
+
+  org.apache.hadoop
+  hadoop-auth
+
+
   org.slf4j
   slf4j-api
 
@@ -120,6 +136,42 @@
   jsr311-api
 
 
+  javax.servlet
+  javax.servlet-api
+
+
+  commons-codec
+  commons-codec
+
+
+  commons-io
+  commons-io
+
+
+  org.apache.commons
+  commons-lang3
+
+
+  com.google.guava
+  guava
+
+
+  com.sun.jersey
+  jersey-client
+
+
+  org.eclipse.jetty
+  jetty-server
+
+
+  org.eclipse.jetty
+  jetty-util
+
+
+  org.eclipse.jetty
+  jetty-servlet
+
+
   org.mockito
   mockito-all
   test
@@ -155,6 +207,11 @@
   curator-test
   test
 
+
+  org.apache.hadoop
+  hadoop-minikdc
+  test
+
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/97c19342/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
index 18d45fa..e6dee79 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
@@ -20,19 +20,26 @@ import static 
org.apache.hadoop.yarn.service.utils.ServiceApiUtil.jsonSerDeser;
 
 import java.io.File;
 import java.io.IOException;
+import java.net.URI;
+import java.nio.charset.StandardCharsets;
+import java.security.PrivilegedExceptionAction;
 import java.text.MessageFormat;
 import java.util.List;
 import java.util.Map;
 
+import javax.ws.rs.core.HttpHeaders;
 import javax.ws.rs.core.MediaType;
 
 import com.google.common.base.Preconditions;
-import org.apache.commons.lang.StringUtils;
+import org.apache.commons.codec.binary.Base64;
+import com.google.common.base.Strings;
+import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import 

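The added imports (Base64, HttpHeaders, PrivilegedExceptionAction, UserGroupInformation) point at SPNEGO-style negotiation: issue the REST call as the logged-in Kerberos principal and let the Negotiate handshake supply the Authorization header. As an assumption about the general approach only (the body of the patch is truncated here), a sketch using Hadoop's stock AuthenticatedURL and KerberosAuthenticator client classes:

import java.net.HttpURLConnection;
import java.net.URL;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
import org.apache.hadoop.security.authentication.client.KerberosAuthenticator;

class SpnegoGetSketch {
  /**
   * Issue a GET against a SPNEGO-protected endpoint as the current Kerberos
   * login and return the HTTP status code.
   */
  static int get(String url) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    return ugi.doAs((PrivilegedExceptionAction<Integer>) () -> {
      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
      HttpURLConnection conn =
          new AuthenticatedURL(new KerberosAuthenticator())
              .openConnection(new URL(url), token);
      return conn.getResponseCode();
    });
  }
}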
hadoop git commit: HADOOP-15642. Update aws-sdk version to 1.11.375. Contributed by Steve Loughran.

2018-08-16 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/trunk 8990eaf59 -> 0e832e7a7


HADOOP-15642. Update aws-sdk version to 1.11.375.
Contributed by Steve Loughran.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0e832e7a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0e832e7a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0e832e7a

Branch: refs/heads/trunk
Commit: 0e832e7a74ea26d1de22f5b0bb142a1a23580dcf
Parents: 8990eaf
Author: Steve Loughran 
Authored: Thu Aug 16 09:58:46 2018 -0700
Committer: Steve Loughran 
Committed: Thu Aug 16 09:58:46 2018 -0700

--
 hadoop-project/pom.xml  |   2 +-
 .../site/markdown/tools/hadoop-aws/testing.md   | 157 +++
 2 files changed, 158 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0e832e7a/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index bb0e48b..b45b495 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -142,7 +142,7 @@
 1.0-beta-1
 1.0-alpha-8
 900
-1.11.271
+1.11.375
 2.3.4
 1.5
 

hadoop git commit: YARN-8474. Fixed ApiServiceClient kerberos negotiation. Contributed by Billie Rinaldi

2018-08-16 Thread eyang
Repository: hadoop
Updated Branches:
  refs/heads/trunk cb21eaa02 -> 8990eaf59


YARN-8474. Fixed ApiServiceClient kerberos negotiation.
   Contributed by Billie Rinaldi


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8990eaf5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8990eaf5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8990eaf5

Branch: refs/heads/trunk
Commit: 8990eaf5925afa533fbd9c3641859a146dc5a22c
Parents: cb21eaa
Author: Eric Yang 
Authored: Thu Aug 16 12:46:37 2018 -0400
Committer: Eric Yang 
Committed: Thu Aug 16 12:46:37 2018 -0400

--
 .../hadoop-yarn-services-api/pom.xml| 57 ++
 .../yarn/service/client/ApiServiceClient.java   | 83 ++--
 .../client/TestSecureApiServiceClient.java  | 83 
 3 files changed, 217 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8990eaf5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
index ab76218..7386e41 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/pom.xml
@@ -93,10 +93,18 @@
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-client</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-yarn-common</artifactId>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-yarn-registry</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-yarn-server-common</artifactId>
     </dependency>
     <dependency>
@@ -104,6 +112,14 @@
       <artifactId>hadoop-common</artifactId>
     </dependency>
     <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-annotations</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-auth</artifactId>
+    </dependency>
+    <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-api</artifactId>
     </dependency>
@@ -120,6 +136,42 @@
       <artifactId>jsr311-api</artifactId>
     </dependency>
     <dependency>
+      <groupId>javax.servlet</groupId>
+      <artifactId>javax.servlet-api</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>commons-codec</groupId>
+      <artifactId>commons-codec</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>commons-io</groupId>
+      <artifactId>commons-io</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-lang3</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-client</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.eclipse.jetty</groupId>
+      <artifactId>jetty-server</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.eclipse.jetty</groupId>
+      <artifactId>jetty-util</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.eclipse.jetty</groupId>
+      <artifactId>jetty-servlet</artifactId>
+    </dependency>
+    <dependency>
       <groupId>org.mockito</groupId>
       <artifactId>mockito-all</artifactId>
       <scope>test</scope>
@@ -155,6 +207,11 @@
       <artifactId>curator-test</artifactId>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-minikdc</artifactId>
+      <scope>test</scope>
+    </dependency>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8990eaf5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
index f5162e9..9229446 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
@@ -20,21 +20,28 @@ import static 
org.apache.hadoop.yarn.service.utils.ServiceApiUtil.jsonSerDeser;
 
 import java.io.File;
 import java.io.IOException;
+import java.net.URI;
+import java.nio.charset.StandardCharsets;
+import java.security.PrivilegedExceptionAction;
 import java.text.MessageFormat;
 import java.util.List;
 import java.util.Map;
 
+import javax.ws.rs.core.HttpHeaders;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.UriBuilder;
 
 import com.google.common.base.Preconditions;
+
+import org.apache.commons.codec.binary.Base64;
 import com.google.common.base.Strings;
 import org.apache.commons.lang3.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.security.UserGroupInformation;
-import 
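
The change above wires SPNEGO/Kerberos negotiation into ApiServiceClient through UserGroupInformation and the hadoop-auth module. As a rough illustration of that general pattern only (the class name, method name and serviceUrl parameter below are invented, not taken from the patch), a request issued as the Kerberos-authenticated login user might look like this:

// Illustrative sketch only, not code from the YARN-8474 patch.
// It assumes the process has already logged in via Kerberos.
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class SpnegoGetSketch {
  public static int fetchStatus(String serviceUrl) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    // Run the HTTP call as the logged-in user so hadoop-auth can perform
    // the SPNEGO handshake with the caller's Kerberos credentials.
    return ugi.doAs((PrivilegedExceptionAction<Integer>) () -> {
      AuthenticatedURL.Token token = new AuthenticatedURL.Token();
      HttpURLConnection conn =
          new AuthenticatedURL().openConnection(new URL(serviceUrl), token);
      conn.setRequestMethod("GET");
      try {
        return conn.getResponseCode();
      } finally {
        conn.disconnect();
      }
    });
  }
}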

hadoop git commit: HADOOP-15669. ABFS: Improve HTTPS Performance. Contributed by Vishwajeet Dusane.

2018-08-16 Thread tmarquardt
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-15407 7262485ae -> fa6fc264c


HADOOP-15669. ABFS: Improve HTTPS Performance.
Contributed by Vishwajeet Dusane.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa6fc264
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa6fc264
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa6fc264

Branch: refs/heads/HADOOP-15407
Commit: fa6fc264c1088d6307800ec7ee1d7ff84f36e2ea
Parents: 7262485
Author: Thomas Marquardt 
Authored: Thu Aug 16 16:19:13 2018 +
Committer: Thomas Marquardt 
Committed: Thu Aug 16 16:19:13 2018 +

--
 hadoop-project/pom.xml  |   7 +
 hadoop-tools/hadoop-azure/pom.xml   |   7 +-
 .../hadoop/fs/azurebfs/AbfsConfiguration.java   |   8 +
 .../azurebfs/constants/ConfigurationKeys.java   |   1 +
 .../constants/FileSystemConfigurations.java |   4 +
 .../hadoop/fs/azurebfs/services/AbfsClient.java |  43 +++-
 .../fs/azurebfs/services/AbfsHttpOperation.java |  11 +
 .../fs/azurebfs/utils/SSLSocketFactoryEx.java   | 240 +++
 .../TestAbfsConfigurationFieldsValidation.java  |  42 +++-
 .../fs/azurebfs/services/TestAbfsClient.java|  40 +++-
 10 files changed, 375 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa6fc264/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 64aa43e..61a8650 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1345,6 +1345,13 @@
 7.0.0
  
 
+  
+  
+org.wildfly.openssl
+wildfly-openssl
+1.0.4.Final
+  
+
   
 org.threadly
 threadly

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa6fc264/hadoop-tools/hadoop-azure/pom.xml
--
diff --git a/hadoop-tools/hadoop-azure/pom.xml 
b/hadoop-tools/hadoop-azure/pom.xml
index 7d0406c..7152f638 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -197,13 +197,18 @@
   jackson-mapper-asl
   compile
 
+
 
   org.codehaus.jackson
   jackson-core-asl
   compile
 
 
-
+
+  org.wildfly.openssl
+  wildfly-openssl
+  compile
+
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa6fc264/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
index 1fb5df9..e647ae8 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
@@ -44,6 +44,10 @@ import 
org.apache.hadoop.fs.azurebfs.diagnostics.LongConfigurationBasicValidator
 import 
org.apache.hadoop.fs.azurebfs.diagnostics.StringConfigurationBasicValidator;
 import org.apache.hadoop.fs.azurebfs.services.KeyProvider;
 import org.apache.hadoop.fs.azurebfs.services.SimpleKeyProvider;
+import org.apache.hadoop.fs.azurebfs.utils.SSLSocketFactoryEx;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_SSL_CHANNEL_MODE_KEY;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.DEFAULT_FS_AZURE_SSL_CHANNEL_MODE;
 
 /**
  * Configuration for Azure Blob FileSystem.
@@ -270,6 +274,10 @@ public class AbfsConfiguration{
 return this.userAgentId;
   }
 
+  public SSLSocketFactoryEx.SSLChannelMode getPreferredSSLFactoryOption() {
+return configuration.getEnum(FS_AZURE_SSL_CHANNEL_MODE_KEY, 
DEFAULT_FS_AZURE_SSL_CHANNEL_MODE);
+  }
+
   void validateStorageAccountKeys() throws InvalidConfigurationValueException {
 Base64StringConfigurationBasicValidator validator = new 
Base64StringConfigurationBasicValidator(
 ConfigurationKeys.FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, "", true);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa6fc264/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
index 9c805a2..16ddd90 100644
--- 
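
The AbfsConfiguration hunk above reads the preferred SSL channel mode as an enum-valued setting through Configuration.getEnum(). A minimal sketch of that pattern follows, assuming a hypothetical key name and enum; the real constants live in ConfigurationKeys, FileSystemConfigurations and SSLSocketFactoryEx and are not reproduced here.

// Sketch of the Configuration.getEnum() pattern; the ChannelMode enum and
// the "fs.azure.ssl.channel.mode.example" key are hypothetical stand-ins.
import org.apache.hadoop.conf.Configuration;

public class SslChannelModeSketch {
  // Hypothetical channel modes; the patch defines its own enum.
  public enum ChannelMode { DEFAULT, OPENSSL, JSSE }

  public static ChannelMode preferredMode(Configuration conf) {
    // getEnum falls back to the supplied default when the key is unset.
    return conf.getEnum("fs.azure.ssl.channel.mode.example", ChannelMode.DEFAULT);
  }
}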

hadoop git commit: YARN-7953. [GQ] Data structures for federation global queues calculations. Contributed by Abhishek Modi.

2018-08-16 Thread botong
Repository: hadoop
Updated Branches:
  refs/heads/YARN-7402 91dd58b76 -> 717874a16


YARN-7953. [GQ] Data structures for federation global queues calculations. 
Contributed by Abhishek Modi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/717874a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/717874a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/717874a1

Branch: refs/heads/YARN-7402
Commit: 717874a166aa22f8334dda72b35c55dbad1eccf7
Parents: 91dd58b
Author: Botong Huang 
Authored: Thu Aug 16 08:28:35 2018 -0700
Committer: Botong Huang 
Committed: Thu Aug 16 08:28:35 2018 -0700

--
 .../pom.xml |   3 +
 ...ederationGlobalQueueValidationException.java |  28 +
 .../globalqueues/FederationGlobalView.java  | 198 +
 .../globalqueues/FederationQueue.java   | 761 +++
 .../globalqueues/package-info.java  |  17 +
 .../globalqueues/GlobalQueueTestUtil.java   | 133 
 .../globalqueues/TestFederationQueue.java   |  98 +++
 .../resources/globalqueues/basic-queue.json |   9 +
 .../globalqueues/tree-queue-adaptable.json  |  96 +++
 .../test/resources/globalqueues/tree-queue.json | 128 
 10 files changed, 1471 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/717874a1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
index c137c9e..f0097af 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
@@ -108,6 +108,9 @@
   
             <exclude>src/test/resources/schedulerInfo1.json</exclude>
             <exclude>src/test/resources/schedulerInfo2.json</exclude>
+            <exclude>src/test/resources/globalqueues/basic-queue.json</exclude>
+            <exclude>src/test/resources/globalqueues/tree-queue.json</exclude>
+            <exclude>src/test/resources/globalqueues/tree-queue-adaptable.json</exclude>
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/717874a1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/globalqueues/FederationGlobalQueueValidationException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/globalqueues/FederationGlobalQueueValidationException.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/globalqueues/FederationGlobalQueueValidationException.java
new file mode 100644
index 000..3a18763
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/globalqueues/FederationGlobalQueueValidationException.java
@@ -0,0 +1,28 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.globalpolicygenerator.globalqueues;
+
+/**
+ * Exception thrown when FederationQueue is not valid.
+ */
+public class FederationGlobalQueueValidationException extends Exception {
+
+  public FederationGlobalQueueValidationException(String s) {
+super(s);
+  }
+}
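
A brief usage sketch for the exception defined above: the real validation rules and FederationQueue fields are elsewhere in the patch, so the helper below and its capacity argument are hypothetical, and the sketch assumes it sits in the same package as the exception.

// Illustrative only; checkCapacity and its bounds are made up for the example.
public class QueueValidationSketch {
  static void checkCapacity(String queueName, float capacity)
      throws FederationGlobalQueueValidationException {
    // Treat capacity as a fraction in [0, 1]; reject anything outside it.
    if (capacity < 0.0f || capacity > 1.0f) {
      throw new FederationGlobalQueueValidationException(
          "Queue " + queueName + " has invalid capacity " + capacity);
    }
  }
}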


hadoop git commit: YARN-8656. container-executor should not write cgroup tasks files for docker containers. Contributed by Jim Brennan

2018-08-16 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-3.1 05547b1e0 -> 819a2a6f1


YARN-8656. container-executor should not write cgroup tasks files for docker 
containers. Contributed by Jim Brennan

(cherry picked from commit cb21eaa026d80a2c9836030d959c0dd7f87c4d6b)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/819a2a6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/819a2a6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/819a2a6f

Branch: refs/heads/branch-3.1
Commit: 819a2a6f10d1701b48a960f8436ea8dfded7e601
Parents: 05547b1
Author: Jason Lowe 
Authored: Thu Aug 16 10:06:17 2018 -0500
Committer: Jason Lowe 
Committed: Thu Aug 16 10:09:56 2018 -0500

--
 .../runtime/DockerLinuxContainerRuntime.java|  4 +--
 .../impl/container-executor.c   | 21 +---
 .../impl/container-executor.h   |  3 +--
 .../main/native/container-executor/impl/main.c  | 26 +++-
 .../runtime/TestDockerContainerRuntime.java |  8 +++---
 5 files changed, 10 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/819a2a6f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 4fe81e7..5c1b494 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -1160,7 +1160,6 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
     List<String> localDirs = ctx.getExecutionAttribute(LOCAL_DIRS);
     @SuppressWarnings("unchecked")
     List<String> logDirs = ctx.getExecutionAttribute(LOG_DIRS);
-String resourcesOpts = ctx.getExecutionAttribute(RESOURCES_OPTIONS);
 
 PrivilegedOperation launchOp = new PrivilegedOperation(
 PrivilegedOperation.OperationType.LAUNCH_DOCKER_CONTAINER);
@@ -1178,8 +1177,7 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
 localDirs),
 StringUtils.join(PrivilegedOperation.LINUX_FILE_PATH_SEPARATOR,
 logDirs),
-commandFile,
-resourcesOpts);
+commandFile);
 
 String tcCommandFile = ctx.getExecutionAttribute(TC_COMMAND_FILE);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/819a2a6f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 6734b94..f8b89ee 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1547,9 +1547,7 @@ int launch_docker_container_as_user(const char * user, 
const char *app_id,
   const char *container_id, const char *work_dir,
   const char *script_name, const char *cred_file,
   const char *pid_file, char* const* local_dirs,
-  char* const* log_dirs, const char *command_file,
-  const char *resources_key,
-  char* const* resources_values) {
+  char* const* log_dirs, const char *command_file) 
{
   int exit_code = -1;
   char *script_file_dest = NULL;
   char *cred_file_dest = NULL;
@@ -1732,23 +1730,6 @@ int 

hadoop git commit: YARN-8656. container-executor should not write cgroup tasks files for docker containers. Contributed by Jim Brennan

2018-08-16 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6df606f1b -> cb21eaa02


YARN-8656. container-executor should not write cgroup tasks files for docker 
containers. Contributed by Jim Brennan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb21eaa0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb21eaa0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb21eaa0

Branch: refs/heads/trunk
Commit: cb21eaa026d80a2c9836030d959c0dd7f87c4d6b
Parents: 6df606f
Author: Jason Lowe 
Authored: Thu Aug 16 10:06:17 2018 -0500
Committer: Jason Lowe 
Committed: Thu Aug 16 10:06:17 2018 -0500

--
 .../runtime/DockerLinuxContainerRuntime.java|  4 +--
 .../impl/container-executor.c   | 21 +---
 .../impl/container-executor.h   |  3 +--
 .../main/native/container-executor/impl/main.c  | 26 +++-
 .../runtime/TestDockerContainerRuntime.java |  8 +++---
 5 files changed, 10 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb21eaa0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index 5d6f61e..1872830 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -1156,7 +1156,6 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
     List<String> localDirs = ctx.getExecutionAttribute(LOCAL_DIRS);
     @SuppressWarnings("unchecked")
     List<String> logDirs = ctx.getExecutionAttribute(LOG_DIRS);
-String resourcesOpts = ctx.getExecutionAttribute(RESOURCES_OPTIONS);
 
 PrivilegedOperation launchOp = new PrivilegedOperation(
 PrivilegedOperation.OperationType.LAUNCH_DOCKER_CONTAINER);
@@ -1174,8 +1173,7 @@ public class DockerLinuxContainerRuntime implements 
LinuxContainerRuntime {
 localDirs),
 StringUtils.join(PrivilegedOperation.LINUX_FILE_PATH_SEPARATOR,
 logDirs),
-commandFile,
-resourcesOpts);
+commandFile);
 
 String tcCommandFile = ctx.getExecutionAttribute(TC_COMMAND_FILE);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb21eaa0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index 6734b94..f8b89ee 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -1547,9 +1547,7 @@ int launch_docker_container_as_user(const char * user, 
const char *app_id,
   const char *container_id, const char *work_dir,
   const char *script_name, const char *cred_file,
   const char *pid_file, char* const* local_dirs,
-  char* const* log_dirs, const char *command_file,
-  const char *resources_key,
-  char* const* resources_values) {
+  char* const* log_dirs, const char *command_file) 
{
   int exit_code = -1;
   char *script_file_dest = NULL;
   char *cred_file_dest = NULL;
@@ -1732,23 +1730,6 @@ int launch_docker_container_as_user(const char * user, 
const char *app_id,
   }
 
   if (pid 
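
For background, the "tasks" file in the commit subject is the cgroup v1 membership file: writing a PID into it moves that process into the cgroup. For docker containers the docker runtime already places the container process in its own cgroup (typically YARN points it at a parent cgroup), which is why the docker launch path no longer needs the resources_key/resources_values arguments removed above. The sketch below only illustrates the general tasks-file mechanism in Java; it is not code from the patch (the patch itself is C in container-executor), and the cgroupDir argument is hypothetical.

// Background sketch of cgroup v1 task assignment, not from the patch.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class CgroupTasksSketch {
  static void addPidToCgroup(String cgroupDir, long pid) throws IOException {
    Path tasks = Paths.get(cgroupDir, "tasks");
    // Appending a PID to the tasks file moves that process into the cgroup.
    Files.write(tasks, (pid + "\n").getBytes(StandardCharsets.UTF_8),
        StandardOpenOption.APPEND);
  }
}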

hadoop git commit: HDFS-13829. Remove redundant condition judgement in DirectoryScanner#scan. Contributed by liaoyuxiangqin.

2018-08-16 Thread yqlin
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7dc79a8b5 -> 6df606f1b


HDFS-13829. Remove redundant condition judgement in DirectoryScanner#scan. 
Contributed by liaoyuxiangqin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6df606f1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6df606f1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6df606f1

Branch: refs/heads/trunk
Commit: 6df606f1b4edabd15ae2896c5df0fe675bcf0138
Parents: 7dc79a8
Author: Yiqun Lin 
Authored: Thu Aug 16 18:44:18 2018 +0800
Committer: Yiqun Lin 
Committed: Thu Aug 16 18:44:18 2018 +0800

--
 .../org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6df606f1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 39665e3..10951e9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -450,7 +450,7 @@ public class DirectoryScanner implements Runnable {
   if (d < blockpoolReport.length) {
 // There may be multiple on-disk records for the same block, don't 
increment
 // the memory record pointer if so.
-ScanInfo nextInfo = blockpoolReport[Math.min(d, 
blockpoolReport.length - 1)];
+ScanInfo nextInfo = blockpoolReport[d];
 if (nextInfo.getBlockId() != info.getBlockId()) {
   ++m;
 }
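
The one-line change above drops a redundant clamp: inside the guard d < blockpoolReport.length, Math.min(d, blockpoolReport.length - 1) can only ever return d, so indexing with d directly is equivalent. A standalone illustration of the same reasoning, not DirectoryScanner code:

// Minimal example; the array, index and null "no entry" result are invented.
public class RedundantClampSketch {
  static Long blockIdAt(long[] report, int d) {
    if (d < report.length) {
      // Here d < report.length, so Math.min(d, report.length - 1) == d.
      return report[d];
    }
    return null; // hypothetical "no entry" result
  }
}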

