[2/3] hadoop git commit: HADOOP-14749. review s3guard docs & code prior to merge. Contributed by Steve Loughran

2017-08-12 Thread stevel
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e531ae25/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
--
diff --git a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
index c28e354..fe67d69 100644
--- a/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
+++ b/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
@@ -20,7 +20,7 @@
 
 ## Overview
 
-*S3Guard* is an experimental feature for the S3A client of the S3 Filesystem,
+*S3Guard* is an experimental feature for the S3A client of the S3 object store,
 which can use a (consistent) database as the store of metadata about objects
 in an S3 bucket.
 
@@ -34,8 +34,8 @@ processes.
 1. Permits a consistent view of the object store. Without this, changes in
 objects may not be immediately visible, especially in listing operations.
 
-1. Create a platform for future performance improvements for running Hadoop
-   workloads on top of object stores
+1. Offers a platform for future performance improvements for running Hadoop
+workloads on top of object stores
 
 The basic idea is that, for each operation in the Hadoop S3 client (s3a) that
 reads or modifies metadata, a shadow copy of that metadata is stored in a
@@ -60,19 +60,22 @@ S3 Repository to use the feature. Clients reading the data may work directly
 with the S3A data, in which case the normal S3 consistency guarantees apply.
 
 
-## Configuring S3Guard
+## Setting up S3Guard
 
 The latest configuration parameters are defined in `core-default.xml`.  You
 should consult that file for full information, but a summary is provided here.
 
 
-### 1. Choose your MetadataStore implementation.
+### 1. Choose the Database
 
-By default, S3Guard is not enabled.  S3A uses "`NullMetadataStore`", which is a
-MetadataStore that does nothing.
+A core concept of S3Guard is that the directory listing data of the object
+store, *the metadata*, is replicated in a higher-performance, consistent
+database. In S3Guard, this database is called *The Metadata Store*.
 
-The funtional MetadataStore back-end uses Amazon's DynamoDB database service.  The
- following setting will enable this MetadataStore:
+By default, S3Guard is not enabled.
+
+The Metadata Store to use in production is bonded to Amazon's DynamoDB
+database service.  The following setting will enable this Metadata Store:
 
 ```xml
 
@@ -81,8 +84,8 @@ The funtional MetadataStore back-end uses Amazon's DynamoDB database service.  T
 
 ```
 
-
-Note that the Null metadata store can be explicitly requested if desired.
+Note that the `NullMetadataStore` can be explicitly requested if desired.
+This offers no metadata storage, and effectively disables S3Guard.
 
 ```xml
 
@@ -91,10 +94,10 @@ Note that the Null metadata store can be explicitly requested if desired.
 
 ```
 
-### 2. Configure S3Guard settings
+### 2. Configure S3Guard Settings
 
-More settings will be added here in the future as we add to S3Guard.
-Currently the only MetadataStore-independent setting, besides the
+More settings may be added in the future.
+Currently the only Metadata Store-independent setting, besides the
 implementation class above, is the *allow authoritative* flag.
 
 It is recommended that you leave the default setting here:
@@ -107,25 +110,32 @@ It is recommended that you leave the default setting here:
 
 ```
 
-Setting this to true is currently an experimental feature.  When true, the
+Setting this to `true` is currently an experimental feature.  When true, the
 S3A client will avoid round-trips to S3 when getting directory listings, if
-there is a fully-cached version of the directory stored in the MetadataStore.
+there is a fully-cached version of the directory stored in the Metadata Store.
 
 Note that if this is set to true, it may exacerbate or persist existing race
 conditions around multiple concurrent modifications and listings of a given
 directory tree.
 
+In particular: **If the Metadata Store is declared as authoritative,
+all interactions with the S3 bucket(s) must be through S3A clients sharing
+the same Metadata Store**
+
 
-### 3. Configure the MetadataStore.
+### 3. Configure the Metadata Store.
 
-Here are the `DynamoDBMetadataStore` settings.  Other MetadataStore
- implementations will have their own configuration parameters.
+Here are the `DynamoDBMetadataStore` settings.  Other Metadata Store
+implementations will have their own configuration parameters.
+
+
+### 4. Name Your Table
 
 First, choose the name of the table you wish to use for the S3Guard metadata
-storage in your DynamoDB instance.  If you leave the default blank value, a
+storage in your DynamoDB instance.  If you leave it unset/empty, a
 separate table will be created for each S3 bucket you access, and that
-bucket's name will be used for the name of the DynamoDB 
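
The bodies of the ```xml fences above were stripped by the list's HTML filtering. As a hedged sketch of what the surrounding prose describes — expressed through the programmatic `Configuration` API rather than `core-site.xml`, with property names taken from the S3A documentation and a purely illustrative table name:

```java
import org.apache.hadoop.conf.Configuration;

public class S3GuardConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Select the DynamoDB-backed Metadata Store; setting
    // org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore here instead
    // effectively disables S3Guard.
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore");
    // Recommended default: do not treat the Metadata Store as authoritative.
    conf.setBoolean("fs.s3a.metadatastore.authoritative", false);
    // Hypothetical fixed table name; if left unset, a separate table is
    // created per S3 bucket, named after the bucket.
    conf.set("fs.s3a.s3guard.ddb.table", "example-s3guard-table");
  }
}
```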

[1/3] hadoop git commit: HADOOP-14749. review s3guard docs & code prior to merge. Contributed by Steve Loughran

2017-08-12 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-13345 b114f2488 -> e531ae251


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e531ae25/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java
--
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java
index 16f4523..ffd64ef 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestPathMetadataDynamoDBTranslation.java
@@ -28,26 +28,23 @@ import com.amazonaws.services.dynamodbv2.document.PrimaryKey;
 import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
 import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
 import com.google.common.base.Preconditions;
-import org.apache.hadoop.fs.FileStatus;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.Timeout;
 
+import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.s3a.S3AFileStatus;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.LambdaTestUtils;
 
 import static com.amazonaws.services.dynamodbv2.model.KeyType.HASH;
 import static com.amazonaws.services.dynamodbv2.model.KeyType.RANGE;
 import static com.amazonaws.services.dynamodbv2.model.ScalarAttributeType.S;
 import static org.hamcrest.CoreMatchers.anyOf;
 import static org.hamcrest.CoreMatchers.is;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertThat;
-import static org.junit.Assert.fail;
 
import static org.apache.hadoop.fs.s3a.s3guard.PathMetadataDynamoDBTranslation.*;
import static org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.VERSION_MARKER;
@@ -57,7 +54,7 @@ import static org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.VERSION;
  * Test the PathMetadataDynamoDBTranslation is able to translate between domain
  * model objects and DynamoDB items.
  */
-public class TestPathMetadataDynamoDBTranslation {
+public class TestPathMetadataDynamoDBTranslation extends Assert {
 
  private static final Path TEST_DIR_PATH = new Path("s3a://test-bucket/myDir");
   private static final Item TEST_DIR_ITEM = new Item();
@@ -151,7 +148,7 @@ public class TestPathMetadataDynamoDBTranslation {
 assertEquals(bSize, status.getBlockSize());
 
 /*
- * S3AFileStatue#getModificationTime() report the current time, so the
+ * S3AFileStatus#getModificationTime() reports the current time, so the
  * following assertion is failing.
  *
  * long modTime = item.hasAttribute(MOD_TIME) ? item.getLong(MOD_TIME) : 0;
@@ -195,11 +192,8 @@ public class TestPathMetadataDynamoDBTranslation {
 
   @Test
   public void testPathToKey() throws Exception {
-try {
-  pathToKey(new Path("/"));
-  fail("Root path should have not been mapped to any PrimaryKey");
-} catch (IllegalArgumentException ignored) {
-}
+LambdaTestUtils.intercept(IllegalArgumentException.class,
+() -> pathToKey(new Path("/")));
 doTestPathToKey(TEST_DIR_PATH);
 doTestPathToKey(TEST_FILE_PATH);
   }
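
For context on the replacement above: `LambdaTestUtils.intercept` fails the test when the supplied closure does not raise the expected exception type, and returns the caught exception so further assertions can be made on it. A minimal sketch of the idiom, assuming the test's static import of `pathToKey`:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.test.LambdaTestUtils;

// Within a test method that declares "throws Exception": the call fails
// the test unless pathToKey(new Path("/")) raises IllegalArgumentException,
// and hands back the caught exception for inspection.
IllegalArgumentException ex = LambdaTestUtils.intercept(
    IllegalArgumentException.class,
    () -> pathToKey(new Path("/")));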

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e531ae25/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
--
diff --git a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
index c2ff758..745e7aa 100644
--- a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
+++ b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestS3Guard.java
@@ -18,13 +18,14 @@
 
 package org.apache.hadoop.fs.s3a.s3guard;
 
-import org.apache.hadoop.fs.FileStatus;
-import org.apache.hadoop.fs.Path;
+import java.util.Arrays;
+import java.util.List;
+
 import org.junit.Assert;
 import org.junit.Test;
 
-import java.util.Arrays;
-import java.util.List;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
 
 /**
  * Tests for the {@link S3Guard} utility class.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e531ae25/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
--
diff --git a/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties b/hadoop-tools/hadoop-aws/src/test/resources/log4j.properties

[3/3] hadoop git commit: HADOOP-14749. review s3guard docs & code prior to merge. Contributed by Steve Loughran

2017-08-12 Thread stevel
HADOOP-14749. review s3guard docs & code prior to merge.
Contributed by Steve Loughran


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e531ae25
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e531ae25
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e531ae25

Branch: refs/heads/HADOOP-13345
Commit: e531ae251fac73f727f457039f3e16fb2a10069a
Parents: b114f24
Author: Steve Loughran 
Authored: Sat Aug 12 21:59:11 2017 +0100
Committer: Steve Loughran 
Committed: Sat Aug 12 21:59:11 2017 +0100

--
 hadoop-tools/hadoop-aws/pom.xml |  16 +-
 .../java/org/apache/hadoop/fs/s3a/Listing.java  |   6 +-
 .../org/apache/hadoop/fs/s3a/S3AFileStatus.java |  11 -
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java |  72 +--
 .../hadoop/fs/s3a/S3AInstrumentation.java   |   2 +-
 .../org/apache/hadoop/fs/s3a/Statistic.java |   2 +-
 .../org/apache/hadoop/fs/s3a/UploadInfo.java|   4 +-
 .../fs/s3a/s3guard/DirListingMetadata.java  |  21 +-
 .../fs/s3a/s3guard/DynamoDBClientFactory.java   |   3 +-
 .../fs/s3a/s3guard/DynamoDBMetadataStore.java   |  94 ++--
 .../fs/s3a/s3guard/LocalMetadataStore.java  |  13 +-
 .../hadoop/fs/s3a/s3guard/MetadataStore.java|   5 +-
 .../s3guard/MetadataStoreListFilesIterator.java |   5 +-
 .../fs/s3a/s3guard/NullMetadataStore.java   |   9 -
 .../hadoop/fs/s3a/s3guard/PathMetadata.java |   6 +
 .../PathMetadataDynamoDBTranslation.java|   6 +-
 .../apache/hadoop/fs/s3a/s3guard/S3Guard.java   |  87 ++--
 .../hadoop/fs/s3a/s3guard/S3GuardTool.java  |  76 +--
 .../hadoop/fs/s3a/s3guard/package-info.java |   2 +-
 .../src/site/markdown/tools/hadoop-aws/index.md |   2 +-
 .../site/markdown/tools/hadoop-aws/s3guard.md   | 485 +++
 .../site/markdown/tools/hadoop-aws/testing.md   | 213 ++--
 .../apache/hadoop/fs/s3a/S3ATestConstants.java  |   2 +-
 .../org/apache/hadoop/fs/s3a/S3ATestUtils.java  |  47 +-
 .../fs/s3a/s3guard/AbstractMSContract.java  |   4 +-
 .../s3guard/AbstractS3GuardToolTestBase.java| 161 ++
 .../s3a/s3guard/DynamoDBLocalClientFactory.java |  16 +-
 .../s3a/s3guard/ITestS3GuardConcurrentOps.java  |  19 +-
 .../s3a/s3guard/ITestS3GuardToolDynamoDB.java   |  79 +--
 .../fs/s3a/s3guard/ITestS3GuardToolLocal.java   |  68 +--
 .../fs/s3a/s3guard/MetadataStoreTestBase.java   |  51 +-
 .../fs/s3a/s3guard/S3GuardToolTestBase.java | 159 --
 .../fs/s3a/s3guard/TestDirListingMetadata.java  |  12 +-
 .../s3a/s3guard/TestDynamoDBMetadataStore.java  |  71 ++-
 .../fs/s3a/s3guard/TestLocalMetadataStore.java  |  12 +-
 .../fs/s3a/s3guard/TestNullMetadataStore.java   |   9 +-
 .../TestPathMetadataDynamoDBTranslation.java|  20 +-
 .../hadoop/fs/s3a/s3guard/TestS3Guard.java  |   9 +-
 .../src/test/resources/log4j.properties |   2 +-
 39 files changed, 1114 insertions(+), 767 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e531ae25/hadoop-tools/hadoop-aws/pom.xml
--
diff --git a/hadoop-tools/hadoop-aws/pom.xml b/hadoop-tools/hadoop-aws/pom.xml
index 62371c3..4161a50 100644
--- a/hadoop-tools/hadoop-aws/pom.xml
+++ b/hadoop-tools/hadoop-aws/pom.xml
@@ -170,7 +170,7 @@
 
${fs.s3a.scale.test.huge.filesize}
 
${fs.s3a.scale.test.huge.partitionsize}
 
${fs.s3a.scale.test.timeout}
-
+
 
${fs.s3a.s3guard.test.enabled}
 
${fs.s3a.s3guard.test.authoritative}
 
${fs.s3a.s3guard.test.implementation}
@@ -216,7 +216,7 @@
 
${fs.s3a.scale.test.huge.filesize}
 
${fs.s3a.scale.test.huge.partitionsize}
 
${fs.s3a.scale.test.timeout}
-
+
 
${fs.s3a.s3guard.test.enabled}
 
${fs.s3a.s3guard.test.implementation}
 
${fs.s3a.s3guard.test.authoritative}
@@ -262,7 +262,7 @@
 
${fs.s3a.scale.test.enabled}
 
${fs.s3a.scale.test.huge.filesize}
 
${fs.s3a.scale.test.timeout}
-
+
 
${fs.s3a.s3guard.test.enabled}
 
${fs.s3a.s3guard.test.implementation}
 
${fs.s3a.s3guard.test.authoritative}
@@ -289,7 +289,7 @@
   
 
 
-
+
 
   s3guard
   
@@ -302,7 +302,7 @@
   
 
 
-
+
 
   dynamo
   
@@ -315,7 +315,7 @@
   
 
 
-
+
 
   dynamodblocal
   
@@ -328,8 +328,8 @@
   
 
 
-
+
 
   non-auth
   


[23/50] [abbrv] hadoop git commit: YARN-5927. BaseContainerManagerTest::waitForNMContainerState timeout accounting is not accurate. (Kai Sasaki via kasha)

2017-08-12 Thread inigoiri
YARN-5927. BaseContainerManagerTest::waitForNMContainerState timeout accounting 
is not accurate. (Kai Sasaki via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8c4b6d16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8c4b6d16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8c4b6d16

Branch: refs/heads/HDFS-10467
Commit: 8c4b6d16a526610a03ccc85665744ad071e37400
Parents: 07fff43
Author: Karthik Kambatla 
Authored: Fri Aug 11 12:14:06 2017 -0700
Committer: Karthik Kambatla 
Committed: Fri Aug 11 12:15:43 2017 -0700

--
 .../containermanager/BaseContainerManagerTest.java| 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c4b6d16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
index 7980a80..d266ac1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
@@ -310,13 +310,13 @@ public abstract class BaseContainerManagerTest {
 new HashSet<>(finalStates);
 int timeoutSecs = 0;
 do {
-  Thread.sleep(2000);
+  Thread.sleep(1000);
   containerStatus =
   containerManager.getContainerStatuses(request)
   .getContainerStatuses().get(0);
   LOG.info("Waiting for container to get into one of states " + fStates
   + ". Current state is " + containerStatus.getState());
-  timeoutSecs += 2;
+  timeoutSecs += 1;
 } while (!fStates.contains(containerStatus.getState())
 && timeoutSecs < timeOutMax);
 LOG.info("Container state is " + containerStatus.getState());
@@ -371,7 +371,7 @@ public abstract class BaseContainerManagerTest {
 .containermanager.container.ContainerState currentState = null;
 int timeoutSecs = 0;
 do {
-  Thread.sleep(2000);
+  Thread.sleep(1000);
   container =
   containerManager.getContext().getContainers().get(containerID);
   if (container != null) {
@@ -381,9 +381,9 @@ public abstract class BaseContainerManagerTest {
 LOG.info("Waiting for NM container to get into one of the following " +
 "states: " + finalStates + ". Current state is " + currentState);
   }
-  timeoutSecs += 2;
+  timeoutSecs += 1;
 } while (!finalStates.contains(currentState)
-&& timeoutSecs++ < timeOutMax);
+&& timeoutSecs < timeOutMax);
 LOG.info("Container state is " + currentState);
 Assert.assertTrue("ContainerState is not correct (timedout)",
 finalStates.contains(currentState));
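
The fix above keeps the sleep interval (now 1000 ms) in step with the counter increment (now 1), and drops the stray post-increment in the second loop's condition, which had been advancing the counter by three units per two-second sleep. A sketch — not the committed code — of a deadline-based alternative that cannot drift, with a hypothetical `fetchState()` standing in for the container-status lookup:

```java
import java.util.concurrent.TimeUnit;

// Inside a test method that declares "throws Exception":
long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeOutMax);
Object currentState = fetchState();  // hypothetical helper
while (!finalStates.contains(currentState) && System.nanoTime() < deadline) {
  Thread.sleep(1000);
  currentState = fetchState();
}
```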





[29/50] [abbrv] hadoop git commit: YARN-6896. Federation: routing REST invocations transparently to multiple RMs (part 1 - basic execution). (Contributed by Giovanni Matteo Fumarola via curino)

2017-08-12 Thread inigoiri
YARN-6896. Federation: routing REST invocations transparently to multiple RMs 
(part 1 - basic execution). (Contributed by Giovanni Matteo Fumarola via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc59b5fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc59b5fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc59b5fb

Branch: refs/heads/HDFS-10467
Commit: cc59b5fb26ccf58dffcd8850fa12ec65250f127d
Parents: 0996acd
Author: Carlo Curino 
Authored: Fri Aug 11 15:58:01 2017 -0700
Committer: Carlo Curino 
Committed: Fri Aug 11 15:58:01 2017 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java |  10 +
 .../yarn/conf/TestYarnConfigurationFields.java  |   2 +
 .../webapp/DefaultRequestInterceptorREST.java   |  16 +-
 .../webapp/FederationInterceptorREST.java   | 750 +++
 .../webapp/BaseRouterWebServicesTest.java   |  37 +-
 .../MockDefaultRequestInterceptorREST.java  | 136 
 .../webapp/TestFederationInterceptorREST.java   | 379 ++
 .../TestFederationInterceptorRESTRetry.java | 274 +++
 .../TestableFederationInterceptorREST.java  |  54 ++
 .../src/site/markdown/Federation.md |   2 +-
 10 files changed, 1646 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc59b5fb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index cd4d569..8acaef8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -2721,6 +2721,16 @@ public class YarnConfiguration extends Configuration {
   "org.apache.hadoop.yarn.server.router.webapp."
   + "DefaultRequestInterceptorREST";
 
+  /**
+   * The interceptor class used in FederationInterceptorREST to communicate with
+   * each SubCluster.
+   */
+  public static final String ROUTER_WEBAPP_DEFAULT_INTERCEPTOR_CLASS =
+  ROUTER_WEBAPP_PREFIX + "default-interceptor-class";
+  public static final String DEFAULT_ROUTER_WEBAPP_DEFAULT_INTERCEPTOR_CLASS =
+  "org.apache.hadoop.yarn.server.router.webapp."
+  + "DefaultRequestInterceptorREST";
+
   
   // Other Configs
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc59b5fb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index b9ad31a..91a8b0a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -81,6 +81,8 @@ public class TestYarnConfigurationFields extends TestConfigurationFieldsBase {
 .add(YarnConfiguration.ROUTER_CLIENTRM_ADDRESS);
 configurationPropsToSkipCompare
 .add(YarnConfiguration.ROUTER_RMADMIN_ADDRESS);
+configurationPropsToSkipCompare
+.add(YarnConfiguration.ROUTER_WEBAPP_DEFAULT_INTERCEPTOR_CLASS);
 
 // Federation policies configs to be ignored
 configurationPropsToSkipCompare

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc59b5fb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
index aa8e3eb..abd8ca6 100644
--- 

[15/50] [abbrv] hadoop git commit: HADOOP-14743. CompositeGroupsMapping should not swallow exceptions. Contributed by Wei-Chiu Chuang.

2017-08-12 Thread inigoiri
HADOOP-14743. CompositeGroupsMapping should not swallow exceptions. Contributed 
by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a8b75466
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a8b75466
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a8b75466

Branch: refs/heads/HDFS-10467
Commit: a8b75466b21edfe8b12beb4420492817f0e03147
Parents: 54356b1
Author: Wei-Chiu Chuang 
Authored: Thu Aug 10 09:35:27 2017 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Aug 10 09:35:27 2017 -0700

--
 .../java/org/apache/hadoop/security/CompositeGroupsMapping.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a8b75466/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
--
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
index b8cfdf7..b762df2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java
@@ -74,7 +74,9 @@ public class CompositeGroupsMapping
   try {
 groups = provider.getGroups(user);
   } catch (Exception e) {
-//LOG.warn("Exception trying to get groups for user " + user, e);  
+LOG.warn("Unable to get groups for user {} via {} because: {}",
+user, provider.getClass().getSimpleName(), e.toString());
+LOG.debug("Stacktrace: ", e);
   }
   if (groups != null && ! groups.isEmpty()) {
 groupSet.addAll(groups);
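
The restored logging follows a common two-level idiom: one parameterized `warn` line carrying `e.toString()` for routine visibility, plus the full stack trace at `debug` for deep diagnosis. A generic sketch of the pattern (SLF4J; the logger, `provider`, and `user` are stand-ins):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Inside a class declaring:
// private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
try {
  groups = provider.getGroups(user);
} catch (Exception e) {
  // One readable line in production logs...
  LOG.warn("Unable to get groups for user {} via {} because: {}",
      user, provider.getClass().getSimpleName(), e.toString());
  // ...with the stack trace available when DEBUG is enabled.
  LOG.debug("Stacktrace: ", e);
}
```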





[36/50] [abbrv] hadoop git commit: HDFS-10880. Federation Mount Table State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10880. Federation Mount Table State Store internal API. Contributed by 
Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca78fcb4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca78fcb4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca78fcb4

Branch: refs/heads/HDFS-10467
Commit: ca78fcb4a59c63b3b23b1da55ab91fd832c90f6c
Parents: 0c23c8c
Author: Inigo Goiri 
Authored: Fri Aug 4 18:00:12 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   7 +-
 .../federation/resolver/MountTableManager.java  |  80 +++
 .../federation/resolver/MountTableResolver.java | 544 +++
 .../federation/resolver/PathLocation.java   | 124 -
 .../resolver/order/DestinationOrder.java|  29 +
 .../federation/resolver/order/package-info.java |  29 +
 .../federation/router/FederationUtil.java   |  56 +-
 .../hdfs/server/federation/router/Router.java   |   3 +-
 .../federation/store/MountTableStore.java   |  49 ++
 .../federation/store/StateStoreService.java |   2 +
 .../store/impl/MountTableStoreImpl.java | 116 
 .../protocol/AddMountTableEntryRequest.java |  47 ++
 .../protocol/AddMountTableEntryResponse.java|  42 ++
 .../protocol/GetMountTableEntriesRequest.java   |  49 ++
 .../protocol/GetMountTableEntriesResponse.java  |  53 ++
 .../protocol/RemoveMountTableEntryRequest.java  |  49 ++
 .../protocol/RemoveMountTableEntryResponse.java |  42 ++
 .../protocol/UpdateMountTableEntryRequest.java  |  51 ++
 .../protocol/UpdateMountTableEntryResponse.java |  43 ++
 .../pb/AddMountTableEntryRequestPBImpl.java |  84 +++
 .../pb/AddMountTableEntryResponsePBImpl.java|  76 +++
 .../pb/GetMountTableEntriesRequestPBImpl.java   |  76 +++
 .../pb/GetMountTableEntriesResponsePBImpl.java  | 104 
 .../pb/RemoveMountTableEntryRequestPBImpl.java  |  76 +++
 .../pb/RemoveMountTableEntryResponsePBImpl.java |  76 +++
 .../pb/UpdateMountTableEntryRequestPBImpl.java  |  96 
 .../pb/UpdateMountTableEntryResponsePBImpl.java |  76 +++
 .../federation/store/records/MountTable.java| 301 ++
 .../store/records/impl/pb/MountTablePBImpl.java | 213 
 .../src/main/proto/FederationProtocol.proto |  61 ++-
 .../hdfs/server/federation/MockResolver.java|   9 +-
 .../resolver/TestMountTableResolver.java| 396 ++
 .../store/FederationStateStoreTestUtils.java|  16 +
 .../store/TestStateStoreMountTable.java | 250 +
 .../store/driver/TestStateStoreDriverBase.java  |  12 +
 .../store/records/TestMountTable.java   | 176 ++
 36 files changed, 3437 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca78fcb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index acd4790..f156fdb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -27,6 +27,8 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver;
 import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
@@ -1160,8 +1162,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   // HDFS Router State Store connection
   public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS =
   FEDERATION_ROUTER_PREFIX + "file.resolver.client.class";
-  public static final String FEDERATION_FILE_RESOLVER_CLIENT_CLASS_DEFAULT =
-  "org.apache.hadoop.hdfs.server.federation.MockResolver";
+  public static final Class
+  FEDERATION_FILE_RESOLVER_CLIENT_CLASS_DEFAULT =
+  MountTableResolver.class;
   public static final String FEDERATION_NAMENODE_RESOLVER_CLIENT_CLASS =
   

[35/50] [abbrv] hadoop git commit: HDFS-10880. Federation Mount Table State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca78fcb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
new file mode 100644
index 000..7f7c998
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryRequestPBImpl.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProto;
+import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.RemoveMountTableEntryRequestProtoOrBuilder;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.PBRecord;
+
+import com.google.protobuf.Message;
+
+/**
+ * Protobuf implementation of the state store API object
+ * RemoveMountTableEntryRequest.
+ */
+public class RemoveMountTableEntryRequestPBImpl
+extends RemoveMountTableEntryRequest implements PBRecord {
+
+  private FederationProtocolPBTranslator translator =
+  new FederationProtocolPBTranslator(
+  RemoveMountTableEntryRequestProto.class);
+
+  public RemoveMountTableEntryRequestPBImpl() {
+  }
+
+  public RemoveMountTableEntryRequestPBImpl(
+  RemoveMountTableEntryRequestProto proto) {
+this.setProto(proto);
+  }
+
+  @Override
+  public RemoveMountTableEntryRequestProto getProto() {
+return this.translator.build();
+  }
+
+  @Override
+  public void setProto(Message proto) {
+this.translator.setProto(proto);
+  }
+
+  @Override
+  public void readInstance(String base64String) throws IOException {
+this.translator.readInstance(base64String);
+  }
+
+  @Override
+  public String getSrcPath() {
+return this.translator.getProtoOrBuilder().getSrcPath();
+  }
+
+  @Override
+  public void setSrcPath(String path) {
+this.translator.getBuilder().setSrcPath(path);
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca78fcb4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
new file mode 100644
index 000..0c943ac
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/RemoveMountTableEntryResponsePBImpl.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * 

[42/50] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo 
Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8e03592
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8e03592
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8e03592

Branch: refs/heads/HDFS-10467
Commit: b8e0359289104602bda8991c61b4b98cc9d3a8b7
Parents: fe3672c
Author: Inigo Goiri 
Authored: Thu May 11 09:57:03 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |   38 +
 .../resolver/FederationNamespaceInfo.java   |   46 +-
 .../federation/resolver/RemoteLocation.java |   46 +-
 .../federation/router/ConnectionContext.java|  104 +
 .../federation/router/ConnectionManager.java|  408 
 .../federation/router/ConnectionPool.java   |  314 +++
 .../federation/router/ConnectionPoolId.java |  117 ++
 .../router/RemoteLocationContext.java   |   38 +-
 .../server/federation/router/RemoteMethod.java  |  164 ++
 .../server/federation/router/RemoteParam.java   |   71 +
 .../hdfs/server/federation/router/Router.java   |   58 +-
 .../federation/router/RouterRpcClient.java  |  856 
 .../federation/router/RouterRpcServer.java  | 1867 +-
 .../src/main/resources/hdfs-default.xml |   95 +
 .../server/federation/FederationTestUtils.java  |   80 +-
 .../hdfs/server/federation/MockResolver.java|   90 +-
 .../server/federation/RouterConfigBuilder.java  |   20 +-
 .../server/federation/RouterDFSCluster.java |  535 +++--
 .../server/federation/router/TestRouter.java|   31 +-
 .../server/federation/router/TestRouterRpc.java |  869 
 .../router/TestRouterRpcMultiDestination.java   |  216 ++
 21 files changed, 5675 insertions(+), 388 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8e03592/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 2b6d0e8..ca24fd5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1102,6 +1102,44 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   // HDFS Router-based federation
   public static final String FEDERATION_ROUTER_PREFIX =
   "dfs.federation.router.";
+  public static final String DFS_ROUTER_DEFAULT_NAMESERVICE =
+  FEDERATION_ROUTER_PREFIX + "default.nameserviceId";
+  public static final String DFS_ROUTER_HANDLER_COUNT_KEY =
+  FEDERATION_ROUTER_PREFIX + "handler.count";
+  public static final int DFS_ROUTER_HANDLER_COUNT_DEFAULT = 10;
+  public static final String DFS_ROUTER_READER_QUEUE_SIZE_KEY =
+  FEDERATION_ROUTER_PREFIX + "reader.queue.size";
+  public static final int DFS_ROUTER_READER_QUEUE_SIZE_DEFAULT = 100;
+  public static final String DFS_ROUTER_READER_COUNT_KEY =
+  FEDERATION_ROUTER_PREFIX + "reader.count";
+  public static final int DFS_ROUTER_READER_COUNT_DEFAULT = 1;
+  public static final String DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY =
+  FEDERATION_ROUTER_PREFIX + "handler.queue.size";
+  public static final int DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT = 100;
+  public static final String DFS_ROUTER_RPC_BIND_HOST_KEY =
+  FEDERATION_ROUTER_PREFIX + "rpc-bind-host";
+  public static final int DFS_ROUTER_RPC_PORT_DEFAULT = ;
+  public static final String DFS_ROUTER_RPC_ADDRESS_KEY =
+  FEDERATION_ROUTER_PREFIX + "rpc-address";
+  public static final String DFS_ROUTER_RPC_ADDRESS_DEFAULT =
+  "0.0.0.0:" + DFS_ROUTER_RPC_PORT_DEFAULT;
+  public static final String DFS_ROUTER_RPC_ENABLE =
+  FEDERATION_ROUTER_PREFIX + "rpc.enable";
+  public static final boolean DFS_ROUTER_RPC_ENABLE_DEFAULT = true;
+
+  // HDFS Router NN client
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
+  FEDERATION_ROUTER_PREFIX + "connection.pool-size";
+  public static final int DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE_DEFAULT =
+  64;
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_CLEAN =
+  FEDERATION_ROUTER_PREFIX + "connection.pool.clean.ms";
+  public static final long DFS_ROUTER_NAMENODE_CONNECTION_POOL_CLEAN_DEFAULT =
+  TimeUnit.MINUTES.toMillis(1);
+  public static final String DFS_ROUTER_NAMENODE_CONNECTION_CLEAN_MS =
+  FEDERATION_ROUTER_PREFIX + "connection.clean.ms";
+  public 
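
The new `dfs.federation.router.*` keys above are ordinary Hadoop configuration properties. A hedged sketch of setting a few of them programmatically — the key strings come from the constants shown in the diff, while the values, including the port, are illustrative only (the default port literal was lost in this rendering):

```java
import org.apache.hadoop.conf.Configuration;

public class RouterConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Enable the Router RPC server and size its handler pool.
    conf.setBoolean("dfs.federation.router.rpc.enable", true);
    conf.setInt("dfs.federation.router.handler.count", 10);
    // Illustrative bind address; the real default port is the value of
    // DFS_ROUTER_RPC_PORT_DEFAULT above.
    conf.set("dfs.federation.router.rpc-address", "0.0.0.0:12345");
  }
}
```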

[19/50] [abbrv] hadoop git commit: HADOOP-14754. TestCommonConfigurationFields failed: core-default.xml has 2 wasb properties missing in classes. Contributed by John Zhuge.

2017-08-12 Thread inigoiri
HADOOP-14754. TestCommonConfigurationFields failed: core-default.xml has 2 wasb 
properties missing in classes.
Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d964062f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d964062f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d964062f

Branch: refs/heads/HDFS-10467
Commit: d964062f66c0772f4b1a029bfcdff921fbaaf91c
Parents: f13ca94
Author: Steve Loughran 
Authored: Fri Aug 11 10:18:17 2017 +0100
Committer: Steve Loughran 
Committed: Fri Aug 11 10:18:17 2017 +0100

--
 .../org/apache/hadoop/conf/TestCommonConfigurationFields.java  | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d964062f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
--
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
index da37e68..d0e0a35 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java
@@ -103,6 +103,12 @@ public class TestCommonConfigurationFields extends TestConfigurationFieldsBase {
 xmlPrefixToSkipCompare.add("fs.s3n.");
 xmlPrefixToSkipCompare.add("s3native.");
 
+// WASB properties are in a different subtree.
+// - org.apache.hadoop.fs.azure.NativeAzureFileSystem
+xmlPrefixToSkipCompare.add("fs.wasb.impl");
+xmlPrefixToSkipCompare.add("fs.wasbs.impl");
+xmlPrefixToSkipCompare.add("fs.azure.");
+
 // ADL properties are in a different subtree
 // - org.apache.hadoop.hdfs.web.ADLConfKeys
 xmlPrefixToSkipCompare.add("adl.");





[47/50] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
new file mode 100644
index 000..1f0d556
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/UpdateNamenodeRegistrationResponse.java
@@ -0,0 +1,51 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
+
+/**
+ * API response for overriding an existing namenode registration in the state
+ * store.
+ */
+public abstract class UpdateNamenodeRegistrationResponse {
+
+  public static UpdateNamenodeRegistrationResponse newInstance() {
+return StateStoreSerializer.newRecord(
+UpdateNamenodeRegistrationResponse.class);
+  }
+
+  public static UpdateNamenodeRegistrationResponse newInstance(boolean status)
+  throws IOException {
+UpdateNamenodeRegistrationResponse response = newInstance();
+response.setResult(status);
+return response;
+  }
+
+  @Private
+  @Unstable
+  public abstract boolean getResult();
+
+  @Private
+  @Unstable
+  public abstract void setResult(boolean result);
+}
\ No newline at end of file
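
A short usage sketch of the record-factory pattern this class follows — instances come from the pluggable `StateStoreSerializer` rather than a constructor, so a protobuf-backed implementation can be substituted at runtime:

```java
// newInstance(boolean) declares IOException, so call it from code that
// can propagate or handle one.
UpdateNamenodeRegistrationResponse response =
    UpdateNamenodeRegistrationResponse.newInstance(true);
boolean succeeded = response.getResult();  // true
```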

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
new file mode 100644
index 000..baad113
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/protocol/impl/pb/FederationProtocolPBTranslator.java
@@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+
+import org.apache.commons.codec.binary.Base64;
+
+import com.google.protobuf.GeneratedMessage;
+import com.google.protobuf.Message;
+import com.google.protobuf.Message.Builder;
+import com.google.protobuf.MessageOrBuilder;
+
+/**
+ * Helper class for setting/getting data elements in an object backed by a
+ * protobuf implementation.
+ */
+public class FederationProtocolPBTranslator {
+
+  /** Optional proto byte stream used to create this object. */
+  private P proto;
+  /** The class of the proto handler for this 

[11/50] [abbrv] hadoop git commit: HDFS-12278. LeaseManager operations are inefficient in 2.8. Contributed by Rushabh S Shah.

2017-08-12 Thread inigoiri
HDFS-12278. LeaseManager operations are inefficient in 2.8. Contributed by 
Rushabh S Shah.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5c02f95
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5c02f95
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5c02f95

Branch: refs/heads/HDFS-10467
Commit: b5c02f95b5a2fcb8931d4a86f8192caa18009ea9
Parents: ec69414
Author: Kihwal Lee 
Authored: Wed Aug 9 16:46:05 2017 -0500
Committer: Kihwal Lee 
Committed: Wed Aug 9 16:46:05 2017 -0500

--
 .../hadoop/hdfs/server/namenode/LeaseManager.java | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5c02f95/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
index 6578ba9..35ec063 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
@@ -26,10 +26,11 @@ import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.List;
-import java.util.PriorityQueue;
+import java.util.NavigableSet;
 import java.util.Set;
 import java.util.SortedMap;
 import java.util.TreeMap;
+import java.util.TreeSet;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
@@ -87,11 +88,15 @@ public class LeaseManager {
   // Mapping: leaseHolder -> Lease
   private final SortedMap leases = new TreeMap<>();
   // Set of: Lease
-  private final PriorityQueue sortedLeases = new PriorityQueue<>(512,
+  private final NavigableSet sortedLeases = new TreeSet<>(
   new Comparator() {
 @Override
 public int compare(Lease o1, Lease o2) {
-  return Long.signum(o1.getLastUpdate() - o2.getLastUpdate());
+  if (o1.getLastUpdate() != o2.getLastUpdate()) {
+return Long.signum(o1.getLastUpdate() - o2.getLastUpdate());
+  } else {
+return o1.holder.compareTo(o2.holder);
+  }
 }
   });
   // INodeID -> Lease
@@ -528,9 +533,10 @@ public class LeaseManager {
 
 long start = monotonicNow();
 
-while(!sortedLeases.isEmpty() && sortedLeases.peek().expiredHardLimit()
-  && !isMaxLockHoldToReleaseLease(start)) {
-  Lease leaseToCheck = sortedLeases.peek();
+while(!sortedLeases.isEmpty() &&
+sortedLeases.first().expiredHardLimit()
+&& !isMaxLockHoldToReleaseLease(start)) {
+  Lease leaseToCheck = sortedLeases.first();
   LOG.info(leaseToCheck + " has expired hard limit");
 
   final List removing = new ArrayList<>();





[39/50] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8e03592/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
index ee6f57d..2875750 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -43,7 +43,7 @@ import org.apache.hadoop.util.Time;
 
 /**
  * In-memory cache/mock of a namenode and file resolver. Stores the most
- * recently updated NN information for each nameservice and block pool. Also
+ * recently updated NN information for each nameservice and block pool. It also
 * stores a virtual mount table for resolving global namespace paths to local NN
 * paths.
  */
@@ -51,82 +51,93 @@ public class MockResolver
 implements ActiveNamenodeResolver, FileSubclusterResolver {
 
   private Map resolver =
-  new HashMap();
-  private Map locations =
-  new HashMap();
-  private Set namespaces =
-  new HashSet();
+  new HashMap<>();
+  private Map locations = new HashMap<>();
+  private Set namespaces = new HashSet<>();
   private String defaultNamespace = null;
 
+
   public MockResolver(Configuration conf, StateStoreService store) {
 this.cleanRegistrations();
   }
 
-  public void addLocation(String mount, String nameservice, String location) {
-RemoteLocation remoteLocation = new RemoteLocation(nameservice, location);
-List locationsList = locations.get(mount);
+  public void addLocation(String mount, String nsId, String location) {
+List locationsList = this.locations.get(mount);
 if (locationsList == null) {
-  locationsList = new LinkedList();
-  locations.put(mount, locationsList);
+  locationsList = new LinkedList<>();
+  this.locations.put(mount, locationsList);
 }
+
+final RemoteLocation remoteLocation = new RemoteLocation(nsId, location);
 if (!locationsList.contains(remoteLocation)) {
   locationsList.add(remoteLocation);
 }
 
 if (this.defaultNamespace == null) {
-  this.defaultNamespace = nameservice;
+  this.defaultNamespace = nsId;
 }
   }
 
   public synchronized void cleanRegistrations() {
-this.resolver =
-new HashMap();
-this.namespaces = new HashSet();
+this.resolver = new HashMap<>();
+this.namespaces = new HashSet<>();
   }
 
   @Override
   public void updateActiveNamenode(
-  String ns, InetSocketAddress successfulAddress) {
+  String nsId, InetSocketAddress successfulAddress) {
 
 String address = successfulAddress.getHostName() + ":" +
 successfulAddress.getPort();
-String key = ns;
+String key = nsId;
 if (key != null) {
   // Update the active entry
   @SuppressWarnings("unchecked")
-  List iterator =
-  (List) resolver.get(key);
-  for (FederationNamenodeContext namenode : iterator) {
+  List namenodes =
+  (List) this.resolver.get(key);
+  for (FederationNamenodeContext namenode : namenodes) {
 if (namenode.getRpcAddress().equals(address)) {
   MockNamenodeContext nn = (MockNamenodeContext) namenode;
   nn.setState(FederationNamenodeServiceState.ACTIVE);
   break;
 }
   }
-  Collections.sort(iterator, new NamenodePriorityComparator());
+  // This operation modifies the list so we need to be careful
+  synchronized(namenodes) {
+Collections.sort(namenodes, new NamenodePriorityComparator());
+  }
 }
   }
 
   @Override
   public List
   getNamenodesForNameserviceId(String nameserviceId) {
-return resolver.get(nameserviceId);
+// Return a copy of the list because it is updated periodically
+List namenodes =
+this.resolver.get(nameserviceId);
+return Collections.unmodifiableList(new ArrayList<>(namenodes));
   }
 
   @Override
   public List getNamenodesForBlockPoolId(
   String blockPoolId) {
-return resolver.get(blockPoolId);
+// Return a copy of the list because it is updated periodically
+List namenodes =
+this.resolver.get(blockPoolId);
+return Collections.unmodifiableList(new ArrayList<>(namenodes));
   }
 
   private static class MockNamenodeContext
   implements FederationNamenodeContext {
+
+private String namenodeId;
+private String nameserviceId;
+
 private String webAddress;
 private String rpcAddress;
 private String serviceAddress;
 private String lifelineAddress;
-private String namenodeId;
-private String 

[49/50] [abbrv] hadoop git commit: HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/04c92c9b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
new file mode 100644
index 000..170247f
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import static org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.synchronizeRecords;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.RouterDFSCluster.RouterContext;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.impl.MountTableStoreImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesResponse;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.RemoveMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.UpdateMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.util.Time;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests the administrator interface of the {@link Router} exposed by
+ * {@link RouterAdminServer}.
+ */
+public class TestRouterAdmin {
+
+  private static StateStoreDFSCluster cluster;
+  private static RouterContext routerContext;
+  public static final String RPC_BEAN =
+  "Hadoop:service=Router,name=FederationRPC";
+  private static List<MountTable> mockMountTable;
+  private static StateStoreService stateStore;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+cluster = new StateStoreDFSCluster(false, 1);
+// Build and start a router with State Store + admin + RPC
+Configuration conf = new RouterConfigBuilder()
+.stateStore()
+.admin()
+.rpc()
+.build();
+cluster.addRouterOverrides(conf);
+cluster.startRouters();
+routerContext = cluster.getRandomRouter();
+mockMountTable = cluster.generateMockMountTable();
+Router router = routerContext.getRouter();
+stateStore = router.getStateStore();
+  }
+
+  @AfterClass
+  public static void tearDown() {
+cluster.stopRouter(routerContext);
+  }
+
+  @Before
+  public void testSetup() throws Exception {
+assertTrue(
+synchronizeRecords(stateStore, mockMountTable, MountTable.class));
+  }
+
+  @Test
+  public void testAddMountTable() throws IOException {
+MountTable newEntry = MountTable.newInstance(
+"/testpath", Collections.singletonMap("ns0", "/testdir"),
+Time.now(), Time.now());
+
+RouterClient client = routerContext.getAdminClient();
+MountTableManager mountTable = client.getMountTableManager();
+
+// Existing mount table size
+List<MountTable> records = 

[43/50] [abbrv] hadoop git commit: HDFS-10882. Federation State Store Interface API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10882. Federation State Store Interface API. Contributed by Jason Kace and 
Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b93d7242
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b93d7242
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b93d7242

Branch: refs/heads/HDFS-10467
Commit: b93d72428101dde1dd33ab1a96bcbc29a49a9142
Parents: a88f570
Author: Inigo 
Authored: Thu Apr 6 19:18:52 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  11 ++
 .../server/federation/store/RecordStore.java| 100 
 .../store/driver/StateStoreSerializer.java  | 119 +++
 .../driver/impl/StateStoreSerializerPBImpl.java | 115 ++
 .../store/records/impl/pb/PBRecord.java |  47 
 .../store/records/impl/pb/package-info.java |  29 +
 .../src/main/resources/hdfs-default.xml |   8 ++
 7 files changed, 429 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b93d7242/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 0eb42ce..320e1f3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import org.apache.hadoop.http.HttpConfig;
 
 /** 
@@ -1108,6 +1109,16 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String FEDERATION_NAMENODE_RESOLVER_CLIENT_CLASS_DEFAULT =
   "org.apache.hadoop.hdfs.server.federation.MockResolver";
 
+  // HDFS Router-based federation State Store
+  public static final String FEDERATION_STORE_PREFIX =
+  FEDERATION_ROUTER_PREFIX + "store.";
+
+  public static final String FEDERATION_STORE_SERIALIZER_CLASS =
+  DFSConfigKeys.FEDERATION_STORE_PREFIX + "serializer";
+  public static final Class<StateStoreSerializerPBImpl>
+  FEDERATION_STORE_SERIALIZER_CLASS_DEFAULT =
+  StateStoreSerializerPBImpl.class;
+
   // dfs.client.retry confs are moved to HdfsClientConfigKeys.Retry 
   @Deprecated
   public static final String  DFS_CLIENT_RETRY_POLICY_ENABLED_KEY
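
To make the intent of these keys concrete, a client could also pin the serializer explicitly through the Configuration API. A minimal sketch, assuming StateStoreSerializerPBImpl extends the StateStoreSerializer base class added in this change; the sketch itself is not part of the patch:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreSerializer;
import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;

public class SerializerConfigSketch {
  public static Configuration withPBSerializer() {
    Configuration conf = new Configuration();
    // Select the protobuf-based serializer explicitly; per the keys above
    // it is already the default (FEDERATION_STORE_SERIALIZER_CLASS_DEFAULT).
    conf.setClass(DFSConfigKeys.FEDERATION_STORE_SERIALIZER_CLASS,
        StateStoreSerializerPBImpl.class, StateStoreSerializer.class);
    return conf;
  }
}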

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b93d7242/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
new file mode 100644
index 000..524f432
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/RecordStore.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store;
+
+import java.lang.reflect.Constructor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import 

[26/50] [abbrv] hadoop git commit: YARN-6884. AllocationFileLoaderService.loadQueue() has an if without braces (Contributed by weiyuan via Daniel Templeton)

2017-08-12 Thread inigoiri
YARN-6884. AllocationFileLoaderService.loadQueue() has an if without braces
(Contributed by weiyuan via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7680d4c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7680d4c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7680d4c

Branch: refs/heads/HDFS-10467
Commit: c7680d4cc4d9302a5b5efcf2467bd32ecea99585
Parents: 218588b
Author: Daniel Templeton 
Authored: Fri Aug 11 14:22:02 2017 -0700
Committer: Daniel Templeton 
Committed: Fri Aug 11 14:22:02 2017 -0700

--
 .../scheduler/fair/AllocationFileLoaderService.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7680d4c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
index bc204cb..bf5b4c5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
@@ -294,8 +294,9 @@ public class AllocationFileLoaderService extends AbstractService {
   NodeList fields = element.getChildNodes();
   for (int j = 0; j < fields.getLength(); j++) {
 Node fieldNode = fields.item(j);
-if (!(fieldNode instanceof Element))
+if (!(fieldNode instanceof Element)) {
   continue;
+}
 Element field = (Element) fieldNode;
 if ("maxRunningApps".equals(field.getTagName())) {
   String text = ((Text)field.getFirstChild()).getData().trim();
@@ -490,8 +491,9 @@ public class AllocationFileLoaderService extends AbstractService {
 
 for (int j = 0; j < fields.getLength(); j++) {
   Node fieldNode = fields.item(j);
-  if (!(fieldNode instanceof Element))
+  if (!(fieldNode instanceof Element)) {
 continue;
+  }
   Element field = (Element) fieldNode;
   if ("minResources".equals(field.getTagName())) {
 String text = ((Text)field.getFirstChild()).getData().trim();





[45/50] [abbrv] hadoop git commit: HDFS-10881. Federation State Store Driver API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10881. Federation State Store Driver API. Contributed by Jason Kace and 
Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a88f570b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a88f570b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a88f570b

Branch: refs/heads/HDFS-10467
Commit: a88f570b37161649028cf53dcc232c494631ef21
Parents: 6f4d9f1
Author: Inigo 
Authored: Wed Mar 29 19:35:06 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../store/StateStoreUnavailableException.java   |  33 
 .../federation/store/StateStoreUtils.java   |  72 +++
 .../store/driver/StateStoreDriver.java  | 172 +
 .../driver/StateStoreRecordOperations.java  | 164 
 .../store/driver/impl/StateStoreBaseImpl.java   |  69 +++
 .../store/driver/impl/package-info.java |  39 
 .../federation/store/driver/package-info.java   |  37 
 .../federation/store/protocol/package-info.java |  31 +++
 .../federation/store/records/BaseRecord.java| 189 +++
 .../federation/store/records/QueryResult.java   |  56 ++
 .../federation/store/records/package-info.java  |  36 
 11 files changed, 898 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a88f570b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
new file mode 100644
index 000..4e6f8c8
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUnavailableException.java
@@ -0,0 +1,33 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store;
+
+import java.io.IOException;
+
+/**
+ * Thrown when the state store is not reachable or available. Cached APIs and
+ * queries may still succeed. Clients should retry later.
+ */
+public class StateStoreUnavailableException extends IOException {
+
+  private static final long serialVersionUID = 1L;
+
+  public StateStoreUnavailableException(String msg) {
+super(msg);
+  }
+}
\ No newline at end of file
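
Given that contract, caller code would typically wrap State Store operations in a retry loop. A minimal sketch, where the attempt bound, the backoff, and the StateStoreCall helper interface are all hypothetical; only the exception itself is defined by this commit:

import java.io.IOException;

import org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;

public class StateStoreRetrySketch {

  private static final int MAX_ATTEMPTS = 3;      // assumption: caller-chosen bound
  private static final long BACKOFF_MS = 1000L;   // assumption: caller-chosen delay

  /** Functional interface for a State Store operation (hypothetical helper). */
  interface StateStoreCall<T> {
    T run() throws IOException;
  }

  /** Retries an operation while the State Store is temporarily unavailable. */
  static <T> T withRetries(StateStoreCall<T> call)
      throws IOException, InterruptedException {
    StateStoreUnavailableException last = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return call.run();
      } catch (StateStoreUnavailableException e) {
        last = e;                     // store unreachable; retry per the javadoc
        Thread.sleep(BACKOFF_MS * attempt);
      }
    }
    throw last;
  }
}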

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a88f570b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
new file mode 100644
index 000..8c681df
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/StateStoreUtils.java
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language 

[32/50] [abbrv] hadoop git commit: HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe3672c9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
new file mode 100644
index 000..7f0b36a
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/store/driver/TestStateStoreDriverBase.java
@@ -0,0 +1,483 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.store.driver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.records.BaseRecord;
+import org.apache.hadoop.hdfs.server.federation.store.records.Query;
+import org.apache.hadoop.hdfs.server.federation.store.records.QueryResult;
+import org.junit.AfterClass;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Base tests for the driver. The particular implementations will use this to
+ * test their functionality.
+ */
+public class TestStateStoreDriverBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestStateStoreDriverBase.class);
+
+  private static StateStoreService stateStore;
+  private static Configuration conf;
+
+
+  /**
+   * Get the State Store driver.
+   * @return State Store driver.
+   */
+  protected StateStoreDriver getStateStoreDriver() {
+return stateStore.getDriver();
+  }
+
+  @AfterClass
+  public static void tearDownCluster() {
+if (stateStore != null) {
+  stateStore.stop();
+}
+  }
+
+  /**
+   * Get a new State Store using this configuration.
+   *
+   * @param config Configuration for the State Store.
+   * @throws Exception If we cannot get the State Store.
+   */
+  public static void getStateStore(Configuration config) throws Exception {
+conf = config;
+stateStore = FederationStateStoreTestUtils.getStateStore(conf);
+  }
+
+  private <T extends BaseRecord> T generateFakeRecord(Class<T> recordClass)
+  throws IllegalArgumentException, IllegalAccessException, IOException {
+
+// TODO add record
+return null;
+  }
+
+  /**
+   * Validate that two records hold the same data.
+   *
+   * @param original Original record.
+   * @param committed Record that was committed to the State Store.
+   * @param assertEquals If true, assert that the records are equal;
+   *otherwise just report the result in the return value.
+   * @return True if the two records are equal.
+   * @throws IllegalArgumentException If the fields cannot be compared.
+   * @throws IllegalAccessException If the fields cannot be accessed.
+   */
+  private boolean validateRecord(
+  BaseRecord original, BaseRecord committed, boolean assertEquals)
+  throws IllegalArgumentException, IllegalAccessException {
+
+boolean ret = true;
+
+Map<String, Class<?>> fields = getFields(original);
+for (String key : fields.keySet()) {
+  if (key.equals("dateModified") ||
+  key.equals("dateCreated") ||
+  key.equals("proto")) {
+// Fields are updated/set on commit and fetch and may not match
+// the fields that are initialized in a non-committed object.
+continue;
+  }
+  Object data1 = getField(original, key);
+  Object data2 = getField(committed, key);
+  if (assertEquals) {
+assertEquals("Field " + key + " does not match", data1, data2);
+  } else if (!data1.equals(data2)) {
+ret = false;
+  }
+}
+
+long now = 

[20/50] [abbrv] hadoop git commit: HADOOP-10392. Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem) (ajisakaa via aw)

2017-08-12 Thread inigoiri
HADOOP-10392. Use FileSystem#makeQualified(Path) instead of 
Path#makeQualified(FileSystem) (ajisakaa via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4222c971
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4222c971
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4222c971

Branch: refs/heads/HDFS-10467
Commit: 4222c971080f2b150713727092c7197df58c88e5
Parents: d964062
Author: Allen Wittenauer 
Authored: Fri Aug 11 09:25:56 2017 -0700
Committer: Allen Wittenauer 
Committed: Fri Aug 11 09:25:56 2017 -0700

--
 .../java/org/apache/hadoop/fs/FileUtil.java |  4 +--
 .../org/apache/hadoop/fs/ftp/FTPFileSystem.java |  4 +--
 .../java/org/apache/hadoop/io/SequenceFile.java |  2 +-
 .../apache/hadoop/fs/TestLocalFileSystem.java   |  6 ++---
 .../java/org/apache/hadoop/io/FileBench.java|  2 +-
 .../mapred/MiniMRClientClusterFactory.java  |  4 +--
 .../mapred/TestCombineFileInputFormat.java  |  6 ++---
 .../TestCombineSequenceFileInputFormat.java |  7 +++--
 .../mapred/TestCombineTextInputFormat.java  |  7 +++--
 .../mapred/TestConcatenatedCompressedInput.java |  6 ++---
 .../org/apache/hadoop/mapred/TestMapRed.java|  4 +--
 .../hadoop/mapred/TestMiniMRChildTask.java  |  4 +--
 .../hadoop/mapred/TestTextInputFormat.java  |  8 +++---
 .../TestWrappedRecordReaderClassloader.java |  4 +--
 .../lib/join/TestWrappedRRClassloader.java  |  4 +--
 .../mapreduce/util/MRAsyncDiskService.java  |  2 +-
 .../apache/hadoop/mapreduce/v2/TestMRJobs.java  |  4 +--
 .../v2/TestMRJobsWithHistoryService.java|  4 +--
 .../org/apache/hadoop/tools/HadoopArchives.java |  2 +-
 .../apache/hadoop/mapred/gridmix/Gridmix.java   |  2 +-
 .../hadoop/mapred/gridmix/PseudoLocalFs.java|  8 +-
 .../hadoop/mapred/gridmix/TestFilePool.java |  4 +--
 .../hadoop/mapred/gridmix/TestFileQueue.java|  8 +++---
 .../mapred/gridmix/TestPseudoLocalFs.java   |  2 +-
 .../hadoop/mapred/gridmix/TestUserResolve.java  |  4 +--
 .../hadoop/fs/swift/util/SwiftTestUtils.java|  2 +-
 .../fs/swift/SwiftFileSystemBaseTest.java   |  2 +-
 .../TestSwiftFileSystemPartitionedUploads.java  |  4 +--
 .../hadoop/tools/rumen/TestHistograms.java  |  6 ++---
 .../org/apache/hadoop/streaming/StreamJob.java  | 27 ++--
 30 files changed, 78 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
index eb8a5c3..72b9615 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
@@ -295,8 +295,8 @@ public class FileUtil {
 Path dst)
 throws IOException {
 if (srcFS == dstFS) {
-  String srcq = src.makeQualified(srcFS).toString() + Path.SEPARATOR;
-  String dstq = dst.makeQualified(dstFS).toString() + Path.SEPARATOR;
+  String srcq = srcFS.makeQualified(src).toString() + Path.SEPARATOR;
+  String dstq = dstFS.makeQualified(dst).toString() + Path.SEPARATOR;
   if (dstq.startsWith(srcq)) {
 if (srcq.length() == dstq.length()) {
   throw new IOException("Cannot copy " + src + " to itself.");
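
The calling pattern the patch standardizes on looks like this in client code; a small sketch where the configuration and path are chosen only for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MakeQualifiedSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path relative = new Path("data/input");   // hypothetical path
    // Preferred form: qualify through the FileSystem, rather than the
    // deprecated Path#makeQualified(FileSystem) this commit migrates away from.
    Path qualified = fs.makeQualified(relative);
    System.out.println(qualified);            // e.g. file:/home/user/data/input
  }
}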

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4222c971/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
index 4c1236b..644cf4e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
@@ -505,7 +505,7 @@ public class FTPFileSystem extends FileSystem {
   long modTime = -1; // Modification time of root dir not known.
   Path root = new Path("/");
   return new FileStatus(length, isDir, blockReplication, blockSize,
-  modTime, root.makeQualified(this));
+  modTime, this.makeQualified(root));
 }
 String pathName = parentPath.toUri().getPath();
 FTPFile[] ftpFiles = client.listFiles(pathName);
@@ -546,7 

[31/50] [abbrv] hadoop git commit: HDFS-11303. Hedged read might hang infinitely if read data from all DN failed . Contributed by Chen Zhang, Wei-chiu Chuang, and John Zhuge.

2017-08-12 Thread inigoiri
HDFS-11303. Hedged read might hang infinitely if read data from all DN failed . 
Contributed by Chen Zhang, Wei-chiu Chuang, and John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8b242f09
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8b242f09
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8b242f09

Branch: refs/heads/HDFS-10467
Commit: 8b242f09a61a7536d2422546bfa6c2aaf1d57ed6
Parents: 28d97b7
Author: John Zhuge 
Authored: Thu Aug 10 14:04:36 2017 -0700
Committer: John Zhuge 
Committed: Fri Aug 11 19:42:07 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSInputStream.java  | 11 ++--
 .../java/org/apache/hadoop/hdfs/TestPread.java  | 63 
 2 files changed, 70 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b242f09/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index dcc997c..6bff172 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -1131,8 +1131,9 @@ public class DFSInputStream extends FSInputStream
Future<ByteBuffer> firstRequest = hedgedService
 .submit(getFromDataNodeCallable);
 futures.add(firstRequest);
+Future<ByteBuffer> future = null;
 try {
-  Future future = hedgedService.poll(
+  future = hedgedService.poll(
   conf.getHedgedReadThresholdMillis(), TimeUnit.MILLISECONDS);
   if (future != null) {
 ByteBuffer result = future.get();
@@ -1142,16 +1143,18 @@ public class DFSInputStream extends FSInputStream
   }
   DFSClient.LOG.debug("Waited {}ms to read from {}; spawning hedged "
   + "read", conf.getHedgedReadThresholdMillis(), chosenNode.info);
-  // Ignore this node on next go around.
-  ignored.add(chosenNode.info);
   dfsClient.getHedgedReadMetrics().incHedgedReadOps();
   // continue; no need to refresh block locations
 } catch (ExecutionException e) {
-  // Ignore
+  futures.remove(future);
 } catch (InterruptedException e) {
   throw new InterruptedIOException(
   "Interrupted while waiting for reading task");
 }
+// Ignore this node on next go around.
+// If the poll timed out and the request is still ongoing, don't
+// consider it again. If reading the data failed, don't consider it either.
+ignored.add(chosenNode.info);
   } else {
 // We are starting up a 'hedged' read. We have a read already
 // ongoing. Call getBestNodeDNAddrPair instead of chooseDataNode.
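
The behaviour being fixed is controlled by client-side keys, the same ones the new test below sets. A minimal sketch of enabling hedged reads, with illustrative values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public class HedgedReadConfSketch {
  public static Configuration hedgedReadConf() {
    Configuration conf = new Configuration();
    // Threads available to serve the speculative (hedged) requests.
    conf.setInt(HdfsClientConfigKeys.HedgedRead.THREADPOOL_SIZE_KEY, 5);
    // Spawn a hedged read if the first replica stays silent this long (ms).
    conf.setLong(HdfsClientConfigKeys.HedgedRead.THRESHOLD_MILLIS_KEY, 50L);
    return conf;
  }
}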

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8b242f09/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
index 85fc97b..bcb02b3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java
@@ -59,6 +59,8 @@ import org.mockito.invocation.InvocationOnMock;
 import org.mockito.stubbing.Answer;
 
 import com.google.common.base.Supplier;
+import org.slf4j.LoggerFactory;
+import org.slf4j.Logger;
 
 /**
  * This class tests the DFS positional read functionality in a single node
@@ -72,6 +74,9 @@ public class TestPread {
   boolean simulatedStorage;
   boolean isHedgedRead;
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestPread.class.getName());
+
   @Before
   public void setup() {
 simulatedStorage = false;
@@ -551,6 +556,64 @@ public class TestPread {
 }
   }
 
+  @Test(timeout=3)
+  public void testHedgedReadFromAllDNFailed() throws IOException {
+Configuration conf = new Configuration();
+int numHedgedReadPoolThreads = 5;
+final int hedgedReadTimeoutMillis = 50;
+
+conf.setInt(HdfsClientConfigKeys.HedgedRead.THREADPOOL_SIZE_KEY,
+numHedgedReadPoolThreads);
+conf.setLong(HdfsClientConfigKeys.HedgedRead.THRESHOLD_MILLIS_KEY,
+hedgedReadTimeoutMillis);
+conf.setInt(HdfsClientConfigKeys.Retry.WINDOW_BASE_KEY, 

[25/50] [abbrv] hadoop git commit: YARN-6952. Enable scheduling monitor in FS (Contributed by Yufei Gu via Daniel Templeton)

2017-08-12 Thread inigoiri
YARN-6952. Enable scheduling monitor in FS (Contributed by Yufei Gu via Daniel 
Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/218588be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/218588be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/218588be

Branch: refs/heads/HDFS-10467
Commit: 218588be773123404af4fd26eed5c9e3625feaa7
Parents: bbbf0e2
Author: Daniel Templeton 
Authored: Fri Aug 11 14:02:38 2017 -0700
Committer: Daniel Templeton 
Committed: Fri Aug 11 14:04:19 2017 -0700

--
 .../yarn/server/resourcemanager/ResourceManager.java  |  9 +++--
 .../resourcemanager/monitor/SchedulingEditPolicy.java |  4 ++--
 .../server/resourcemanager/monitor/SchedulingMonitor.java |  4 +---
 .../capacity/ProportionalCapacityPreemptionPolicy.java|  4 ++--
 .../monitor/invariants/InvariantsChecker.java | 10 +-
 .../monitor/invariants/MetricsInvariantChecker.java   |  7 +++
 6 files changed, 16 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/218588be/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index cb7daf9..5333f25 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -90,7 +90,6 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAlloca
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEvent;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.PreemptableResourceScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent;
@@ -698,8 +697,7 @@ public class ResourceManager extends CompositeService implements Recoverable {
 }
   }
 
-  // creating monitors that handle preemption
-  createPolicyMonitors();
+  createSchedulerMonitors();
 
   masterService = createApplicationMasterService();
   addService(masterService) ;
@@ -800,9 +798,8 @@ public class ResourceManager extends CompositeService implements Recoverable {
 
 }
 
-protected void createPolicyMonitors() {
-  if (scheduler instanceof PreemptableResourceScheduler
-  && conf.getBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS,
+protected void createSchedulerMonitors() {
+  if (conf.getBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS,
   YarnConfiguration.DEFAULT_RM_SCHEDULER_ENABLE_MONITORS)) {
 LOG.info("Loading policy monitors");
List<SchedulingEditPolicy> policies = conf.getInstances(
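
Under the relaxed check, turning the monitors on for any scheduler, including the FairScheduler, is a single flag flip; a minimal sketch, illustrative only and not part of the patch:

import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MonitorConfSketch {
  public static YarnConfiguration withMonitors() {
    YarnConfiguration conf = new YarnConfiguration();
    // With this patch the flag alone gates createSchedulerMonitors(); the
    // scheduler no longer has to implement PreemptableResourceScheduler.
    conf.setBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS, true);
    return conf;
  }
}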

http://git-wip-us.apache.org/repos/asf/hadoop/blob/218588be/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingEditPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingEditPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingEditPolicy.java
index 47458a3..d2550e6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingEditPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingEditPolicy.java
@@ -19,12 +19,12 @@ package 

[37/50] [abbrv] hadoop git commit: HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f4d9f10/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
new file mode 100644
index 000..ee6f57d
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java
@@ -0,0 +1,290 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FileSubclusterResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.NamenodePriorityComparator;
+import org.apache.hadoop.hdfs.server.federation.resolver.NamenodeStatusReport;
+import org.apache.hadoop.hdfs.server.federation.resolver.PathLocation;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.util.Time;
+
+/**
+ * In-memory cache/mock of a namenode and file resolver. Stores the most
+ * recently updated NN information for each nameservice and block pool. Also
+ * stores a virtual mount table for resolving global namespace paths to
+ * local NN paths.
+ */
+public class MockResolver
+implements ActiveNamenodeResolver, FileSubclusterResolver {
+
+  private Map<String, List<? extends FederationNamenodeContext>> resolver =
+      new HashMap<String, List<? extends FederationNamenodeContext>>();
+  private Map<String, List<RemoteLocation>> locations =
+      new HashMap<String, List<RemoteLocation>>();
+  private Set<FederationNamespaceInfo> namespaces =
+      new HashSet<FederationNamespaceInfo>();
+  private String defaultNamespace = null;
+
+  public MockResolver(Configuration conf, StateStoreService store) {
+this.cleanRegistrations();
+  }
+
+  public void addLocation(String mount, String nameservice, String location) {
+RemoteLocation remoteLocation = new RemoteLocation(nameservice, location);
+List<RemoteLocation> locationsList = locations.get(mount);
+if (locationsList == null) {
+  locationsList = new LinkedList<RemoteLocation>();
+  locations.put(mount, locationsList);
+}
+if (!locationsList.contains(remoteLocation)) {
+  locationsList.add(remoteLocation);
+}
+
+if (this.defaultNamespace == null) {
+  this.defaultNamespace = nameservice;
+}
+  }
+
+  public synchronized void cleanRegistrations() {
+this.resolver =
+    new HashMap<String, List<? extends FederationNamenodeContext>>();
+this.namespaces = new HashSet<FederationNamespaceInfo>();
+  }
+
+  @Override
+  public void updateActiveNamenode(
+  String ns, InetSocketAddress successfulAddress) {
+
+String address = successfulAddress.getHostName() + ":" +
+successfulAddress.getPort();
+String key = ns;
+if (key != null) {
+  // Update the active entry
+  @SuppressWarnings("unchecked")
+  List<FederationNamenodeContext> iterator =
+      (List<FederationNamenodeContext>) resolver.get(key);
+  for (FederationNamenodeContext namenode : iterator) {
+if (namenode.getRpcAddress().equals(address)) {
+  MockNamenodeContext nn = (MockNamenodeContext) namenode;
+  nn.setState(FederationNamenodeServiceState.ACTIVE);
+  break;
+}
+  }
+  Collections.sort(iterator, new NamenodePriorityComparator());
+}
+  }
+
+  @Override
+  public List<? extends FederationNamenodeContext>
+  getNamenodesForNameserviceId(String nameserviceId) {
+return resolver.get(nameserviceId);
+  }

[16/50] [abbrv] hadoop git commit: HDFS-11957. Enable POSIX ACL inheritance by default. Contributed by John Zhuge.

2017-08-12 Thread inigoiri
HDFS-11957. Enable POSIX ACL inheritance by default. Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/312e57b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/312e57b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/312e57b9

Branch: refs/heads/HDFS-10467
Commit: 312e57b95477ec95e6735f5721c646ad1df019f8
Parents: a8b7546
Author: John Zhuge 
Authored: Fri Jun 9 08:42:16 2017 -0700
Committer: John Zhuge 
Committed: Thu Aug 10 10:30:47 2017 -0700

--
 .../java/org/apache/hadoop/hdfs/DFSConfigKeys.java|  2 +-
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml   |  2 +-
 .../src/site/markdown/HdfsPermissionsGuide.md |  2 +-
 .../test/java/org/apache/hadoop/cli/TestAclCLI.java   |  2 ++
 .../hadoop/hdfs/server/namenode/FSAclBaseTest.java|  8 
 .../hdfs/server/namenode/TestFSImageWithAcl.java  | 14 --
 6 files changed, 17 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index dc9bf76..f4c383e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -269,7 +269,7 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY =
   "dfs.namenode.posix.acl.inheritance.enabled";
   public static final boolean
-  DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_DEFAULT = false;
+  DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_DEFAULT = true;
   public static final String  DFS_NAMENODE_XATTRS_ENABLED_KEY = 
"dfs.namenode.xattrs.enabled";
   public static final boolean DFS_NAMENODE_XATTRS_ENABLED_DEFAULT = true;
   public static final String  DFS_ADMIN = "dfs.cluster.administrators";
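
Clusters that depended on the old umask-based create behaviour can opt back out of the new default; a minimal sketch using the key above (not part of the commit, and equally settable in hdfs-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class AclInheritanceOptOutSketch {
  public static Configuration legacyBehavior() {
    Configuration conf = new Configuration();
    // Restore the pre-HDFS-11957 default: apply the client umask instead of
    // inheriting the parent directory's default ACLs on create.
    conf.setBoolean(
        DFSConfigKeys.DFS_NAMENODE_POSIX_ACL_INHERITANCE_ENABLED_KEY, false);
    return conf;
  }
}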

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 4942967..03becc9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -459,7 +459,7 @@
 
   
    <name>dfs.namenode.posix.acl.inheritance.enabled</name>
-    <value>false</value>
+    <value>true</value>
    <description>
   Set to true to enable POSIX style ACL inheritance. When it is enabled
   and the create request comes from a compatible client, the NameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
index c502534..82b5cec 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsPermissionsGuide.md
@@ -322,7 +322,7 @@ Configuration Parameters
 
 *   `dfs.namenode.posix.acl.inheritance.enabled`
 
-Set to true to enable POSIX style ACL inheritance. Disabled by default.
+Set to true to enable POSIX style ACL inheritance. Enabled by default.
 When it is enabled and the create request comes from a compatible client,
 the NameNode will apply default ACLs from the parent directory to
 the create mode and ignore the client umask. If no default ACL is found,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/312e57b9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
index 75111bb..9cf2180 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestAclCLI.java
@@ -34,6 +34,8 @@ public class TestAclCLI extends CLITestHelperDFS {
 
   protected void initConf() {
 

[12/50] [abbrv] hadoop git commit: MAPREDUCE-6923. Optimize MapReduce Shuffle I/O for small partitions. Contributed by Robert Schmidtke.

2017-08-12 Thread inigoiri
MAPREDUCE-6923. Optimize MapReduce Shuffle I/O for small partitions. 
Contributed by Robert Schmidtke.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac7d0604
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac7d0604
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac7d0604

Branch: refs/heads/HDFS-10467
Commit: ac7d0604bc73c0925eff240ad9837e14719d57b7
Parents: b5c02f9
Author: Ravi Prakash 
Authored: Wed Aug 9 15:39:52 2017 -0700
Committer: Ravi Prakash 
Committed: Wed Aug 9 15:39:52 2017 -0700

--
 .../main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java  | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac7d0604/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
index cb9b5e0..79045f9 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java
@@ -111,7 +111,10 @@ public class FadvisedFileRegion extends DefaultFileRegion {
 
 long trans = actualCount;
 int readSize;
-ByteBuffer byteBuffer = ByteBuffer.allocate(this.shuffleBufferSize);
+ByteBuffer byteBuffer = ByteBuffer.allocate(
+Math.min(
+this.shuffleBufferSize,
+trans > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) trans));
 
 while(trans > 0L &&
(readSize = fileChannel.read(byteBuffer, this.position+position)) > 0) {
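
The effect of the clamp is easy to verify in isolation; a self-contained sketch of the same sizing logic, with hypothetical names:

import java.nio.ByteBuffer;

public class ShuffleBufferSizingSketch {
  /** Same clamp as the patch: never allocate more than the bytes left. */
  static ByteBuffer allocate(int shuffleBufferSize, long remaining) {
    int capped = remaining > Integer.MAX_VALUE
        ? Integer.MAX_VALUE : (int) remaining;
    return ByteBuffer.allocate(Math.min(shuffleBufferSize, capped));
  }

  public static void main(String[] args) {
    // A 4 KB partition now gets a 4 KB buffer instead of the full 128 KB.
    System.out.println(allocate(128 * 1024, 4096L).capacity());   // 4096
  }
}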





[13/50] [abbrv] hadoop git commit: YARN-6631. Refactor loader.js in new Yarn UI. Contributed by Akhil P B.

2017-08-12 Thread inigoiri
YARN-6631. Refactor loader.js in new Yarn UI. Contributed by Akhil P B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d953c23
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d953c23
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d953c23

Branch: refs/heads/HDFS-10467
Commit: 8d953c2359c5b12cf5b1f3c14be3ff5bb74242d0
Parents: ac7d060
Author: Sunil G 
Authored: Thu Aug 10 11:53:26 2017 +0530
Committer: Sunil G 
Committed: Thu Aug 10 11:53:26 2017 +0530

--
 .../src/main/webapp/app/initializers/loader.js  | 42 +---
 1 file changed, 19 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d953c23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
index aa8fb07..55f6e1b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/initializers/loader.js
@@ -20,25 +20,27 @@
 
 import Ember from 'ember';
 
-function getTimeLineURL() {
-  return '/conf?name=yarn.timeline-service.webapp.address';
+function getTimeLineURL(rmhost) {
+  var url = window.location.protocol + '//' +
+(ENV.hosts.localBaseAddress? ENV.hosts.localBaseAddress + '/' : '') + rmhost;
+
+  url += '/conf?name=yarn.timeline-service.webapp.address';
+  Ember.Logger.log("Get Timeline Address URL: " + url);
+  return url;
 }
 
 function updateConfigs(application) {
   var hostname = window.location.hostname;
-  var rmhost = hostname +
-(window.location.port ? ':' + window.location.port: '');
-
-  Ember.Logger.log("RM Address:" + rmhost);
+  var rmhost = hostname + (window.location.port ? ':' + window.location.port: '');
 
   if(!ENV.hosts.rmWebAddress) {
-ENV = {
-   hosts: {
-  rmWebAddress: rmhost,
-},
-};
+ENV.hosts.rmWebAddress = rmhost;
+  } else {
+rmhost = ENV.hosts.rmWebAddress;
   }
 
+  Ember.Logger.log("RM Address: " + rmhost);
+
   if(!ENV.hosts.timelineWebAddress) {
 var timelinehost = "";
 $.ajax({
@@ -46,7 +48,7 @@ function updateConfigs(application) {
   dataType: 'json',
   async: true,
   context: this,
-  url: getTimeLineURL(),
+  url: getTimeLineURL(rmhost),
   success: function(data) {
 timelinehost = data.property.value;
 ENV.hosts.timelineWebAddress = timelinehost;
@@ -54,24 +56,18 @@ function updateConfigs(application) {
 var address = timelinehost.split(":")[0];
 var port = timelinehost.split(":")[1];
 
-Ember.Logger.log("Timeline Address from RM:" + address + ":" + port);
+Ember.Logger.log("Timeline Address from RM: " + timelinehost);
 
 if(address === "0.0.0.0" || address === "localhost") {
   var updatedAddress =  hostname + ":" + port;
-
-  /* Timeline v2 is not supporting CORS, so make as default*/
-  ENV = {
- hosts: {
-rmWebAddress: rmhost,
-timelineWebAddress: updatedAddress,
-  },
-  };
-  Ember.Logger.log("Timeline Updated Address:" + updatedAddress);
+  ENV.hosts.timelineWebAddress = updatedAddress;
+  Ember.Logger.log("Timeline Updated Address: " + updatedAddress);
 }
 application.advanceReadiness();
-  },
+  }
 });
   } else {
+Ember.Logger.log("Timeline Address: " + ENV.hosts.timelineWebAddress);
 application.advanceReadiness();
   }
 }





[40/50] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8e03592/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 24792bb..4bae71e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -17,16 +17,109 @@
  */
 package org.apache.hadoop.hdfs.server.federation.router;
 
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_HANDLER_QUEUE_SIZE_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_COUNT_DEFAULT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_COUNT_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_DEFAULT;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_ROUTER_READER_QUEUE_SIZE_KEY;
+
+import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Collection;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.CryptoProtocolVersion;
+import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
+import org.apache.hadoop.fs.CacheFlag;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.ContentSummary;
+import org.apache.hadoop.fs.CreateFlag;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FsServerDefaults;
+import org.apache.hadoop.fs.Options;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.QuotaUsage;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.XAttr;
+import org.apache.hadoop.fs.XAttrSetFlag;
+import org.apache.hadoop.fs.permission.AclEntry;
+import org.apache.hadoop.fs.permission.AclStatus;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.hdfs.AddBlockFlag;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.inotify.EventBatchList;
+import org.apache.hadoop.hdfs.protocol.AddingECPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
+import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
+import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
+import org.apache.hadoop.hdfs.protocol.CachePoolInfo;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.CorruptFileBlocks;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.EncryptionZone;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
+import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
+import 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.ClientNamenodeProtocol;
+import org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB;
+import 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB;
+import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
+import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
 import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 

[46/50] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
new file mode 100644
index 000..2d74505
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestNamenodeResolver.java
@@ -0,0 +1,284 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.resolver;
+
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMENODES;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.NAMESERVICES;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.ROUTERS;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createNamenodeReport;
+import static 
org.apache.hadoop.hdfs.server.federation.FederationTestUtils.verifyException;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.clearRecords;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.getStateStoreConfiguration;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.newStateStore;
+import static 
org.apache.hadoop.hdfs.server.federation.store.FederationStateStoreTestUtils.waitStateStore;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import 
org.apache.hadoop.hdfs.server.federation.store.StateStoreUnavailableException;
+import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test the basic {@link ActiveNamenodeResolver} functionality.
+ */
+public class TestNamenodeResolver {
+
+  private static StateStoreService stateStore;
+  private static ActiveNamenodeResolver namenodeResolver;
+
+  @BeforeClass
+  public static void create() throws Exception {
+
+Configuration conf = getStateStoreConfiguration();
+
+// Reduce expirations to 5 seconds
+conf.setLong(
+DFSConfigKeys.FEDERATION_STORE_MEMBERSHIP_EXPIRATION_MS,
+TimeUnit.SECONDS.toMillis(5));
+
+stateStore = newStateStore(conf);
+assertNotNull(stateStore);
+
+namenodeResolver = new MembershipNamenodeResolver(conf, stateStore);
+namenodeResolver.setRouterId(ROUTERS[0]);
+  }
+
+  @AfterClass
+  public static void destroy() throws Exception {
+stateStore.stop();
+stateStore.close();
+  }
+
+  @Before
+  public void setup() throws IOException, InterruptedException {
+// Wait for state store to connect
+stateStore.loadDriver();
+waitStateStore(stateStore, 1);
+
+// Clear NN registrations
+boolean cleared = clearRecords(stateStore, MembershipState.class);
+assertTrue(cleared);
+  }
+
+  @Test
+  public void testStateStoreDisconnected() throws Exception {
+
+// Add an entry to the store
+NamenodeStatusReport report = createNamenodeReport(
+NAMESERVICES[0], NAMENODES[0], HAServiceState.ACTIVE);
+assertTrue(namenodeResolver.registerNamenode(report));
+
+// Close the data store driver
+stateStore.closeDriver();
+assertFalse(stateStore.isDriverReady());
+
+// Flush the caches
+stateStore.refreshCaches(true);
+
+// Verify commands 

[50/50] [abbrv] hadoop git commit: HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10646. Federation admin tool. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/04c92c9b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/04c92c9b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/04c92c9b

Branch: refs/heads/HDFS-10467
Commit: 04c92c9ba723e426fe0b63d8ede34fbaed28da0d
Parents: ca78fcb
Author: Inigo Goiri 
Authored: Tue Aug 8 14:44:43 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:37:03 2017 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   1 +
 .../hadoop-hdfs/src/main/bin/hdfs   |   5 +
 .../hadoop-hdfs/src/main/bin/hdfs.cmd   |   7 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  19 ++
 .../hdfs/protocolPB/RouterAdminProtocolPB.java  |  44 +++
 ...uterAdminProtocolServerSideTranslatorPB.java | 151 
 .../RouterAdminProtocolTranslatorPB.java| 150 
 .../resolver/MembershipNamenodeResolver.java|  34 +-
 .../hdfs/server/federation/router/Router.java   |  52 +++
 .../federation/router/RouterAdminServer.java| 183 ++
 .../server/federation/router/RouterClient.java  |  76 +
 .../hdfs/tools/federation/RouterAdmin.java  | 341 +++
 .../hdfs/tools/federation/package-info.java |  28 ++
 .../src/main/proto/RouterProtocol.proto |  47 +++
 .../src/main/resources/hdfs-default.xml |  46 +++
 .../server/federation/RouterConfigBuilder.java  |  26 ++
 .../server/federation/RouterDFSCluster.java |  43 ++-
 .../server/federation/StateStoreDFSCluster.java | 148 
 .../federation/router/TestRouterAdmin.java  | 261 ++
 19 files changed, 1644 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/04c92c9b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index a2233b5..b9edfe2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -337,6 +337,7 @@ "http://maven.apache.org/xsd/maven-4.0.0.xsd">
                   <include>editlog.proto</include>
                   <include>fsimage.proto</include>
                   <include>FederationProtocol.proto</include>
+                  <include>RouterProtocol.proto</include>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04c92c9b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index b1f44a4..d51a8e2 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -31,6 +31,7 @@ function hadoop_usage
   hadoop_add_option "--hosts filename" "list of hosts to use in worker mode"
   hadoop_add_option "--workers" "turn on worker mode"
 
+<<<<<<< HEAD
   hadoop_add_subcommand "balancer" daemon "run a cluster balancing utility"
   hadoop_add_subcommand "cacheadmin" admin "configure the HDFS cache"
   hadoop_add_subcommand "classpath" client "prints the class path needed to 
get the hadoop jar and the required libraries"
@@ -42,6 +43,7 @@ function hadoop_usage
   hadoop_add_subcommand "diskbalancer" daemon "Distributes data evenly among 
disks on a given node"
   hadoop_add_subcommand "envvars" client "display computed Hadoop environment 
variables"
   hadoop_add_subcommand "ec" admin "run a HDFS ErasureCoding CLI"
+  hadoop_add_subcommand "federation" admin "manage Router-based federation"
   hadoop_add_subcommand "fetchdt" client "fetch a delegation token from the 
NameNode"
   hadoop_add_subcommand "fsck" admin "run a DFS filesystem checking utility"
   hadoop_add_subcommand "getconf" client "get config values from configuration"
@@ -181,6 +183,9 @@ function hdfscmd_case
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.federation.router.Router'
 ;;
+federation)
+  HADOOP_CLASSNAME='org.apache.hadoop.hdfs.tools.federation.RouterAdmin'
+;;
 secondarynamenode)
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   
HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04c92c9b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
index b9853d6..53bdf70 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
+++ 

[18/50] [abbrv] hadoop git commit: HDFS-12287. Remove a no-longer applicable TODO comment in DatanodeManager. Contributed by Chen Liang.

2017-08-12 Thread inigoiri
HDFS-12287. Remove a no-longer applicable TODO comment in DatanodeManager. 
Contributed by Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f13ca949
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f13ca949
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f13ca949

Branch: refs/heads/HDFS-10467
Commit: f13ca94954072c9b898b142a5ff86f2c1f3ee55a
Parents: a32e013
Author: Yiqun Lin 
Authored: Fri Aug 11 14:13:45 2017 +0800
Committer: Yiqun Lin 
Committed: Fri Aug 11 14:13:45 2017 +0800

--
 .../apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f13ca949/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index d705fec..78783ca 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -212,8 +212,6 @@ public class DatanodeManager {
 this.namesystem = namesystem;
 this.blockManager = blockManager;
 
-// TODO: Enables DFSNetworkTopology by default after more stress
-// testings/validations.
 this.useDfsNetworkTopology = conf.getBoolean(
 DFSConfigKeys.DFS_USE_DFS_NETWORK_TOPOLOGY_KEY,
 DFSConfigKeys.DFS_USE_DFS_NETWORK_TOPOLOGY_DEFAULT);





[33/50] [abbrv] hadoop git commit: HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10630. Federation State Store FS Implementation. Contributed by Jason Kace 
and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe3672c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe3672c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe3672c9

Branch: refs/heads/HDFS-10467
Commit: fe3672c9219d101d7cbee6668b119c5403ad3a36
Parents: b93d724
Author: Inigo Goiri 
Authored: Tue May 2 15:49:53 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  14 +
 .../federation/router/PeriodicService.java  | 198 
 .../StateStoreConnectionMonitorService.java |  67 +++
 .../federation/store/StateStoreService.java | 152 +-
 .../federation/store/StateStoreUtils.java   |  51 +-
 .../store/driver/StateStoreDriver.java  |  31 +-
 .../driver/StateStoreRecordOperations.java  |  17 +-
 .../store/driver/impl/StateStoreBaseImpl.java   |  31 +-
 .../driver/impl/StateStoreFileBaseImpl.java | 429 
 .../store/driver/impl/StateStoreFileImpl.java   | 161 +++
 .../driver/impl/StateStoreFileSystemImpl.java   | 178 +++
 .../driver/impl/StateStoreSerializableImpl.java |  77 +++
 .../federation/store/records/BaseRecord.java|  20 +-
 .../server/federation/store/records/Query.java  |  66 +++
 .../src/main/resources/hdfs-default.xml |  16 +
 .../store/FederationStateStoreTestUtils.java| 232 +
 .../store/driver/TestStateStoreDriverBase.java  | 483 +++
 .../store/driver/TestStateStoreFile.java|  64 +++
 .../store/driver/TestStateStoreFileSystem.java  |  88 
 19 files changed, 2329 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe3672c9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 320e1f3..2b6d0e8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.hdfs;
 
+import java.util.concurrent.TimeUnit;
+
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
@@ -25,6 +27,8 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant;
 import 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaLruTracker;
+import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
+import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl;
 import 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreSerializerPBImpl;
 import org.apache.hadoop.http.HttpConfig;
 
@@ -1119,6 +1123,16 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   FEDERATION_STORE_SERIALIZER_CLASS_DEFAULT =
   StateStoreSerializerPBImpl.class;
 
+  public static final String FEDERATION_STORE_DRIVER_CLASS =
+  FEDERATION_STORE_PREFIX + "driver.class";
+  public static final Class<? extends StateStoreDriver>
+  FEDERATION_STORE_DRIVER_CLASS_DEFAULT = StateStoreFileImpl.class;
+
+  public static final String FEDERATION_STORE_CONNECTION_TEST_MS =
+  FEDERATION_STORE_PREFIX + "connection.test";
+  public static final long FEDERATION_STORE_CONNECTION_TEST_MS_DEFAULT =
+  TimeUnit.MINUTES.toMillis(1);
+
   // dfs.client.retry confs are moved to HdfsClientConfigKeys.Retry 
   @Deprecated
   public static final String  DFS_CLIENT_RETRY_POLICY_ENABLED_KEY

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe3672c9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PeriodicService.java
new file mode 100644
index 000..5e1
--- /dev/null
+++ 

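A minimal sketch of selecting a State Store driver through the new keys above. The constants are taken from this diff; `StateStoreFileSystemImpl` appears in the commit's file list, and treating it as a drop-in alternative to the local-file default is an assumption, not something this excerpt confirms:

    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.server.federation.store.driver.StateStoreDriver;
    import org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileSystemImpl;

    public class StateStoreDriverConfigSketch {
      public static Configuration configure() {
        Configuration conf = new Configuration();
        // Swap the local-file default for the HDFS-backed driver.
        conf.setClass(DFSConfigKeys.FEDERATION_STORE_DRIVER_CLASS,
            StateStoreFileSystemImpl.class, StateStoreDriver.class);
        // Probe the driver connection every 30s instead of the 1-minute default.
        conf.setLong(DFSConfigKeys.FEDERATION_STORE_CONNECTION_TEST_MS,
            TimeUnit.SECONDS.toMillis(30));
        return conf;
      }
    }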
[38/50] [abbrv] hadoop git commit: HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10629. Federation Router. Contributed by Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6f4d9f10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6f4d9f10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6f4d9f10

Branch: refs/heads/HDFS-10467
Commit: 6f4d9f106309f88614242bf037b6d496a97981e9
Parents: 8b242f0
Author: Inigo 
Authored: Tue Mar 28 14:30:59 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../hadoop-hdfs/src/main/bin/hdfs   |   5 +
 .../hadoop-hdfs/src/main/bin/hdfs.cmd   |   8 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  17 +
 .../resolver/ActiveNamenodeResolver.java| 117 +++
 .../resolver/FederationNamenodeContext.java |  87 +++
 .../FederationNamenodeServiceState.java |  46 ++
 .../resolver/FederationNamespaceInfo.java   |  99 +++
 .../resolver/FileSubclusterResolver.java|  75 ++
 .../resolver/NamenodePriorityComparator.java|  63 ++
 .../resolver/NamenodeStatusReport.java  | 195 +
 .../federation/resolver/PathLocation.java   | 122 +++
 .../federation/resolver/RemoteLocation.java |  74 ++
 .../federation/resolver/package-info.java   |  41 +
 .../federation/router/FederationUtil.java   | 117 +++
 .../router/RemoteLocationContext.java   |  38 +
 .../hdfs/server/federation/router/Router.java   | 263 +++
 .../federation/router/RouterRpcServer.java  | 102 +++
 .../server/federation/router/package-info.java  |  31 +
 .../federation/store/StateStoreService.java |  77 ++
 .../server/federation/store/package-info.java   |  62 ++
 .../src/main/resources/hdfs-default.xml |  16 +
 .../server/federation/FederationTestUtils.java  | 233 ++
 .../hdfs/server/federation/MockResolver.java| 290 +++
 .../server/federation/RouterConfigBuilder.java  |  40 +
 .../server/federation/RouterDFSCluster.java | 767 +++
 .../server/federation/router/TestRouter.java|  96 +++
 26 files changed, 3080 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f4d9f10/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
index e6405b5..b1f44a4 100755
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
@@ -57,6 +57,7 @@ function hadoop_usage
   hadoop_add_subcommand "oiv" admin "apply the offline fsimage viewer to an 
fsimage"
   hadoop_add_subcommand "oiv_legacy" admin "apply the offline fsimage viewer 
to a legacy fsimage"
   hadoop_add_subcommand "portmap" daemon "run a portmap service"
+  hadoop_add_subcommand "router" daemon "run the DFS router"
   hadoop_add_subcommand "secondarynamenode" daemon "run the DFS secondary 
namenode"
   hadoop_add_subcommand "snapshotDiff" client "diff two snapshots of a 
directory or diff the current directory contents with a snapshot"
   hadoop_add_subcommand "storagepolicies" admin "list/get/set block storage 
policies"
@@ -176,6 +177,10 @@ function hdfscmd_case
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   HADOOP_CLASSNAME=org.apache.hadoop.portmap.Portmap
 ;;
+router)
+  HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
+  HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.federation.router.Router'
+;;
 secondarynamenode)
   HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
   
HADOOP_CLASSNAME='org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6f4d9f10/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
index 2181e47..b9853d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs.cmd
@@ -59,7 +59,7 @@ if "%1" == "--loglevel" (
 )
   )
 
-  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies classpath 
crypto debug
+  set hdfscommands=dfs namenode secondarynamenode journalnode zkfc datanode 
dfsadmin haadmin fsck balancer jmxget oiv oev fetchdt getconf groups 
snapshotDiff lsSnapshottableDir cacheadmin mover storagepolicies classpath 
crypto router debug
   for %%i in ( %hdfscommands% ) do (
 if %hdfs-command% == %%i set hdfscommand=true

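The shell changes above only map `hdfs router` to the Router class. A hedged sketch of what starting it amounts to, assuming Router follows the standard Hadoop Service lifecycle (init/start); Router's actual superclass is not visible in this excerpt:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.server.federation.router.Router;

    public class RouterStartSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        Router router = new Router();
        // Assumption: Router is a Hadoop Service; `hdfs router` daemonizes
        // this same class via the hdfscmd_case entry above.
        router.init(conf);
        router.start();
      }
    }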
[48/50] [abbrv] hadoop git commit: HDFS-10687. Federation Membership State Store internal API. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-10687. Federation Membership State Store internal API. Contributed by 
Jason Kace and Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b85b5f31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b85b5f31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b85b5f31

Branch: refs/heads/HDFS-10467
Commit: b85b5f31aca0f98e1ba4e514f41b84bdc3be73ed
Parents: 51822e3
Author: Inigo Goiri 
Authored: Mon Jul 31 10:55:21 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../dev-support/findbugsExcludeFile.xml |   3 +
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   1 +
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  17 +-
 .../resolver/MembershipNamenodeResolver.java| 290 
 .../federation/router/FederationUtil.java   |  42 +-
 .../federation/store/CachedRecordStore.java | 237 ++
 .../federation/store/MembershipStore.java   | 126 +
 .../federation/store/StateStoreCache.java   |  36 ++
 .../store/StateStoreCacheUpdateService.java |  67 +++
 .../federation/store/StateStoreService.java | 202 +++-
 .../store/impl/MembershipStoreImpl.java | 311 +
 .../federation/store/impl/package-info.java |  31 ++
 .../GetNamenodeRegistrationsRequest.java|  52 +++
 .../GetNamenodeRegistrationsResponse.java   |  55 +++
 .../store/protocol/GetNamespaceInfoRequest.java |  30 ++
 .../protocol/GetNamespaceInfoResponse.java  |  52 +++
 .../protocol/NamenodeHeartbeatRequest.java  |  52 +++
 .../protocol/NamenodeHeartbeatResponse.java |  49 ++
 .../UpdateNamenodeRegistrationRequest.java  |  72 +++
 .../UpdateNamenodeRegistrationResponse.java |  51 ++
 .../impl/pb/FederationProtocolPBTranslator.java | 145 ++
 .../GetNamenodeRegistrationsRequestPBImpl.java  |  87 
 .../GetNamenodeRegistrationsResponsePBImpl.java |  99 
 .../impl/pb/GetNamespaceInfoRequestPBImpl.java  |  60 +++
 .../impl/pb/GetNamespaceInfoResponsePBImpl.java |  95 
 .../impl/pb/NamenodeHeartbeatRequestPBImpl.java |  93 
 .../pb/NamenodeHeartbeatResponsePBImpl.java |  71 +++
 ...UpdateNamenodeRegistrationRequestPBImpl.java |  95 
 ...pdateNamenodeRegistrationResponsePBImpl.java |  73 +++
 .../store/protocol/impl/pb/package-info.java|  29 ++
 .../store/records/MembershipState.java  | 329 +
 .../store/records/MembershipStats.java  | 126 +
 .../records/impl/pb/MembershipStatePBImpl.java  | 334 +
 .../records/impl/pb/MembershipStatsPBImpl.java  | 191 
 .../src/main/proto/FederationProtocol.proto | 107 +
 .../src/main/resources/hdfs-default.xml |  18 +-
 .../resolver/TestNamenodeResolver.java  | 284 
 .../store/FederationStateStoreTestUtils.java|  23 +-
 .../federation/store/TestStateStoreBase.java|  81 
 .../store/TestStateStoreMembershipState.java| 463 +++
 .../store/driver/TestStateStoreDriverBase.java  |  69 ++-
 .../store/records/TestMembershipState.java  | 129 ++
 42 files changed, 4745 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml 
b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
index 2a7824a..c934564 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/dev-support/findbugsExcludeFile.xml
@@ -15,6 +15,9 @@

  
  
+   
+ 
+ 

  
  

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/pom.xml
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml 
b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 1c50d31..a2233b5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -336,6 +336,7 @@ "http://maven.apache.org/xsd/maven-4.0.0.xsd">
                   <include>QJournalProtocol.proto</include>
                   <include>editlog.proto</include>
                   <include>fsimage.proto</include>
+                  <include>FederationProtocol.proto</include>
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b85b5f31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 

[44/50] [abbrv] hadoop git commit: HDFS-12223. Rebasing HDFS-10467. Contributed by Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-12223. Rebasing HDFS-10467. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51822e30
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51822e30
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51822e30

Branch: refs/heads/HDFS-10467
Commit: 51822e30a3f067bbd8b326369f525a12853e422e
Parents: b8e0359
Author: Inigo Goiri 
Authored: Fri Jul 28 15:55:10 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../federation/router/RouterRpcServer.java  | 59 +---
 1 file changed, 51 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51822e30/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 4bae71e..eaaab39 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -64,8 +64,9 @@ import org.apache.hadoop.hdfs.AddBlockFlag;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.inotify.EventBatchList;
-import org.apache.hadoop.hdfs.protocol.AddingECPolicyResponse;
+import org.apache.hadoop.hdfs.protocol.AddECPolicyResponse;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.BlocksStats;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
 import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
@@ -75,6 +76,7 @@ import org.apache.hadoop.hdfs.protocol.CorruptFileBlocks;
 import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.DirectoryListing;
+import org.apache.hadoop.hdfs.protocol.ECBlockGroupsStats;
 import org.apache.hadoop.hdfs.protocol.EncryptionZone;
 import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
@@ -85,6 +87,7 @@ import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
 import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
 import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
@@ -1736,13 +1739,6 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol {
   }
 
   @Override // ClientProtocol
-  public AddingECPolicyResponse[] addErasureCodingPolicies(
-  ErasureCodingPolicy[] policies) throws IOException {
-checkOperation(OperationCategory.WRITE, false);
-return null;
-  }
-
-  @Override // ClientProtocol
   public void unsetErasureCodingPolicy(String src) throws IOException {
 checkOperation(OperationCategory.WRITE, false);
   }
@@ -1808,6 +1804,53 @@ public class RouterRpcServer extends AbstractService 
implements ClientProtocol {
 return null;
   }
 
+  @Override
+  public AddECPolicyResponse[] addErasureCodingPolicies(
+  ErasureCodingPolicy[] arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+return null;
+  }
+
+  @Override
+  public void removeErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public void disableErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public void enableErasureCodingPolicy(String arg0) throws IOException {
+checkOperation(OperationCategory.WRITE, false);
+  }
+
+  @Override
+  public ECBlockGroupsStats getECBlockGroupsStats() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public HashMap<String, String> getErasureCodingCodecs() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public BlocksStats getBlocksStats() throws IOException {
+checkOperation(OperationCategory.READ, false);
+return null;
+  }
+
+  @Override
+  public 

[30/50] [abbrv] hadoop git commit: YARN-6687. Validate that the duration of the periodic reservation is less than the periodicity. (subru via curino)

2017-08-12 Thread inigoiri
YARN-6687. Validate that the duration of the periodic reservation is less than 
the periodicity. (subru via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/28d97b79
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/28d97b79
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/28d97b79

Branch: refs/heads/HDFS-10467
Commit: 28d97b79b69bb2be02d9320105e155eeed6f9e78
Parents: cc59b5f
Author: Carlo Curino 
Authored: Fri Aug 11 16:58:04 2017 -0700
Committer: Carlo Curino 
Committed: Fri Aug 11 16:58:04 2017 -0700

--
 .../reservation/ReservationInputValidator.java  | 18 ++--
 .../TestReservationInputValidator.java  | 93 
 2 files changed, 106 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/28d97b79/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java
index 0e9a825..027d066 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/ReservationInputValidator.java
@@ -129,11 +129,12 @@ public class ReservationInputValidator {
   Resources.multiply(rr.getCapability(), rr.getConcurrency()));
 }
 // verify the allocation is possible (skip for ANY)
-if (contract.getDeadline() - contract.getArrival() < minDuration
+long duration = contract.getDeadline() - contract.getArrival();
+if (duration < minDuration
 && type != ReservationRequestInterpreter.R_ANY) {
   message =
   "The time difference ("
-  + (contract.getDeadline() - contract.getArrival())
+  + (duration)
   + ") between arrival (" + contract.getArrival() + ") "
   + "and deadline (" + contract.getDeadline() + ") must "
   + " be greater or equal to the minimum resource duration ("
@@ -158,15 +159,22 @@ public class ReservationInputValidator {
 // check that the recurrence is a positive long value.
 String recurrenceExpression = contract.getRecurrenceExpression();
 try {
-  Long recurrence = Long.parseLong(recurrenceExpression);
+  long recurrence = Long.parseLong(recurrenceExpression);
   if (recurrence < 0) {
 message = "Negative Period : " + recurrenceExpression + ". Please try"
-+ " again with a non-negative long value as period";
++ " again with a non-negative long value as period.";
+throw RPCUtil.getRemoteException(message);
+  }
+  // verify duration is less than recurrence for periodic reservations
+  if (recurrence > 0 && duration > recurrence) {
+message = "Duration of the requested reservation: " + duration
++ " is greater than the recurrence: " + recurrence
++ ". Please try again with a smaller duration.";
 throw RPCUtil.getRemoteException(message);
   }
 } catch (NumberFormatException e) {
   message = "Invalid period " + recurrenceExpression + ". Please try"
-  + " again with a non-negative long value as period";
+  + " again with a non-negative long value as period.";
   throw RPCUtil.getRemoteException(message);
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28d97b79/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestReservationInputValidator.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestReservationInputValidator.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestReservationInputValidator.java
index 2917cd9..90a681d 100644

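A standalone restatement of the new rule with hypothetical numbers; the real check lives in ReservationInputValidator and raises a remote RPC exception rather than a local one:

    public class PeriodicReservationCheckSketch {
      // Mirrors the validation added above: for a periodic reservation
      // (recurrence > 0), the requested duration must fit within one period.
      static void validate(long arrival, long deadline, long recurrence) {
        long duration = deadline - arrival;
        if (recurrence > 0 && duration > recurrence) {
          throw new IllegalArgumentException("Duration of the requested reservation: "
              + duration + " is greater than the recurrence: " + recurrence);
        }
      }

      public static void main(String[] args) {
        validate(0L, 3_600_000L, 86_400_000L);  // 1 hour in a daily period: accepted
        validate(0L, 90_000_000L, 86_400_000L); // 25 hours in a daily period: rejected
      }
    }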
[14/50] [abbrv] hadoop git commit: HADOOP-14183. Remove service loader config file for wasb fs. Contributed by Esfandiar Manii.

2017-08-12 Thread inigoiri
HADOOP-14183. Remove service loader config file for wasb fs.
Contributed by Esfandiar Manii.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54356b1e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54356b1e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54356b1e

Branch: refs/heads/HDFS-10467
Commit: 54356b1e8366a23fff1bb45601efffc743306efc
Parents: 8d953c2
Author: Steve Loughran 
Authored: Thu Aug 10 16:46:33 2017 +0100
Committer: Steve Loughran 
Committed: Thu Aug 10 16:46:33 2017 +0100

--
 .../src/main/resources/core-default.xml| 12 
 .../services/org.apache.hadoop.fs.FileSystem   | 17 -
 2 files changed, 12 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54356b1e/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml 
b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
index 593fd85..e6b6919 100644
--- a/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
+++ b/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
@@ -1322,6 +1322,18 @@
 </property>
 
 <property>
+  <name>fs.wasb.impl</name>
+  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem</value>
+  <description>The implementation class of the Native Azure Filesystem</description>
+</property>
+
+<property>
+  <name>fs.wasbs.impl</name>
+  <value>org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure</value>
+  <description>The implementation class of the Secure Native Azure Filesystem</description>
+</property>
+
+<property>
   <name>fs.azure.secure.mode</name>
   <value>false</value>
   <description>

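With the service-loader file removed (below), wasb:// resolution now goes through these core-default.xml entries. A minimal sketch; the account and container names are placeholders and credential configuration is omitted:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class WasbLookupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.wasb.impl (above) maps the scheme to NativeAzureFileSystem.
        FileSystem fs = FileSystem.get(
            URI.create("wasb://mycontainer@myaccount.blob.core.windows.net/"), conf);
        System.out.println(fs.getClass().getName());
        fs.close();
      }
    }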
http://git-wip-us.apache.org/repos/asf/hadoop/blob/54356b1e/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
 
b/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
deleted file mode 100644
index 9f4922b..000
--- 
a/hadoop-tools/hadoop-azure/src/main/resources/META-INF/services/org.apache.hadoop.fs.FileSystem
+++ /dev/null
@@ -1,17 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-org.apache.hadoop.fs.azure.NativeAzureFileSystem
-org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure
\ No newline at end of file





[17/50] [abbrv] hadoop git commit: MAPREDUCE-6870. Add configuration for MR job to finish when all reducers are complete. (Peter Bacsko via Haibo Chen)

2017-08-12 Thread inigoiri
MAPREDUCE-6870. Add configuration for MR job to finish when all reducers are 
complete. (Peter Bacsko via Haibo Chen)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a32e0138
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a32e0138
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a32e0138

Branch: refs/heads/HDFS-10467
Commit: a32e0138fb63c92902e6613001f38a87c8a41321
Parents: 312e57b
Author: Haibo Chen 
Authored: Thu Aug 10 15:17:36 2017 -0700
Committer: Haibo Chen 
Committed: Thu Aug 10 15:17:36 2017 -0700

--
 .../mapreduce/v2/app/job/impl/JobImpl.java  |  35 -
 .../mapreduce/v2/app/job/impl/TestJobImpl.java  | 139 +++
 .../apache/hadoop/mapreduce/MRJobConfig.java|   6 +-
 .../src/main/resources/mapred-default.xml   |   8 ++
 4 files changed, 160 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 4d155d0..6880b6c 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -644,6 +644,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   private float reduceProgress;
   private float cleanupProgress;
   private boolean isUber = false;
+  private boolean finishJobWhenReducersDone;
+  private boolean completingJob = false;
 
   private Credentials jobCredentials;
  private Token<JobTokenIdentifier> jobToken;
@@ -717,6 +719,9 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
 this.maxFetchFailuresNotifications = conf.getInt(
 MRJobConfig.MAX_FETCH_FAILURES_NOTIFICATIONS,
 MRJobConfig.DEFAULT_MAX_FETCH_FAILURES_NOTIFICATIONS);
+this.finishJobWhenReducersDone = conf.getBoolean(
+MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE,
+MRJobConfig.DEFAULT_FINISH_JOB_WHEN_REDUCERS_DONE);
   }
 
  protected StateMachine<JobStateInternal, JobEventType, JobEvent>
      getStateMachine() {
@@ -2021,7 +2026,9 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
 TimeUnit.MILLISECONDS);
 return JobStateInternal.FAIL_WAIT;
   }
-  
+
+  checkReadyForCompletionWhenAllReducersDone(job);
+
   return job.checkReadyForCommit();
 }
 
@@ -2052,6 +2059,32 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
   }
   job.metrics.killedTask(task);
 }
+
+   /** Improvement: if all reducers have finished, we check if we have
+   restarted mappers that are still running. This can happen in a
+   situation when a node becomes UNHEALTHY and mappers are rescheduled.
+   See MAPREDUCE-6870 for details */
+private void checkReadyForCompletionWhenAllReducersDone(JobImpl job) {
+  if (job.finishJobWhenReducersDone) {
+int totalReduces = job.getTotalReduces();
+int completedReduces = job.getCompletedReduces();
+
+if (totalReduces > 0 && totalReduces == completedReduces
+&& !job.completingJob) {
+
+  for (TaskId mapTaskId : job.mapTasks) {
+MapTaskImpl task = (MapTaskImpl) job.tasks.get(mapTaskId);
+if (!task.isFinished()) {
+  LOG.info("Killing map task " + task.getID());
+  job.eventHandler.handle(
+  new TaskEvent(task.getID(), TaskEventType.T_KILL));
+}
+  }
+
+  job.completingJob = true;
+}
+  }
+}
   }
 
   // Transition class for handling jobs with no tasks

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a32e0138/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
 

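The feature is opt-in through the MRJobConfig constant introduced above; a minimal sketch of enabling it on a job configuration (the shipped default value is in the mapred-default.xml change, which this excerpt does not show):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    public class FinishWhenReducersDoneSketch {
      public static Configuration configure() {
        Configuration conf = new Configuration();
        // Once all reducers have succeeded, restarted mappers that are
        // still running are killed and the job proceeds to commit.
        conf.setBoolean(MRJobConfig.FINISH_JOB_WHEN_REDUCERS_DONE, true);
        return conf;
      }
    }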
[34/50] [abbrv] hadoop git commit: HDFS-11826. Federation Namenode Heartbeat. Contributed by Inigo Goiri.

2017-08-12 Thread inigoiri
HDFS-11826. Federation Namenode Heartbeat. Contributed by Inigo Goiri.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c23c8cc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c23c8cc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c23c8cc

Branch: refs/heads/HDFS-10467
Commit: 0c23c8cc5cb5c3cb9118da2f47e17484b564c7f1
Parents: b85b5f3
Author: Inigo Goiri 
Authored: Tue Aug 1 14:40:27 2017 -0700
Committer: Inigo Goiri 
Committed: Sat Aug 12 09:36:24 2017 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  14 +
 .../java/org/apache/hadoop/hdfs/DFSUtil.java|  38 ++
 .../resolver/NamenodeStatusReport.java  | 193 ++
 .../federation/router/FederationUtil.java   |  66 
 .../router/NamenodeHeartbeatService.java| 350 +++
 .../hdfs/server/federation/router/Router.java   | 112 ++
 .../src/main/resources/hdfs-default.xml |  32 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   8 +
 .../hdfs/server/federation/MockResolver.java|   9 +-
 .../server/federation/RouterConfigBuilder.java  |  22 ++
 .../server/federation/RouterDFSCluster.java |  43 +++
 .../router/TestNamenodeHeartbeat.java   | 168 +
 .../server/federation/router/TestRouter.java|   3 +
 13 files changed, 1057 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c23c8cc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index d7c2d18..acd4790 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -1129,6 +1129,20 @@ public class DFSConfigKeys extends 
CommonConfigurationKeys {
   FEDERATION_ROUTER_PREFIX + "rpc.enable";
   public static final boolean DFS_ROUTER_RPC_ENABLE_DEFAULT = true;
 
+  // HDFS Router heartbeat
+  public static final String DFS_ROUTER_HEARTBEAT_ENABLE =
+  FEDERATION_ROUTER_PREFIX + "heartbeat.enable";
+  public static final boolean DFS_ROUTER_HEARTBEAT_ENABLE_DEFAULT = true;
+  public static final String DFS_ROUTER_HEARTBEAT_INTERVAL_MS =
+  FEDERATION_ROUTER_PREFIX + "heartbeat.interval";
+  public static final long DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT =
+  TimeUnit.SECONDS.toMillis(5);
+  public static final String DFS_ROUTER_MONITOR_NAMENODE =
+  FEDERATION_ROUTER_PREFIX + "monitor.namenode";
+  public static final String DFS_ROUTER_MONITOR_LOCAL_NAMENODE =
+  FEDERATION_ROUTER_PREFIX + "monitor.localnamenode.enable";
+  public static final boolean DFS_ROUTER_MONITOR_LOCAL_NAMENODE_DEFAULT = true;
+
   // HDFS Router NN client
   public static final String DFS_ROUTER_NAMENODE_CONNECTION_POOL_SIZE =
   FEDERATION_ROUTER_PREFIX + "connection.pool-size";

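A sketch of tuning the new heartbeat keys. The constants come from the diff above; the `nameservice.namenode` format for the monitored-namenode list is an assumption:

    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;

    public class RouterHeartbeatConfigSketch {
      public static Configuration configure() {
        Configuration conf = new Configuration();
        // Heartbeats default to enabled every 5 seconds; stretch to 30s.
        conf.setLong(DFSConfigKeys.DFS_ROUTER_HEARTBEAT_INTERVAL_MS,
            TimeUnit.SECONDS.toMillis(30));
        // Monitor two namenodes explicitly instead of only the local one.
        conf.set(DFSConfigKeys.DFS_ROUTER_MONITOR_NAMENODE, "ns0.nn0,ns0.nn1");
        return conf;
      }
    }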
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c23c8cc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 47e1c0d..0ea5e3e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -1237,6 +1237,44 @@ public class DFSUtil {
   }
 
   /**
+   * Map a logical namenode ID to its web address. Use the given nameservice if
+   * specified, or the configured one if none is given.
+   *
+   * @param conf Configuration
+   * @param nsId which nameservice nnId is a part of, optional
+   * @param nnId the namenode ID to get the service addr for
+   * @return the service addr, null if it could not be determined
+   */
+  public static String getNamenodeWebAddr(final Configuration conf, String 
nsId,
+  String nnId) {
+
+if (nsId == null) {
+  nsId = getOnlyNameServiceIdOrNull(conf);
+}
+
+String webAddrKey = DFSUtilClient.concatSuffixes(
+DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, nsId, nnId);
+
+String webAddr =
+conf.get(webAddrKey, DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_DEFAULT);
+return webAddr;
+  }
+
+  /**
+   * Get all of the Web addresses of the individual NNs in a given nameservice.
+   *
+   * @param conf Configuration
+   * @param nsId the 

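A usage sketch of the helper added above; the nameservice and namenode IDs are placeholders, and per the javadoc a null nsId falls back to the only configured nameservice:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSUtil;

    public class NamenodeWebAddrSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Resolves dfs.namenode.http-address.ns1.nn1, falling back to the
        // default HTTP address when the suffixed key is unset.
        String webAddr = DFSUtil.getNamenodeWebAddr(conf, "ns1", "nn1");
        System.out.println(webAddr);
      }
    }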
[21/50] [abbrv] hadoop git commit: HADOOP-14260. Configuration.dumpConfiguration should redact sensitive information. Contributed by John Zhuge.

2017-08-12 Thread inigoiri
HADOOP-14260. Configuration.dumpConfiguration should redact sensitive 
information. Contributed by John Zhuge.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/582648be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/582648be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/582648be

Branch: refs/heads/HDFS-10467
Commit: 582648befaf9908159f937d2cc8f549583a3483e
Parents: 4222c97
Author: John Zhuge 
Authored: Thu Aug 10 16:28:22 2017 -0700
Committer: John Zhuge 
Committed: Fri Aug 11 10:16:08 2017 -0700

--
 .../org/apache/hadoop/conf/Configuration.java   | 15 +++---
 .../apache/hadoop/conf/TestConfiguration.java   | 48 ++--
 2 files changed, 53 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/582648be/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 65e8569..edaee68 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -3146,7 +3146,8 @@ public class Configuration implements 
Iterable<Map.Entry<String,String>>,
   JsonGenerator dumpGenerator = dumpFactory.createGenerator(out);
   dumpGenerator.writeStartObject();
   dumpGenerator.writeFieldName("property");
-  appendJSONProperty(dumpGenerator, config, propertyName);
+  appendJSONProperty(dumpGenerator, config, propertyName,
+  new ConfigRedactor(config));
   dumpGenerator.writeEndObject();
   dumpGenerator.flush();
 }
@@ -3186,11 +3187,11 @@ public class Configuration implements 
Iterable<Map.Entry<String,String>>,
 dumpGenerator.writeFieldName("properties");
 dumpGenerator.writeStartArray();
 dumpGenerator.flush();
+ConfigRedactor redactor = new ConfigRedactor(config);
 synchronized (config) {
   for (Map.Entry<Object, Object> item: config.getProps().entrySet()) {
-appendJSONProperty(dumpGenerator,
-config,
-item.getKey().toString());
+appendJSONProperty(dumpGenerator, config, item.getKey().toString(),
+redactor);
   }
 }
 dumpGenerator.writeEndArray();
@@ -3208,12 +3209,14 @@ public class Configuration implements 
Iterable<Map.Entry<String,String>>,
* @throws IOException
*/
   private static void appendJSONProperty(JsonGenerator jsonGen,
-  Configuration config, String name) throws IOException {
+  Configuration config, String name, ConfigRedactor redactor)
+  throws IOException {
 // skip writing if given property name is empty or null
 if(!Strings.isNullOrEmpty(name) && jsonGen != null) {
   jsonGen.writeStartObject();
   jsonGen.writeStringField("key", name);
-  jsonGen.writeStringField("value", config.get(name));
+  jsonGen.writeStringField("value",
+  redactor.redact(name, config.get(name)));
   jsonGen.writeBooleanField("isFinal",
   config.finalParameters.contains(name));
   String[] resources = config.updatingResource.get(name);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/582648be/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
index 92d3290..91f25fa 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java
@@ -49,6 +49,7 @@ import static org.junit.Assert.assertArrayEquals;
 
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration.IntegerRanges;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.net.NetUtils;
@@ -82,6 +83,11 @@ public class TestConfiguration extends TestCase {
   /** Four apostrophes. */
  public static final String ESCAPED = "&#39;&#39;&#39;&#39;";
 
+  private static final String SENSITIVE_CONFIG_KEYS =
+  CommonConfigurationKeysPublic.HADOOP_SECURITY_SENSITIVE_CONFIG_KEYS;
+
+  private BufferedWriter out;
+
   @Override
   protected void setUp() 

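A sketch of the observable effect; the key name is a placeholder chosen to match the default sensitive-key patterns (assumed to include `password$`):

    import java.io.StringWriter;

    import org.apache.hadoop.conf.Configuration;

    public class DumpRedactionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("my.server.keystore.password", "hunter2");
        StringWriter out = new StringWriter();
        // With this change the JSON dump passes every value through
        // ConfigRedactor before writing it.
        Configuration.dumpConfiguration(conf, out);
        System.out.println(out.toString().contains("hunter2")); // expected: false
      }
    }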
[01/50] [abbrv] hadoop git commit: HDFS-11975. Provide a system-default EC policy. Contributed by Huichun Lu [Forced Update!]

2017-08-12 Thread inigoiri
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-10467 96ce12784 -> 04c92c9ba (forced update)


HDFS-11975. Provide a system-default EC policy. Contributed by Huichun Lu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a53b8b6f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a53b8b6f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a53b8b6f

Branch: refs/heads/HDFS-10467
Commit: a53b8b6fdce111b1e35ad0dc563eb53d1c58462f
Parents: ad2a350
Author: Kai Zheng 
Authored: Wed Aug 9 10:12:58 2017 +0800
Committer: Kai Zheng 
Committed: Wed Aug 9 10:12:58 2017 +0800

--
 .../hadoop/hdfs/DistributedFileSystem.java  |  2 --
 .../ClientNamenodeProtocolTranslatorPB.java |  4 ++-
 .../src/main/proto/erasurecoding.proto  |  2 +-
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  4 +++
 ...tNamenodeProtocolServerSideTranslatorPB.java |  4 ++-
 .../namenode/ErasureCodingPolicyManager.java| 12 +--
 .../hdfs/server/namenode/NameNodeRpcServer.java | 14 +++-
 .../org/apache/hadoop/hdfs/tools/ECAdmin.java   | 14 
 .../src/main/resources/hdfs-default.xml |  8 +
 .../src/site/markdown/HDFSErasureCoding.md  |  8 +
 .../hadoop/hdfs/TestErasureCodingPolicies.java  | 24 --
 .../server/namenode/TestEnabledECPolicies.java  | 10 +++---
 .../test/resources/testErasureCodingConf.xml| 35 
 13 files changed, 117 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 13c5eb9..cd368d4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -2515,8 +2515,6 @@ public class DistributedFileSystem extends FileSystem {
   public void setErasureCodingPolicy(final Path path,
   final String ecPolicyName) throws IOException {
 Path absF = fixRelativePart(path);
-    Preconditions.checkNotNull(ecPolicyName, "Erasure coding policy cannot be"
-        + " null.");
 new FileSystemLinkResolver() {
   @Override
   public Void doCall(final Path p) throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
index 388788c..aed4117 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
@@ -1518,7 +1518,9 @@ public class ClientNamenodeProtocolTranslatorPB implements
 final SetErasureCodingPolicyRequestProto.Builder builder =
 SetErasureCodingPolicyRequestProto.newBuilder();
 builder.setSrc(src);
-builder.setEcPolicyName(ecPolicyName);
+if (ecPolicyName != null) {
+  builder.setEcPolicyName(ecPolicyName);
+}
 SetErasureCodingPolicyRequestProto req = builder.build();
 try {
   rpcProxy.setErasureCodingPolicy(null, req);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a53b8b6f/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
index 65baab6..9f80350 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/erasurecoding.proto
@@ -25,7 +25,7 @@ import "hdfs.proto";
 
 message SetErasureCodingPolicyRequestProto {
   required string src = 1;
-  required string ecPolicyName = 2;
+  optional string ecPolicyName = 2;
 }
 
 message SetErasureCodingPolicyResponseProto {

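The client-side effect of making ecPolicyName optional: a null policy name is now legal and, per the commit title, asks the NameNode to apply the system-default erasure coding policy. A sketch; the directory path is a placeholder:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class DefaultEcPolicySketch {
      static void apply(DistributedFileSystem dfs) throws Exception {
        // Previously rejected by Preconditions.checkNotNull; now a null
        // name selects the cluster's system-default EC policy.
        dfs.setErasureCodingPolicy(new Path("/ec/data"), null);
      }
    }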

[28/50] [abbrv] hadoop git commit: YARN-6882. AllocationFileLoaderService.reloadAllocations() should use the diamond operator (Contributed by Larry Lo via Daniel Templeton)

2017-08-12 Thread inigoiri
YARN-6882. AllocationFileLoaderService.reloadAllocations() should use the 
diamond operator
(Contributed by Larry Lo via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0996acde
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0996acde
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0996acde

Branch: refs/heads/HDFS-10467
Commit: 0996acde6c325667aa19ae0740eb6b40bf4a682a
Parents: 65364de
Author: Daniel Templeton 
Authored: Fri Aug 11 14:50:46 2017 -0700
Committer: Daniel Templeton 
Committed: Fri Aug 11 14:50:46 2017 -0700

--
 .../scheduler/fair/AllocationFileLoaderService.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0996acde/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
index bf5b4c5..313a27a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
@@ -266,7 +266,7 @@ public class AllocationFileLoaderService extends 
AbstractService {
 Map<FSQueueType, Set<String>> configuredQueues = new HashMap<>();
 
 for (FSQueueType queueType : FSQueueType.values()) {
-  configuredQueues.put(queueType, new HashSet<String>());
+  configuredQueues.put(queueType, new HashSet<>());
 }
 
 // Read and parse the allocations file.
@@ -280,7 +280,7 @@ public class AllocationFileLoaderService extends 
AbstractService {
   throw new AllocationConfigurationException("Bad fair scheduler config " +
   "file: top-level element not ");
 NodeList elements = root.getChildNodes();
-List<Element> queueElements = new ArrayList<Element>();
+List<Element> queueElements = new ArrayList<>();
 Element placementPolicyElement = null;
 for (int i = 0; i < elements.getLength(); i++) {
   Node node = elements.item(i);





[41/50] [abbrv] hadoop git commit: HDFS-11546. Federation Router RPC server. Contributed by Jason Kace and Inigo Goiri.

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8e03592/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
new file mode 100644
index 000..3a32be1
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
@@ -0,0 +1,856 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.InetSocketAddress;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ThreadFactory;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.NameNodeProxiesClient.ProxyAndInfo;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import 
org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeContext;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.io.retry.RetryPolicy.RetryAction.RetryDecision;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.StandbyException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * A client proxy for Router -> NN communication using the NN ClientProtocol.
+ * <p>
+ * Provides routers a means to invoke remote ClientProtocol methods and
+ * handle retries/failover.
+ * <ul>
+ * <li>invokeSingle Make a single request to a single namespace
+ * <li>invokeSequential Make a sequential series of requests to multiple
+ * ordered namespaces until a condition is met.
+ * <li>invokeConcurrent Make concurrent requests to multiple namespaces and
+ * return all of the results.
+ * </ul>
+ * Also maintains a cached pool of connections to NNs. Connections are managed
+ * by the ConnectionManager and are unique to each user + NN. The size of the
+ * connection pool can be configured. Larger pools allow for more simultaneous
+ * requests to a single NN from a single user.
+ */
+public class RouterRpcClient {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RouterRpcClient.class);
+
+
+  /** Router identifier. */
+  private final String routerId;
+
+  /** Interface to identify the active NN for a nameservice or blockpool ID. */
+  private final ActiveNamenodeResolver namenodeResolver;
+
+  /** Connection pool to the Namenodes per user for performance. */
+  private final ConnectionManager connectionManager;
+  /** Service to run asynchronous calls. */
+  private final ExecutorService executorService;
+  /** Retry policy for router -> NN communication. */
+  private final RetryPolicy retryPolicy;
+
+  /** Pattern to parse a stack trace line. */
+  private static final Pattern STACK_TRACE_PATTERN =
+  Pattern.compile("\\tat (.*)\\.(.*)\\((.*):(\\d*)\\)");
+
+
+  /**
+   * Create a router RPC 
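
The invocation styles listed in the javadoc above lend themselves to a short illustration. The sketch below shows only the invokeSequential idea in isolation: try an ordered list of targets and return the first successful result. All names here (SequentialInvoker, the toy targets) are invented for illustration and are not the RouterRpcClient API, which additionally resolves active namenodes, consults a RetryPolicy and pools per-user connections.

```java
import java.io.IOException;
import java.util.List;
import java.util.concurrent.Callable;

/** A minimal sketch of sequential invocation with failover; illustrative only. */
public class SequentialInvoker {

  /** Try each target in order; return the first successful result. */
  public static <T> T invokeSequential(List<Callable<T>> targets)
      throws IOException {
    IOException lastFailure = null;
    for (Callable<T> target : targets) {
      try {
        return target.call();           // first success wins
      } catch (Exception e) {
        // Remember the failure and fall through to the next namespace.
        lastFailure = new IOException("Invocation failed", e);
      }
    }
    throw lastFailure != null ? lastFailure
        : new IOException("No targets to invoke");
  }

  public static void main(String[] args) throws IOException {
    List<Callable<String>> targets = List.of(
        () -> { throw new IOException("ns0 is standby"); },
        () -> "result from ns1");
    System.out.println(invokeSequential(targets)); // -> result from ns1
  }
}
```
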

[09/50] [abbrv] hadoop git commit: YARN-6033. Add support for sections in container-executor configuration file. (Varun Vasudev via wandga)

2017-08-12 Thread inigoiri
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
new file mode 100644
index 000..6ee0ab2
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
@@ -0,0 +1,432 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <gtest/gtest.h>
+#include <fstream>
+
+extern "C" {
+#include "util.h"
+#include "configuration.h"
+#include "configuration.c"
+}
+
+
+namespace ContainerExecutor {
+  class TestConfiguration : public ::testing::Test {
+  protected:
+virtual void SetUp() {
+  new_config_format_file = "test-configurations/configuration-1.cfg";
+  old_config_format_file = "test-configurations/old-config.cfg";
+  mixed_config_format_file = "test-configurations/configuration-2.cfg";
+  loadConfigurations();
+  return;
+}
+
+void loadConfigurations() {
+  int ret = 0;
+  ret = read_config(new_config_format_file.c_str(), &new_config_format);
+  ASSERT_EQ(0, ret);
+  ret = read_config(old_config_format_file.c_str(), &old_config_format);
+  ASSERT_EQ(0, ret);
+  ret = read_config(mixed_config_format_file.c_str(),
+                    &mixed_config_format);
+  ASSERT_EQ(0, ret);
+}
+
+virtual void TearDown() {
+  free_configuration(&new_config_format);
+  free_configuration(&old_config_format);
+  return;
+}
+
+std::string new_config_format_file;
+std::string old_config_format_file;
+std::string mixed_config_format_file;
+struct configuration new_config_format;
+struct configuration old_config_format;
+struct configuration mixed_config_format;
+  };
+
+
+  TEST_F(TestConfiguration, test_get_configuration_values_delimiter) {
+char **split_values;
+split_values = get_configuration_values_delimiter(NULL, "",
+    &old_config_format, "%");
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values_delimiter("yarn.local.dirs", NULL,
+    &old_config_format, "%");
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+    NULL, "%");
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+    &old_config_format, NULL);
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values_delimiter("yarn.local.dirs",
+    "abcd", &old_config_format, "%");
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values_delimiter("yarn.local.dirs", "",
+    &old_config_format, "%");
+ASSERT_STREQ("/var/run/yarn", split_values[0]);
+ASSERT_STREQ("/tmp/mydir", split_values[1]);
+ASSERT_EQ(NULL, split_values[2]);
+free(split_values);
+split_values = get_configuration_values_delimiter("allowed.system.users",
+    "", &old_config_format, "%");
+ASSERT_STREQ("nobody,daemon", split_values[0]);
+ASSERT_EQ(NULL, split_values[1]);
+free(split_values);
+  }
+
+  TEST_F(TestConfiguration, test_get_configuration_values) {
+char **split_values;
+split_values = get_configuration_values(NULL, "", &old_config_format);
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values("yarn.local.dirs", NULL,
+    &old_config_format);
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values("yarn.local.dirs", "", NULL);
+ASSERT_EQ(NULL, split_values);
+split_values = get_configuration_values("yarn.local.dirs", "abcd",
+    &old_config_format);
+ASSERT_EQ(NULL, split_values);
+split_values = 
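
The assertions above pin down the lookup contract: a NULL key, section, config or delimiter yields NULL, an unknown section (e.g. "abcd") or key yields NULL, and a hit is returned as the value split on the delimiter. A minimal sketch of that contract, written in Java purely for illustration; the real implementation is the C code in configuration.c, and names such as ConfigLookup are invented here.

```java
import java.util.Map;

/** A Java-flavored sketch of the C lookup contract exercised above. */
public class ConfigLookup {

  public static String[] getValuesDelimiter(String key, String section,
      Map<String, Map<String, String>> config, String delimiter) {
    if (key == null || section == null || config == null
        || delimiter == null) {
      return null;                      // mirror the NULL-argument checks
    }
    Map<String, String> sec = config.get(section);
    if (sec == null) {
      return null;                      // unknown section, e.g. "abcd"
    }
    String raw = sec.get(key);
    if (raw == null) {
      return null;                      // unknown key
    }
    return raw.split(java.util.regex.Pattern.quote(delimiter));
  }

  public static void main(String[] args) {
    Map<String, Map<String, String>> config =
        Map.of("", Map.of("yarn.local.dirs", "/var/run/yarn%/tmp/mydir"));
    String[] values = getValuesDelimiter("yarn.local.dirs", "", config, "%");
    System.out.println(values[0] + " " + values[1]); // /var/run/yarn /tmp/mydir
  }
}
```
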

[05/50] [abbrv] hadoop git commit: HDFS-12117. HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface. Contributed by Wellington Chevreuil.

2017-08-12 Thread inigoiri
HDFS-12117. HttpFS does not seem to support SNAPSHOT related methods for 
WebHDFS REST Interface. Contributed by Wellington Chevreuil.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a4bff02
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a4bff02
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a4bff02

Branch: refs/heads/HDFS-10467
Commit: 8a4bff02c1534c6bf529726f2bbe414ac4c172e8
Parents: 9a3c237
Author: Wei-Chiu Chuang 
Authored: Tue Aug 8 23:58:53 2017 -0700
Committer: Wei-Chiu Chuang 
Committed: Tue Aug 8 23:58:53 2017 -0700

--
 .../hadoop/fs/http/client/HttpFSFileSystem.java |  47 ++-
 .../hadoop/fs/http/server/FSOperations.java | 105 ++
 .../http/server/HttpFSParametersProvider.java   |  45 ++
 .../hadoop/fs/http/server/HttpFSServer.java |  36 +
 .../fs/http/client/BaseTestHttpFSWith.java  | 110 ++-
 .../hadoop/fs/http/server/TestHttpFSServer.java | 140 ++-
 6 files changed, 479 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a4bff02/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
index d139100..1059a02 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
@@ -124,6 +124,8 @@ public class HttpFSFileSystem extends FileSystem
   public static final String POLICY_NAME_PARAM = "storagepolicy";
   public static final String OFFSET_PARAM = "offset";
   public static final String LENGTH_PARAM = "length";
+  public static final String SNAPSHOT_NAME_PARAM = "snapshotname";
+  public static final String OLD_SNAPSHOT_NAME_PARAM = "oldsnapshotname";
 
   public static final Short DEFAULT_PERMISSION = 0755;
   public static final String ACLSPEC_DEFAULT = "";
@@ -144,6 +146,8 @@ public class HttpFSFileSystem extends FileSystem
 
   public static final String UPLOAD_CONTENT_TYPE= "application/octet-stream";
 
+  public static final String SNAPSHOT_JSON = "Path";
+
   public enum FILE_TYPE {
 FILE, DIRECTORY, SYMLINK;
 
@@ -229,7 +233,9 @@ public class HttpFSFileSystem extends FileSystem
 DELETE(HTTP_DELETE), SETXATTR(HTTP_PUT), GETXATTRS(HTTP_GET),
 REMOVEXATTR(HTTP_PUT), LISTXATTRS(HTTP_GET), LISTSTATUS_BATCH(HTTP_GET),
 GETALLSTORAGEPOLICY(HTTP_GET), GETSTORAGEPOLICY(HTTP_GET),
-SETSTORAGEPOLICY(HTTP_PUT), UNSETSTORAGEPOLICY(HTTP_POST);
+SETSTORAGEPOLICY(HTTP_PUT), UNSETSTORAGEPOLICY(HTTP_POST),
+CREATESNAPSHOT(HTTP_PUT), DELETESNAPSHOT(HTTP_DELETE),
+RENAMESNAPSHOT(HTTP_PUT);
 
 private String httpMethod;
 
@@ -1434,4 +1440,43 @@ public class HttpFSFileSystem extends FileSystem
 Operation.UNSETSTORAGEPOLICY.getMethod(), params, src, true);
 HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
   }
+
+  @Override
+  public final Path createSnapshot(Path path, String snapshotName)
+  throws IOException {
+Map<String, String> params = new HashMap<String, String>();
+params.put(OP_PARAM, Operation.CREATESNAPSHOT.toString());
+if (snapshotName != null) {
+  params.put(SNAPSHOT_NAME_PARAM, snapshotName);
+}
+HttpURLConnection conn =
+    getConnection(Operation.CREATESNAPSHOT.getMethod(), params, path, true);
+HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
+JSONObject json = (JSONObject) HttpFSUtils.jsonParse(conn);
+return new Path((String) json.get(SNAPSHOT_JSON));
+  }
+
+  @Override
+  public void renameSnapshot(Path path, String snapshotOldName,
+ String snapshotNewName) throws IOException {
+Map<String, String> params = new HashMap<String, String>();
+params.put(OP_PARAM, Operation.RENAMESNAPSHOT.toString());
+params.put(SNAPSHOT_NAME_PARAM, snapshotNewName);
+params.put(OLD_SNAPSHOT_NAME_PARAM, snapshotOldName);
+HttpURLConnection conn =
+    getConnection(Operation.RENAMESNAPSHOT.getMethod(), params, path, true);
+HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
+  }
+
+  @Override
+  public void deleteSnapshot(Path path, String snapshotName)
+  throws IOException {
+Map<String, String> params = new HashMap<String, String>();
+params.put(OP_PARAM, Operation.DELETESNAPSHOT.toString());
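
On the client side the new operations are reachable through the generic FileSystem snapshot API, which maps onto the op parameters added above (CREATESNAPSHOT, RENAMESNAPSHOT, DELETESNAPSHOT). A minimal sketch, assuming a reachable HttpFS endpoint and a directory that has already been made snapshottable; the host, port and paths are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch of exercising HttpFS snapshot support via the FileSystem API. */
public class SnapshotExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder endpoint: an HttpFS server proxying a cluster.
    FileSystem fs = FileSystem.get(
        new java.net.URI("webhdfs://httpfs-host:14000"), conf);

    Path dir = new Path("/user/alice/data");
    Path snap = fs.createSnapshot(dir, "s1"); // PUT op=CREATESNAPSHOT
    System.out.println("created " + snap);

    fs.renameSnapshot(dir, "s1", "s2");       // PUT op=RENAMESNAPSHOT
    fs.deleteSnapshot(dir, "s2");             // DELETE op=DELETESNAPSHOT
  }
}
```
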

[07/50] [abbrv] hadoop git commit: HDFS-12157. Do fsyncDirectory(..) outside of FSDataset lock. Contributed by Vinayakumar B.

2017-08-12 Thread inigoiri
HDFS-12157. Do fsyncDirectory(..) outside of FSDataset lock. Contributed by
Vinayakumar B.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/69afa26f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/69afa26f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/69afa26f

Branch: refs/heads/HDFS-10467
Commit: 69afa26f19adad4c630a307c274130eb8b697141
Parents: 1a18d5e
Author: Kihwal Lee 
Authored: Wed Aug 9 09:03:51 2017 -0500
Committer: Kihwal Lee 
Committed: Wed Aug 9 09:03:51 2017 -0500

--
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 46 ++--
 1 file changed, 24 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/69afa26f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 53e2fc6..16df709 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -991,8 +991,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 replicaInfo, smallBufferSize, conf);
 
 // Finalize the copied files
-newReplicaInfo = finalizeReplica(block.getBlockPoolId(), newReplicaInfo,
-false);
+newReplicaInfo = finalizeReplica(block.getBlockPoolId(), newReplicaInfo);
 try (AutoCloseableLock lock = datasetLock.acquire()) {
   // Increment numBlocks here as this block moved without knowing to BPS
   FsVolumeImpl volume = (FsVolumeImpl) newReplicaInfo.getVolume();
@@ -1295,7 +1294,7 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   replicaInfo.bumpReplicaGS(newGS);
   // finalize the replica if RBW
   if (replicaInfo.getState() == ReplicaState.RBW) {
-finalizeReplica(b.getBlockPoolId(), replicaInfo, false);
+finalizeReplica(b.getBlockPoolId(), replicaInfo);
   }
   return replicaInfo;
 }
@@ -1625,23 +1624,39 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
   @Override // FsDatasetSpi
   public void finalizeBlock(ExtendedBlock b, boolean fsyncDir)
   throws IOException {
+ReplicaInfo replicaInfo = null;
+ReplicaInfo finalizedReplicaInfo = null;
 try (AutoCloseableLock lock = datasetLock.acquire()) {
   if (Thread.interrupted()) {
 // Don't allow data modifications from interrupted threads
 throw new IOException("Cannot finalize block from Interrupted Thread");
   }
-  ReplicaInfo replicaInfo = getReplicaInfo(b);
+  replicaInfo = getReplicaInfo(b);
   if (replicaInfo.getState() == ReplicaState.FINALIZED) {
 // this is legal, when recovery happens on a file that has
 // been opened for append but never modified
 return;
   }
-  finalizeReplica(b.getBlockPoolId(), replicaInfo, fsyncDir);
+  finalizedReplicaInfo = finalizeReplica(b.getBlockPoolId(), replicaInfo);
+}
+/*
+ * Sync the directory after rename from tmp/rbw to Finalized if
+ * configured. Though rename should be atomic operation, sync on both
+ * dest and src directories are done because IOUtils.fsync() calls
+ * directory's channel sync, not the journal itself.
+ */
+if (fsyncDir && finalizedReplicaInfo instanceof FinalizedReplica
+&& replicaInfo instanceof LocalReplica) {
+  FinalizedReplica finalizedReplica =
+  (FinalizedReplica) finalizedReplicaInfo;
+  finalizedReplica.fsyncDirectory();
+  LocalReplica localReplica = (LocalReplica) replicaInfo;
+  localReplica.fsyncDirectory();
 }
   }
 
-  private ReplicaInfo finalizeReplica(String bpid,
-  ReplicaInfo replicaInfo, boolean fsyncDir) throws IOException {
+  private ReplicaInfo finalizeReplica(String bpid, ReplicaInfo replicaInfo)
+  throws IOException {
 try (AutoCloseableLock lock = datasetLock.acquire()) {
   ReplicaInfo newReplicaInfo = null;
   if (replicaInfo.getState() == ReplicaState.RUR &&
@@ -1656,19 +1671,6 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
 
 newReplicaInfo = v.addFinalizedBlock(
 bpid, replicaInfo, replicaInfo, replicaInfo.getBytesReserved());
-/*
- * Sync the directory after rename from tmp/rbw to Finalized if
- * configured. Though rename should be atomic operation, sync 
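
The essence of this change is a locking pattern: do the in-memory bookkeeping while holding the dataset lock, then perform the slow directory fsync after releasing it so other dataset operations are not stalled on disk I/O. A generic sketch of that pattern, with invented names (Replica, FinalizeSketch) rather than the real FsDatasetImpl types:

```java
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative replica handle; not the HDFS ReplicaInfo class. */
class Replica {
  void fsyncDirectory() {
    // Stand-in for forcing the containing directory's metadata to disk.
    System.out.println("fsync of containing directory");
  }
}

/** Sketch: bookkeeping under the lock, slow fsync outside it. */
public class FinalizeSketch {
  private final ReentrantLock datasetLock = new ReentrantLock();

  Replica finalizeBlock(boolean fsyncDir) {
    Replica finalized;
    datasetLock.lock();
    try {
      finalized = new Replica();        // bookkeeping only, no disk I/O
    } finally {
      datasetLock.unlock();
    }
    if (fsyncDir) {
      finalized.fsyncDirectory();       // slow part, outside the lock
    }
    return finalized;
  }

  public static void main(String[] args) {
    new FinalizeSketch().finalizeBlock(true);
  }
}
```
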

[24/50] [abbrv] hadoop git commit: HADOOP-14741. Refactor curator based ZooKeeper communication into common library. (Íñigo Goiri via Subru).

2017-08-12 Thread inigoiri
HADOOP-14741. Refactor curator based ZooKeeper communication into common 
library. (Íñigo Goiri via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bbbf0e2a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bbbf0e2a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bbbf0e2a

Branch: refs/heads/HDFS-10467
Commit: bbbf0e2a4136b30cad9dfd36ef138650a1adea60
Parents: 8c4b6d1
Author: Subru Krishnan 
Authored: Fri Aug 11 13:58:45 2017 -0700
Committer: Subru Krishnan 
Committed: Fri Aug 11 13:58:45 2017 -0700

--
 .../hadoop/fs/CommonConfigurationKeys.java  |  21 ++
 .../hadoop/util/curator/ZKCuratorManager.java   | 294 +++
 .../hadoop/util/curator/package-info.java   |  27 ++
 .../src/main/resources/core-default.xml |  46 +++
 .../util/curator/TestZKCuratorManager.java  |  95 ++
 .../hadoop/yarn/conf/YarnConfiguration.java |  13 +-
 .../yarn/conf/TestYarnConfigurationFields.java  |   9 +
 .../src/main/resources/yarn-default.xml |  53 
 ...ActiveStandbyElectorBasedElectorService.java |   5 +-
 .../yarn/server/resourcemanager/RMZKUtils.java  |  81 -
 .../server/resourcemanager/ResourceManager.java |  83 +++---
 .../recovery/ZKRMStateStore.java|  38 ++-
 .../server/resourcemanager/RMHATestBase.java|   5 +-
 13 files changed, 567 insertions(+), 203 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbbf0e2a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index e53f71e..0da4bbd 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -377,4 +377,25 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
 
   // HDFS client HTrace configuration.
   public static final String  FS_CLIENT_HTRACE_PREFIX = "fs.client.htrace.";
+
+  // Global ZooKeeper configuration keys
+  public static final String ZK_PREFIX = "hadoop.zk.";
+  /** ACL for the ZooKeeper ensemble. */
+  public static final String ZK_ACL = ZK_PREFIX + "acl";
+  public static final String ZK_ACL_DEFAULT = "world:anyone:rwcda";
+  /** Authentication for the ZooKeeper ensemble. */
+  public static final String ZK_AUTH = ZK_PREFIX + "auth";
+
+  /** Address of the ZooKeeper ensemble. */
+  public static final String ZK_ADDRESS = ZK_PREFIX + "address";
+  /** Maximum number of retries for a ZooKeeper operation. */
+  public static final String ZK_NUM_RETRIES = ZK_PREFIX + "num-retries";
+  public static final int ZK_NUM_RETRIES_DEFAULT = 1000;
+  /** Timeout for a ZooKeeper operation in milliseconds. */
+  public static final String ZK_TIMEOUT_MS = ZK_PREFIX + "timeout-ms";
+  public static final int ZK_TIMEOUT_MS_DEFAULT = 1;
+  /** How often to retry a ZooKeeper operation, in milliseconds. */
+  public static final String ZK_RETRY_INTERVAL_MS =
+  ZK_PREFIX + "retry-interval-ms";
+  public static final int ZK_RETRY_INTERVAL_MS_DEFAULT = 1000;
 }
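
The new keys are consumed through the ordinary Configuration mechanism. A minimal sketch using the constants added above; the ensemble address is a placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

/** Sketch of reading the new hadoop.zk.* settings. */
public class ZkConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(CommonConfigurationKeys.ZK_ADDRESS, "zk1:2181,zk2:2181");

    String address = conf.get(CommonConfigurationKeys.ZK_ADDRESS);
    int retries = conf.getInt(CommonConfigurationKeys.ZK_NUM_RETRIES,
        CommonConfigurationKeys.ZK_NUM_RETRIES_DEFAULT);
    int retryIntervalMs = conf.getInt(
        CommonConfigurationKeys.ZK_RETRY_INTERVAL_MS,
        CommonConfigurationKeys.ZK_RETRY_INTERVAL_MS_DEFAULT);

    System.out.println(address + ", " + retries + " retries every "
        + retryIntervalMs + " ms");
  }
}
```
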

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bbbf0e2a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
new file mode 100644
index 000..3adf028
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/curator/ZKCuratorManager.java
@@ -0,0 +1,294 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, 

[22/50] [abbrv] hadoop git commit: HADOOP-14760. Add missing override to LoadBalancingKMSClientProvider.

2017-08-12 Thread inigoiri
HADOOP-14760. Add missing override to LoadBalancingKMSClientProvider.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07fff43f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07fff43f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07fff43f

Branch: refs/heads/HDFS-10467
Commit: 07fff43f4a1e724c83ff8fcc90fac64aa04a39eb
Parents: 582648b
Author: Xiao Chen 
Authored: Fri Aug 11 11:41:16 2017 -0700
Committer: Xiao Chen 
Committed: Fri Aug 11 11:41:41 2017 -0700

--
 .../hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java| 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07fff43f/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
index 6b20c99..6e010b1 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java
@@ -292,7 +292,9 @@ public class LoadBalancingKMSClientProvider extends 
KeyProvider implements
 }
   }
 
-  public EncryptedKeyVersion reencryptEncryptedKey(EncryptedKeyVersion ekv)
+  @Override
+  public EncryptedKeyVersion reencryptEncryptedKey(
+  final EncryptedKeyVersion ekv)
   throws IOException, GeneralSecurityException {
 try {
  return doOp(new ProviderCallable<EncryptedKeyVersion>() {
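
The value of the added @Override is compile-time protection: if the subclass signature ever drifts from the parent's, the annotation turns a silently-created overload into a build failure. A toy illustration with invented names (Provider, reencrypt), not the KMS classes:

```java
/** Parent with the method the subclass intends to override. */
class Provider {
  Object reencrypt(CharSequence key) {
    return key;
  }
}

class BalancingProvider extends Provider {
  @Override
  Object reencrypt(CharSequence key) {  // matches: compiles fine
    return key;
  }

  // @Override                          // uncommenting would fail to compile:
  Object reencrypt(String key) {        // String != CharSequence, so this is
    return key;                         // an overload, not an override
  }
}

public class OverrideDemo {
  public static void main(String[] args) {
    System.out.println(new BalancingProvider().reencrypt("k"));
  }
}
```
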





[27/50] [abbrv] hadoop git commit: YARN-6967. Limit application attempt's diagnostic message size thoroughly (Contributed by Chengbing Liu via Daniel Templeton)

2017-08-12 Thread inigoiri
YARN-6967. Limit application attempt's diagnostic message size thoroughly
(Contributed by Chengbing Liu via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/65364def
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/65364def
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/65364def

Branch: refs/heads/HDFS-10467
Commit: 65364defb4a633ca20b39ebc38cd9c0db63a5835
Parents: c7680d4
Author: Daniel Templeton 
Authored: Fri Aug 11 14:28:55 2017 -0700
Committer: Daniel Templeton 
Committed: Fri Aug 11 14:28:55 2017 -0700

--
 .../rmapp/attempt/RMAppAttemptImpl.java | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/65364def/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
index 4210c54..254768b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
@@ -1315,7 +1315,7 @@ public class RMAppAttemptImpl implements RMAppAttempt, 
Recoverable {
 // AFTER the initial saving on app-attempt-start
 // These fields can be visible from outside only after they are saved in
 // StateStore
-String diags = null;
+BoundedAppender diags = new BoundedAppender(diagnostics.limit);
 
 // don't leave the tracking URL pointing to a non-existent AM
 if (conf.getBoolean(YarnConfiguration.APPLICATION_HISTORY_ENABLED,
@@ -1329,15 +1329,15 @@ public class RMAppAttemptImpl implements RMAppAttempt, 
Recoverable {
 int exitStatus = ContainerExitStatus.INVALID;
 switch (event.getType()) {
 case LAUNCH_FAILED:
-  diags = event.getDiagnosticMsg();
+  diags.append(event.getDiagnosticMsg());
   break;
 case REGISTERED:
-  diags = getUnexpectedAMRegisteredDiagnostics();
+  diags.append(getUnexpectedAMRegisteredDiagnostics());
   break;
 case UNREGISTERED:
   RMAppAttemptUnregistrationEvent unregisterEvent =
   (RMAppAttemptUnregistrationEvent) event;
-  diags = unregisterEvent.getDiagnosticMsg();
+  diags.append(unregisterEvent.getDiagnosticMsg());
   // reset finalTrackingUrl to url sent by am
   finalTrackingUrl = 
sanitizeTrackingUrl(unregisterEvent.getFinalTrackingUrl());
   finalStatus = unregisterEvent.getFinalApplicationStatus();
@@ -1345,16 +1345,16 @@ public class RMAppAttemptImpl implements RMAppAttempt, 
Recoverable {
 case CONTAINER_FINISHED:
   RMAppAttemptContainerFinishedEvent finishEvent =
   (RMAppAttemptContainerFinishedEvent) event;
-  diags = getAMContainerCrashedDiagnostics(finishEvent);
+  diags.append(getAMContainerCrashedDiagnostics(finishEvent));
   exitStatus = finishEvent.getContainerStatus().getExitStatus();
   break;
 case KILL:
   break;
 case FAIL:
-  diags = event.getDiagnosticMsg();
+  diags.append(event.getDiagnosticMsg());
   break;
 case EXPIRE:
-  diags = getAMExpiredDiagnostics(event);
+  diags.append(getAMExpiredDiagnostics(event));
   break;
 default:
   break;
@@ -1368,7 +1368,7 @@ public class RMAppAttemptImpl implements RMAppAttempt, 
Recoverable {
 ApplicationAttemptStateData.newInstance(
 applicationAttemptId,  getMasterContainer(),
 rmStore.getCredentialsFromAppAttempt(this),
-startTime, stateToBeStored, finalTrackingUrl, diags,
+startTime, stateToBeStored, finalTrackingUrl, diags.toString(),
 finalStatus, exitStatus,
   getFinishTime(), resUsage.getMemorySeconds(),
   resUsage.getVcoreSeconds(),
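
The fix replaces direct String assignment with a size-bounded appender so that no single event can blow up the state-store record. A simplified sketch of such an appender, illustrative only and not the actual org.apache.hadoop.yarn BoundedAppender implementation:

```java
/** Sketch of a size-bounded diagnostics buffer keeping the newest output. */
public class BoundedDiagnostics {
  private final int limit;
  private final StringBuilder messages = new StringBuilder();

  public BoundedDiagnostics(int limit) {
    this.limit = limit;
  }

  public BoundedDiagnostics append(CharSequence msg) {
    messages.append(msg);
    int overflow = messages.length() - limit;
    if (overflow > 0) {
      messages.delete(0, overflow);     // drop the oldest characters
    }
    return this;
  }

  @Override
  public String toString() {
    return messages.toString();
  }

  public static void main(String[] args) {
    BoundedDiagnostics diags = new BoundedDiagnostics(16);
    diags.append("launch failed: ").append("container exited with 137");
    System.out.println(diags);          // last 16 chars only
  }
}
```
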





[10/50] [abbrv] hadoop git commit: YARN-6033. Add support for sections in container-executor configuration file. (Varun Vasudev via wandga)

2017-08-12 Thread inigoiri
YARN-6033. Add support for sections in container-executor configuration file. 
(Varun Vasudev via wandga)

Change-Id: Ibc6d2a959debe5d8ff2b51504149742449d1f1da


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec694145
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec694145
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec694145

Branch: refs/heads/HDFS-10467
Commit: ec694145cf9c0ade7606813871ca2a4a371def8e
Parents: 63cfcb9
Author: Wangda Tan 
Authored: Wed Aug 9 10:51:29 2017 -0700
Committer: Wangda Tan 
Committed: Wed Aug 9 10:51:29 2017 -0700

--
 .../hadoop-yarn-server-nodemanager/pom.xml  |  38 ++
 .../src/CMakeLists.txt  |  22 +
 .../container-executor/impl/configuration.c | 672 +--
 .../container-executor/impl/configuration.h | 182 +++--
 .../impl/container-executor.c   |  39 +-
 .../impl/container-executor.h   |  52 +-
 .../container-executor/impl/get_executable.c|   1 +
 .../main/native/container-executor/impl/main.c  |  17 +-
 .../main/native/container-executor/impl/util.c  | 134 
 .../main/native/container-executor/impl/util.h  | 115 
 .../test-configurations/configuration-1.cfg |  31 +
 .../test-configurations/configuration-2.cfg |  28 +
 .../test/test-configurations/old-config.cfg |  25 +
 .../test/test-container-executor.c  |  15 +-
 .../test/test_configuration.cc  | 432 
 .../native/container-executor/test/test_main.cc |  29 +
 .../native/container-executor/test/test_util.cc | 138 
 17 files changed, 1649 insertions(+), 321 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
index 28ee0d9..a50a769 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
@@ -215,6 +215,44 @@
   ${project.build.directory}/native-results
 
   
+  
+cetest
+cmake-test
+test
+
+  
+  cetest
+  
${project.build.directory}/native/test
+  ${basedir}/src
+  
${project.build.directory}/native/test/cetest
+  
+--gtest_filter=-Perf.
+
--gtest_output=xml:${project.build.directory}/surefire-reports/TEST-cetest.xml
+  
+  
${project.build.directory}/surefire-reports
+
+  
+
+  
+  
+org.apache.maven.plugins
+maven-antrun-plugin
+
+  
+make
+compile
+
+  run
+
+
+  
+
+  
+
+  
+
+  
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec694145/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
index 5b52536..100d7ca 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
@@ -19,6 +19,9 @@ cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
 list(APPEND CMAKE_MODULE_PATH 
${CMAKE_SOURCE_DIR}/../../../../../hadoop-common-project/hadoop-common)
 include(HadoopCommon)
 
+# Set gtest path
+set(GTEST_SRC_DIR 
${CMAKE_SOURCE_DIR}/../../../../../hadoop-common-project/hadoop-common/src/main/native/gtest)
+
 # determine if container-executor.conf.dir is an absolute
 # path in case the OS we're compiling on doesn't have
 # a hook in get_executable. We'll use this define
@@ -80,12 +83,20 @@ endfunction()
 include_directories(
 ${CMAKE_CURRENT_SOURCE_DIR}
 

[04/50] [abbrv] hadoop git commit: HDFS-12182. BlockManager.metaSave does not distinguish between "under replicated" and "missing" blocks. Contributed by Wellington Chevreuil.

2017-08-12 Thread inigoiri
HDFS-12182. BlockManager.metaSave does not distinguish between "under 
replicated" and "missing" blocks. Contributed by Wellington Chevreuil.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9a3c2379
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9a3c2379
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9a3c2379

Branch: refs/heads/HDFS-10467
Commit: 9a3c2379ef24cdca5153abf4b63fde1131ff8989
Parents: 07694fc
Author: Wei-Chiu Chuang 
Authored: Tue Aug 8 23:43:24 2017 -0700
Committer: Wei-Chiu Chuang 
Committed: Tue Aug 8 23:44:18 2017 -0700

--
 .../server/blockmanagement/BlockManager.java| 27 --
 .../blockmanagement/TestBlockManager.java   | 54 
 .../hdfs/server/namenode/TestMetaSave.java  |  2 +
 3 files changed, 79 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a3c2379/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index fc754a0..6129db8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -705,17 +705,36 @@ public class BlockManager implements BlockStatsMXBean {
 datanodeManager.fetchDatanodes(live, dead, false);
 out.println("Live Datanodes: " + live.size());
 out.println("Dead Datanodes: " + dead.size());
+
 //
-// Dump contents of neededReconstruction
+// (Need to iterate over all queues from neededReconstruction
+// except for the QUEUE_WITH_CORRUPT_BLOCKS)
 //
 synchronized (neededReconstruction) {
   out.println("Metasave: Blocks waiting for reconstruction: "
-  + neededReconstruction.size());
-  for (Block block : neededReconstruction) {
+  + neededReconstruction.getLowRedundancyBlockCount());
+  for (int i = 0; i < neededReconstruction.LEVEL; i++) {
+if (i != neededReconstruction.QUEUE_WITH_CORRUPT_BLOCKS) {
+  for (Iterator<Block> it = neededReconstruction.iterator(i);
+   it.hasNext();) {
+Block block = it.next();
+dumpBlockMeta(block, out);
+  }
+}
+  }
+  //
+  // Now prints corrupt blocks separately
+  //
+  out.println("Metasave: Blocks currently missing: " +
+  neededReconstruction.getCorruptBlockSize());
+  for (Iterator<Block> it = neededReconstruction.
+  iterator(neededReconstruction.QUEUE_WITH_CORRUPT_BLOCKS);
+   it.hasNext();) {
+Block block = it.next();
 dumpBlockMeta(block, out);
   }
 }
-
+
 // Dump any postponed over-replicated blocks
 out.println("Mis-replicated blocks that have been postponed:");
 for (Block block : postponedMisreplicatedBlocks) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a3c2379/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 6b1a979..42aeadf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -1459,4 +1459,58 @@ public class TestBlockManager {
 }
   }
 
+  @Test
+  public void testMetaSaveMissingReplicas() throws Exception {
+List<DatanodeStorageInfo> origStorages = getStorages(0, 1);
+List<DatanodeDescriptor> origNodes = getNodes(origStorages);
+BlockInfo block = makeBlockReplicasMissing(0, origNodes);
+File file = new File("test.log");
+PrintWriter out = new PrintWriter(file);
+bm.metaSave(out);
+out.flush();
+FileInputStream fstream = new FileInputStream(file);
+DataInputStream in = new DataInputStream(fstream);
+BufferedReader reader = new BufferedReader(new InputStreamReader(in));
+StringBuffer buffer = new StringBuffer();
+String line;
+try {
+  while ((line = reader.readLine()) != null) {
+buffer.append(line);
+  }
+  String output = 
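
The reworked metaSave walks every priority queue except the corrupt one, then reports the corrupt blocks under their own heading. A sketch of that skip-one-level iteration pattern, with a plain two-level list standing in for LowRedundancyBlocks; all names here are invented.

```java
import java.util.List;

/** Sketch of dumping all priority queues except the corrupt one. */
public class MetaSaveSketch {
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;

  public static void main(String[] args) {
    List<List<String>> queues = List.of(
        List.of("blk_1"),        // highest priority
        List.of("blk_2"),
        List.of(),
        List.of("blk_3"),
        List.of("blk_4"));       // QUEUE_WITH_CORRUPT_BLOCKS

    System.out.println("Blocks waiting for reconstruction:");
    for (int i = 0; i < queues.size(); i++) {
      if (i != QUEUE_WITH_CORRUPT_BLOCKS) {
        for (String block : queues.get(i)) {
          System.out.println("  " + block + " (priority " + i + ")");
        }
      }
    }
    System.out.println("Blocks currently missing:");
    for (String block : queues.get(QUEUE_WITH_CORRUPT_BLOCKS)) {
      System.out.println("  " + block);
    }
  }
}
```
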

[06/50] [abbrv] hadoop git commit: YARN-6515. Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager. Contributed by Naganarasimha G R.

2017-08-12 Thread inigoiri
YARN-6515. Fix warnings from Spotbugs in hadoop-yarn-server-nodemanager. 
Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1a18d5e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1a18d5e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1a18d5e5

Branch: refs/heads/HDFS-10467
Commit: 1a18d5e514d13aa3a88e9b6089394a27296d6bc3
Parents: 8a4bff0
Author: Akira Ajisaka 
Authored: Wed Aug 9 21:56:34 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Aug 9 21:56:43 2017 +0900

--
 .../server/nodemanager/NodeStatusUpdaterImpl.java| 11 +--
 .../localizer/ContainerLocalizer.java| 15 ---
 .../containermanager/monitor/ContainerMetrics.java   |  2 +-
 3 files changed, 14 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a18d5e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
index 00073d8..b5ec383 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
@@ -639,7 +639,6 @@ public class NodeStatusUpdaterImpl extends AbstractService 
implements
   public void removeOrTrackCompletedContainersFromContext(
  List<ContainerId> containerIds) throws IOException {
 Set<ContainerId> removedContainers = new HashSet<>();
-Set<ContainerId> removedNullContainers = new HashSet<>();
 
 pendingContainersToRemove.addAll(containerIds);
 Iterator<ContainerId> iter = pendingContainersToRemove.iterator();
@@ -649,7 +648,6 @@ public class NodeStatusUpdaterImpl extends AbstractService 
implements
   Container nmContainer = context.getContainers().get(containerId);
   if (nmContainer == null) {
 iter.remove();
-removedNullContainers.add(containerId);
   } else if (nmContainer.getContainerState().equals(
 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerState.DONE))
 {
 context.getContainers().remove(containerId);
@@ -712,11 +710,12 @@ public class NodeStatusUpdaterImpl extends 
AbstractService implements
   public void removeVeryOldStoppedContainersFromCache() {
 synchronized (recentlyStoppedContainers) {
   long currentTime = System.currentTimeMillis();
-  Iterator<ContainerId> i =
-  recentlyStoppedContainers.keySet().iterator();
+  Iterator<Entry<ContainerId, Long>> i =
+  recentlyStoppedContainers.entrySet().iterator();
   while (i.hasNext()) {
-ContainerId cid = i.next();
-if (recentlyStoppedContainers.get(cid) < currentTime) {
+Entry<ContainerId, Long> mapEntry = i.next();
+ContainerId cid = mapEntry.getKey();
+if (mapEntry.getValue() < currentTime) {
   if (!context.getContainers().containsKey(cid)) {
 i.remove();
 try {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1a18d5e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
index 8a46491..bb4b7f3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
@@ -17,6 +17,8 @@
 */
 package 
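
The NodeStatusUpdaterImpl change above is a classic Spotbugs fix: iterate the map once via entrySet() instead of keySet() plus a get() per key, which also allows safe removal through the iterator. A self-contained sketch of the pattern; plain strings stand in for ContainerId.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Map.Entry;

/** Sketch of single-pass entrySet() iteration with safe removal. */
public class ExpireSketch {
  public static void main(String[] args) {
    Map<String, Long> recentlyStopped = new LinkedHashMap<>();
    recentlyStopped.put("container_1", 100L);   // expiry timestamps
    recentlyStopped.put("container_2", 900L);

    long now = 500L;
    Iterator<Entry<String, Long>> it = recentlyStopped.entrySet().iterator();
    while (it.hasNext()) {
      Entry<String, Long> e = it.next();        // one lookup per entry
      if (e.getValue() < now) {
        it.remove();                            // safe removal mid-iteration
      }
    }
    System.out.println(recentlyStopped);        // {container_2=900}
  }
}
```
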

[08/50] [abbrv] hadoop git commit: YARN-6958. Moving logging APIs over to slf4j in hadoop-yarn-server-timelineservice. Contributed by Yeliang Cang.

2017-08-12 Thread inigoiri
YARN-6958. Moving logging APIs over to slf4j in 
hadoop-yarn-server-timelineservice. Contributed by Yeliang Cang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/63cfcb90
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/63cfcb90
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/63cfcb90

Branch: refs/heads/HDFS-10467
Commit: 63cfcb90ac6fbb79ba9ed6b3044cd999fc74e58c
Parents: 69afa26
Author: Akira Ajisaka 
Authored: Wed Aug 9 23:58:22 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Aug 9 23:58:22 2017 +0900

--
 .../server/timeline/LevelDBCacheTimelineStore.java| 14 +++---
 .../reader/filter/TimelineFilterUtils.java|  7 ---
 .../storage/HBaseTimelineReaderImpl.java  |  8 
 .../storage/HBaseTimelineWriterImpl.java  |  8 
 .../storage/TimelineSchemaCreator.java|  7 ---
 .../storage/application/ApplicationTable.java |  7 ---
 .../storage/apptoflow/AppToFlowTable.java |  7 ---
 .../timelineservice/storage/common/ColumnHelper.java  |  8 +---
 .../storage/common/HBaseTimelineStorageUtils.java |  8 
 .../timelineservice/storage/entity/EntityTable.java   |  7 ---
 .../storage/flow/FlowActivityTable.java   |  7 ---
 .../storage/flow/FlowRunCoprocessor.java  |  7 ---
 .../timelineservice/storage/flow/FlowRunTable.java|  7 ---
 .../timelineservice/storage/flow/FlowScanner.java |  7 ---
 .../storage/reader/TimelineEntityReader.java  |  7 ---
 .../collector/AppLevelTimelineCollector.java  |  7 ---
 .../collector/NodeTimelineCollectorManager.java   |  8 
 .../PerNodeTimelineCollectorsAuxService.java  | 10 +-
 .../timelineservice/collector/TimelineCollector.java  |  7 ---
 .../collector/TimelineCollectorManager.java   |  8 
 .../collector/TimelineCollectorWebService.java|  8 
 .../timelineservice/reader/TimelineReaderServer.java  |  9 +
 .../reader/TimelineReaderWebServices.java |  8 
 .../storage/FileSystemTimelineReaderImpl.java |  8 
 .../storage/common/TimelineStorageUtils.java  |  4 
 25 files changed, 102 insertions(+), 91 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/63cfcb90/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
index 7379dd6..f7a3d01 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timeline-pluginstorage/src/main/java/org/apache/hadoop/yarn/server/timeline/LevelDBCacheTimelineStore.java
@@ -19,8 +19,6 @@
 package org.apache.hadoop.yarn.server.timeline;
 
 import com.fasterxml.jackson.databind.ObjectMapper;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.conf.Configuration;
@@ -34,6 +32,8 @@ import org.fusesource.leveldbjni.JniDBFactory;
 import org.iq80.leveldb.DB;
 import org.iq80.leveldb.DBIterator;
 import org.iq80.leveldb.Options;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.File;
 import java.io.IOException;
@@ -58,8 +58,8 @@ import java.util.Map;
 @Private
 @Unstable
 public class LevelDBCacheTimelineStore extends KeyValueBasedTimelineStore {
-  private static final Log LOG
-  = LogFactory.getLog(LevelDBCacheTimelineStore.class);
+  private static final Logger LOG
+  = LoggerFactory.getLogger(LevelDBCacheTimelineStore.class);
   private static final String CACHED_LDB_FILE_PREFIX = "-timeline-cache.ldb";
   private String dbId;
   private DB entityDb;
@@ -102,7 +102,7 @@ public class LevelDBCacheTimelineStore extends 
KeyValueBasedTimelineStore {
 localFS.setPermission(dbPath, LeveldbUtils.LEVELDB_DIR_UMASK);
   }
 } finally {
-  IOUtils.cleanup(LOG, 
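
The migration pattern repeated across these files: replace commons-logging's LogFactory with slf4j's LoggerFactory and prefer parameterized messages, which avoid building the string when the level is disabled. A minimal sketch, not code from the patch:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch of the commons-logging to slf4j migration applied above. */
public class Slf4jMigrationSketch {
  // Before: private static final Log LOG =
  //             LogFactory.getLog(Slf4jMigrationSketch.class);
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jMigrationSketch.class);

  public static void main(String[] args) {
    String dbPath = "/tmp/timeline-cache.ldb";
    // The placeholder is substituted only if INFO is enabled.
    LOG.info("Using leveldb path {}", dbPath);
  }
}
```
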

[03/50] [abbrv] hadoop git commit: HADOOP-14355. Update maven-war-plugin to 3.1.0.

2017-08-12 Thread inigoiri
HADOOP-14355. Update maven-war-plugin to 3.1.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/07694fc6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/07694fc6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/07694fc6

Branch: refs/heads/HDFS-10467
Commit: 07694fc65ae6d97a430a7dd67a6277e5795c321f
Parents: ebabc70
Author: Akira Ajisaka 
Authored: Wed Aug 9 13:20:03 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Aug 9 13:20:03 2017 +0900

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/07694fc6/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5aabdc7..8151016 100755
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -126,7 +126,7 @@
 2.6
 2.4.3
 2.5
-<maven-war-plugin.version>2.4</maven-war-plugin.version>
+<maven-war-plugin.version>3.1.0</maven-war-plugin.version>
 2.3
 1.2
 
1.5





[02/50] [abbrv] hadoop git commit: HADOOP-14628. Upgrade maven enforcer plugin to 3.0.0-M1.

2017-08-12 Thread inigoiri
HADOOP-14628. Upgrade maven enforcer plugin to 3.0.0-M1.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ebabc709
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ebabc709
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ebabc709

Branch: refs/heads/HDFS-10467
Commit: ebabc7094c6bcbd9063744331c69e3fba615fa62
Parents: a53b8b6
Author: Akira Ajisaka 
Authored: Wed Aug 9 13:16:31 2017 +0900
Committer: Akira Ajisaka 
Committed: Wed Aug 9 13:18:16 2017 +0900

--
 hadoop-client-modules/hadoop-client-check-invariants/pom.xml  | 1 -
 hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml | 1 -
 pom.xml   | 2 +-
 3 files changed, 1 insertion(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
--
diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml 
b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index e495a69..2f31fa6 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -46,7 +46,6 @@
   
 org.apache.maven.plugins
 maven-enforcer-plugin
-<version>1.4</version>
 
   
 org.codehaus.mojo

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
--
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml 
b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index 68d1f5b..0e23db9 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -50,7 +50,6 @@
   
 org.apache.maven.plugins
 maven-enforcer-plugin
-<version>1.4</version>
 
   
 org.codehaus.mojo

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebabc709/pom.xml
--
diff --git a/pom.xml b/pom.xml
index d82cd9f..22a4b59 100644
--- a/pom.xml
+++ b/pom.xml
@@ -97,7 +97,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 1.7
 2.4
 2.10
-<maven-enforcer-plugin.version>1.4.1</maven-enforcer-plugin.version>
+<maven-enforcer-plugin.version>3.0.0-M1</maven-enforcer-plugin.version>
 2.10.4
 1.5
 
1.5

