hadoop git commit: HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu Yao.

2015-07-27 Thread xyao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1df78688c -> 2196e39e1


HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2196e39e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2196e39e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2196e39e

Branch: refs/heads/trunk
Commit: 2196e39e142b0f8d1944805db2bfacd4e3244625
Parents: 1df7868
Author: Xiaoyu Yao x...@apache.org
Authored: Mon Jul 27 07:28:41 2015 -0700
Committer: Xiaoyu Yao x...@apache.org
Committed: Mon Jul 27 07:28:41 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  2 ++
 .../apache/hadoop/hdfs/TestDistributedFileSystem.java  | 13 -
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1ddf7da..cc2a833 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1084,6 +1084,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class.
 (Surendra Singh Lilhore via aajisaka)
 
+HDFS-8785. TestDistributedFileSystem is failing in trunk. (Xiaoyu Yao)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
index 0b77210..6012c5d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
@@ -1189,19 +1189,22 @@ public class TestDistributedFileSystem {
     try {
       cluster.waitActive();
       DistributedFileSystem dfs = cluster.getFileSystem();
-      // Write 1 MB to a dummy socket to ensure the write times out
+      // Write 10 MB to a dummy socket to ensure the write times out
       ServerSocket socket = new ServerSocket(0);
       Peer peer = dfs.getClient().newConnectedPeer(
           (InetSocketAddress) socket.getLocalSocketAddress(), null, null);
       long start = Time.now();
       try {
-        byte[] buf = new byte[1024 * 1024];
+        byte[] buf = new byte[10 * 1024 * 1024];
         peer.getOutputStream().write(buf);
-        Assert.fail("write should timeout");
+        long delta = Time.now() - start;
+        Assert.fail("write finish in " + delta + " ms" + "but should timedout");
       } catch (SocketTimeoutException ste) {
         long delta = Time.now() - start;
-        Assert.assertTrue("write timedout too soon", delta >= timeout * 0.9);
-        Assert.assertTrue("write timedout too late", delta <= timeout * 1.1);
+        Assert.assertTrue("write timedout too soon in " + delta + " ms",
+            delta >= timeout * 0.9);
+        Assert.assertTrue("write timedout too late in " + delta + " ms",
+            delta <= timeout * 1.2);
       } catch (Throwable t) {
         Assert.fail("wrong exception:" + t);
       }
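
For context: the fix enlarges the buffer written to a socket that nobody reads (so the
write reliably blocks until the client's write timeout fires) and widens the accepted
timing window from 1.1x to 1.2x of the timeout, with failure messages that now report
the observed delay. The following is a minimal stand-alone sketch of just that window
check in plain Java; it is not the Hadoop test itself, and the class name, helper name,
and 1-second timeout are illustrative only (a sleep stands in for the blocking write).

import java.util.concurrent.TimeUnit;

public class TimeoutWindowCheck {

  // Mirrors the patched assertions: the measured delay must land in
  // [0.9 * timeout, 1.2 * timeout], and failures report the delta in ms.
  static void assertWithinWindow(long deltaMs, long timeoutMs) {
    if (deltaMs < timeoutMs * 0.9) {
      throw new AssertionError("write timedout too soon in " + deltaMs + " ms");
    }
    if (deltaMs > timeoutMs * 1.2) {  // upper bound relaxed from 1.1x by the patch
      throw new AssertionError("write timedout too late in " + deltaMs + " ms");
    }
  }

  public static void main(String[] args) throws InterruptedException {
    long timeoutMs = 1000;                         // illustrative timeout
    long start = System.nanoTime();
    TimeUnit.MILLISECONDS.sleep(timeoutMs);        // stand-in for the write that times out
    long deltaMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    assertWithinWindow(deltaMs, timeoutMs);
    System.out.println("delta " + deltaMs + " ms is inside the accepted window");
  }
}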



[2/5] hadoop git commit: HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class. Contributed by Surendra Singh Lilhore.

2015-07-27 Thread aw
HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1df78688
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1df78688
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1df78688

Branch: refs/heads/HADOOP-12111
Commit: 1df78688c69476f89d16f93bc74a4f05d0b1a3da
Parents: 42d4e0a
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Jul 27 13:17:24 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Jul 27 13:17:24 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java   | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1df78688/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3614e01..1ddf7da 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1081,6 +1081,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8773. Few FSNamesystem metrics are not documented in the Metrics page.
 (Rakesh R via cnauroth)
 
+HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class.
+(Surendra Singh Lilhore via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1df78688/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
index 65569d0..e7bbcac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
@@ -164,7 +164,7 @@ public class TestDFSInotifyEventInputStream {
   Event.RenameEvent re2 = (Event.RenameEvent) batch.getEvents()[0];
   Assert.assertTrue(re2.getDstPath().equals("/file2"));
   Assert.assertTrue(re2.getSrcPath().equals("/file4"));
-  Assert.assertTrue(re.getTimestamp() > 0);
+  Assert.assertTrue(re2.getTimestamp() > 0);
   LOG.info(re2.toString());
 
   // AddOp with overwrite
@@ -378,7 +378,7 @@ public class TestDFSInotifyEventInputStream {
   Event.RenameEvent re3 = (Event.RenameEvent) batch.getEvents()[0];
   Assert.assertTrue(re3.getDstPath().equals("/dir/file5"));
   Assert.assertTrue(re3.getSrcPath().equals("/file5"));
-  Assert.assertTrue(re.getTimestamp() > 0);
+  Assert.assertTrue(re3.getTimestamp() > 0);
   LOG.info(re3.toString());
 
   // TruncateOp



hadoop git commit: HADOOP-12265. Pylint should be installed in test-patch docker environment (Kengo Seki via aw)

2015-07-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-12111 a20c52b62 -> 4d4f288d3


HADOOP-12265. Pylint should be installed in test-patch docker environment 
(Kengo Seki via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4d4f288d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4d4f288d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4d4f288d

Branch: refs/heads/HADOOP-12111
Commit: 4d4f288d3037d5a7a2b570ca87a685e4797cc29f
Parents: a20c52b
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 27 11:05:48 2015 -0700
Committer: Allen Wittenauer a...@apache.org
Committed: Mon Jul 27 11:05:48 2015 -0700

--
 dev-support/docker/Dockerfile   | 2 +-
 dev-support/docs/precommit-basic.md | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d4f288d/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index f761f8b..862819f 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -44,7 +44,7 @@ RUN apt-get update && apt-get install --no-install-recommends -y \
 libjansson-dev \
 fuse libfuse-dev \
 libcurl4-openssl-dev \
-python python2.7
+python python2.7 pylint
 
 # Install Forrest
 RUN mkdir -p /usr/local/apache-forrest ; \

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4d4f288d/dev-support/docs/precommit-basic.md
--
diff --git a/dev-support/docs/precommit-basic.md 
b/dev-support/docs/precommit-basic.md
index ee2e063..a830cdb 100644
--- a/dev-support/docs/precommit-basic.md
+++ b/dev-support/docs/precommit-basic.md
@@ -37,6 +37,7 @@ test-patch has the following requirements:
 * bash v3.2 or higher
 * findbugs 3.x installed
 * shellcheck installed
+* pylint installed
 * GNU diff
 * GNU patch
 * POSIX awk



[1/5] hadoop git commit: YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api module. Contributed by Varun Saxena.

2015-07-27 Thread aw
Repository: hadoop
Updated Branches:
  refs/heads/HADOOP-12111 8d6dbbb28 -> a20c52b62


YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api 
module. Contributed by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42d4e0ae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42d4e0ae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42d4e0ae

Branch: refs/heads/HADOOP-12111
Commit: 42d4e0ae99d162fde52902cb86e29f2c82a084c8
Parents: 156f24e
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Jul 27 11:43:25 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Jul 27 11:43:25 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |  34 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 +++
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 ---
 4 files changed, 173 insertions(+), 136 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 883d009..3b7d8a8 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -685,6 +685,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3973. Recent changes to application priority management break 
 reservation system from YARN-1051. (Carlo Curino via wangda)
 
+YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api
+module. (Varun Saxena via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index dc9c469..5c4156b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -62,9 +62,31 @@
       <groupId>com.google.protobuf</groupId>
       <artifactId>protobuf-java</artifactId>
     </dependency>
+
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 
   <build>
+    <resources>
+      <resource>
+        <directory>${basedir}/../hadoop-yarn-common/src/main/resources</directory>
+        <includes>
+          <include>yarn-default.xml</include>
+        </includes>
+        <filtering>false</filtering>
+      </resource>
+    </resources>
     <plugins>
       <plugin>
         <groupId>org.apache.hadoop</groupId>
@@ -105,6 +127,18 @@
           </execution>
         </executions>
       </plugin>
+
+      <plugin>
+        <artifactId>maven-jar-plugin</artifactId>
+        <executions>
+          <execution>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+            <phase>test-compile</phase>
+          </execution>
+        </executions>
+      </plugin>
     </plugins>
   </build>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
new file mode 100644
index 000..e89a90d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language 

[3/5] hadoop git commit: HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu Yao.

2015-07-27 Thread aw
HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2196e39e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2196e39e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2196e39e

Branch: refs/heads/HADOOP-12111
Commit: 2196e39e142b0f8d1944805db2bfacd4e3244625
Parents: 1df7868
Author: Xiaoyu Yao x...@apache.org
Authored: Mon Jul 27 07:28:41 2015 -0700
Committer: Xiaoyu Yao x...@apache.org
Committed: Mon Jul 27 07:28:41 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  2 ++
 .../apache/hadoop/hdfs/TestDistributedFileSystem.java  | 13 -
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1ddf7da..cc2a833 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1084,6 +1084,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class.
 (Surendra Singh Lilhore via aajisaka)
 
+HDFS-8785. TestDistributedFileSystem is failing in trunk. (Xiaoyu Yao)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
index 0b77210..6012c5d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
@@ -1189,19 +1189,22 @@ public class TestDistributedFileSystem {
     try {
       cluster.waitActive();
       DistributedFileSystem dfs = cluster.getFileSystem();
-      // Write 1 MB to a dummy socket to ensure the write times out
+      // Write 10 MB to a dummy socket to ensure the write times out
       ServerSocket socket = new ServerSocket(0);
       Peer peer = dfs.getClient().newConnectedPeer(
           (InetSocketAddress) socket.getLocalSocketAddress(), null, null);
       long start = Time.now();
       try {
-        byte[] buf = new byte[1024 * 1024];
+        byte[] buf = new byte[10 * 1024 * 1024];
         peer.getOutputStream().write(buf);
-        Assert.fail("write should timeout");
+        long delta = Time.now() - start;
+        Assert.fail("write finish in " + delta + " ms" + "but should timedout");
       } catch (SocketTimeoutException ste) {
         long delta = Time.now() - start;
-        Assert.assertTrue("write timedout too soon", delta >= timeout * 0.9);
-        Assert.assertTrue("write timedout too late", delta <= timeout * 1.1);
+        Assert.assertTrue("write timedout too soon in " + delta + " ms",
+            delta >= timeout * 0.9);
+        Assert.assertTrue("write timedout too late in " + delta + " ms",
+            delta <= timeout * 1.2);
       } catch (Throwable t) {
         Assert.fail("wrong exception:" + t);
       }



[4/5] hadoop git commit: Merge branch 'trunk' into HADOOP-12111

2015-07-27 Thread aw
Merge branch 'trunk' into HADOOP-12111


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce41c537
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce41c537
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce41c537

Branch: refs/heads/HADOOP-12111
Commit: ce41c53791bc3ed775efc9629eba59f9a70c9f95
Parents: 8d6dbbb 2196e39
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 27 10:52:18 2015 -0700
Committer: Allen Wittenauer a...@apache.org
Committed: Mon Jul 27 10:52:18 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   5 +
 .../hdfs/TestDFSInotifyEventInputStream.java|   4 +-
 .../hadoop/hdfs/TestDistributedFileSystem.java  |  13 +-
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |  34 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 +++
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 ---
 7 files changed, 188 insertions(+), 143 deletions(-)
--




[5/5] hadoop git commit: HADOOP-12226. CHANGED_MODULES is wrong for ant (addendum patch) (aw)

2015-07-27 Thread aw
HADOOP-12226. CHANGED_MODULES is wrong for ant (addendum patch) (aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a20c52b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a20c52b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a20c52b6

Branch: refs/heads/HADOOP-12111
Commit: a20c52b621a78f7d630cd8c6d56be1d5e46c2427
Parents: ce41c53
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 27 10:53:50 2015 -0700
Committer: Allen Wittenauer a...@apache.org
Committed: Mon Jul 27 10:53:50 2015 -0700

--
 dev-support/test-patch.sh | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a20c52b6/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index 4dc9cfd..1c9be9c 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -1175,8 +1175,7 @@ function find_changed_modules
   #shellcheck disable=SC2086,SC2116
   CHANGED_UNFILTERED_MODULES=$(echo ${CHANGED_UNFILTERED_MODULES})
 
-  if [[ ${BUILDTOOL} = maven
-     && ${QETESTMODE} = false ]]; then
+  if [[ ${BUILDTOOL} = maven ]]; then
     # Filter out modules without code
     for module in ${builddirs}; do
       ${GREP} "<packaging>pom</packaging>" ${module}/pom.xml > /dev/null
@@ -1184,8 +1183,6 @@ function find_changed_modules
         buildmods="${buildmods} ${module}"
       fi
     done
-  elif [[ ${QETESTMODE} = true ]]; then
-    buildmods=${builddirs}
   fi
 
   #shellcheck disable=SC2086,SC2034



[1/2] hadoop git commit: YARN-3852. Add docker container support to container-executor. Contributed by Abin Shahab.

2015-07-27 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1cf5e4083 -> ec0f801f5
  refs/heads/trunk 2196e39e1 -> f36835ff9


YARN-3852. Add docker container support to container-executor. Contributed by 
Abin Shahab.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f36835ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f36835ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f36835ff

Branch: refs/heads/trunk
Commit: f36835ff9b878fa20fe58a30f9d1e8c47702d6d2
Parents: 2196e39
Author: Varun Vasudev vvasu...@apache.org
Authored: Mon Jul 27 10:12:30 2015 -0700
Committer: Varun Vasudev vvasu...@apache.org
Committed: Mon Jul 27 10:14:51 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../container-executor/impl/configuration.c |  17 +-
 .../container-executor/impl/configuration.h |   2 +
 .../impl/container-executor.c   | 417 ---
 .../impl/container-executor.h   |  25 +-
 .../main/native/container-executor/impl/main.c  |  97 -
 6 files changed, 480 insertions(+), 81 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3b7d8a8..4e54aea 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -150,6 +150,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
 (Jonathan Yaniv and Ishai Menache via curino)
 
+YARN-3852. Add docker container support to container-executor
+(Abin Shahab via vvasudev)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
index eaa1f19..2825367 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
@@ -291,27 +291,23 @@ char ** get_values(const char * key) {
   return extract_values(value);
 }
 
-/**
- * Extracts array of values from the '%' separated list of values.
- */
-char ** extract_values(char *value) {
+char ** extract_values_delim(char *value, const char *delim) {
   char ** toPass = NULL;
   char *tempTok = NULL;
   char *tempstr = NULL;
   int size = 0;
   int toPassSize = MAX_SIZE;
-
   //first allocate any array of 10
   if(value != NULL) {
 toPass = (char **) malloc(sizeof(char *) * toPassSize);
-    tempTok = strtok_r((char *)value, "%", &tempstr);
+    tempTok = strtok_r((char *)value, delim, &tempstr);
 while (tempTok != NULL) {
   toPass[size++] = tempTok;
   if(size == toPassSize) {
 toPassSize += MAX_SIZE;
 toPass = (char **) realloc(toPass,(sizeof(char *) * toPassSize));
   }
-      tempTok = strtok_r(NULL, "%", &tempstr);
+      tempTok = strtok_r(NULL, delim, &tempstr);
 }
   }
   if (toPass != NULL) {
@@ -320,6 +316,13 @@ char ** extract_values(char *value) {
   return toPass;
 }
 
+/**
+ * Extracts array of values from the '%' separated list of values.
+ */
+char ** extract_values(char *value) {
+  extract_values_delim(value, "%");
+}
+
 // free an entry set of values
 void free_values(char** values) {
   if (*values != NULL) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
index 133e67b..390a5b5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
+++ 

[26/50] [abbrv] hadoop git commit: HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated block. (aajisaka)

2015-07-27 Thread zjshen
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43df21a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43df21a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43df21a3

Branch: refs/heads/YARN-2928
Commit: 43df21a3e5c27637d3909bc7066277db6e6b
Parents: 875458a
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Jul 24 11:37:23 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:34 2015 -0700

--
 .../hadoop-common/src/site/markdown/Metrics.md  |  1 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java|  4 ++
 .../blockmanagement/UnderReplicatedBlocks.java  | 33 --
 .../hdfs/server/namenode/FSNamesystem.java  |  9 +++-
 .../TestUnderReplicatedBlocks.java  | 48 
 6 files changed, 93 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43df21a3/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 646cda5..2e6c095 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -201,6 +201,7 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | Name | Description |
 |: |: |
 | `MissingBlocks` | Current number of missing blocks |
+| `TimeOfTheOldestBlockToBeReplicated` | The timestamp of the oldest block to be replicated. If there are no under-replicated or corrupt blocks, return 0. |
 | `ExpiredHeartbeats` | Total number of expired heartbeats |
 | `TransactionsSinceLastCheckpoint` | Total number of transactions since last checkpoint |
 | `TransactionsSinceLastLogRoll` | Total number of transactions since last edit log roll |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43df21a3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bcc1e25..f86d41e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -747,6 +747,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8730. Clean up the import statements in ClientProtocol.
 (Takanobu Asanuma via wheat9)
 
+HDFS-6682. Add a metric to expose the timestamp of the oldest
+under-replicated block. (aajisaka)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43df21a3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 7dce2a8..64603d0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -171,6 +171,10 @@ public class BlockManager implements BlockStatsMXBean {
   public int getPendingDataNodeMessageCount() {
 return pendingDNMessages.count();
   }
+  /** Used by metrics. */
+  public long getTimeOfTheOldestBlockToBeReplicated() {
+return neededReplications.getTimeOfTheOldestBlockToBeReplicated();
+  }
 
   /**replicationRecheckInterval is how often namenode checks for new 
replication work*/
   private final long replicationRecheckInterval;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43df21a3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
index 000416e..d8aec99 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
+++ 
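
Since the new TimeOfTheOldestBlockToBeReplicated gauge lands in the FSNamesystem metrics
group documented in Metrics.md, it should be readable from the NameNode's /jmx servlet
like the existing FSNamesystem metrics. The sketch below is hypothetical and not part of
the patch: the host/port (the 2.x default HTTP port 50070) and the plain string scan are
assumptions for illustration, and a real client would parse the JSON properly.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OldestUnderReplicatedBlockAge {
  public static void main(String[] args) throws IOException {
    // FSNamesystem bean on the NameNode web UI; host and port are assumptions.
    URL jmx = new URL(
        "http://localhost:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(jmx.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        // The metric added by HDFS-6682; 0 means no under-replicated or corrupt blocks.
        if (line.contains("TimeOfTheOldestBlockToBeReplicated")) {
          System.out.println(line.trim());
        }
      }
    }
  }
}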

[47/50] [abbrv] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread zjshen
YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
(Jonathan Yaniv and Ishai Menache via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d32b8b9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d32b8b9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d32b8b9c

Branch: refs/heads/YARN-2928
Commit: d32b8b9c76199891d26e941ca3c2d5994e0af5ac
Parents: a02cd15
Author: ccurino ccur...@ubuntu.gateway.2wire.net
Authored: Sat Jul 25 07:39:47 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:38 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../reservation/AbstractReservationSystem.java  |   2 +
 .../reservation/GreedyReservationAgent.java | 390 -
 .../reservation/InMemoryPlan.java   |  13 +-
 .../InMemoryReservationAllocation.java  |   8 +-
 .../resourcemanager/reservation/Plan.java   |   1 +
 .../reservation/PlanContext.java|   2 +
 .../resourcemanager/reservation/PlanView.java   |  31 +-
 .../resourcemanager/reservation/Planner.java|  47 --
 .../RLESparseResourceAllocation.java|  55 +-
 .../reservation/ReservationAgent.java   |  72 --
 .../ReservationSchedulerConfiguration.java  |   6 +-
 .../reservation/ReservationSystem.java  |   5 +-
 .../reservation/ReservationSystemUtil.java  |   6 +-
 .../reservation/SimpleCapacityReplanner.java| 113 ---
 .../planning/AlignedPlannerWithGreedy.java  | 123 +++
 .../planning/GreedyReservationAgent.java|  97 +++
 .../reservation/planning/IterativePlanner.java  | 338 
 .../reservation/planning/Planner.java   |  49 ++
 .../reservation/planning/PlanningAlgorithm.java | 207 +
 .../reservation/planning/ReservationAgent.java  |  73 ++
 .../planning/SimpleCapacityReplanner.java   | 118 +++
 .../reservation/planning/StageAllocator.java|  55 ++
 .../planning/StageAllocatorGreedy.java  | 152 
 .../planning/StageAllocatorLowCostAligned.java  | 360 
 .../planning/StageEarliestStart.java|  46 ++
 .../planning/StageEarliestStartByDemand.java| 106 +++
 .../StageEarliestStartByJobArrival.java |  39 +
 .../planning/TryManyReservationAgents.java  | 114 +++
 .../reservation/ReservationSystemTestUtil.java  |   5 +-
 .../reservation/TestCapacityOverTimePolicy.java |   2 +-
 .../TestCapacitySchedulerPlanFollower.java  |   1 +
 .../reservation/TestFairReservationSystem.java  |   1 -
 .../TestFairSchedulerPlanFollower.java  |   1 +
 .../reservation/TestGreedyReservationAgent.java | 604 --
 .../reservation/TestInMemoryPlan.java   |   2 +
 .../reservation/TestNoOverCommitPolicy.java |   1 +
 .../TestRLESparseResourceAllocation.java|  51 +-
 .../TestSchedulerPlanFollowerBase.java  |   1 +
 .../TestSimpleCapacityReplanner.java| 162 
 .../planning/TestAlignedPlanner.java| 820 +++
 .../planning/TestGreedyReservationAgent.java| 611 ++
 .../planning/TestSimpleCapacityReplanner.java   | 170 
 43 files changed, 3634 insertions(+), 1429 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d32b8b9c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fa364f1..611fd4b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -262,6 +262,9 @@ Release 2.8.0 - UNRELEASED
 YARN-2019. Retrospect on decision of making RM crashed if any exception 
throw 
 in ZKRMStateStore. (Jian He via junping_du)
 
+YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
+(Jonathan Yaniv and Ishai Menache via curino)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d32b8b9c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
index 8a15ac6..d2603c1 100644
--- 

[35/50] [abbrv] hadoop git commit: YARN-3026. Move application-specific container allocation logic from LeafQueue to FiCaSchedulerApp. Contributed by Wangda Tan

2015-07-27 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d725cf9d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
index dfeb30f..c660fcb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
@@ -24,6 +24,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Set;
 
+import org.apache.commons.lang.mutable.MutableObject;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
@@ -39,6 +40,9 @@ import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger;
 import 
org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger.AuditConstants;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEvent;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
@@ -48,11 +52,22 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManage
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerAppUtils;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
-import org.apache.hadoop.yarn.util.resource.Resources;
-import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSAssignment;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityHeadroomProvider;
-import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.SchedulingMode;
+import org.apache.hadoop.yarn.server.utils.BuilderUtils;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.Resources;
+
+import com.google.common.annotations.VisibleForTesting;
 
 /**
  * Represents an application attempt from the viewpoint of the FIFO or Capacity
@@ -61,14 +76,22 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 @Private
 @Unstable
 public class FiCaSchedulerApp extends SchedulerApplicationAttempt {
-
   private static final Log LOG = LogFactory.getLog(FiCaSchedulerApp.class);
 
+  static final CSAssignment NULL_ASSIGNMENT =
+  new CSAssignment(Resources.createResource(0, 0), NodeType.NODE_LOCAL);
+
+  static final CSAssignment SKIP_ASSIGNMENT = new CSAssignment(true);
+
   private final Set<ContainerId> containersToPreempt =
       new HashSet<ContainerId>();
 
   private CapacityHeadroomProvider headroomProvider;
 
+  private ResourceCalculator rc = new DefaultResourceCalculator();
+
+  private ResourceScheduler scheduler;
+
   public FiCaSchedulerApp(ApplicationAttemptId applicationAttemptId, 
   String user, Queue queue, ActiveUsersManager activeUsersManager,
   RMContext rmContext) {
@@ -95,6 +118,12 @@ public class FiCaSchedulerApp extends 
SchedulerApplicationAttempt {
 

[27/50] [abbrv] hadoop git commit: HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)

2015-07-27 Thread zjshen
HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e1eb48c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e1eb48c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e1eb48c

Branch: refs/heads/YARN-2928
Commit: 5e1eb48cff1ecf32287617ee56a6d84ec8f38ca7
Parents: 21c9cb8
Author: Robert Kanter rkan...@apache.org
Authored: Fri Jul 24 09:41:53 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:35 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 +
 .../org/apache/hadoop/net/ServerSocketUtil.java | 63 
 2 files changed, 65 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e1eb48c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 56edcac..d6d43f2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -725,6 +725,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements
 drop nearly impossible. (Zhihai Xu via wang)
 
+HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e1eb48c/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
new file mode 100644
index 000..0ce835f
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.net;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class ServerSocketUtil {
+
+  private static final Log LOG = LogFactory.getLog(ServerSocketUtil.class);
+
+  /**
+   * Port scan & allocate is how most other apps find ports
+   *
+   * @param port given port
+   * @param retries number of retires
+   * @return
+   * @throws IOException
+   */
+  public static int getPort(int port, int retries) throws IOException {
+    Random rand = new Random();
+    int tryPort = port;
+    int tries = 0;
+    while (true) {
+      if (tries > 0) {
+        tryPort = port + rand.nextInt(65535 - port);
+      }
+      LOG.info("Using port " + tryPort);
+      try (ServerSocket s = new ServerSocket(tryPort)) {
+        return tryPort;
+      } catch (IOException e) {
+        tries++;
+        if (tries >= retries) {
+          LOG.info("Port is already in use; giving up");
+          throw e;
+        } else {
+          LOG.info("Port is already in use; trying again");
+        }
+      }
+    }
+  }
+
+}
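
A hypothetical usage sketch for the new utility (not part of the patch): the base port,
retry count, and class name are illustrative, and hadoop-common's test-jar must be on
the classpath. Note the unavoidable window between the probe socket closing inside
getPort() and the caller binding its own socket, so tests should still bind promptly.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

import org.apache.hadoop.net.ServerSocketUtil;

public class PortPickExample {
  public static void main(String[] args) throws IOException {
    // Probe ports starting at 50000, trying up to 10 random candidates on conflict.
    int port = ServerSocketUtil.getPort(50000, 10);
    try (ServerSocket server = new ServerSocket()) {
      server.bind(new InetSocketAddress("localhost", port));
      System.out.println("test server bound to port " + server.getLocalPort());
    }
  }
}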



[32/50] [abbrv] hadoop git commit: HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be removed (Alan Burlison via Colin P. McCabe)

2015-07-27 Thread zjshen
HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be removed 
(Alan Burlison via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c3107456
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c3107456
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c3107456

Branch: refs/heads/YARN-2928
Commit: c31074567a0dd684bc29c8a218cb60a29dc29553
Parents: 97f742f
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Fri Jul 24 13:03:31 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:36 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../hadoop-common/src/JNIFlags.cmake| 124 ---
 2 files changed, 3 insertions(+), 124 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3107456/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d6d43f2..0da6194 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -727,6 +727,9 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)
 
+HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be
+removed (Alan Burlison via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c3107456/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
--
diff --git a/hadoop-common-project/hadoop-common/src/JNIFlags.cmake 
b/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
deleted file mode 100644
index c558fe8..000
--- a/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
+++ /dev/null
@@ -1,124 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
-
-# If JVM_ARCH_DATA_MODEL is 32, compile all binaries as 32-bit.
-# This variable is set by maven.
-if (JVM_ARCH_DATA_MODEL EQUAL 32)
-    # Force 32-bit code generation on amd64/x86_64, ppc64, sparc64
-    if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_SYSTEM_PROCESSOR MATCHES .*64)
-        set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32")
-        set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")
-        set(CMAKE_LD_FLAGS "${CMAKE_LD_FLAGS} -m32")
-    endif ()
-    if (CMAKE_SYSTEM_PROCESSOR STREQUAL x86_64 OR CMAKE_SYSTEM_PROCESSOR STREQUAL amd64)
-        # Set CMAKE_SYSTEM_PROCESSOR to ensure that find_package(JNI) will use
-        # the 32-bit version of libjvm.so.
-        set(CMAKE_SYSTEM_PROCESSOR i686)
-    endif ()
-endif (JVM_ARCH_DATA_MODEL EQUAL 32)
-
-# Determine float ABI of JVM on ARM Linux
-if (CMAKE_SYSTEM_PROCESSOR MATCHES ^arm AND CMAKE_SYSTEM_NAME STREQUAL Linux)
-    find_program(READELF readelf)
-    if (READELF MATCHES NOTFOUND)
-        message(WARNING "readelf not found; JVM float ABI detection disabled")
-    else (READELF MATCHES NOTFOUND)
-        execute_process(
-            COMMAND ${READELF} -A ${JAVA_JVM_LIBRARY}
-            OUTPUT_VARIABLE JVM_ELF_ARCH
-            ERROR_QUIET)
-        if (NOT JVM_ELF_ARCH MATCHES "Tag_ABI_VFP_args: VFP registers")
-            message("Soft-float JVM detected")
-
-            # Test compilation with -mfloat-abi=softfp using an arbitrary libc function
-            # (typically fails with "fatal error: bits/predefs.h: No such file or directory"
-            # if soft-float dev libraries are not installed)
-            include(CMakePushCheckState)
-            cmake_push_check_state()
-            set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} -mfloat-abi=softfp")
-            include(CheckSymbolExists)
-            check_symbol_exists(exit stdlib.h SOFTFP_AVAILABLE)
-            if (NOT SOFTFP_AVAILABLE)
-                message(FATAL_ERROR "Soft-float dev

[05/50] [abbrv] hadoop git commit: YARN-3878. AsyncDispatcher can hang while stopping if it is configured for draining events on stop. Contributed by Varun Saxena

2015-07-27 Thread zjshen
YARN-3878. AsyncDispatcher can hang while stopping if it is configured for 
draining events on stop. Contributed by Varun Saxena


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d484101b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d484101b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d484101b

Branch: refs/heads/YARN-2928
Commit: d484101b2c021aebc5dfbd903016889f27d4e65b
Parents: 0a74126
Author: Jian He jia...@apache.org
Authored: Tue Jul 21 15:05:41 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:30 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop/yarn/event/AsyncDispatcher.java  |  8 +++
 .../hadoop/yarn/event/DrainDispatcher.java  | 11 +++-
 .../hadoop/yarn/event/TestAsyncDispatcher.java  | 62 
 4 files changed, 83 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d484101b/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 643ef47..48dbce6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -797,6 +797,9 @@ Release 2.7.2 - UNRELEASED
 YARN-3535. Scheduler must re-request container resources when RMContainer 
transitions
 from ALLOCATED to KILLED (rohithsharma and peng.zhang via asuresh)
 
+YARN-3878. AsyncDispatcher can hang while stopping if it is configured for
+draining events on stop. (Varun Saxena via jianhe)
+
 Release 2.7.1 - 2015-07-06 
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d484101b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
index c54b9c7..48312a3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
@@ -246,6 +246,9 @@ public class AsyncDispatcher extends AbstractService 
implements Dispatcher {
 if (!stopped) {
   LOG.warn("AsyncDispatcher thread interrupted", e);
 }
+// Need to reset drained flag to true if event queue is empty,
+// otherwise dispatcher will hang on stop.
+drained = eventQueue.isEmpty();
 throw new YarnRuntimeException(e);
   }
 };
@@ -287,6 +290,11 @@ public class AsyncDispatcher extends AbstractService 
implements Dispatcher {
   }
 
   @VisibleForTesting
+  protected boolean isEventThreadWaiting() {
+return eventHandlingThread.getState() == Thread.State.WAITING;
+  }
+
+  @VisibleForTesting
   protected boolean isDrained() {
 return this.drained;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d484101b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java
index da5ae44..e4a5a82 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java
@@ -27,11 +27,20 @@ public class DrainDispatcher extends AsyncDispatcher {
     this(new LinkedBlockingQueue<Event>());
   }
 
-  private DrainDispatcher(BlockingQueue<Event> eventQueue) {
+  public DrainDispatcher(BlockingQueue<Event> eventQueue) {
     super(eventQueue);
   }
 
   /**
+   *  Wait till event thread enters WAITING state (i.e. waiting for new 
events).
+   */
+  public void waitForEventThreadToWait() {
+while (!isEventThreadWaiting()) {
+  Thread.yield();
+}
+  }
+
+  /**
* Busy loop waiting for all queued events to drain.
*/
   public void await() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d484101b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/TestAsyncDispatcher.java
--
diff --git 
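
The hang fixed here comes from serviceStop() waiting on the drained flag that the
event-handling thread normally refreshes at the top of its loop; if that thread exits
on an interrupt without refreshing the flag, the stop path can wait forever. Below is a
minimal, self-contained sketch of that drain-on-stop pattern in plain Java; it is not
the YARN dispatcher, and the class, method names, and timings are illustrative only.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniDispatcher {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private volatile boolean drained = true;
  private volatile boolean stopped = false;
  private final Thread handler = new Thread(() -> {
    while (!stopped && !Thread.currentThread().isInterrupted()) {
      drained = queue.isEmpty();          // normal refresh point
      try {
        Runnable event = queue.take();    // blocks until an event arrives
        event.run();
      } catch (InterruptedException e) {
        // The analogous fix: recompute drained before bailing out, otherwise
        // stop() below could spin forever on a stale drained == false.
        drained = queue.isEmpty();
        return;
      }
    }
  });

  void start() { handler.start(); }

  void dispatch(Runnable event) {
    drained = false;
    queue.add(event);
  }

  // Drain-on-stop: wait until the handler reports the queue empty, then stop it.
  void stop() throws InterruptedException {
    while (!drained) {
      Thread.sleep(10);
    }
    stopped = true;
    handler.interrupt();
    handler.join();
  }

  public static void main(String[] args) throws InterruptedException {
    MiniDispatcher d = new MiniDispatcher();
    d.start();
    d.dispatch(() -> System.out.println("handled one event"));
    d.stop();
    System.out.println("stopped cleanly after draining");
  }
}
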

[16/50] [abbrv] hadoop git commit: YARN-3954. Fix TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml. (varun saxena via rohithsharmaks)

2015-07-27 Thread zjshen
YARN-3954. Fix 
TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml. (varun 
saxena via rohithsharmaks)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c60d4cd8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c60d4cd8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c60d4cd8

Branch: refs/heads/YARN-2928
Commit: c60d4cd836fce365c7c152fddcdbf5bebc4c2d50
Parents: ba48ae5
Author: rohithsharmaks rohithsharm...@apache.org
Authored: Thu Jul 23 00:28:24 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:32 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../src/main/resources/yarn-default.xml   | 10 ++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c60d4cd8/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index abfbc31..7557036 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -771,6 +771,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and UserInfo 
 should based on total-used-resources. (Bibin A Chundatt via wangda)
 
+YARN-3954. Fix 
TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml.
+(varun saxena via rohithsharmaks)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c60d4cd8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index e82a065..2281b99 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -2168,4 +2168,14 @@
     <value>false</value>
   </property>
 
+  <property>
+    <description>
+    Defines maximum application priority in a cluster.
+    If an application is submitted with a priority higher than this value, it will be
+    reset to this maximum value.
+    </description>
+    <name>yarn.cluster.max-application-priority</name>
+    <value>0</value>
+  </property>
+
 </configuration>



[01/50] [abbrv] hadoop git commit: YARN-2003. Support for Application priority : Changes in RM and Capacity Scheduler. (Sunil G via wangda)

2015-07-27 Thread zjshen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 967bef7e0 -> a7153ade7


YARN-2003. Support for Application priority : Changes in RM and Capacity 
Scheduler. (Sunil G via wangda)

Conflicts:

hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cf4a87f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cf4a87f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cf4a87f

Branch: refs/heads/YARN-2928
Commit: 2cf4a87f00b32fafc3dd1a685beb39e52f630b79
Parents: fffb454
Author: Wangda Tan wan...@apache.org
Authored: Tue Jul 21 09:56:59 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:53:35 2015 -0700

--
 .../sls/scheduler/ResourceSchedulerWrapper.java |  10 +
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |   5 +
 .../server/resourcemanager/RMAppManager.java|  21 +-
 .../server/resourcemanager/rmapp/RMAppImpl.java |  15 +-
 .../scheduler/AbstractYarnScheduler.java|  10 +
 .../server/resourcemanager/scheduler/Queue.java |   8 +
 .../scheduler/SchedulerApplication.java |  22 ++
 .../scheduler/SchedulerApplicationAttempt.java  |  15 +-
 .../scheduler/YarnScheduler.java|  20 ++
 .../scheduler/capacity/AbstractCSQueue.java |   7 +
 .../scheduler/capacity/CapacityScheduler.java   |  73 +++-
 .../CapacitySchedulerConfiguration.java |  13 +
 .../scheduler/capacity/LeafQueue.java   |  19 +-
 .../scheduler/common/fica/FiCaSchedulerApp.java |   8 +
 .../scheduler/event/AppAddedSchedulerEvent.java |  28 +-
 .../resourcemanager/scheduler/fair/FSQueue.java |   6 +
 .../scheduler/fifo/FifoScheduler.java   |   6 +
 .../scheduler/policy/FifoComparator.java|  11 +-
 .../scheduler/policy/SchedulableEntity.java |   5 +
 .../yarn/server/resourcemanager/MockRM.java |  31 +-
 .../server/resourcemanager/TestAppManager.java  |   1 +
 .../TestWorkPreservingRMRestart.java|   2 +-
 ...pacityPreemptionPolicyForNodePartitions.java |   1 +
 .../capacity/TestApplicationLimits.java |   5 +-
 .../capacity/TestApplicationPriority.java   | 345 +++
 .../capacity/TestCapacityScheduler.java |   5 +
 .../scheduler/policy/MockSchedulableEntity.java |  13 +-
 .../security/TestDelegationTokenRenewer.java|  10 +-
 .../TestRMWebServicesAppsModification.java  |   2 +-
 30 files changed, 665 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cf4a87f/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
--
diff --git 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
index 08cb1e6..14e2645 100644
--- 
a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
+++ 
b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/ResourceSchedulerWrapper.java
@@ -53,6 +53,7 @@ import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerStatus;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.QueueInfo;
 import org.apache.hadoop.yarn.api.records.QueueUserACLInfo;
@@ -949,4 +950,13 @@ final public class ResourceSchedulerWrapper
   ContainerStatus containerStatus, RMContainerEventType event) {
 // do nothing
   }
+
+  @Override
+  public Priority checkAndGetApplicationPriority(Priority priority,
+  String user, String queueName, ApplicationId applicationId)
+  throws YarnException {
+// TODO Dummy implementation.
+return Priority.newInstance(0);
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cf4a87f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 86de507..e5ea802 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -254,6 +254,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3116. RM notifies NM whether a container is an AM container or normal
 task container. (Giovanni Matteo Fumarola via zjshen)
 
+

[23/50] [abbrv] hadoop git commit: YARN-3845. Scheduler page does not render RGBA color combinations in IE11. (Contributed by Mohammad Shahid Khan)

2015-07-27 Thread zjshen
YARN-3845. Scheduler page does not render RGBA color combinations in IE11. 
(Contributed by Mohammad Shahid Khan)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88e8cd55
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88e8cd55
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88e8cd55

Branch: refs/heads/YARN-2928
Commit: 88e8cd5550b366343d8683dc7f169e583299b429
Parents: 43df21a
Author: Rohith Sharma K S rohithsharm...@apache.org
Authored: Fri Jul 24 12:43:06 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:34 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../apache/hadoop/yarn/webapp/view/TwoColumnLayout.java   |  2 +-
 .../resourcemanager/webapp/CapacitySchedulerPage.java |  7 ---
 .../resourcemanager/webapp/DefaultSchedulerPage.java  |  4 ++--
 .../server/resourcemanager/webapp/FairSchedulerPage.java  | 10 ++
 5 files changed, 16 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/88e8cd55/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 71ad286..2192811 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -784,6 +784,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
 that include it (adhoot via rkanter)
 
+YARN-3845. Scheduler page does not render RGBA color combinations in IE11. 
+(Contributed by Mohammad Shahid Khan)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88e8cd55/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
index b8f5f75..4d7752d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
@@ -126,7 +126,7 @@ public class TwoColumnLayout extends HtmlPage {
 styles.add(join('#', tableId, "_paginate span {font-weight:normal}"));
 styles.add(join('#', tableId, " .progress {width:8em}"));
 styles.add(join('#', tableId, "_processing {top:-1.5em; font-size:1em;"));
-styles.add("  color:#000; background:rgba(255, 255, 255, 0.8)}");
+styles.add("  color:#000; background:#fefefe}");
 for (String style : innerStyles) {
   styles.add(join('#', tableId, " ", style));
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88e8cd55/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
index a784601..12a3013 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
@@ -59,9 +59,10 @@ class CapacitySchedulerPage extends RmView {
   static final float Q_MAX_WIDTH = 0.8f;
   static final float Q_STATS_POS = Q_MAX_WIDTH + 0.05f;
   static final String Q_END = "left:101%";
-  static final String Q_GIVEN = "left:0%;background:none;border:1px dashed rgba(0,0,0,0.25)";
-  static final String Q_OVER = "background:rgba(255, 140, 0, 0.8)";
-  static final String Q_UNDER = "background:rgba(50, 205, 50, 0.8)";
+  static final String Q_GIVEN =
+      "left:0%;background:none;border:1px dashed #BFBFBF";
+  static final String Q_OVER = "background:#FFA333";
+  static final String Q_UNDER = "background:#5BD75B";
 
   @RequestScoped
   static class CSQInfo {


[12/50] [abbrv] hadoop git commit: HADOOP-12017. Hadoop archives command should use configurable replication factor when closing (Contributed by Bibin A Chundatt)

2015-07-27 Thread zjshen
HADOOP-12017. Hadoop archives command should use configurable replication 
factor when closing (Contributed by Bibin A Chundatt)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef499f3a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef499f3a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef499f3a

Branch: refs/heads/YARN-2928
Commit: ef499f3a690dc8394684efae711a90b5479b66fd
Parents: 38a2348
Author: Vinayakumar B vinayakum...@apache.org
Authored: Wed Jul 22 10:25:49 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../org/apache/hadoop/tools/HadoopArchives.java | 21 ++--
 .../src/site/markdown/HadoopArchives.md.vm  |  2 +-
 .../apache/hadoop/tools/TestHadoopArchives.java | 26 
 4 files changed, 33 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef499f3a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 5b51bce..3d101d4 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -992,6 +992,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12051. ProtobufRpcEngine.invoke() should use Exception.toString()
 over getMessage() in logging/span events. (Varun Saxena via stevel)
 
+HADOOP-12017. Hadoop archives command should use configurable replication
+factor when closing (Bibin A Chundatt via vinayakumarb)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef499f3a/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
--
diff --git 
a/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
 
b/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
index 330830b..ee14850 100644
--- 
a/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
+++ 
b/hadoop-tools/hadoop-archives/src/main/java/org/apache/hadoop/tools/HadoopArchives.java
@@ -100,15 +100,17 @@ public class HadoopArchives implements Tool {
   static final String SRC_PARENT_LABEL = NAME + ".parent.path";
   /** the size of the blocks that will be created when archiving **/
   static final String HAR_BLOCKSIZE_LABEL = NAME + ".block.size";
-  /**the size of the part files that will be created when archiving **/
+  /** the replication factor for the file in archiving. **/
+  static final String HAR_REPLICATION_LABEL = NAME + ".replication.factor";
+  /** the size of the part files that will be created when archiving **/
   static final String HAR_PARTSIZE_LABEL = NAME + ".partfile.size";
 
   /** size of each part file size **/
   long partSize = 2 * 1024 * 1024 * 1024l;
   /** size of blocks in hadoop archives **/
   long blockSize = 512 * 1024 * 1024l;
-  /** the desired replication degree; default is 10 **/
-  short repl = 10;
+  /** the desired replication degree; default is 3 **/
+  short repl = 3;
 
   private static final String usage = "archive"
   + " -archiveName <NAME>.har -p <parent path> [-r <replication factor>]" +
@@ -475,6 +477,7 @@ public class HadoopArchives implements Tool {
 conf.setLong(HAR_PARTSIZE_LABEL, partSize);
 conf.set(DST_HAR_LABEL, archiveName);
 conf.set(SRC_PARENT_LABEL, parentPath.makeQualified(fs).toString());
+conf.setInt(HAR_REPLICATION_LABEL, repl);
 Path outputPath = new Path(dest, archiveName);
 FileOutputFormat.setOutputPath(conf, outputPath);
 FileSystem outFs = outputPath.getFileSystem(conf);
@@ -549,8 +552,6 @@ public class HadoopArchives implements Tool {
 } finally {
   srcWriter.close();
 }
-//increase the replication of src files
-jobfs.setReplication(srcFiles, repl);
 conf.setInt(SRC_COUNT_LABEL, numFiles);
 conf.setLong(TOTAL_SIZE_LABEL, totalSize);
 int numMaps = (int)(totalSize/partSize);
@@ -587,6 +588,7 @@ public class HadoopArchives implements Tool {
 FileSystem destFs = null;
 byte[] buffer;
 int buf_size = 128 * 1024;
+private int replication = 3;
 long blockSize = 512 * 1024 * 1024l;
 
 // configure the mapper and create 
@@ -595,7 +597,7 @@ public class HadoopArchives implements Tool {
 // tmp files. 
 public void configure(JobConf conf) {
   this.conf = conf;
-
+  replication = conf.getInt(HAR_REPLICATION_LABEL, 3);
   // this is tightly tied to map reduce
   // since it does not expose an api 
   // to 
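
A hedged sketch of driving the archive tool with the new -r option programmatically; HadoopArchives implements Tool, so ToolRunner can run it directly. The archive name, paths, and replication value below are illustrative, not taken from the commit.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.HadoopArchives;
import org.apache.hadoop.util.ToolRunner;

public class CreateHar {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Equivalent to:
    //   hadoop archive -archiveName logs.har -p /user/me/logs -r 2 2015-07 /user/me/archives
    String[] harArgs = {
        "-archiveName", "logs.har",
        "-p", "/user/me/logs",
        "-r", "2",            // replication for the archive files (configurable after this change)
        "2015-07",            // source, relative to the parent path
        "/user/me/archives"   // destination directory
    };
    System.exit(ToolRunner.run(conf, new HadoopArchives(conf), harArgs));
  }
}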

[19/50] [abbrv] hadoop git commit: HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by Takanobu Asanuma.

2015-07-27 Thread zjshen
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0fffd53d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0fffd53d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0fffd53d

Branch: refs/heads/YARN-2928
Commit: 0fffd53daa7fd3ff0dc83c03e5b28b89cd134b10
Parents: 9ca634c
Author: Haohui Mai whe...@apache.org
Authored: Thu Jul 23 10:30:17 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:33 2015 -0700

--
 .../hadoop/hdfs/protocol/ClientProtocol.java| 306 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 2 files changed, 182 insertions(+), 127 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fffd53d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index 381be30..713c23c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
-import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.EnumSet;
 import java.util.List;
@@ -29,14 +28,9 @@ import 
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
 import org.apache.hadoop.fs.CacheFlag;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
-import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FsServerDefaults;
-import org.apache.hadoop.fs.InvalidPathException;
 import org.apache.hadoop.fs.Options;
-import org.apache.hadoop.fs.Options.Rename;
-import org.apache.hadoop.fs.ParentNotDirectoryException;
 import org.apache.hadoop.fs.StorageType;
-import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.XAttr;
 import org.apache.hadoop.fs.XAttrSetFlag;
 import org.apache.hadoop.fs.permission.AclEntry;
@@ -48,14 +42,11 @@ import 
org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction;
 import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
-import org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException;
-import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
 import org.apache.hadoop.io.EnumSetWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.retry.AtMostOnce;
 import org.apache.hadoop.io.retry.Idempotent;
-import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenInfo;
@@ -121,9 +112,12 @@ public interface ClientProtocol {
*
* @return file length and array of blocks with their locations
*
-   * @throws AccessControlException If access is denied
-   * @throws FileNotFoundException If file <code>src</code> does not exist
-   * @throws UnresolvedLinkException If <code>src</code> contains a symlink
+   * @throws org.apache.hadoop.security.AccessControlException If access is
+   *   denied
+   * @throws java.io.FileNotFoundException If file <code>src</code> does not
+   *   exist
+   * @throws org.apache.hadoop.fs.UnresolvedLinkException If <code>src</code>
+   *   contains a symlink
* @throws IOException If an I/O error occurred
*/
   @Idempotent
@@ -166,24 +160,29 @@ public interface ClientProtocol {
*
* @return the status of the created file, it could be null if the server
*   doesn't support returning the file status
-   * @throws AccessControlException If access is denied
+   * @throws org.apache.hadoop.security.AccessControlException If access is
+   *   denied
* @throws AlreadyBeingCreatedException if the path does not exist.
* @throws DSQuotaExceededException If file creation violates disk space
*   quota restriction
-   * @throws FileAlreadyExistsException If file <code>src</code> already exists
-   * @throws FileNotFoundException If parent of <code>src</code> does not exist
-   *   and <code>createParent</code> is false
-   * @throws 

[14/50] [abbrv] hadoop git commit: YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev via wangda)

2015-07-27 Thread zjshen
YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev via 
wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02ca2ace
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02ca2ace
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02ca2ace

Branch: refs/heads/YARN-2928
Commit: 02ca2ace513c3449aec7427c8daea7ed63f3650f
Parents: c60d4cd
Author: Wangda Tan wan...@apache.org
Authored: Wed Jul 22 11:59:31 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:32 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 2 ++
 .../server/nodemanager/util/TestNodeManagerHardwareUtils.java   | 5 +
 2 files changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02ca2ace/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 7557036..0ebe25d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -774,6 +774,8 @@ Release 2.8.0 - UNRELEASED
 YARN-3954. Fix 
TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml.
 (varun saxena via rohithsharmaks)
 
+YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev 
via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02ca2ace/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestNodeManagerHardwareUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestNodeManagerHardwareUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestNodeManagerHardwareUtils.java
index 5bf8cb7..84a045d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestNodeManagerHardwareUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestNodeManagerHardwareUtils.java
@@ -30,6 +30,11 @@ import org.mockito.Mockito;
 public class TestNodeManagerHardwareUtils {
 
   static class TestResourceCalculatorPlugin extends ResourceCalculatorPlugin {
+
+TestResourceCalculatorPlugin() {
+  super(null);
+}
+
 @Override
 public long getVirtualMemorySize() {
   return 0;



[29/50] [abbrv] hadoop git commit: HDFS-8806. Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared. Contributed by Zhe Zhang.

2015-07-27 Thread zjshen
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c26ca418
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c26ca418
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c26ca418

Branch: refs/heads/YARN-2928
Commit: c26ca41811c82adcbc8873ff26939957329057a6
Parents: 88e8cd5
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Jul 24 18:28:44 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:35 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java | 3 ++-
 2 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c26ca418/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f86d41e..b348a5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1097,6 +1097,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-6945. BlockManager should remove a block from excessReplicateMap and
 decrement ExcessBlocks metric when the block is removed. (aajisaka)
 
+HDFS-8806. Inconsistent metrics: number of missing blocks with replication
+factor 1 not properly cleared. (Zhe Zhang via aajisaka)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c26ca418/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
index d8aec99..128aae6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
@@ -101,10 +101,11 @@ class UnderReplicatedBlocks implements Iterable<BlockInfo> {
   /**
* Empty the queues and timestamps.
*/
-  void clear() {
+  synchronized void clear() {
 for (int i = 0; i  LEVEL; i++) {
   priorityQueues.get(i).clear();
 }
+corruptReplOneBlocks = 0;
 timestampsMap.clear();
   }
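
The fix is easier to see on a stripped-down model (not the HDFS class itself): a set of priority queues with a companion counter only stays consistent if every path that empties the queues also resets the counter, under the same lock that guards the updates, which is what adding synchronized and the corruptReplOneBlocks reset to clear() does. A minimal sketch:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class PrioritizedBlockQueues {
  private final List<Set<Long>> priorityQueues = new ArrayList<>();
  private long corruptReplOneBlocks = 0;   // companion metric

  PrioritizedBlockQueues(int levels) {
    for (int i = 0; i < levels; i++) {
      priorityQueues.add(new HashSet<Long>());
    }
  }

  synchronized void add(int level, long blockId, boolean replFactorOne) {
    if (priorityQueues.get(level).add(blockId) && replFactorOne) {
      corruptReplOneBlocks++;
    }
  }

  synchronized void clear() {
    for (Set<Long> queue : priorityQueues) {
      queue.clear();
    }
    corruptReplOneBlocks = 0;   // without this the metric survives the clear
  }

  synchronized long getCorruptReplOneBlocks() {
    return corruptReplOneBlocks;
  }
}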
 



[20/50] [abbrv] hadoop git commit: YARN-3941. Proportional Preemption policy should try to avoid sending duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)

2015-07-27 Thread zjshen
YARN-3941. Proportional Preemption policy should try to avoid sending duplicate 
PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d177e2a4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d177e2a4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d177e2a4

Branch: refs/heads/YARN-2928
Commit: d177e2a445b3397833b7403be6b3e829c358f692
Parents: d436e8c
Author: Wangda Tan wan...@apache.org
Authored: Thu Jul 23 10:07:57 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:33 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 2 ++
 .../capacity/ProportionalCapacityPreemptionPolicy.java  | 9 ++---
 .../capacity/TestProportionalCapacityPreemptionPolicy.java  | 6 +++---
 3 files changed, 11 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d177e2a4/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 94e8056..a1c5fb3 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -779,6 +779,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev 
via wangda)
 
+YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d177e2a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 1152cef..77df059 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -260,13 +260,16 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolic
   SchedulerEventType.KILL_CONTAINER));
   preempted.remove(container);
 } else {
+  if (preempted.get(container) != null) {
+// We already updated the information to scheduler earlier, we need
+// not have to raise another event.
+continue;
+  }
   //otherwise just send preemption events
   rmContext.getDispatcher().getEventHandler().handle(
   new ContainerPreemptEvent(appAttemptId, container,
   SchedulerEventType.PREEMPT_CONTAINER));
-  if (preempted.get(container) == null) {
-preempted.put(container, clock.getTime());
-  }
+  preempted.put(container, clock.getTime());
 }
   }
 }
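
The change boils down to a check-before-dispatch pattern: remember which containers have already been asked to preempt and skip the event for them on later policy passes. A generic sketch (not RM code) of that pattern:

import java.util.HashMap;
import java.util.Map;

class PreemptionTracker<C> {
  private final Map<C, Long> preempted = new HashMap<>();

  /** @return true only the first time a container is marked for preemption. */
  synchronized boolean shouldSendPreemptEvent(C container, long now) {
    if (preempted.containsKey(container)) {
      return false;   // scheduler was already told; skip the duplicate event
    }
    preempted.put(container, now);
    return true;
  }

  synchronized void forget(C container) {
    preempted.remove(container);
  }
}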

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d177e2a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
index bc4d0dc..8d9f48a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
+++ 

[07/50] [abbrv] hadoop git commit: HDFS-8721. Add a metric for number of encryption zones. Contributed by Rakesh R.

2015-07-27 Thread zjshen
HDFS-8721. Add a metric for number of encryption zones. Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/72df83bc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/72df83bc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/72df83bc

Branch: refs/heads/YARN-2928
Commit: 72df83bc0751c10438a3a08633287a53749b183e
Parents: 8c7a8a6
Author: cnauroth cnaur...@apache.org
Authored: Tue Jul 21 13:55:58 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:30 2015 -0700

--
 .../hadoop-common/src/site/markdown/Metrics.md| 1 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hadoop/hdfs/server/namenode/EncryptionZoneManager.java| 7 +++
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java  | 6 ++
 .../hdfs/server/namenode/metrics/FSNamesystemMBean.java   | 5 +
 .../test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java | 6 ++
 .../hadoop/hdfs/server/namenode/TestFSNamesystemMBean.java| 5 +
 7 files changed, 33 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/72df83bc/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index ca89745..2b23508 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -216,6 +216,7 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | `TotalLoad` | Current number of connections |
 | `SnapshottableDirectories` | Current number of snapshottable directories |
 | `Snapshots` | Current number of snapshots |
+| `NumEncryptionZones` | Current number of encryption zones |
 | `BlocksTotal` | Current number of allocated blocks in the system |
 | `FilesTotal` | Current number of files and directories |
 | `PendingReplicationBlocks` | Current number of blocks pending to be 
replicated |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72df83bc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a29a090..7c771b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -734,6 +734,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7483. Display information per tier on the Namenode UI.
 (Benoy Antony and wheat9 via wheat9)
 
+HDFS-8721. Add a metric for number of encryption zones.
+(Rakesh R via cnauroth)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72df83bc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
index 3fe748d..7c3c895 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
@@ -360,4 +360,11 @@ public class EncryptionZoneManager {
 final boolean hasMore = (numResponses < tailMap.size());
 return new BatchedListEntries<EncryptionZone>(zones, hasMore);
   }
+
+  /**
+   * @return number of encryption zones.
+   */
+  public int getNumEncryptionZones() {
+return encryptionZones.size();
+  }
 }
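
Once the gauge is wired through FSNamesystemMBean, it can be read like any other NameNode metric. A hedged sketch over plain JMX, assuming the usual Hadoop:service=NameNode,name=FSNamesystemState bean; the host, port, and remote-JMX setup are placeholders.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadEncryptionZoneCount {
  public static void main(String[] args) throws Exception {
    // Placeholder endpoint; remote JMX must be enabled on the NameNode.
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://namenode-host:8004/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection mbsc = connector.getMBeanServerConnection();
      ObjectName fsState =
          new ObjectName("Hadoop:service=NameNode,name=FSNamesystemState");
      Object zones = mbsc.getAttribute(fsState, "NumEncryptionZones");
      System.out.println("NumEncryptionZones = " + zones);
    }
  }
}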

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72df83bc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 7c6d6a1..fd37fbe 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4075,6 +4075,12 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 

[09/50] [abbrv] hadoop git commit: HADOOP-12184. Remove unused Linux-specific constants in NativeIO (Martin Walsh via Colin P. McCabe)

2015-07-27 Thread zjshen
HADOOP-12184. Remove unused Linux-specific constants in NativeIO (Martin Walsh 
via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ddc71968
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ddc71968
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ddc71968

Branch: refs/heads/YARN-2928
Commit: ddc71968a17959db4711f83b2678a2867f3d2f3d
Parents: 807b222
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Wed Jul 22 11:11:38 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java| 4 
 2 files changed, 3 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddc71968/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index c0e5c92..ff7d2ad 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -713,6 +713,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12214. Parse 'HadoopArchive' commandline using cli Options.
 (vinayakumarb)
 
+HADOOP-12184. Remove unused Linux-specific constants in NativeIO (Martin
+Walsh via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ddc71968/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
index 688b955..77a40ea 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
@@ -67,9 +67,6 @@ public class NativeIO {
 public static final int O_APPEND   = 02000;
 public static final int O_NONBLOCK = 04000;
 public static final int O_SYNC   =  010000;
-public static final int O_ASYNC  =  020000;
-public static final int O_FSYNC = O_SYNC;
-public static final int O_NDELAY = O_NONBLOCK;
 
 // Flags for posix_fadvise() from bits/fcntl.h
 /* No further special treatment.  */
@@ -356,7 +353,6 @@ public class NativeIO {
   public static final int   S_IFREG  = 0100000;  /* regular */
   public static final int   S_IFLNK  = 0120000;  /* symbolic link */
   public static final int   S_IFSOCK = 0140000;  /* socket */
-  public static final int   S_IFWHT  = 0160000;  /* whiteout */
   public static final int S_ISUID = 0004000;  /* set user id on execution 
*/
   public static final int S_ISGID = 0002000;  /* set group id on execution 
*/
   public static final int S_ISVTX = 0001000;  /* save swapped text even 
after use */



[06/50] [abbrv] hadoop git commit: HDFS-8773. Few FSNamesystem metrics are not documented in the Metrics page. Contributed by Rakesh R.

2015-07-27 Thread zjshen
HDFS-8773. Few FSNamesystem metrics are not documented in the Metrics page. 
Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0a74126d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0a74126d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0a74126d

Branch: refs/heads/YARN-2928
Commit: 0a74126d0f146bd9b4c6d686ea735a3e6a51a136
Parents: 72df83b
Author: cnauroth cnaur...@apache.org
Authored: Tue Jul 21 14:12:03 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:30 2015 -0700

--
 .../hadoop-common/src/site/markdown/Metrics.md  | 5 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 2 files changed, 8 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a74126d/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 2b23508..646cda5 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -231,6 +231,11 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | `BlockCapacity` | Current number of block capacity |
 | `StaleDataNodes` | Current number of DataNodes marked stale due to delayed 
heartbeat |
 | `TotalFiles` | Current number of files and directories (same as FilesTotal) |
+| `MissingReplOneBlocks` | Current number of missing blocks with replication 
factor 1 |
+| `NumFilesUnderConstruction` | Current number of files under construction |
+| `NumActiveClients` | Current number of active clients holding lease |
+| `HAState` | (HA-only) Current state of the NameNode: initializing or active 
or standby or stopping state |
+| `FSState` | Current state of the file system: Safemode or Operational |
 
 JournalNode
 ---

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a74126d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 7c771b0..8122045 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1062,6 +1062,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-7582. Enforce maximum number of ACL entries separately per access
 and default. (vinayakumarb)
 
+HDFS-8773. Few FSNamesystem metrics are not documented in the Metrics page.
+(Rakesh R via cnauroth)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[11/50] [abbrv] hadoop git commit: HDFS-8795. Improve InvalidateBlocks#node2blocks. (yliu)

2015-07-27 Thread zjshen
HDFS-8795. Improve InvalidateBlocks#node2blocks. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/500e5f31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/500e5f31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/500e5f31

Branch: refs/heads/YARN-2928
Commit: 500e5f31299e5300f3a933ca59ad23c47d53d7e6
Parents: ef499f3
Author: yliu y...@apache.org
Authored: Wed Jul 22 15:16:50 2015 +0800
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:31 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 2 ++
 .../hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java| 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/500e5f31/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 50803de..66cb89e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -740,6 +740,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8495. Consolidate append() related implementation into a single class.
 (Rakesh R via wheat9)
 
+HDFS-8795. Improve InvalidateBlocks#node2blocks. (yliu)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/500e5f31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
index a465f85..c486095 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java
@@ -22,9 +22,9 @@ import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Calendar;
 import java.util.GregorianCalendar;
+import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.TreeMap;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
@@ -36,6 +36,7 @@ import org.apache.hadoop.util.Time;
 import org.apache.hadoop.hdfs.DFSUtil;
 
 import com.google.common.annotations.VisibleForTesting;
+
 import org.slf4j.Logger;
 
 /**
@@ -47,7 +48,7 @@ import org.slf4j.Logger;
 class InvalidateBlocks {
   /** Mapping: DatanodeInfo -> Collection of Blocks */
   private final Map<DatanodeInfo, LightWeightHashSet<Block>> node2blocks =
-      new TreeMap<DatanodeInfo, LightWeightHashSet<Block>>();
+      new HashMap<DatanodeInfo, LightWeightHashSet<Block>>();
   /** The total number of blocks in the map. */
   private long numBlocks = 0L;
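
The swap trades the sorted iteration of TreeMap for the expected O(1) lookups of HashMap; this per-datanode index never needs key ordering, only hashCode()/equals() on the keys. An illustrative (non-HDFS) comparison of the two contracts:

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapChoiceSketch {
  public static void main(String[] args) {
    Map<String, Integer> ordered = new TreeMap<>();  // sorted keys, O(log n) operations
    Map<String, Integer> hashed = new HashMap<>();   // unordered, O(1) expected operations
    for (String dn : new String[] {"dn-b", "dn-a", "dn-c"}) {
      ordered.put(dn, 0);
      hashed.put(dn, 0);
    }
    System.out.println(ordered.keySet());  // [dn-a, dn-b, dn-c] -- always sorted
    System.out.println(hashed.keySet());   // some unspecified order
  }
}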
 



[17/50] [abbrv] hadoop git commit: YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and UserInfo should based on total-used-resources. (Bibin A Chundatt via wangda)

2015-07-27 Thread zjshen
YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and UserInfo 
should based on total-used-resources. (Bibin A Chundatt via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba48ae55
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba48ae55
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba48ae55

Branch: refs/heads/YARN-2928
Commit: ba48ae555aaa8372e257abba25315d000ba926f3
Parents: ddc7196
Author: Wangda Tan wan...@apache.org
Authored: Wed Jul 22 11:54:02 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:32 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../scheduler/SchedulerApplicationAttempt.java  |  2 +-
 .../scheduler/capacity/LeafQueue.java   |  8 ++-
 .../TestCapacitySchedulerNodeLabelUpdate.java   | 64 
 4 files changed, 74 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba48ae55/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 48dbce6..abfbc31 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -768,6 +768,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3885. ProportionalCapacityPreemptionPolicy doesn't preempt if queue 
is 
 more than 2 level. (Ajith S via wangda)
 
+YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and UserInfo 
+should based on total-used-resources. (Bibin A Chundatt via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba48ae55/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
index cf543bd..317e61c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
@@ -598,7 +598,7 @@ public class SchedulerApplicationAttempt implements 
SchedulableEntity {
 AggregateAppResourceUsage runningResourceUsage =
 getRunningAggregateAppResourceUsage();
 Resource usedResourceClone =
-Resources.clone(attemptResourceUsage.getUsed());
+Resources.clone(attemptResourceUsage.getAllUsed());
 Resource reservedResourceClone =
 Resources.clone(attemptResourceUsage.getReserved());
 return ApplicationResourceUsageReport.newInstance(liveContainers.size(),
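
The underlying issue is which partition the report reads: with node labels, used resources are tracked per label, and reading only the default partition undercounts the application. A minimal model (not the YARN ResourceUsage class) of the difference:

import java.util.HashMap;
import java.util.Map;

class PartitionedUsage {
  private final Map<String, Long> usedByLabel = new HashMap<>();

  void incUsed(String label, long memMb) {
    usedByLabel.merge(label, memMb, Long::sum);
  }

  /** Usage in the default ("") partition only -- what the old report exposed. */
  long getUsed() {
    return usedByLabel.getOrDefault("", 0L);
  }

  /** Usage summed over every partition -- what the report should expose. */
  long getAllUsed() {
    long total = 0;
    for (long used : usedByLabel.values()) {
      total += used;
    }
    return total;
  }
}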

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba48ae55/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 0ce4d68..5c283f4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -439,7 +439,7 @@ public class LeafQueue extends AbstractCSQueue {
 for (Map.Entry<String, User> entry : users.entrySet()) {
   User user = entry.getValue();
   usersToReturn.add(new UserInfo(entry.getKey(), Resources.clone(user
-  .getUsed()), user.getActiveApplications(), user
+  .getAllUsed()), user.getActiveApplications(), user
   

[22/50] [abbrv] hadoop git commit: HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. (Contributed by Brahma Reddy Battula)

2015-07-27 Thread zjshen
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9ca634cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9ca634cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9ca634cd

Branch: refs/heads/YARN-2928
Commit: 9ca634cdf75323cfa572ddd5860121d24d39ee4b
Parents: d177e2a
Author: Arpit Agarwal a...@apache.org
Authored: Thu Jul 23 10:13:04 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:33 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../apache/hadoop/fs/AbstractFileSystem.java| 13 +
 .../java/org/apache/hadoop/fs/FileContext.java  | 20 
 .../java/org/apache/hadoop/fs/FileSystem.java   | 13 +
 .../org/apache/hadoop/fs/FilterFileSystem.java  |  6 ++
 .../java/org/apache/hadoop/fs/FilterFs.java |  6 ++
 .../org/apache/hadoop/fs/viewfs/ChRootedFs.java |  6 ++
 .../org/apache/hadoop/fs/viewfs/ViewFs.java | 15 +++
 .../org/apache/hadoop/fs/TestHarFileSystem.java |  3 +++
 .../main/java/org/apache/hadoop/fs/Hdfs.java|  5 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 18 ++
 .../hadoop/hdfs/DistributedFileSystem.java  | 19 +++
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 17 +
 13 files changed, 144 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ca634cd/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ff7d2ad..f1a3bc9 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -716,6 +716,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12184. Remove unused Linux-specific constants in NativeIO (Martin
 Walsh via Colin P. McCabe)
 
+HADOOP-12161. Add getStoragePolicy API to the FileSystem interface.
+(Brahma Reddy Battula via Arpit Agarwal)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ca634cd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index cb3fb86..2bc3859 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -1237,6 +1237,19 @@ public abstract class AbstractFileSystem {
   }
 
   /**
+   * Retrieve the storage policy for a given file or directory.
+   *
+   * @param src file or directory path.
+   * @return storage policy for give file.
+   * @throws IOException
+   */
+  public BlockStoragePolicySpi getStoragePolicy(final Path src)
+  throws IOException {
+    throw new UnsupportedOperationException(getClass().getSimpleName()
+        + " doesn't support getStoragePolicy");
+  }
+
+  /**
* Retrieve all the storage policies supported by this file system.
*
* @return all storage policies supported by this filesystem.
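
A hedged usage sketch of the FileSystem-level API this change introduces; the path is a placeholder, and file systems that do not support storage policies are expected to throw UnsupportedOperationException, as the AbstractFileSystem default above shows.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowStoragePolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/data/reports/2015-07.csv");  // placeholder path
      BlockStoragePolicySpi policy = fs.getStoragePolicy(file);
      System.out.println(file + " -> "
          + (policy == null ? "no explicit policy" : policy.getName()));
    }
  }
}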

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ca634cd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 0f21a61..a98d662 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -49,6 +49,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_DEFAULT;
+
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ipc.RpcClientException;
 import org.apache.hadoop.ipc.RpcServerException;
@@ -2692,6 +2693,25 @@ public class FileContext {
   }
 
   /**
+   * Query the effective storage policy ID for the given file or directory.
+   *
+ 

[13/50] [abbrv] hadoop git commit: HDFS-8495. Consolidate append() related implementation into a single class. Contributed by Rakesh R.

2015-07-27 Thread zjshen
HDFS-8495. Consolidate append() related implementation into a single class. 
Contributed by Rakesh R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/38a23484
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/38a23484
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/38a23484

Branch: refs/heads/YARN-2928
Commit: 38a234849f357e64fa78e106101bb0e491be7f4e
Parents: d484101
Author: Haohui Mai whe...@apache.org
Authored: Tue Jul 21 17:25:23 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:31 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hdfs/server/namenode/FSDirAppendOp.java | 261 +++
 .../server/namenode/FSDirStatAndListingOp.java  |   2 +-
 .../hdfs/server/namenode/FSDirTruncateOp.java   |  16 +-
 .../hdfs/server/namenode/FSDirWriteFileOp.java  |   6 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |   4 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 241 ++---
 7 files changed, 304 insertions(+), 229 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/38a23484/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8122045..50803de 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -737,6 +737,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8721. Add a metric for number of encryption zones.
 (Rakesh R via cnauroth)
 
+HDFS-8495. Consolidate append() related implementation into a single class.
+(Rakesh R via wheat9)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/38a23484/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAppendOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAppendOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAppendOp.java
new file mode 100644
index 000..abb2dc8
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAppendOp.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
+import org.apache.hadoop.hdfs.protocol.LastBlockWithStatus;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem.RecoverLeaseOp;
+import org.apache.hadoop.hdfs.server.namenode.NameNodeLayoutVersion.Feature;
+
+import com.google.common.base.Preconditions;
+
+/**
+ * Helper class to perform append operation.
+ */
+final class FSDirAppendOp {
+
+  /**
+   * Private constructor for preventing FSDirAppendOp object creation.
+   * Static-only class.
+   */
+  private FSDirAppendOp() {}
+
+  /**
+   * Append to an existing file.
+   * <p>
+   *
+   * The method returns the last block of the file if this is a partial block,
+   * which can still be used for writing more data. The client uses the
+   * returned block locations to form the data 

[37/50] [abbrv] hadoop git commit: YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container log files from full disks. Contributed by zhihai xu

2015-07-27 Thread zjshen
YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container log 
files from full disks. Contributed by zhihai xu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbb3a64c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbb3a64c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbb3a64c

Branch: refs/heads/YARN-2928
Commit: cbb3a64cfefea548b02ededac7527d9da6f29c8d
Parents: d725cf9
Author: Jason Lowe jl...@apache.org
Authored: Fri Jul 24 22:14:39 2015 +
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:37 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 +
 .../nodemanager/LocalDirsHandlerService.java| 35 +-
 .../webapp/TestContainerLogsPage.java   | 48 
 3 files changed, 83 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb3a64c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cd033f9..69f550f 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -831,6 +831,8 @@ Release 2.7.2 - UNRELEASED
 YARN-3969. Allow jobs to be submitted to reservation that is active 
 but does not have any allocations. (subru via curino)
 
+YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container
+log files from full disks. (zhihai xu via jlowe)
 
 Release 2.7.1 - 2015-07-06 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb3a64c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
index 0a61035..6709c90 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.server.nodemanager;
 
+import java.io.File;
 import java.io.IOException;
 import java.net.URI;
 import java.util.ArrayList;
@@ -31,6 +32,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.LocalDirAllocator;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
@@ -467,6 +469,35 @@ public class LocalDirsHandlerService extends 
AbstractService {
 return disksTurnedGood;
   }
 
+  private Path getPathToRead(String pathStr, List<String> dirs)
+  throws IOException {
+// remove the leading slash from the path (to make sure that the uri
+// resolution results in a valid path on the dir being checked)
+if (pathStr.startsWith("/")) {
+  pathStr = pathStr.substring(1);
+}
+
+FileSystem localFS = FileSystem.getLocal(getConfig());
+for (String dir : dirs) {
+  try {
+Path tmpDir = new Path(dir);
+File tmpFile = tmpDir.isAbsolute()
+? new File(localFS.makeQualified(tmpDir).toUri())
+: new File(dir);
+Path file = new Path(tmpFile.getPath(), pathStr);
+if (localFS.exists(file)) {
+  return file;
+}
+  } catch (IOException ie) {
+// ignore
+LOG.warn("Failed to find " + pathStr + " at " + dir, ie);
+  }
+}
+
+throw new IOException("Could not find " + pathStr + " in any of" +
+ " the directories");
+  }
+
   public Path getLocalPathForWrite(String pathStr) throws IOException {
 return localDirsAllocator.getLocalPathForWrite(pathStr, getConfig());
   }
@@ -484,9 +515,9 @@ public class LocalDirsHandlerService extends 
AbstractService {
   }
 
   public Path getLogPathToRead(String pathStr) throws IOException {
-return logDirsAllocator.getLocalPathToRead(pathStr, getConfig());
+return getPathToRead(pathStr, getLogDirsForRead());
   }
-  
+
   public static String[] validatePaths(String[] paths) {
 ArrayList<String> validPaths = new ArrayList<String>();
 for (int i = 0; i  
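
A simplified sketch of the read-path resolution this patch introduces (names and setup are illustrative; the committed method additionally qualifies each directory against the local FileSystem, as the diff above shows):

import java.io.File;
import java.io.IOException;
import java.util.List;

final class LogPathResolver {
  private LogPathResolver() {}

  // Return the first directory that actually contains the file, instead of
  // asking the allocator, which skips disks it has marked as full.
  static File resolveForRead(String relativePath, List<String> logDirs)
      throws IOException {
    for (String dir : logDirs) {
      File candidate = new File(dir, relativePath);
      if (candidate.exists()) {
        return candidate;
      }
    }
    throw new IOException("Could not find " + relativePath
        + " in any of the log directories");
  }
}

The difference from getLocalPathForWrite is deliberate: reads must still succeed from disks the allocator no longer offers for writing.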

[42/50] [abbrv] hadoop git commit: HADOOP-12237. releasedocmaker.py doesn't work behind a proxy (Tsuyoshi Ozawa via aw)

2015-07-27 Thread zjshen
HADOOP-12237. releasedocmaker.py doesn't work behind a proxy (Tsuyoshi Ozawa 
via aw)

(cherry picked from commit b41fe3111ae37478cbace2a07e6ac35a676ef978)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a02cd154
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a02cd154
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a02cd154

Branch: refs/heads/YARN-2928
Commit: a02cd1544b1de2c7a9f5056c7dbef0d965f2dbbf
Parents: cdb9a42
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 20 09:47:46 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:38 2015 -0700

--
 dev-support/releasedocmaker.py | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a02cd154/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 409d8e3..d2e5dda 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -24,6 +24,7 @@ import os
 import re
 import sys
 import urllib
+import urllib2
 try:
   import json
 except ImportError:
@@ -125,7 +126,7 @@ class GetVersions:
 versions.sort()
 print "Looking for %s through %s"%(versions[0],versions[-1])
 for p in projects:
-  resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/project/%s/versions"%p)
+  resp = 
urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/project/%s/versions"%p)
   data = json.loads(resp.read())
   for d in data:
 if d['name'][0].isdigit and versions[0] <= d['name'] and d['name'] <= 
versions[-1]:
@@ -288,7 +289,7 @@ class JiraIter:
 self.projects = projects
 v=str(version).replace(-SNAPSHOT,)
 
-resp = urllib.urlopen("https://issues.apache.org/jira/rest/api/2/field")
+resp = urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/field")
 data = json.loads(resp.read())
 
 self.fieldIdMap = {}
@@ -301,7 +302,7 @@ class JiraIter:
 count=100
 while (at  end):
   params = urllib.urlencode({'jql': "project in ('"+"' , 
'".join(projects)+"') and fixVersion in ('"+v+"') and resolution = Fixed", 
'startAt':at, 'maxResults':count})
-  resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
+  resp = 
urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
   data = json.loads(resp.read())
   if (data.has_key('errorMessages')):
 raise Exception(data['errorMessages'])
@@ -407,6 +408,10 @@ def main():
   if (len(options.versions) <= 0):
 parser.error("At least one version needs to be supplied")
 
+  proxy = urllib2.ProxyHandler()
+  opener = urllib2.build_opener(proxy)
+  urllib2.install_opener(opener)
+
   projects = options.projects
 
   if (options.range is True):



[02/50] [abbrv] hadoop git commit: HADOOP-11762. Enable swift distcp to secure HDFS (Chen He via aw)

2015-07-27 Thread zjshen
HADOOP-11762. Enable swift distcp to secure HDFS (Chen He via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/352310f2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/352310f2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/352310f2

Branch: refs/heads/YARN-2928
Commit: 352310f2d125d4eadecfff19ef3def17a4dda2d8
Parents: 88ed983
Author: Allen Wittenauer a...@apache.org
Authored: Tue Jul 21 11:19:29 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:29 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 2 ++
 .../hadoop/fs/swift/snative/SwiftNativeFileSystem.java  | 9 +
 .../apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java | 7 +++
 3 files changed, 18 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/352310f2/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 24709e0..5b51bce 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -495,6 +495,8 @@ Trunk (Unreleased)
 HADOOP-12107. long running apps may have a huge number of StatisticsData
 instances under FileSystem (Sangjin Lee via Ming Ma)
 
+HADOOP-11762. Enable swift distcp to secure HDFS (Chen He via aw)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/352310f2/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
 
b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
index e9faaf2..7f93c38 100644
--- 
a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
+++ 
b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
@@ -222,6 +222,15 @@ public class SwiftNativeFileSystem extends FileSystem {
   }
 
   /**
+   * Override getCanonicalServiceName because we don't support token in Swift
+   */
+  @Override
+  public String getCanonicalServiceName() {
+// Does not support Token
+return null;
+  }
+
+  /**
* Return an array containing hostnames, offset and size of
* portions of the given file.  For a nonexistent
* file or regions, null will be returned.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/352310f2/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java
--
diff --git 
a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java
 
b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java
index c7e8b57..c84be6b 100644
--- 
a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java
+++ 
b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemBasicOps.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.fs.swift;
 
+import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.fs.FileStatus;
@@ -286,4 +287,10 @@ public class TestSwiftFileSystemBasicOps extends 
SwiftFileSystemBaseTest {
 }
   }
 
+  @Test(timeout = SWIFT_TEST_TIMEOUT)
+  public void testGetCanonicalServiceName() {
+Assert.assertNull(fs.getCanonicalServiceName());
+  }
+
+
 }
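
For context, a hedged sketch of why returning null helps secure-HDFS distcp (simplified; the helper class and the "yarn" renewer are illustrative): token-collecting callers skip any FileSystem whose canonical service name is null, so no Swift delegation token is ever requested.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

final class TokenCollectionSketch {
  private TokenCollectionSketch() {}

  // Illustrative only: filesystems that advertise no service name are skipped.
  static void collectTokens(URI uri, Configuration conf, Credentials creds)
      throws Exception {
    FileSystem fs = FileSystem.get(uri, conf);
    if (fs.getCanonicalServiceName() != null) {
      fs.addDelegationTokens("yarn", creds);   // sample renewer name
    }
    // A Swift filesystem now returns null above, so no token is requested.
  }
}

The test added above asserts exactly this null contract.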



[24/50] [abbrv] hadoop git commit: YARN-3900. Protobuf layout of yarn_security_token causes errors in other protos that include it (adhoot via rkanter)

2015-07-27 Thread zjshen
YARN-3900. Protobuf layout of yarn_security_token causes errors in other protos 
that include it (adhoot via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/742872be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/742872be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/742872be

Branch: refs/heads/YARN-2928
Commit: 742872be0ad0a34121f5c86031fc505ee6a7a094
Parents: 472ca61
Author: Robert Kanter rkan...@apache.org
Authored: Thu Jul 23 14:42:49 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:34 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop-yarn/hadoop-yarn-common/pom.xml  |  2 +-
 .../main/proto/server/yarn_security_token.proto | 70 
 .../src/main/proto/yarn_security_token.proto| 70 
 .../pom.xml |  2 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml  |  2 +-
 .../resourcemanager/recovery/TestProtos.java| 36 ++
 7 files changed, 112 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/742872be/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a1c5fb3..71ad286 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -781,6 +781,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
 
+YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
+that include it (adhoot via rkanter)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/742872be/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 2704726..7c6e719 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -254,7 +254,7 @@
 <param>${basedir}/src/main/proto</param>
   </imports>
   <source>
-<directory>${basedir}/src/main/proto/server</directory>
+<directory>${basedir}/src/main/proto</directory>
 <includes>
   <include>yarn_security_token.proto</include>
 </includes>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/742872be/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
deleted file mode 100644
index 339e99e..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-option java_package = "org.apache.hadoop.yarn.proto";
-option java_outer_classname = "YarnSecurityTokenProtos";
-option java_generic_services = true;
-option java_generate_equals_and_hash = true;
-package hadoop.yarn;
-
-import "yarn_protos.proto";
-
-// None of the following records are supposed to be exposed to users.
-
-message NMTokenIdentifierProto {
-  optional ApplicationAttemptIdProto appAttemptId = 1;
-  optional NodeIdProto nodeId = 2;
-  optional string appSubmitter = 3;
-  optional int32 keyId = 4 [default = -1];
-}
-
-message AMRMTokenIdentifierProto {
-  optional ApplicationAttemptIdProto appAttemptId = 1;
-  optional int32 keyId = 2 [default = -1];
-}
-
-message 

[03/50] [abbrv] hadoop git commit: MAPREDUCE-5801. Uber mode's log message is missing a vcore reason (Steven Wong via aw)

2015-07-27 Thread zjshen
MAPREDUCE-5801. Uber mode's log message is missing a vcore reason  (Steven Wong 
via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88ed9835
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88ed9835
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88ed9835

Branch: refs/heads/YARN-2928
Commit: 88ed9835d37e85ebabde27879e14224ee791cd89
Parents: ec19590
Author: Allen Wittenauer a...@apache.org
Authored: Tue Jul 21 10:58:52 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:29 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/88ed9835/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 60b05c6..d70a0f3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -294,6 +294,9 @@ Trunk (Unreleased)
 
 MAPREDUCE-6078. native-task: fix gtest build on macosx (Binglin Chang)
 
+MAPREDUCE-5801. Uber mode's log message is missing a vcore reason
+(Steven Wong via aw)
+
 Release 2.8.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/88ed9835/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
index 731bcba..4c3b3fe 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
@@ -1289,6 +1289,8 @@ public class JobImpl implements 
org.apache.hadoop.mapreduce.v2.app.job.Job,
 msg.append(" too much CPU;");
   if (!smallMemory)
 msg.append(" too much RAM;");
+  if (!smallCpu)
+  msg.append(" too much CPU;");
   if (!notChainJob)
 msg.append(" chainjob;");
   LOG.info(msg.toString());



[10/50] [abbrv] hadoop git commit: HADOOP-12239. StorageException complaining " no lease ID" when updating FolderLastModifiedTime in WASB. Contributed by Duo Xu.

2015-07-27 Thread zjshen
HADOOP-12239. StorageException complaining " no lease ID" when updating 
FolderLastModifiedTime in WASB. Contributed by Duo Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/807b2225
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/807b2225
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/807b2225

Branch: refs/heads/YARN-2928
Commit: 807b2225d5434a72676e69254d11d93ad4a811d2
Parents: 500e5f3
Author: cnauroth cnaur...@apache.org
Authored: Wed Jul 22 11:16:49 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/fs/azure/NativeAzureFileSystem.java| 8 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/807b2225/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3d101d4..c0e5c92 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -995,6 +995,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12017. Hadoop archives command should use configurable replication
 factor when closing (Bibin A Chundatt via vinayakumarb)
 
+HADOOP-12239. StorageException complaining " no lease ID" when updating
+FolderLastModifiedTime in WASB. (Duo Xu via cnauroth)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/807b2225/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
index a567b33..bb9941b 100644
--- 
a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
+++ 
b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
@@ -1360,8 +1360,12 @@ public class NativeAzureFileSystem extends FileSystem {
   String parentKey = pathToKey(parentFolder);
   FileMetadata parentMetadata = store.retrieveMetadata(parentKey);
   if (parentMetadata != null && parentMetadata.isDir() &&
-  parentMetadata.getBlobMaterialization() == 
BlobMaterialization.Explicit) {
-store.updateFolderLastModifiedTime(parentKey, parentFolderLease);
+parentMetadata.getBlobMaterialization() == 
BlobMaterialization.Explicit) {
+if (parentFolderLease != null) {
+  store.updateFolderLastModifiedTime(parentKey, parentFolderLease);
+} else {
+  updateParentFolderLastModifiedTime(key);
+}
   } else {
 // Make sure that the parent folder exists.
 // Create it using inherited permissions from the first existing 
directory going up the path



[25/50] [abbrv] hadoop git commit: HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)

2015-07-27 Thread zjshen
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/875458a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/875458a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/875458a7

Branch: refs/heads/YARN-2928
Commit: 875458a7183f1ea79fcebe82a148aaa1dc05ebaa
Parents: 742872b
Author: Jakob Homan jgho...@gmail.com
Authored: Thu Jul 23 17:46:13 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:34 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 +++
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 17 -
 .../src/site/markdown/filesystem/filesystem.md |  4 
 .../hadoop/fs/FileSystemContractBaseTest.java  | 11 ---
 4 files changed, 31 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/875458a7/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6c18add..56edcac 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -497,6 +497,9 @@ Trunk (Unreleased)
 
 HADOOP-11762. Enable swift distcp to secure HDFS (Chen He via aw)
 
+HADOOP-12009. Clarify FileSystem.listStatus() sorting order & fix
+FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/875458a7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index a01d3ea..8f32644 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -1501,7 +1501,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* List the statuses of the files/directories in the given path if the path 
is
* a directory.
-   * 
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
@@ -1543,6 +1545,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given path using the user-supplied path
* filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param f
*  a path name
@@ -1563,6 +1568,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given list of paths using default
* path filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param files
*  a list of paths
@@ -1579,6 +1587,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given list of paths using user-supplied
* path filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param files
*  a list of paths
@@ -1739,6 +1750,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* while consuming the entries. Each file system implementation should
* override this method and provide a more efficient implementation, if
* possible. 
+   * Does not guarantee to return the iterator that traverses statuses
+   * of the files in a sorted order.
*
* @param p target path
* @return remote iterator
@@ -1766,6 +1779,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
 
   /**
* List the statuses and block locations of the files in the given path.
+   * Does not guarantee to return the iterator that traverses statuses
+   * of the files in a sorted order.
* 
* If the path is a directory, 
*   if recursive is false, returns files in the directory;
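
Given the clarified contract, callers that need a stable order can sort on their side; a minimal sketch (relies on FileStatus ordering by path via Comparable):

import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class SortedListing {
  private SortedListing() {}

  // listStatus() makes no ordering promise, so impose the order explicitly.
  static FileStatus[] listSorted(FileSystem fs, Path dir) throws IOException {
    FileStatus[] stats = fs.listStatus(dir);
    Arrays.sort(stats);   // FileStatus compares by path
    return stats;
  }
}

This is also why the contract test touched above no longer assumes a particular listing order from the server.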


[46/50] [abbrv] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d32b8b9c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
new file mode 100644
index 000..9a0a0f0
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
@@ -0,0 +1,207 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.reservation.planning;
+
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.yarn.api.records.ReservationDefinition;
+import org.apache.hadoop.yarn.api.records.ReservationId;
+import org.apache.hadoop.yarn.api.records.Resource;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryReservationAllocation;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.Plan;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.ContractValidationException;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
+
+/**
+ * An abstract class that follows the general behavior of planning algorithms.
+ */
+public abstract class PlanningAlgorithm implements ReservationAgent {
+
+  /**
+   * Performs the actual allocation for a ReservationDefinition within a Plan.
+   *
+   * @param reservationId the identifier of the reservation
+   * @param user the user who owns the reservation
+   * @param plan the Plan to which the reservation must be fitted
+   * @param contract encapsulates the resources required by the user for his
+   *  session
+   * @param oldReservation the existing reservation (null if none)
+   * @return whether the allocateUser function was successful or not
+   *
+   * @throws PlanningException if the session cannot be fitted into the plan
+   * @throws ContractValidationException
+   */
+  protected boolean allocateUser(ReservationId reservationId, String user,
+  Plan plan, ReservationDefinition contract,
+  ReservationAllocation oldReservation) throws PlanningException,
+  ContractValidationException {
+
+// Adjust the ResourceDefinition to account for system imperfections
+// (e.g., scheduling delays for large containers).
+ReservationDefinition adjustedContract = adjustContract(plan, contract);
+
+// Compute the job allocation
+RLESparseResourceAllocation allocation =
+computeJobAllocation(plan, reservationId, adjustedContract);
+
+// If no job allocation was found, fail
+if (allocation == null) {
+  throw new PlanningException(
+  "The planning algorithm could not find a valid allocation"
+  + " for your request");
+}
+
+// Translate the allocation to a map (with zero paddings)
+long step = plan.getStep();
+long jobArrival = stepRoundUp(adjustedContract.getArrival(), step);
+long jobDeadline = stepRoundUp(adjustedContract.getDeadline(), step);
+Map<ReservationInterval, Resource> mapAllocations =
+allocationsToPaddedMap(allocation, jobArrival, jobDeadline);
+
+// Create the reservation
+ReservationAllocation capReservation =
+new InMemoryReservationAllocation(reservationId, // ID
+adjustedContract, // Contract
+user, // User name
+
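
The stepRoundUp/stepRoundDown helpers referenced in allocateUser are not included in this excerpt; a plausible implementation of the step alignment they perform (an assumption, not the committed code):

final class StepRounding {
  private StepRounding() {}

  // Align a timestamp up to the next multiple of the plan step, so a job's
  // arrival and deadline snap onto the plan's time quantization.
  static long stepRoundUp(long t, long step) {
    return ((t + step - 1) / step) * step;
  }

  // Align a timestamp down to the previous multiple of the plan step.
  static long stepRoundDown(long t, long step) {
    return (t / step) * step;
  }
}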

[48/50] [abbrv] hadoop git commit: YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api module. Contributed by Varun Saxena.

2015-07-27 Thread zjshen
YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api 
module. Contributed by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/64efacf2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/64efacf2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/64efacf2

Branch: refs/heads/YARN-2928
Commit: 64efacf261ad99688973081f3b137349e8b036e0
Parents: d32b8b9
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Jul 27 11:43:25 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:38 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |  34 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 +++
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 ---
 4 files changed, 173 insertions(+), 136 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/64efacf2/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 611fd4b..f2df960 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -800,6 +800,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3973. Recent changes to application priority management break 
 reservation system from YARN-1051. (Carlo Curino via wangda)
 
+YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api
+module. (Varun Saxena via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64efacf2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index c5e98b5..ed74a44 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -66,9 +66,31 @@
   <groupId>com.google.protobuf</groupId>
   <artifactId>protobuf-java</artifactId>
 </dependency>
+
+<dependency>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-common</artifactId>
+  <type>test-jar</type>
+  <scope>test</scope>
+</dependency>
+
+<dependency>
+  <groupId>junit</groupId>
+  <artifactId>junit</artifactId>
+  <scope>test</scope>
+</dependency>
   </dependencies>
 
   <build>
+<resources>
+  <resource>
+
<directory>${basedir}/../hadoop-yarn-common/src/main/resources</directory>
+<includes>
+  <include>yarn-default.xml</include>
+</includes>
+<filtering>false</filtering>
+  </resource>
+</resources>
 <plugins>
   <plugin>
 <groupId>org.apache.hadoop</groupId>
@@ -109,6 +131,18 @@
   </execution>
 </executions>
   </plugin>
+
+  <plugin>
+<artifactId>maven-jar-plugin</artifactId>
+<executions>
+  <execution>
+<goals>
+  <goal>test-jar</goal>
+</goals>
+<phase>test-compile</phase>
+  </execution>
+</executions>
+  </plugin>
 </plugins>
   </build>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/64efacf2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
new file mode 100644
index 000..e89a90d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 

[30/50] [abbrv] hadoop git commit: YARN-3969. Allow jobs to be submitted to reservation that is active but does not have any allocations. (subru via curino)

2015-07-27 Thread zjshen
YARN-3969. Allow jobs to be submitted to reservation that is active but does 
not have any allocations. (subru via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/21c9cb81
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/21c9cb81
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/21c9cb81

Branch: refs/heads/YARN-2928
Commit: 21c9cb81d02f90684e6be288da319737bc7c2560
Parents: c26ca41
Author: carlo curino Carlo Curino
Authored: Thu Jul 23 19:33:59 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:35 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../scheduler/capacity/ReservationQueue.java|  4 ---
 .../capacity/TestReservationQueue.java  | 26 +++-
 3 files changed, 17 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/21c9cb81/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 2192811..5c6cf3c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -979,6 +979,9 @@ Release 2.7.1 - 2015-07-06
 YARN-3850. NM fails to read files from full disks which can lead to
 container logs being lost and other issues (Varun Saxena via jlowe)
 
+YARN-3969. Allow jobs to be submitted to reservation that is active 
+but does not have any allocations. (subru via curino)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/21c9cb81/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
index 4790cc7..976cf8c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
@@ -39,12 +39,9 @@ public class ReservationQueue extends LeafQueue {
 
   private PlanQueue parent;
 
-  private int maxSystemApps;
-
   public ReservationQueue(CapacitySchedulerContext cs, String queueName,
   PlanQueue parent) throws IOException {
 super(cs, queueName, parent, null);
-maxSystemApps = cs.getConfiguration().getMaximumSystemApplications();
 // the following parameters are common to all reservation in the plan
 updateQuotas(parent.getUserLimitForReservation(),
 parent.getUserLimitFactor(),
@@ -89,7 +86,6 @@ public class ReservationQueue extends LeafQueue {
 }
 setCapacity(capacity);
 setAbsoluteCapacity(getParent().getAbsoluteCapacity() * getCapacity());
-setMaxApplications((int) (maxSystemApps * getAbsoluteCapacity()));
 // note: we currently set maxCapacity to capacity
 // this might be revised later
 setMaxCapacity(entitlement.getMaxCapacity());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/21c9cb81/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
index 4e6c73d..e23e93c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
@@ -18,6 +18,7 @@
 
 

[39/50] [abbrv] hadoop git commit: YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda)

2015-07-27 Thread zjshen
YARN-3973. Recent changes to application priority management break reservation 
system from YARN-1051 (Carlo Curino via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27269108
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27269108
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27269108

Branch: refs/heads/YARN-2928
Commit: 27269108114b068b25d3f1cdf84e21f85a7c0842
Parents: cbb3a64
Author: Wangda Tan wan...@apache.org
Authored: Fri Jul 24 16:44:18 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:37 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt| 6 +-
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java  | 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27269108/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 69f550f..fa364f1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -782,7 +782,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev 
via wangda)
 
-YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
+YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate 
+PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
 
 YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
 that include it (adhoot via rkanter)
@@ -793,6 +794,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3957. FairScheduler NPE In FairSchedulerQueueInfo causing scheduler 
page to 
 return 500. (Anubhav Dhoot via kasha)
 
+YARN-3973. Recent changes to application priority management break 
+reservation system from YARN-1051. (Carlo Curino via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27269108/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 68e608a..0b39d35 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1867,7 +1867,7 @@ public class CapacityScheduler extends
 
   private Priority getDefaultPriorityForQueue(String queueName) {
 Queue queue = getQueue(queueName);
-if (null == queue) {
+if (null == queue || null == queue.getDefaultApplicationPriority()) {
   // Return with default application priority
   return Priority.newInstance(CapacitySchedulerConfiguration
   .DEFAULT_CONFIGURATION_APPLICATION_PRIORITY);



[40/50] [abbrv] hadoop git commit: HADOOP-11807. add a lint mode to releasedocmaker (ramtin via aw)

2015-07-27 Thread zjshen
HADOOP-11807. add a lint mode to releasedocmaker (ramtin via aw)

(cherry picked from commit 8e657fba2fd33f7550597ea9c4c6e9a87aa1ef1c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba982062
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba982062
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba982062

Branch: refs/heads/YARN-2928
Commit: ba9820629f2643bf90db9e79b029bf4e09a003f3
Parents: 2726910
Author: Allen Wittenauer a...@apache.org
Authored: Sat Jun 27 08:59:50 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:37 2015 -0700

--
 dev-support/releasedocmaker.py  | 76 +---
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 +
 2 files changed, 68 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba982062/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 2ccc1c0..8e68b3c 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -87,8 +87,15 @@ def notableclean(str):
   str=str.rstrip()
   return str
 
+# clean output dir
+def cleanOutputDir(dir):
+files = os.listdir(dir)
+for name in files:
+os.remove(os.path.join(dir,name))
+os.rmdir(dir)
+
 def mstr(obj):
-  if (obj == None):
+  if (obj is None):
 return ""
   return unicode(obj)
 
@@ -148,7 +155,7 @@ class Jira:
 return mstr(self.fields['description'])
 
   def getReleaseNote(self):
-if (self.notes == None):
+if (self.notes is None):
   field = self.parent.fieldIdMap['Release Note']
   if (self.fields.has_key(field)):
 self.notes=mstr(self.fields[field])
@@ -159,14 +166,14 @@ class Jira:
   def getPriority(self):
 ret = ""
 pri = self.fields['priority']
-if(pri != None):
+if(pri is not None):
   ret = pri['name']
 return mstr(ret)
 
   def getAssignee(self):
 ret = ""
 mid = self.fields['assignee']
-if(mid != None):
+if(mid is not None):
   ret = mid['displayName']
 return mstr(ret)
 
@@ -182,21 +189,21 @@ class Jira:
   def getType(self):
 ret = ""
 mid = self.fields['issuetype']
-if(mid != None):
+if(mid is not None):
   ret = mid['name']
 return mstr(ret)
 
   def getReporter(self):
 ret = ""
 mid = self.fields['reporter']
-if(mid != None):
+if(mid is not None):
   ret = mid['displayName']
 return mstr(ret)
 
   def getProject(self):
 ret = ""
 mid = self.fields['project']
-if(mid != None):
+if(mid is not None):
   ret = mid['key']
 return mstr(ret)
 
@@ -214,7 +221,7 @@ class Jira:
 return False
 
   def getIncompatibleChange(self):
-if (self.incompat == None):
+if (self.incompat is None):
   field = self.parent.fieldIdMap['Hadoop Flags']
   self.reviewed=False
   self.incompat=False
@@ -227,6 +234,24 @@ class Jira:
   self.reviewed=True
 return self.incompat
 
+  def checkMissingComponent(self):
+  if (len(self.fields['components'])>0):
+  return False
+  return True
+
+  def checkMissingAssignee(self):
+  if (self.fields['assignee'] is not None):
+  return False
+  return True
+
+  def checkVersionString(self):
+  field = self.parent.fieldIdMap['Fix Version/s']
+  for h in self.fields[field]:
+  found = re.match('^((\d+)(\.\d+)*).*$|^(\w+\-\d+)$', h['name'])
+  if not found:
+  return True
+  return False
+
   def getReleaseDate(self,version):
 for j in range(len(self.fields['fixVersions'])):
   if self.fields['fixVersions'][j]==version:
@@ -339,9 +364,11 @@ def main():
  help="build an index file")
   parser.add_option("-u","--usetoday", dest="usetoday", action="store_true",
  help="use current date for unreleased versions")
+  parser.add_option("-n","--lint", dest="lint", action="store_true",
+ help="use lint flag to exit on failures")
   (options, args) = parser.parse_args()
 
-  if (options.versions == None):
+  if (options.versions is None):
 options.versions = []
 
   if (len(args)  2):
@@ -396,6 +423,9 @@ def main():
   reloutputs.writeAll(relhead)
   choutputs.writeAll(chhead)
 
+  errorCount=0
+  warningCount=0
  lintMessage=""
   incompatlist=[]
   buglist=[]
   improvementlist=[]
@@ -408,6 +438,14 @@ def main():
   for jira in sorted(jlist):
 if jira.getIncompatibleChange():
   incompatlist.append(jira)
+  if (len(jira.getReleaseNote())==0):
+  warningCount+=1
+
+if jira.checkVersionString():
+   warningCount+=1
+
+if jira.checkMissingComponent() or jira.checkMissingAssignee():
+  errorCount+=1
 elif jira.getType() == 

[33/50] [abbrv] hadoop git commit: YARN-3969. Updating CHANGES.txt to reflect the correct set of branches where this is committed

2015-07-27 Thread zjshen
YARN-3969. Updating CHANGES.txt to reflect the correct set of branches where 
this is committed


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c2ae54e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c2ae54e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c2ae54e

Branch: refs/heads/YARN-2928
Commit: 0c2ae54eab301cb479b61a1c1f97045395c18427
Parents: c310745
Author: carlo curino Carlo Curino
Authored: Fri Jul 24 13:38:44 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:36 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c2ae54e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3525da7..fb21bf9 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -825,6 +825,10 @@ Release 2.7.2 - UNRELEASED
 YARN-3878. AsyncDispatcher can hang while stopping if it is configured for
 draining events on stop. (Varun Saxena via jianhe)
 
+YARN-3969. Allow jobs to be submitted to reservation that is active 
+but does not have any allocations. (subru via curino)
+
+
 Release 2.7.1 - 2015-07-06 
 
   INCOMPATIBLE CHANGES
@@ -985,8 +989,6 @@ Release 2.7.1 - 2015-07-06
 YARN-3850. NM fails to read files from full disks which can lead to
 container logs being lost and other issues (Varun Saxena via jlowe)
 
-YARN-3969. Allow jobs to be submitted to reservation that is active 
-but does not have any allocations. (subru via curino)
 
 Release 2.7.0 - 2015-04-20
 



[41/50] [abbrv] hadoop git commit: HADOOP-12202. releasedocmaker drops missing component and assignee entries (aw)

2015-07-27 Thread zjshen
HADOOP-12202. releasedocmaker drops missing component and assignee entries (aw)

(cherry picked from commit adbacf7010373dbe6df239688b4cebd4a93a69e4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cdb9a426
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cdb9a426
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cdb9a426

Branch: refs/heads/YARN-2928
Commit: cdb9a426b8ce40263f4ab6cac5cb0b486012dcf2
Parents: ae2bda5
Author: Allen Wittenauer a...@apache.org
Authored: Tue Jul 7 14:30:32 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:38 2015 -0700

--
 dev-support/releasedocmaker.py | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cdb9a426/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 6e01260..409d8e3 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -420,6 +420,8 @@ def main():
   else:
 title=options.title
 
+  haderrors=False
+
   for v in versions:
 vstr=str(v)
 jlist = JiraIter(vstr,projects)
@@ -468,14 +470,6 @@ def main():
 for jira in sorted(jlist):
   if jira.getIncompatibleChange():
 incompatlist.append(jira)
-if (len(jira.getReleaseNote())==0):
-warningCount+=1
-
-  if jira.checkVersionString():
- warningCount+=1
-
-  if jira.checkMissingComponent() or jira.checkMissingAssignee():
-errorCount+=1
   elif jira.getType() == Bug:
 buglist.append(jira)
   elif jira.getType() == Improvement:
@@ -496,6 +490,7 @@ def main():
  notableclean(jira.getSummary()))
 
   if (jira.getIncompatibleChange()) and (len(jira.getReleaseNote())==0):
+warningCount+=1
 reloutputs.writeKeyRaw(jira.getProject(),"\n---\n\n")
 reloutputs.writeKeyRaw(jira.getProject(), line)
 line ='\n**WARNING: No release note provided for this incompatible 
change.**\n\n'
@@ -503,9 +498,11 @@ def main():
 reloutputs.writeKeyRaw(jira.getProject(), line)
 
   if jira.checkVersionString():
+  warningCount+=1
+  lintMessage += "\nWARNING: Version string problem for %s " % 
jira.getId()
 
   if (jira.checkMissingComponent() or jira.checkMissingAssignee()):
+  errorCount+=1
   errorMessage=[]
   jira.checkMissingComponent() and errorMessage.append("component")
   jira.checkMissingAssignee() and errorMessage.append("assignee")
@@ -520,11 +517,11 @@ def main():
 if (options.lint is True):
 print lintMessage
 print "==="
-print "Error:%d, Warning:%d \n" % (errorCount, warningCount)
-
+print "%s: Error:%d, Warning:%d \n" % (vstr, errorCount, warningCount)
 if (errorCount>0):
-cleanOutputDir(version)
-sys.exit(1)
+   haderrors=True
+   cleanOutputDir(vstr)
+   continue
 
 reloutputs.writeAll("\n\n")
 reloutputs.close()
@@ -571,5 +568,8 @@ def main():
   if options.index:
 buildindex(title,options.license)
 
+  if haderrors is True:
+sys.exit(1)
+
 if __name__ == "__main__":
   main()



[34/50] [abbrv] hadoop git commit: HDFS-8735. Inotify: All events classes should implement toString() API. Contributed by Surendra Singh Lilhore.

2015-07-27 Thread zjshen
HDFS-8735. Inotify: All events classes should implement toString() API. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a52fbe9e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a52fbe9e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a52fbe9e

Branch: refs/heads/YARN-2928
Commit: a52fbe9e5cf5936b50511dc06f6b7e4bfa848e08
Parents: 40b4f96
Author: Akira Ajisaka aajis...@apache.org
Authored: Sat Jul 25 02:56:55 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:36 2015 -0700

--
 .../org/apache/hadoop/hdfs/inotify/Event.java   | 95 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/TestDFSInotifyEventInputStream.java| 26 ++
 3 files changed, 124 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a52fbe9e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
index dee17a9..6f2b5e2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
@@ -51,6 +51,7 @@ public abstract class Event {
   /**
* Sent when a file is closed after append or create.
*/
+  @InterfaceAudience.Public
   public static class CloseEvent extends Event {
 private String path;
 private long fileSize;
@@ -81,11 +82,20 @@ public abstract class Event {
 public long getTimestamp() {
   return timestamp;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  return "CloseEvent [path=" + path + ", fileSize=" + fileSize
+  + ", timestamp=" + timestamp + "]";
+}
+
   }
 
   /**
* Sent when a new file is created (including overwrite).
*/
+  @InterfaceAudience.Public
   public static class CreateEvent extends Event {
 
 public static enum INodeType {
@@ -232,6 +242,25 @@ public abstract class Event {
 public long getDefaultBlockSize() {
   return defaultBlockSize;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  StringBuilder content = new StringBuilder();
+  content.append("CreateEvent [INodeType=" + iNodeType + ", path=" + path
+  + ", ctime=" + ctime + ", replication=" + replication
+  + ", ownerName=" + ownerName + ", groupName=" + groupName
+  + ", perms=" + perms + ", ");
+
+  if (symlinkTarget != null) {
+content.append("symlinkTarget=" + symlinkTarget + ", ");
+  }
+
+  content.append("overwrite=" + overwrite + ", defaultBlockSize="
+  + defaultBlockSize + "]");
+  return content.toString();
+}
+
   }
 
   /**
@@ -242,6 +271,7 @@ public abstract class Event {
* metadataType of the MetadataUpdateEvent will be null or will have their 
default
* values.
*/
+  @InterfaceAudience.Public
   public static class MetadataUpdateEvent extends Event {
 
 public static enum MetadataType {
@@ -400,11 +430,45 @@ public abstract class Event {
   return xAttrsRemoved;
 }
 
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  StringBuilder content = new StringBuilder();
+  content.append("MetadataUpdateEvent [path=" + path + ", metadataType="
+  + metadataType);
+  switch (metadataType) {
+  case TIMES:
+content.append(", mtime=" + mtime + ", atime=" + atime);
+break;
+  case REPLICATION:
+content.append(", replication=" + replication);
+break;
+  case OWNER:
+content.append(", ownerName=" + ownerName
++ ", groupName=" + groupName);
+break;
+  case PERMS:
+content.append(", perms=" + perms);
+break;
+  case ACLS:
+content.append(", acls=" + acls);
+break;
+  case XATTRS:
+content.append(", xAttrs=" + xAttrs + ", xAttrsRemoved="
++ xAttrsRemoved);
+break;
+  default:
+break;
+  }
+  content.append(']');
+  return content.toString();
+}
   }
 
   /**
* Sent when a file, directory, or symlink is renamed.
*/
+  @InterfaceAudience.Public
   public static class RenameEvent extends Event {
 private String srcPath;
 private String dstPath;
@@ -456,11 +520,20 @@ public abstract class Event {
 public long getTimestamp() {
   return timestamp;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+ 
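
A short sketch of where these toString() implementations pay off when consuming the inotify stream (class and method names are the public HDFS client API; error handling omitted):

import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

final class InotifyLogger {
  private InotifyLogger() {}

  // Drain the currently available events and log them; Event.toString() now
  // yields a readable one-line description instead of the Object default.
  static void logPendingEvents(HdfsAdmin admin) throws Exception {
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    EventBatch batch;
    while ((batch = stream.poll()) != null) {
      for (Event event : batch.getEvents()) {
        System.out.println("inotify: " + event);
      }
    }
  }
}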

[28/50] [abbrv] hadoop git commit: YARN-3967. Fetch the application report from the AHS if the RM does not know about it. Contributed by Mit Desai

2015-07-27 Thread zjshen
YARN-3967. Fetch the application report from the AHS if the RM does not
know about it. Contributed by Mit Desai


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/40b4f96b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/40b4f96b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/40b4f96b

Branch: refs/heads/YARN-2928
Commit: 40b4f96b1a28e39e13eea9ce6dde67a1fee29a32
Parents: 5e1eb48
Author: Xuan xg...@apache.org
Authored: Fri Jul 24 10:15:54 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 12:57:35 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../yarn/server/webproxy/AppReportFetcher.java  |  79 +++--
 .../server/webproxy/TestAppReportFetcher.java   | 117 +++
 3 files changed, 187 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/40b4f96b/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 5c6cf3c..44aa3aa 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -798,6 +798,9 @@ Release 2.7.2 - UNRELEASED
 YARN-3170. YARN architecture document needs updating. (Brahma Reddy Battula
 via ozawa)
 
+YARN-3967. Fetch the application report from the AHS if the RM does not 
know about it.
+(Mit Desai via xgong)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/40b4f96b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/AppReportFetcher.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/AppReportFetcher.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/AppReportFetcher.java
index 5c93413..6aa43eb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/AppReportFetcher.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/AppReportFetcher.java
@@ -24,11 +24,15 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
+import org.apache.hadoop.yarn.api.ApplicationHistoryProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationReportResponse;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationReport;
+import org.apache.hadoop.yarn.client.AHSProxy;
 import org.apache.hadoop.yarn.client.ClientRMProxy;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.factories.RecordFactory;
@@ -41,38 +45,73 @@ public class AppReportFetcher {
   private static final Log LOG = LogFactory.getLog(AppReportFetcher.class);
   private final Configuration conf;
   private final ApplicationClientProtocol applicationsManager;
+  private final ApplicationHistoryProtocol historyManager;
   private final RecordFactory recordFactory = 
RecordFactoryProvider.getRecordFactory(null);
-  
+  private boolean isAHSEnabled;
+
   /**
-   * Create a new Connection to the RM to fetch Application reports.
+   * Create a new Connection to the RM/Application History Server
+   * to fetch Application reports.
* @param conf the conf to use to know where the RM is.
*/
   public AppReportFetcher(Configuration conf) {
+if (conf.getBoolean(YarnConfiguration.APPLICATION_HISTORY_ENABLED,
+YarnConfiguration.DEFAULT_APPLICATION_HISTORY_ENABLED)) {
+  isAHSEnabled = true;
+}
 this.conf = conf;
 try {
   applicationsManager = ClientRMProxy.createRMProxy(conf,
   ApplicationClientProtocol.class);
+  if (isAHSEnabled) {
+historyManager = getAHSProxy(conf);
+  } else {
+this.historyManager = null;
+  }
 } catch (IOException e) {
   throw new YarnRuntimeException(e);
 }
   }
   
   /**
-   * Just call directly into the applicationsManager given instead of creating
-   * a remote connection to it.  This is mostly for 
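The fields added above (isAHSEnabled, historyManager) support a fetch-with-fallback pattern: ask the ResourceManager first, and only consult the Application History Server when the RM no longer knows the application. A rough sketch of that shape, using the fields from this diff; the method name fetchReport is made up and this is not the literal patch text.

private GetApplicationReportResponse fetchReport(ApplicationId appId)
    throws YarnException, IOException {
  GetApplicationReportRequest request =
      recordFactory.newRecordInstance(GetApplicationReportRequest.class);
  request.setApplicationId(appId);
  try {
    // Ask the ResourceManager first.
    return applicationsManager.getApplicationReport(request);
  } catch (ApplicationNotFoundException e) {
    if (!isAHSEnabled || historyManager == null) {
      // No history server to fall back to; surface the original failure.
      throw e;
    }
    // The RM has forgotten the application; ask the Application History Server.
    return historyManager.getApplicationReport(request);
  }
}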

[50/50] [abbrv] hadoop git commit: Fixed the compilation failure caused by YARN-3925.

2015-07-27 Thread zjshen
Fixed the compilation failure caused by YARN-3925.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a7153ade
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a7153ade
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a7153ade

Branch: refs/heads/YARN-2928
Commit: a7153ade70fe90d65d5aa7bac249a54493ec71d4
Parents: c6f119d
Author: Zhijie Shen zjs...@apache.org
Authored: Mon Jul 27 13:06:31 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 13:06:31 2015 -0700

--
 .../yarn/server/nodemanager/webapp/TestContainerLogsPage.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a7153ade/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
index 30f7984..524364d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestContainerLogsPage.java
@@ -158,7 +158,7 @@ public class TestContainerLogsPage {
 LocalDirsHandlerService dirsHandler = new LocalDirsHandlerService();
 dirsHandler.init(conf);
 NMContext nmContext = new NodeManager.NMContext(null, null, dirsHandler,
-new ApplicationACLsManager(conf), new NMNullStateStoreService());
+new ApplicationACLsManager(conf), new NMNullStateStoreService(), conf);
 // Add an application and the corresponding containers
 String user = "nobody";
 long clusterTimeStamp = 1234;



[2/2] hadoop git commit: YARN-3852. Add docker container support to container-executor. Contributed by Abin Shahab.

2015-07-27 Thread vvasudev
YARN-3852. Add docker container support to container-executor. Contributed by 
Abin Shahab.

(cherry picked from commit f36835ff9b878fa20fe58a30f9d1e8c47702d6d2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec0f801f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec0f801f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec0f801f

Branch: refs/heads/branch-2
Commit: ec0f801f52c265c1def98eca4d66bdd02e24c595
Parents: 1cf5e40
Author: Varun Vasudev vvasu...@apache.org
Authored: Mon Jul 27 10:12:30 2015 -0700
Committer: Varun Vasudev vvasu...@apache.org
Committed: Mon Jul 27 10:18:06 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../container-executor/impl/configuration.c |  17 +-
 .../container-executor/impl/configuration.h |   2 +
 .../impl/container-executor.c   | 417 ---
 .../impl/container-executor.h   |  25 +-
 .../main/native/container-executor/impl/main.c  |  97 -
 6 files changed, 480 insertions(+), 81 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec0f801f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6926731..993828d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -95,6 +95,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
 (Jonathan Yaniv and Ishai Menache via curino)
 
+YARN-3852. Add docker container support to container-executor
+(Abin Shahab via vvasudev)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec0f801f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
index eaa1f19..2825367 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
@@ -291,27 +291,23 @@ char ** get_values(const char * key) {
   return extract_values(value);
 }
 
-/**
- * Extracts array of values from the '%' separated list of values.
- */
-char ** extract_values(char *value) {
+char ** extract_values_delim(char *value, const char *delim) {
   char ** toPass = NULL;
   char *tempTok = NULL;
   char *tempstr = NULL;
   int size = 0;
   int toPassSize = MAX_SIZE;
-
   //first allocate any array of 10
   if(value != NULL) {
 toPass = (char **) malloc(sizeof(char *) * toPassSize);
-tempTok = strtok_r((char *)value, "%", &tempstr);
+tempTok = strtok_r((char *)value, delim, &tempstr);
 while (tempTok != NULL) {
   toPass[size++] = tempTok;
   if(size == toPassSize) {
 toPassSize += MAX_SIZE;
 toPass = (char **) realloc(toPass,(sizeof(char *) * toPassSize));
   }
-  tempTok = strtok_r(NULL, "%", &tempstr);
+  tempTok = strtok_r(NULL, delim, &tempstr);
 }
   }
   if (toPass != NULL) {
@@ -320,6 +316,13 @@ char ** extract_values(char *value) {
   return toPass;
 }
 
+/**
+ * Extracts array of values from the '%' separated list of values.
+ */
+char ** extract_values(char *value) {
+  extract_values_delim(value, "%");
+}
+
 // free an entry set of values
 void free_values(char** values) {
   if (*values != NULL) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec0f801f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
index 133e67b..390a5b5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
+++ 

[2/4] hadoop git commit: YARN-3853. Add docker container runtime support to LinuxContainterExecutor. Contributed by Sidharta Seethana.

2015-07-27 Thread vvasudev
YARN-3853. Add docker container runtime support to LinuxContainterExecutor. 
Contributed by Sidharta Seethana.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e6fce91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e6fce91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e6fce91

Branch: refs/heads/trunk
Commit: 3e6fce91a471b4a5099de109582e7c6417e8a822
Parents: f36835f
Author: Varun Vasudev vvasu...@apache.org
Authored: Mon Jul 27 11:57:40 2015 -0700
Committer: Varun Vasudev vvasu...@apache.org
Committed: Mon Jul 27 11:57:40 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   4 +
 .../server/nodemanager/ContainerExecutor.java   |  23 +-
 .../nodemanager/DefaultContainerExecutor.java   |   2 +-
 .../nodemanager/DockerContainerExecutor.java|   2 +-
 .../nodemanager/LinuxContainerExecutor.java | 222 +++
 .../launcher/ContainerLaunch.java   |  15 +
 .../linux/privileged/PrivilegedOperation.java   |  46 +++-
 .../PrivilegedOperationException.java   |  30 +-
 .../privileged/PrivilegedOperationExecutor.java |  30 +-
 .../linux/resources/CGroupsHandler.java |   8 +
 .../linux/resources/CGroupsHandlerImpl.java |  12 +-
 .../runtime/DefaultLinuxContainerRuntime.java   | 148 ++
 .../DelegatingLinuxContainerRuntime.java| 110 
 .../runtime/DockerLinuxContainerRuntime.java| 273 +++
 .../linux/runtime/LinuxContainerRuntime.java|  38 +++
 .../runtime/LinuxContainerRuntimeConstants.java |  69 +
 .../linux/runtime/docker/DockerClient.java  |  82 ++
 .../linux/runtime/docker/DockerCommand.java |  66 +
 .../linux/runtime/docker/DockerLoadCommand.java |  30 ++
 .../linux/runtime/docker/DockerRunCommand.java  | 107 
 .../runtime/ContainerExecutionException.java|  85 ++
 .../runtime/ContainerRuntime.java   |  50 
 .../runtime/ContainerRuntimeConstants.java  |  33 +++
 .../runtime/ContainerRuntimeContext.java| 105 +++
 .../executor/ContainerLivenessContext.java  |  13 +
 .../executor/ContainerReacquisitionContext.java |  13 +
 .../executor/ContainerSignalContext.java|  13 +
 .../executor/ContainerStartContext.java |  23 +-
 .../TestLinuxContainerExecutorWithMocks.java| 118 +---
 .../TestPrivilegedOperationExecutor.java|   8 +-
 .../runtime/TestDockerContainerRuntime.java | 219 +++
 31 files changed, 1815 insertions(+), 182 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 4e54aea..534c55a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -153,6 +153,10 @@ Release 2.8.0 - UNRELEASED
 YARN-3852. Add docker container support to container-executor
 (Abin Shahab via vvasudev)
 
+YARN-3853. Add docker container runtime support to LinuxContainterExecutor.
+(Sidharta Seethana via vvasudev)
+
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 79f9b0d..68bfbbf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -24,8 +24,10 @@ import java.io.OutputStream;
 import java.io.PrintStream;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import 

[3/4] hadoop git commit: YARN-3853. Add docker container runtime support to LinuxContainterExecutor. Contributed by Sidharta Seethana.

2015-07-27 Thread vvasudev
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9da487e0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
new file mode 100644
index 000..f9a890e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
@@ -0,0 +1,107 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  License); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an AS IS BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ * /
+ */
+
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;
+
+import org.apache.hadoop.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class DockerRunCommand extends DockerCommand {
+  private static final String RUN_COMMAND = "run";
+  private final String image;
+  private List<String> overrrideCommandWithArgs;
+
+  /** The following are mandatory: */
+  public DockerRunCommand(String containerId, String user, String image) {
+super(RUN_COMMAND);
+super.addCommandArguments("--name=" + containerId, "--user=" + user);
+this.image = image;
+  }
+
+  public DockerRunCommand removeContainerOnExit() {
+super.addCommandArguments("--rm");
+return this;
+  }
+
+  public DockerRunCommand detachOnRun() {
+super.addCommandArguments("-d");
+return this;
+  }
+
+  public DockerRunCommand setContainerWorkDir(String workdir) {
+super.addCommandArguments("--workdir=" + workdir);
+return this;
+  }
+
+  public DockerRunCommand setNetworkType(String type) {
+super.addCommandArguments("--net=" + type);
+return this;
+  }
+
+  public DockerRunCommand addMountLocation(String sourcePath, String
+  destinationPath) {
+super.addCommandArguments("-v", sourcePath + ":" + destinationPath);
+return this;
+  }
+
+  public DockerRunCommand setCGroupParent(String parentPath) {
+super.addCommandArguments("--cgroup-parent=" + parentPath);
+return this;
+  }
+
+  public DockerRunCommand addDevice(String sourceDevice, String
+  destinationDevice) {
+super.addCommandArguments("--device=" + sourceDevice + ":" +
+destinationDevice);
+return this;
+  }
+
+  public DockerRunCommand enableDetach() {
+super.addCommandArguments("--detach=true");
+return this;
+  }
+
+  public DockerRunCommand disableDetach() {
+super.addCommandArguments("--detach=false");
+return this;
+  }
+
+  public DockerRunCommand setOverrideCommandWithArgs(
+  List<String> overrideCommandWithArgs) {
+this.overrrideCommandWithArgs = overrideCommandWithArgs;
+return this;
+  }
+
+  @Override
+  public String getCommandWithArguments() {
+List<String> argList = new ArrayList<>();
+
+argList.add(super.getCommandWithArguments());
+argList.add(image);
+
+if (overrrideCommandWithArgs != null) {
+  argList.addAll(overrrideCommandWithArgs);
+}
+
+return StringUtils.join(" ", argList);
+  }
+}
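A minimal usage sketch of the builder above; the container id, user, image and mount paths are made-up values for illustration, and the expected command string is approximate.

DockerRunCommand runCommand =
    new DockerRunCommand("container_1438_0001_01_000002", "nobody",
        "hadoop-docker:latest")
    .detachOnRun()
    .setContainerWorkDir("/tmp/hadoop-nobody/nm-local-dir")
    .setNetworkType("host")
    .addMountLocation("/var/log/hadoop", "/var/log/hadoop");

// Roughly: "run --name=container_1438_0001_01_000002 --user=nobody -d
//   --workdir=/tmp/hadoop-nobody/nm-local-dir --net=host
//   -v /var/log/hadoop:/var/log/hadoop hadoop-docker:latest"
String commandLine = runCommand.getCommandWithArguments();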

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9da487e0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
 

[1/4] hadoop git commit: YARN-3853. Add docker container runtime support to LinuxContainterExecutor. Contributed by Sidharta Seethana.

2015-07-27 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ec0f801f5 - 9da487e0f
  refs/heads/trunk f36835ff9 - 3e6fce91a


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
new file mode 100644
index 000..f9a890e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
@@ -0,0 +1,107 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  License); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an AS IS BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ * /
+ */
+
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;
+
+import org.apache.hadoop.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class DockerRunCommand extends DockerCommand {
+  private static final String RUN_COMMAND = "run";
+  private final String image;
+  private List<String> overrrideCommandWithArgs;
+
+  /** The following are mandatory: */
+  public DockerRunCommand(String containerId, String user, String image) {
+super(RUN_COMMAND);
+super.addCommandArguments("--name=" + containerId, "--user=" + user);
+this.image = image;
+  }
+
+  public DockerRunCommand removeContainerOnExit() {
+super.addCommandArguments("--rm");
+return this;
+  }
+
+  public DockerRunCommand detachOnRun() {
+super.addCommandArguments("-d");
+return this;
+  }
+
+  public DockerRunCommand setContainerWorkDir(String workdir) {
+super.addCommandArguments("--workdir=" + workdir);
+return this;
+  }
+
+  public DockerRunCommand setNetworkType(String type) {
+super.addCommandArguments("--net=" + type);
+return this;
+  }
+
+  public DockerRunCommand addMountLocation(String sourcePath, String
+  destinationPath) {
+super.addCommandArguments("-v", sourcePath + ":" + destinationPath);
+return this;
+  }
+
+  public DockerRunCommand setCGroupParent(String parentPath) {
+super.addCommandArguments("--cgroup-parent=" + parentPath);
+return this;
+  }
+
+  public DockerRunCommand addDevice(String sourceDevice, String
+  destinationDevice) {
+super.addCommandArguments("--device=" + sourceDevice + ":" +
+destinationDevice);
+return this;
+  }
+
+  public DockerRunCommand enableDetach() {
+super.addCommandArguments("--detach=true");
+return this;
+  }
+
+  public DockerRunCommand disableDetach() {
+super.addCommandArguments("--detach=false");
+return this;
+  }
+
+  public DockerRunCommand setOverrideCommandWithArgs(
+  List<String> overrideCommandWithArgs) {
+this.overrrideCommandWithArgs = overrideCommandWithArgs;
+return this;
+  }
+
+  @Override
+  public String getCommandWithArguments() {
+List<String> argList = new ArrayList<>();
+
+argList.add(super.getCommandWithArguments());
+argList.add(image);
+
+if (overrrideCommandWithArgs != null) {
+  argList.addAll(overrrideCommandWithArgs);
+}
+
+return StringUtils.join(" ", argList);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
 

hadoop git commit: YARN-3908. Fixed bugs in HBaseTimelineWriterImpl. Contributed by Vrushali C and Sangjin Lee.

2015-07-27 Thread zjshen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 a7153ade7 - df0ec473a


YARN-3908. Fixed bugs in HBaseTimelineWriterImpl. Contributed by Vrushali C and 
Sangjin Lee.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/df0ec473
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/df0ec473
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/df0ec473

Branch: refs/heads/YARN-2928
Commit: df0ec473a84871b0effd7ca6faac776210d7df09
Parents: a7153ad
Author: Zhijie Shen zjs...@apache.org
Authored: Mon Jul 27 15:50:28 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Jul 27 15:50:28 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../records/timelineservice/TimelineEvent.java  |  4 +-
 .../storage/HBaseTimelineWriterImpl.java| 18 ++-
 .../storage/common/ColumnHelper.java| 21 
 .../storage/common/ColumnPrefix.java|  7 +--
 .../storage/common/Separator.java   |  7 +++
 .../storage/entity/EntityColumnPrefix.java  | 15 --
 .../storage/entity/EntityTable.java |  6 ++-
 .../storage/TestHBaseTimelineWriterImpl.java| 56 ++--
 9 files changed, 111 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/df0ec473/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f2df960..0653c50 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -115,6 +115,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3792. Test case failures in TestDistributedShell and some issue fixes
 related to ATSV2 (Naganarasimha G R via sjlee)
 
+YARN-3908. Fixed bugs in HBaseTimelineWriterImpl. (Vrushali C and Sangjin
+Lee via zjshen)
+
 Trunk - Unreleased
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/df0ec473/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEvent.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEvent.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEvent.java
index 1dbf7e5..a563658 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEvent.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEvent.java
@@ -33,6 +33,8 @@ import java.util.Map;
 @InterfaceAudience.Public
 @InterfaceStability.Unstable
 public class TimelineEvent implements Comparable<TimelineEvent> {
+  public static final long INVALID_TIMESTAMP = 0L;
+
   private String id;
   private HashMap<String, Object> info = new HashMap<>();
   private long timestamp;
@@ -83,7 +85,7 @@ public class TimelineEvent implements Comparable<TimelineEvent> {
   }
 
   public boolean isValid() {
-return (id != null && timestamp != 0L);
+return (id != null && timestamp != INVALID_TIMESTAMP);
   }
 
   @Override
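A small sketch of what the INVALID_TIMESTAMP guard means for callers; the event id is illustrative and the setter names are assumed to be those of the v2 TimelineEvent record.

TimelineEvent event = new TimelineEvent();
event.setId("CONTAINER_LAUNCHED");
// No timestamp set yet, so the event still carries INVALID_TIMESTAMP (0L).
boolean storable = event.isValid();              // false: the writer skips it
event.setTimestamp(System.currentTimeMillis());
storable = event.isValid();                      // true: safe to persist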

http://git-wip-us.apache.org/repos/asf/hadoop/blob/df0ec473/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
index 876ad6a..cd2e76e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/HBaseTimelineWriterImpl.java
@@ -141,6 +141,13 @@ public class HBaseTimelineWriterImpl extends 
AbstractService implements
 EntityColumn.MODIFIED_TIME.store(rowKey, entityTable, null,
 te.getModifiedTime());
 EntityColumn.FLOW_VERSION.store(rowKey, entityTable, null, flowVersion);
+Map<String, Object> info = te.getInfo();
+

[33/37] hadoop git commit: HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu Yao.

2015-07-27 Thread arp
HDFS-8785. TestDistributedFileSystem is failing in trunk. Contributed by Xiaoyu 
Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2196e39e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2196e39e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2196e39e

Branch: refs/heads/HDFS-7240
Commit: 2196e39e142b0f8d1944805db2bfacd4e3244625
Parents: 1df7868
Author: Xiaoyu Yao x...@apache.org
Authored: Mon Jul 27 07:28:41 2015 -0700
Committer: Xiaoyu Yao x...@apache.org
Committed: Mon Jul 27 07:28:41 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  2 ++
 .../apache/hadoop/hdfs/TestDistributedFileSystem.java  | 13 -
 2 files changed, 10 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1ddf7da..cc2a833 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1084,6 +1084,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class.
 (Surendra Singh Lilhore via aajisaka)
 
+HDFS-8785. TestDistributedFileSystem is failing in trunk. (Xiaoyu Yao)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2196e39e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
index 0b77210..6012c5d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
@@ -1189,19 +1189,22 @@ public class TestDistributedFileSystem {
 try {
   cluster.waitActive();
   DistributedFileSystem dfs = cluster.getFileSystem();
-  // Write 1 MB to a dummy socket to ensure the write times out
+  // Write 10 MB to a dummy socket to ensure the write times out
   ServerSocket socket = new ServerSocket(0);
   Peer peer = dfs.getClient().newConnectedPeer(
 (InetSocketAddress) socket.getLocalSocketAddress(), null, null);
   long start = Time.now();
   try {
-byte[] buf = new byte[1024 * 1024];
+byte[] buf = new byte[10 * 1024 * 1024];
 peer.getOutputStream().write(buf);
-Assert.fail("write should timeout");
+long delta = Time.now() - start;
+Assert.fail("write finish in " + delta + " ms" + "but should timedout");
   } catch (SocketTimeoutException ste) {
 long delta = Time.now() - start;
-Assert.assertTrue("write timedout too soon", delta >= timeout * 0.9);
-Assert.assertTrue("write timedout too late", delta <= timeout * 1.1);
+Assert.assertTrue("write timedout too soon in " + delta + " ms",
+delta >= timeout * 0.9);
+Assert.assertTrue("write timedout too late in " + delta + " ms",
+delta <= timeout * 1.2);
   } catch (Throwable t) {
 Assert.fail("wrong exception:" + t);
   }



[31/37] hadoop git commit: YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api module. Contributed by Varun Saxena.

2015-07-27 Thread arp
YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api 
module. Contributed by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42d4e0ae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42d4e0ae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42d4e0ae

Branch: refs/heads/HDFS-7240
Commit: 42d4e0ae99d162fde52902cb86e29f2c82a084c8
Parents: 156f24e
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Jul 27 11:43:25 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Jul 27 11:43:25 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |  34 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 +++
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 ---
 4 files changed, 173 insertions(+), 136 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 883d009..3b7d8a8 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -685,6 +685,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3973. Recent changes to application priority management break 
 reservation system from YARN-1051. (Carlo Curino via wangda)
 
+YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api
+module. (Varun Saxena via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
index dc9c469..5c4156b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
@@ -62,9 +62,31 @@
   <groupId>com.google.protobuf</groupId>
   <artifactId>protobuf-java</artifactId>
 </dependency>
+
+<dependency>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-common</artifactId>
+  <type>test-jar</type>
+  <scope>test</scope>
+</dependency>
+
+<dependency>
+  <groupId>junit</groupId>
+  <artifactId>junit</artifactId>
+  <scope>test</scope>
+</dependency>
   </dependencies>
 
   <build>
+<resources>
+  <resource>
+<directory>${basedir}/../hadoop-yarn-common/src/main/resources</directory>
+<includes>
+  <include>yarn-default.xml</include>
+</includes>
+<filtering>false</filtering>
+  </resource>
+</resources>
 <plugins>
   <plugin>
 <groupId>org.apache.hadoop</groupId>
@@ -105,6 +127,18 @@
   </execution>
 </executions>
   </plugin>
+
+  <plugin>
+<artifactId>maven-jar-plugin</artifactId>
+<executions>
+  <execution>
+<goals>
+  <goal>test-jar</goal>
+</goals>
+<phase>test-compile</phase>
+  </execution>
+</executions>
+  </plugin>
 </plugins>
   </build>
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42d4e0ae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
new file mode 100644
index 000..e89a90d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 

[35/37] hadoop git commit: YARN-3853. Add docker container runtime support to LinuxContainterExecutor. Contributed by Sidharta Seethana.

2015-07-27 Thread arp
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
new file mode 100644
index 000..f9a890e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
@@ -0,0 +1,107 @@
+/*
+ * *
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  License); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an AS IS BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ * /
+ */
+
+package 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker;
+
+import org.apache.hadoop.util.StringUtils;
+
+import java.util.ArrayList;
+import java.util.List;
+
+public class DockerRunCommand extends DockerCommand {
+  private static final String RUN_COMMAND = "run";
+  private final String image;
+  private List<String> overrrideCommandWithArgs;
+
+  /** The following are mandatory: */
+  public DockerRunCommand(String containerId, String user, String image) {
+super(RUN_COMMAND);
+super.addCommandArguments("--name=" + containerId, "--user=" + user);
+this.image = image;
+  }
+
+  public DockerRunCommand removeContainerOnExit() {
+super.addCommandArguments("--rm");
+return this;
+  }
+
+  public DockerRunCommand detachOnRun() {
+super.addCommandArguments("-d");
+return this;
+  }
+
+  public DockerRunCommand setContainerWorkDir(String workdir) {
+super.addCommandArguments("--workdir=" + workdir);
+return this;
+  }
+
+  public DockerRunCommand setNetworkType(String type) {
+super.addCommandArguments("--net=" + type);
+return this;
+  }
+
+  public DockerRunCommand addMountLocation(String sourcePath, String
+  destinationPath) {
+super.addCommandArguments("-v", sourcePath + ":" + destinationPath);
+return this;
+  }
+
+  public DockerRunCommand setCGroupParent(String parentPath) {
+super.addCommandArguments("--cgroup-parent=" + parentPath);
+return this;
+  }
+
+  public DockerRunCommand addDevice(String sourceDevice, String
+  destinationDevice) {
+super.addCommandArguments("--device=" + sourceDevice + ":" +
+destinationDevice);
+return this;
+  }
+
+  public DockerRunCommand enableDetach() {
+super.addCommandArguments("--detach=true");
+return this;
+  }
+
+  public DockerRunCommand disableDetach() {
+super.addCommandArguments("--detach=false");
+return this;
+  }
+
+  public DockerRunCommand setOverrideCommandWithArgs(
+  List<String> overrideCommandWithArgs) {
+this.overrrideCommandWithArgs = overrideCommandWithArgs;
+return this;
+  }
+
+  @Override
+  public String getCommandWithArguments() {
+List<String> argList = new ArrayList<>();
+
+argList.add(super.getCommandWithArguments());
+argList.add(image);
+
+if (overrrideCommandWithArgs != null) {
+  argList.addAll(overrrideCommandWithArgs);
+}
+
+return StringUtils.join(" ", argList);
+  }
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/runtime/ContainerExecutionException.java
 

[34/37] hadoop git commit: YARN-3852. Add docker container support to container-executor. Contributed by Abin Shahab.

2015-07-27 Thread arp
YARN-3852. Add docker container support to container-executor. Contributed by 
Abin Shahab.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f36835ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f36835ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f36835ff

Branch: refs/heads/HDFS-7240
Commit: f36835ff9b878fa20fe58a30f9d1e8c47702d6d2
Parents: 2196e39
Author: Varun Vasudev vvasu...@apache.org
Authored: Mon Jul 27 10:12:30 2015 -0700
Committer: Varun Vasudev vvasu...@apache.org
Committed: Mon Jul 27 10:14:51 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../container-executor/impl/configuration.c |  17 +-
 .../container-executor/impl/configuration.h |   2 +
 .../impl/container-executor.c   | 417 ---
 .../impl/container-executor.h   |  25 +-
 .../main/native/container-executor/impl/main.c  |  97 -
 6 files changed, 480 insertions(+), 81 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3b7d8a8..4e54aea 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -150,6 +150,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
 (Jonathan Yaniv and Ishai Menache via curino)
 
+YARN-3852. Add docker container support to container-executor
+(Abin Shahab via vvasudev)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
index eaa1f19..2825367 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
@@ -291,27 +291,23 @@ char ** get_values(const char * key) {
   return extract_values(value);
 }
 
-/**
- * Extracts array of values from the '%' separated list of values.
- */
-char ** extract_values(char *value) {
+char ** extract_values_delim(char *value, const char *delim) {
   char ** toPass = NULL;
   char *tempTok = NULL;
   char *tempstr = NULL;
   int size = 0;
   int toPassSize = MAX_SIZE;
-
   //first allocate any array of 10
   if(value != NULL) {
 toPass = (char **) malloc(sizeof(char *) * toPassSize);
-tempTok = strtok_r((char *)value, "%", &tempstr);
+tempTok = strtok_r((char *)value, delim, &tempstr);
 while (tempTok != NULL) {
   toPass[size++] = tempTok;
   if(size == toPassSize) {
 toPassSize += MAX_SIZE;
 toPass = (char **) realloc(toPass,(sizeof(char *) * toPassSize));
   }
-  tempTok = strtok_r(NULL, "%", &tempstr);
+  tempTok = strtok_r(NULL, delim, &tempstr);
 }
   }
   if (toPass != NULL) {
@@ -320,6 +316,13 @@ char ** extract_values(char *value) {
   return toPass;
 }
 
+/**
+ * Extracts array of values from the '%' separated list of values.
+ */
+char ** extract_values(char *value) {
+  extract_values_delim(value, "%");
+}
+
 // free an entry set of values
 void free_values(char** values) {
   if (*values != NULL) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f36835ff/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
index 133e67b..390a5b5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
@@ -46,6 

[29/37] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread arp
http://git-wip-us.apache.org/repos/asf/hadoop/blob/156f24ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
new file mode 100644
index 000..9a0a0f0
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/PlanningAlgorithm.java
@@ -0,0 +1,207 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.reservation.planning;
+
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.yarn.api.records.ReservationDefinition;
+import org.apache.hadoop.yarn.api.records.ReservationId;
+import org.apache.hadoop.yarn.api.records.Resource;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryReservationAllocation;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.Plan;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.RLESparseResourceAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.ContractValidationException;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
+
+/**
+ * An abstract class that follows the general behavior of planning algorithms.
+ */
+public abstract class PlanningAlgorithm implements ReservationAgent {
+
+  /**
+   * Performs the actual allocation for a ReservationDefinition within a Plan.
+   *
+   * @param reservationId the identifier of the reservation
+   * @param user the user who owns the reservation
+   * @param plan the Plan to which the reservation must be fitted
+   * @param contract encapsulates the resources required by the user for his
+   *  session
+   * @param oldReservation the existing reservation (null if none)
+   * @return whether the allocateUser function was successful or not
+   *
+   * @throws PlanningException if the session cannot be fitted into the plan
+   * @throws ContractValidationException
+   */
+  protected boolean allocateUser(ReservationId reservationId, String user,
+  Plan plan, ReservationDefinition contract,
+  ReservationAllocation oldReservation) throws PlanningException,
+  ContractValidationException {
+
+// Adjust the ResourceDefinition to account for system imperfections
+// (e.g., scheduling delays for large containers).
+ReservationDefinition adjustedContract = adjustContract(plan, contract);
+
+// Compute the job allocation
+RLESparseResourceAllocation allocation =
+computeJobAllocation(plan, reservationId, adjustedContract);
+
+// If no job allocation was found, fail
+if (allocation == null) {
+  throw new PlanningException(
+  "The planning algorithm could not find a valid allocation"
+  + " for your request");
+}
+
+// Translate the allocation to a map (with zero paddings)
+long step = plan.getStep();
+long jobArrival = stepRoundUp(adjustedContract.getArrival(), step);
+long jobDeadline = stepRoundUp(adjustedContract.getDeadline(), step);
+Map<ReservationInterval, Resource> mapAllocations =
+allocationsToPaddedMap(allocation, jobArrival, jobDeadline);
+
+// Create the reservation
+ReservationAllocation capReservation =
+new InMemoryReservationAllocation(reservationId, // ID
+adjustedContract, // Contract
+user, // User name
+
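The arrival and deadline are snapped to the plan step before the allocation map is padded. A tiny sketch of the assumed round-up arithmetic; stepRoundUp's implementation is not shown in this excerpt, so this is an assumption about its semantics rather than the patch text.

// Assumed semantics: round t up to the next multiple of the plan step.
static long stepRoundUp(long t, long step) {
  return ((t + step - 1) / step) * step;
}

// e.g. with a 60 second plan step, a deadline at t = 125 s lands on the 180 s boundary:
long step = 60_000L;
long jobDeadline = stepRoundUp(125_000L, step);   // 180000 ms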

[18/37] hadoop git commit: YARN-3969. Updating CHANGES.txt to reflect the correct set of branches where this is committed

2015-07-27 Thread arp
YARN-3969. Updating CHANGES.txt to reflect the correct set of branches where 
this is committed


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc42fa8a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc42fa8a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc42fa8a

Branch: refs/heads/HDFS-7240
Commit: fc42fa8ae3bc9d6d055090a7bb5e6f0c5972fcff
Parents: e4b0c74
Author: carlo curino Carlo Curino
Authored: Fri Jul 24 13:38:44 2015 -0700
Committer: carlo curino Carlo Curino
Committed: Fri Jul 24 13:38:44 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc42fa8a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 44e5510..d1546b2 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -710,6 +710,10 @@ Release 2.7.2 - UNRELEASED
 YARN-3878. AsyncDispatcher can hang while stopping if it is configured for
 draining events on stop. (Varun Saxena via jianhe)
 
+YARN-3969. Allow jobs to be submitted to reservation that is active 
+but does not have any allocations. (subru via curino)
+
+
 Release 2.7.1 - 2015-07-06 
 
   INCOMPATIBLE CHANGES
@@ -870,8 +874,6 @@ Release 2.7.1 - 2015-07-06
 YARN-3850. NM fails to read files from full disks which can lead to
 container logs being lost and other issues (Varun Saxena via jlowe)
 
-YARN-3969. Allow jobs to be submitted to reservation that is active 
-but does not have any allocations. (subru via curino)
 
 Release 2.7.0 - 2015-04-20
 



[02/37] hadoop git commit: YARN-2019. Retrospect on decision of making RM crashed if any exception throw in ZKRMStateStore. Contributed by Jian He.

2015-07-27 Thread arp
YARN-2019. Retrospect on decision of making RM crashed if any exception throw 
in ZKRMStateStore. Contributed by Jian He.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ee98d635
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ee98d635
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ee98d635

Branch: refs/heads/HDFS-7240
Commit: ee98d6354bbbcd0832d3e539ee097f837e5d0e31
Parents: e91ccfa
Author: Junping Du junping...@apache.org
Authored: Wed Jul 22 17:52:35 2015 -0700
Committer: Junping Du junping...@apache.org
Committed: Wed Jul 22 17:52:35 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../apache/hadoop/yarn/conf/YarnConfiguration.java  | 11 +++
 .../src/main/resources/yarn-default.xml | 16 
 .../resourcemanager/recovery/RMStateStore.java  |  9 +++--
 4 files changed, 37 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee98d635/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a5fd4e7..93962f1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -144,6 +144,9 @@ Release 2.8.0 - UNRELEASED
 YARN-2003. Support for Application priority : Changes in RM and Capacity 
 Scheduler. (Sunil G via wangda)
 
+YARN-2019. Retrospect on decision of making RM crashed if any exception 
throw 
+in ZKRMStateStore. (Jian He via junping_du)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee98d635/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 060635f..9832729 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -401,6 +401,11 @@ public class YarnConfiguration extends Configuration {
   public static final String RECOVERY_ENABLED = RM_PREFIX + "recovery.enabled";
   public static final boolean DEFAULT_RM_RECOVERY_ENABLED = false;
 
+  public static final String YARN_FAIL_FAST = YARN_PREFIX + "fail-fast";
+  public static final boolean DEFAULT_YARN_FAIL_FAST = true;
+
+  public static final String RM_FAIL_FAST = RM_PREFIX + "fail-fast";
+
   @Private
   public static final String RM_WORK_PRESERVING_RECOVERY_ENABLED = RM_PREFIX
   + "work-preserving-recovery.enabled";
@@ -2018,6 +2023,12 @@ public class YarnConfiguration extends Configuration {
 YARN_HTTP_POLICY_DEFAULT));
   }
 
+  public static boolean shouldRMFailFast(Configuration conf) {
+return conf.getBoolean(YarnConfiguration.RM_FAIL_FAST,
+conf.getBoolean(YarnConfiguration.YARN_FAIL_FAST,
+YarnConfiguration.DEFAULT_YARN_FAIL_FAST));
+  }
+
   @Private
   public static String getClusterId(Configuration conf) {
 String clusterId = conf.get(YarnConfiguration.RM_CLUSTER_ID);
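A short sketch of how the two-level lookup in shouldRMFailFast() resolves; the property values are illustrative only.

Configuration conf = new YarnConfiguration();
// Global switch: yarn.fail-fast (defaults to true).
conf.setBoolean(YarnConfiguration.YARN_FAIL_FAST, false);
// yarn.resourcemanager.fail-fast defaults to ${yarn.fail-fast},
// so the RM follows the global value here.
boolean failFast = YarnConfiguration.shouldRMFailFast(conf);   // false

// An explicit RM-level setting wins over the global switch.
conf.setBoolean(YarnConfiguration.RM_FAIL_FAST, true);
failFast = YarnConfiguration.shouldRMFailFast(conf);           // true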

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee98d635/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index d586f51..8b3a3af 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -324,6 +324,22 @@
   </property>
 
   <property>
+    <description>Should RM fail fast if it encounters any errors. By default, it
+      points to ${yarn.fail-fast}. Errors include:
+      1) exceptions when state-store write/read operations fail.
+    </description>
+    <name>yarn.resourcemanager.fail-fast</name>
+    <value>${yarn.fail-fast}</value>
+  </property>
+
+  <property>
+    <description>Should YARN fail fast if it encounters any errors.
+    </description>
+    <name>yarn.fail-fast</name>
+    <value>true</value>
+  </property>
+
+  <property>
     <description>Enable RM work preserving recovery. This configuration is private
     to YARN for experimenting the feature.
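
Usage note (not part of the patch): the shouldRMFailFast() helper added above is how code is expected to read these two properties. A minimal, illustrative sketch; only the property names and the helper come from the patch, the rest is assumed for demonstration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class FailFastCheck {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // yarn.resourcemanager.fail-fast wins when it is set explicitly;
    // otherwise the generic yarn.fail-fast value (default true) is used.
    conf.set("yarn.fail-fast", "false");
    boolean failFast = YarnConfiguration.shouldRMFailFast(conf);
    System.out.println("RM should fail fast: " + failFast);  // prints: false
  }
}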
 

[15/37] hadoop git commit: HDFS-8735. Inotify: All events classes should implement toString() API. Contributed by Surendra Singh Lilhore.

2015-07-27 Thread arp
HDFS-8735. Inotify: All events classes should implement toString() API. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f8f60918
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f8f60918
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f8f60918

Branch: refs/heads/HDFS-7240
Commit: f8f60918230dd466ae8dda1fbc28878e19273232
Parents: fbd6063
Author: Akira Ajisaka aajis...@apache.org
Authored: Sat Jul 25 02:56:55 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Sat Jul 25 02:56:55 2015 +0900

--
 .../org/apache/hadoop/hdfs/inotify/Event.java   | 95 
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/TestDFSInotifyEventInputStream.java| 26 ++
 3 files changed, 124 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f8f60918/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
index dee17a9..6f2b5e2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/inotify/Event.java
@@ -51,6 +51,7 @@ public abstract class Event {
   /**
* Sent when a file is closed after append or create.
*/
+  @InterfaceAudience.Public
   public static class CloseEvent extends Event {
 private String path;
 private long fileSize;
@@ -81,11 +82,20 @@ public abstract class Event {
 public long getTimestamp() {
   return timestamp;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  return "CloseEvent [path=" + path + ", fileSize=" + fileSize
+      + ", timestamp=" + timestamp + "]";
+}
+
   }
 
   /**
* Sent when a new file is created (including overwrite).
*/
+  @InterfaceAudience.Public
   public static class CreateEvent extends Event {
 
 public static enum INodeType {
@@ -232,6 +242,25 @@ public abstract class Event {
 public long getDefaultBlockSize() {
   return defaultBlockSize;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  StringBuilder content = new StringBuilder();
+  content.append("CreateEvent [INodeType=" + iNodeType + ", path=" + path
+      + ", ctime=" + ctime + ", replication=" + replication
+      + ", ownerName=" + ownerName + ", groupName=" + groupName
+      + ", perms=" + perms + ", ");
+
+  if (symlinkTarget != null) {
+    content.append("symlinkTarget=" + symlinkTarget + ", ");
+  }
+
+  content.append("overwrite=" + overwrite + ", defaultBlockSize="
+      + defaultBlockSize + "]");
+  return content.toString();
+}
+
   }
 
   /**
@@ -242,6 +271,7 @@ public abstract class Event {
* metadataType of the MetadataUpdateEvent will be null or will have their 
default
* values.
*/
+  @InterfaceAudience.Public
   public static class MetadataUpdateEvent extends Event {
 
 public static enum MetadataType {
@@ -400,11 +430,45 @@ public abstract class Event {
   return xAttrsRemoved;
 }
 
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+  StringBuilder content = new StringBuilder();
+  content.append("MetadataUpdateEvent [path=" + path + ", metadataType="
+      + metadataType);
+  switch (metadataType) {
+  case TIMES:
+    content.append(", mtime=" + mtime + ", atime=" + atime);
+    break;
+  case REPLICATION:
+    content.append(", replication=" + replication);
+    break;
+  case OWNER:
+    content.append(", ownerName=" + ownerName
+        + ", groupName=" + groupName);
+    break;
+  case PERMS:
+    content.append(", perms=" + perms);
+    break;
+  case ACLS:
+    content.append(", acls=" + acls);
+    break;
+  case XATTRS:
+    content.append(", xAttrs=" + xAttrs + ", xAttrsRemoved="
+        + xAttrsRemoved);
+    break;
+  default:
+    break;
+  }
+  content.append(']');
+  return content.toString();
+}
   }
 
   /**
* Sent when a file, directory, or symlink is renamed.
*/
+  @InterfaceAudience.Public
   public static class RenameEvent extends Event {
 private String srcPath;
 private String dstPath;
@@ -456,11 +520,20 @@ public abstract class Event {
 public long getTimestamp() {
   return timestamp;
 }
+
+@Override
+@InterfaceStability.Unstable
+public String toString() {
+ 
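
Usage note (not part of the patch): the new toString() overrides make it easy to dump the inotify stream for debugging. A minimal sketch, assuming a reachable NameNode URI (hdfs://localhost:8020 is an assumption) and the public HdfsAdmin inotify API:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSInotifyEventInputStream;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.inotify.Event;
import org.apache.hadoop.hdfs.inotify.EventBatch;

public class PrintInotifyEvents {
  public static void main(String[] args) throws Exception {
    HdfsAdmin admin =
        new HdfsAdmin(URI.create("hdfs://localhost:8020"), new Configuration());
    DFSInotifyEventInputStream stream = admin.getInotifyEventStream();
    EventBatch batch;
    // poll() returns null once the currently available events are drained
    while ((batch = stream.poll()) != null) {
      for (Event e : batch.getEvents()) {
        // e.g. "CloseEvent [path=/tmp/f, fileSize=1024, timestamp=...]"
        System.out.println(e);
      }
    }
  }
}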

[12/37] hadoop git commit: HDFS-8806. Inconsistent metrics: number of missing blocks with replication factor 1 not properly cleared. Contributed by Zhe Zhang.

2015-07-27 Thread arp
HDFS-8806. Inconsistent metrics: number of missing blocks with replication 
factor 1 not properly cleared. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/206d4933
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/206d4933
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/206d4933

Branch: refs/heads/HDFS-7240
Commit: 206d4933a567147b62f463c2daa3d063ad40822b
Parents: e202efa
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Jul 24 18:28:44 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Fri Jul 24 18:28:44 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java | 3 ++-
 2 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/206d4933/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f86d41e..b348a5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1097,6 +1097,9 @@ Release 2.7.2 - UNRELEASED
 HDFS-6945. BlockManager should remove a block from excessReplicateMap and
 decrement ExcessBlocks metric when the block is removed. (aajisaka)
 
+HDFS-8806. Inconsistent metrics: number of missing blocks with replication
+factor 1 not properly cleared. (Zhe Zhang via aajisaka)
+
 Release 2.7.1 - 2015-07-06
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/206d4933/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
index d8aec99..128aae6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
@@ -101,10 +101,11 @@ class UnderReplicatedBlocks implements Iterable<BlockInfo> {
   /**
* Empty the queues and timestamps.
*/
-  void clear() {
+  synchronized void clear() {
 for (int i = 0; i < LEVEL; i++) {
   priorityQueues.get(i).clear();
 }
+corruptReplOneBlocks = 0;
 timestampsMap.clear();
   }
 



[11/37] hadoop git commit: YARN-3845. Scheduler page does not render RGBA color combinations in IE11. (Contributed by Mohammad Shahid Khan)

2015-07-27 Thread arp
YARN-3845. Scheduler page does not render RGBA color combinations in IE11. 
(Contributed by Mohammad Shahid Khan)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e202efaf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e202efaf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e202efaf

Branch: refs/heads/HDFS-7240
Commit: e202efaf932c940e6da5fe857ae55c0808fd4fdd
Parents: 02c0181
Author: Rohith Sharma K S rohithsharm...@apache.org
Authored: Fri Jul 24 12:43:06 2015 +0530
Committer: Rohith Sharma K S rohithsharm...@apache.org
Committed: Fri Jul 24 12:43:06 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../apache/hadoop/yarn/webapp/view/TwoColumnLayout.java   |  2 +-
 .../resourcemanager/webapp/CapacitySchedulerPage.java |  7 ---
 .../resourcemanager/webapp/DefaultSchedulerPage.java  |  4 ++--
 .../server/resourcemanager/webapp/FairSchedulerPage.java  | 10 ++
 5 files changed, 16 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e202efaf/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3d41ba7..f23853b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -669,6 +669,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
 that include it (adhoot via rkanter)
 
+YARN-3845. Scheduler page does not render RGBA color combinations in IE11. 
+(Contributed by Mohammad Shahid Khan)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e202efaf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
index b8f5f75..4d7752d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TwoColumnLayout.java
@@ -126,7 +126,7 @@ public class TwoColumnLayout extends HtmlPage {
     styles.add(join('#', tableId, "_paginate span {font-weight:normal}"));
     styles.add(join('#', tableId, " .progress {width:8em}"));
     styles.add(join('#', tableId, "_processing {top:-1.5em; font-size:1em;"));
-    styles.add("  color:#000; background:rgba(255, 255, 255, 0.8)}");
+    styles.add("  color:#000; background:#fefefe}");
     for (String style : innerStyles) {
       styles.add(join('#', tableId, " ", style));
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e202efaf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
index a784601..12a3013 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
@@ -59,9 +59,10 @@ class CapacitySchedulerPage extends RmView {
   static final float Q_MAX_WIDTH = 0.8f;
   static final float Q_STATS_POS = Q_MAX_WIDTH + 0.05f;
   static final String Q_END = "left:101%";
-  static final String Q_GIVEN = "left:0%;background:none;border:1px dashed rgba(0,0,0,0.25)";
-  static final String Q_OVER = "background:rgba(255, 140, 0, 0.8)";
-  static final String Q_UNDER = "background:rgba(50, 205, 50, 0.8)";
+  static final String Q_GIVEN =
+      "left:0%;background:none;border:1px dashed #BFBFBF";
+  static final String Q_OVER = "background:#FFA333";
+  static final String Q_UNDER = "background:#5BD75B";
 
   @RequestScoped
   static class CSQInfo {


[21/37] hadoop git commit: YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container log files from full disks. Contributed by zhihai xu

2015-07-27 Thread arp
YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container log 
files from full disks. Contributed by zhihai xu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ff9c13e0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ff9c13e0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ff9c13e0

Branch: refs/heads/HDFS-7240
Commit: ff9c13e0a739bb13115167dc661b6a16b2ed2c04
Parents: 83fe34a
Author: Jason Lowe jl...@apache.org
Authored: Fri Jul 24 22:14:39 2015 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Jul 24 22:14:39 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  2 +
 .../nodemanager/LocalDirsHandlerService.java| 35 +-
 .../webapp/TestContainerLogsPage.java   | 48 
 3 files changed, 83 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff9c13e0/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cf00fe5..c295784 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -716,6 +716,8 @@ Release 2.7.2 - UNRELEASED
 YARN-3969. Allow jobs to be submitted to reservation that is active 
 but does not have any allocations. (subru via curino)
 
+YARN-3925. ContainerLogsUtils#getContainerLogFile fails to read container
+log files from full disks. (zhihai xu via jlowe)
 
 Release 2.7.1 - 2015-07-06 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ff9c13e0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
index 0a61035..6709c90 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.yarn.server.nodemanager;
 
+import java.io.File;
 import java.io.IOException;
 import java.net.URI;
 import java.util.ArrayList;
@@ -31,6 +32,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileContext;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.LocalDirAllocator;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
@@ -467,6 +469,35 @@ public class LocalDirsHandlerService extends 
AbstractService {
 return disksTurnedGood;
   }
 
+  private Path getPathToRead(String pathStr, List<String> dirs)
+  throws IOException {
+// remove the leading slash from the path (to make sure that the uri
+// resolution results in a valid path on the dir being checked)
+if (pathStr.startsWith("/")) {
+  pathStr = pathStr.substring(1);
+}
+
+FileSystem localFS = FileSystem.getLocal(getConfig());
+for (String dir : dirs) {
+  try {
+Path tmpDir = new Path(dir);
+File tmpFile = tmpDir.isAbsolute()
+? new File(localFS.makeQualified(tmpDir).toUri())
+: new File(dir);
+Path file = new Path(tmpFile.getPath(), pathStr);
+if (localFS.exists(file)) {
+  return file;
+}
+  } catch (IOException ie) {
+// ignore
+LOG.warn("Failed to find " + pathStr + " at " + dir, ie);
+  }
+}
+
+throw new IOException("Could not find " + pathStr + " in any of" +
+    " the directories");
+  }
+
   public Path getLocalPathForWrite(String pathStr) throws IOException {
 return localDirsAllocator.getLocalPathForWrite(pathStr, getConfig());
   }
@@ -484,9 +515,9 @@ public class LocalDirsHandlerService extends 
AbstractService {
   }
 
   public Path getLogPathToRead(String pathStr) throws IOException {
-return logDirsAllocator.getLocalPathToRead(pathStr, getConfig());
+return getPathToRead(pathStr, getLogDirsForRead());
   }
-  
+
   public static String[] validatePaths(String[] paths) {
 ArrayList<String> validPaths = new ArrayList<String>();
 for (int i = 0; i < 
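
Usage note (not part of the patch): callers such as ContainerLogsUtils go through getLogPathToRead(), which with this change also resolves files sitting on log dirs that were marked full. A hypothetical helper showing the call shape; the app/container/file path layout below is an assumption for illustration:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService;

public class LogFileLookup {
  /** Resolve a container log file for reading via the dirs handler. */
  static Path resolve(LocalDirsHandlerService dirsHandler, String appId,
      String containerId, String fileName) throws IOException {
    String relativePath = appId + Path.SEPARATOR + containerId
        + Path.SEPARATOR + fileName;
    // With this patch the lookup also searches log dirs marked as full.
    return dirsHandler.getLogPathToRead(relativePath);
  }
}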

[36/37] hadoop git commit: YARN-3853. Add docker container runtime support to LinuxContainterExecutor. Contributed by Sidharta Seethana.

2015-07-27 Thread arp
YARN-3853. Add docker container runtime support to LinuxContainterExecutor. 
Contributed by Sidharta Seethana.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3e6fce91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3e6fce91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3e6fce91

Branch: refs/heads/HDFS-7240
Commit: 3e6fce91a471b4a5099de109582e7c6417e8a822
Parents: f36835f
Author: Varun Vasudev vvasu...@apache.org
Authored: Mon Jul 27 11:57:40 2015 -0700
Committer: Varun Vasudev vvasu...@apache.org
Committed: Mon Jul 27 11:57:40 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   4 +
 .../server/nodemanager/ContainerExecutor.java   |  23 +-
 .../nodemanager/DefaultContainerExecutor.java   |   2 +-
 .../nodemanager/DockerContainerExecutor.java|   2 +-
 .../nodemanager/LinuxContainerExecutor.java | 222 +++
 .../launcher/ContainerLaunch.java   |  15 +
 .../linux/privileged/PrivilegedOperation.java   |  46 +++-
 .../PrivilegedOperationException.java   |  30 +-
 .../privileged/PrivilegedOperationExecutor.java |  30 +-
 .../linux/resources/CGroupsHandler.java |   8 +
 .../linux/resources/CGroupsHandlerImpl.java |  12 +-
 .../runtime/DefaultLinuxContainerRuntime.java   | 148 ++
 .../DelegatingLinuxContainerRuntime.java| 110 
 .../runtime/DockerLinuxContainerRuntime.java| 273 +++
 .../linux/runtime/LinuxContainerRuntime.java|  38 +++
 .../runtime/LinuxContainerRuntimeConstants.java |  69 +
 .../linux/runtime/docker/DockerClient.java  |  82 ++
 .../linux/runtime/docker/DockerCommand.java |  66 +
 .../linux/runtime/docker/DockerLoadCommand.java |  30 ++
 .../linux/runtime/docker/DockerRunCommand.java  | 107 
 .../runtime/ContainerExecutionException.java|  85 ++
 .../runtime/ContainerRuntime.java   |  50 
 .../runtime/ContainerRuntimeConstants.java  |  33 +++
 .../runtime/ContainerRuntimeContext.java| 105 +++
 .../executor/ContainerLivenessContext.java  |  13 +
 .../executor/ContainerReacquisitionContext.java |  13 +
 .../executor/ContainerSignalContext.java|  13 +
 .../executor/ContainerStartContext.java |  23 +-
 .../TestLinuxContainerExecutorWithMocks.java| 118 +---
 .../TestPrivilegedOperationExecutor.java|   8 +-
 .../runtime/TestDockerContainerRuntime.java | 219 +++
 31 files changed, 1815 insertions(+), 182 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 4e54aea..534c55a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -153,6 +153,10 @@ Release 2.8.0 - UNRELEASED
 YARN-3852. Add docker container support to container-executor
 (Abin Shahab via vvasudev)
 
+YARN-3853. Add docker container runtime support to LinuxContainterExecutor.
+(Sidharta Seethana via vvasudev)
+
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3e6fce91/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 79f9b0d..68bfbbf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -24,8 +24,10 @@ import java.io.OutputStream;
 import java.io.PrintStream;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
+import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import 

[03/37] hadoop git commit: YARN-3941. Proportional Preemption policy should try to avoid sending duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)

2015-07-27 Thread arp
YARN-3941. Proportional Preemption policy should try to avoid sending duplicate 
PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bba1800
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bba1800
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bba1800

Branch: refs/heads/HDFS-7240
Commit: 3bba1800513b38a4827f7552f348db87dc47c783
Parents: ee98d63
Author: Wangda Tan wan...@apache.org
Authored: Thu Jul 23 10:07:57 2015 -0700
Committer: Wangda Tan wan...@apache.org
Committed: Thu Jul 23 10:07:57 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 2 ++
 .../capacity/ProportionalCapacityPreemptionPolicy.java  | 9 ++---
 .../capacity/TestProportionalCapacityPreemptionPolicy.java  | 6 +++---
 3 files changed, 11 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bba1800/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 93962f1..9416cd6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -664,6 +664,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev 
via wangda)
 
+YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bba1800/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 1152cef..77df059 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -260,13 +260,16 @@ public class ProportionalCapacityPreemptionPolicy 
implements SchedulingEditPolic
   SchedulerEventType.KILL_CONTAINER));
   preempted.remove(container);
 } else {
+  if (preempted.get(container) != null) {
+// We already updated the information to scheduler earlier, we need
+// not have to raise another event.
+continue;
+  }
   //otherwise just send preemption events
   rmContext.getDispatcher().getEventHandler().handle(
   new ContainerPreemptEvent(appAttemptId, container,
   SchedulerEventType.PREEMPT_CONTAINER));
-  if (preempted.get(container) == null) {
-preempted.put(container, clock.getTime());
-  }
+  preempted.put(container, clock.getTime());
 }
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bba1800/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
index bc4d0dc..8d9f48a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
+++ 

[08/37] hadoop git commit: HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)

2015-07-27 Thread arp
HADOOP-12009: Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ab3197c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ab3197c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ab3197c2

Branch: refs/heads/HDFS-7240
Commit: ab3197c20452e0dd908193d6854c204e6ee34645
Parents: 1d3026e
Author: Jakob Homan jgho...@gmail.com
Authored: Thu Jul 23 17:46:13 2015 -0700
Committer: Jakob Homan jgho...@gmail.com
Committed: Thu Jul 23 17:46:13 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 +++
 .../main/java/org/apache/hadoop/fs/FileSystem.java | 17 -
 .../src/site/markdown/filesystem/filesystem.md |  4 
 .../hadoop/fs/FileSystemContractBaseTest.java  | 11 ---
 4 files changed, 31 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3197c2/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6c18add..56edcac 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -497,6 +497,9 @@ Trunk (Unreleased)
 
 HADOOP-11762. Enable swift distcp to secure HDFS (Chen He via aw)
 
+HADOOP-12009. Clarify FileSystem.listStatus() sorting order & fix
+FileSystemContractBaseTest:testListStatus. (J.Andreina via jghoman)
+
   OPTIMIZATIONS
 
 HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ab3197c2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index a01d3ea..8f32644 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -1501,7 +1501,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* List the statuses of the files/directories in the given path if the path 
is
* a directory.
-   * 
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* @param f given path
* @return the statuses of the files/directories in the given patch
* @throws FileNotFoundException when the path does not exist;
@@ -1543,6 +1545,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given path using the user-supplied path
* filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param f
*  a path name
@@ -1563,6 +1568,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given list of paths using default
* path filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param files
*  a list of paths
@@ -1579,6 +1587,9 @@ public abstract class FileSystem extends Configured 
implements Closeable {
   /**
* Filter files/directories in the given list of paths using user-supplied
* path filter.
+   * <p>
+   * Does not guarantee to return the List of files/directories status in a
+   * sorted order.
* 
* @param files
*  a list of paths
@@ -1739,6 +1750,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
* while consuming the entries. Each file system implementation should
* override this method and provide a more efficient implementation, if
* possible. 
+   * Does not guarantee to return the iterator that traverses statuses
+   * of the files in a sorted order.
*
* @param p target path
* @return remote iterator
@@ -1766,6 +1779,8 @@ public abstract class FileSystem extends Configured 
implements Closeable {
 
   /**
* List the statuses and block locations of the files in the given path.
+   * Does not guarantee to return the iterator that traverses statuses
+   * of the files in a sorted order.
* 
* If the path is a directory, 
*   if recursive is false, returns files in the directory;
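
Usage note (not part of the patch): since listStatus() now explicitly makes no ordering promise, callers that need a deterministic order should sort the result themselves. A minimal sketch; FileStatus compares by path, so Arrays.sort() yields a stable path ordering:

import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SortedListing {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus[] statuses = fs.listStatus(new Path("/tmp"));
    Arrays.sort(statuses);  // do not rely on the order listStatus() returns
    for (FileStatus st : statuses) {
      System.out.println(st.getPath());
    }
  }
}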


[13/37] hadoop git commit: HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)

2015-07-27 Thread arp
HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ee233ec9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ee233ec9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ee233ec9

Branch: refs/heads/HDFS-7240
Commit: ee233ec95ce8cfc8309d3adc072d926cd85eba08
Parents: 0fcb4a8
Author: Robert Kanter rkan...@apache.org
Authored: Fri Jul 24 09:41:53 2015 -0700
Committer: Robert Kanter rkan...@apache.org
Committed: Fri Jul 24 09:41:53 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 +
 .../org/apache/hadoop/net/ServerSocketUtil.java | 63 
 2 files changed, 65 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee233ec9/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 56edcac..d6d43f2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -725,6 +725,8 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements
 drop nearly impossible. (Zhihai Xu via wang)
 
+HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ee233ec9/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
new file mode 100644
index 000..0ce835f
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/ServerSocketUtil.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.net;
+
+import java.io.IOException;
+import java.net.ServerSocket;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+public class ServerSocketUtil {
+
+  private static final Log LOG = LogFactory.getLog(ServerSocketUtil.class);
+
+  /**
+   * Port scan & allocate is how most other apps find ports
+   * 
+   * @param port given port
+   * @param retries number of retries
+   * @return
+   * @throws IOException
+   */
+  public static int getPort(int port, int retries) throws IOException {
+Random rand = new Random();
+int tryPort = port;
+int tries = 0;
+while (true) {
+  if (tries > 0) {
+    tryPort = port + rand.nextInt(65535 - port);
+  }
+  LOG.info("Using port " + tryPort);
+  try (ServerSocket s = new ServerSocket(tryPort)) {
+    return tryPort;
+  } catch (IOException e) {
+    tries++;
+    if (tries >= retries) {
+      LOG.info("Port is already in use; giving up");
+      throw e;
+    } else {
+      LOG.info("Port is already in use; trying again");
+    }
+  }
+}
+  }
+
+}
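
Usage note (not part of the patch): tests can use the utility to pick a free port before binding a real service. The base port and retry count below are arbitrary example values:

import java.io.IOException;
import org.apache.hadoop.net.ServerSocketUtil;

public class PortPickExample {
  public static void main(String[] args) throws IOException {
    // Try 50010 first; on conflict, probe up to 10 random ports above it.
    int port = ServerSocketUtil.getPort(50010, 10);
    System.out.println("Free port found: " + port);
  }
}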



[10/37] hadoop git commit: HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated block. (aajisaka)

2015-07-27 Thread arp
HDFS-6682. Add a metric to expose the timestamp of the oldest under-replicated 
block. (aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02c01815
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02c01815
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02c01815

Branch: refs/heads/HDFS-7240
Commit: 02c01815eca656814febcdaca6115e5f53b9c746
Parents: ab3197c
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Jul 24 11:37:23 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Fri Jul 24 11:37:23 2015 +0900

--
 .../hadoop-common/src/site/markdown/Metrics.md  |  1 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java|  4 ++
 .../blockmanagement/UnderReplicatedBlocks.java  | 33 --
 .../hdfs/server/namenode/FSNamesystem.java  |  9 +++-
 .../TestUnderReplicatedBlocks.java  | 48 
 6 files changed, 93 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02c01815/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
--
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
index 646cda5..2e6c095 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -201,6 +201,7 @@ Each metrics record contains tags such as HAState and 
Hostname as additional inf
 | Name | Description |
 |: |: |
 | `MissingBlocks` | Current number of missing blocks |
+| `TimeOfTheOldestBlockToBeReplicated` | The timestamp of the oldest block to 
be replicated. If there are no under-replicated or corrupt blocks, return 0. |
 | `ExpiredHeartbeats` | Total number of expired heartbeats |
 | `TransactionsSinceLastCheckpoint` | Total number of transactions since last 
checkpoint |
 | `TransactionsSinceLastLogRoll` | Total number of transactions since last 
edit log roll |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02c01815/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bcc1e25..f86d41e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -747,6 +747,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8730. Clean up the import statements in ClientProtocol.
 (Takanobu Asanuma via wheat9)
 
+HDFS-6682. Add a metric to expose the timestamp of the oldest
+under-replicated block. (aajisaka)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02c01815/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 7dce2a8..64603d0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -171,6 +171,10 @@ public class BlockManager implements BlockStatsMXBean {
   public int getPendingDataNodeMessageCount() {
 return pendingDNMessages.count();
   }
+  /** Used by metrics. */
+  public long getTimeOfTheOldestBlockToBeReplicated() {
+return neededReplications.getTimeOfTheOldestBlockToBeReplicated();
+  }
 
   /**replicationRecheckInterval is how often namenode checks for new 
replication work*/
   private final long replicationRecheckInterval;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02c01815/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
index 000416e..d8aec99 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
+++ 

[05/37] hadoop git commit: HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by Takanobu Asanuma.

2015-07-27 Thread arp
HDFS-8730. Clean up the import statements in ClientProtocol. Contributed by 
Takanobu Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/813cf89b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/813cf89b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/813cf89b

Branch: refs/heads/HDFS-7240
Commit: 813cf89bb56ad1a48b35fd44644d63540e8fa7d1
Parents: adfa34f
Author: Haohui Mai whe...@apache.org
Authored: Thu Jul 23 10:30:17 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Thu Jul 23 10:31:11 2015 -0700

--
 .../hadoop/hdfs/protocol/ClientProtocol.java| 306 +++
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 2 files changed, 182 insertions(+), 127 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/813cf89b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index 381be30..713c23c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -17,7 +17,6 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
-import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.util.EnumSet;
 import java.util.List;
@@ -29,14 +28,9 @@ import 
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
 import org.apache.hadoop.fs.CacheFlag;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.CreateFlag;
-import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FsServerDefaults;
-import org.apache.hadoop.fs.InvalidPathException;
 import org.apache.hadoop.fs.Options;
-import org.apache.hadoop.fs.Options.Rename;
-import org.apache.hadoop.fs.ParentNotDirectoryException;
 import org.apache.hadoop.fs.StorageType;
-import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.XAttr;
 import org.apache.hadoop.fs.XAttrSetFlag;
 import org.apache.hadoop.fs.permission.AclEntry;
@@ -48,14 +42,11 @@ import 
org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction;
 import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector;
-import org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException;
-import org.apache.hadoop.hdfs.server.namenode.SafeModeException;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
 import org.apache.hadoop.io.EnumSetWritable;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.retry.AtMostOnce;
 import org.apache.hadoop.io.retry.Idempotent;
-import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.KerberosInfo;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenInfo;
@@ -121,9 +112,12 @@ public interface ClientProtocol {
*
* @return file length and array of blocks with their locations
*
-   * @throws AccessControlException If access is denied
-   * @throws FileNotFoundException If file <code>src</code> does not exist
-   * @throws UnresolvedLinkException If <code>src</code> contains a symlink
+   * @throws org.apache.hadoop.security.AccessControlException If access is
+   *   denied
+   * @throws java.io.FileNotFoundException If file <code>src</code> does not
+   *   exist
+   * @throws org.apache.hadoop.fs.UnresolvedLinkException If <code>src</code>
+   *   contains a symlink
* @throws IOException If an I/O error occurred
*/
   @Idempotent
@@ -166,24 +160,29 @@ public interface ClientProtocol {
*
* @return the status of the created file, it could be null if the server
*   doesn't support returning the file status
-   * @throws AccessControlException If access is denied
+   * @throws org.apache.hadoop.security.AccessControlException If access is
+   *   denied
* @throws AlreadyBeingCreatedException if the path does not exist.
* @throws DSQuotaExceededException If file creation violates disk space
*   quota restriction
-   * @throws FileAlreadyExistsException If file <code>src</code> already exists
-   * @throws FileNotFoundException If parent of <code>src</code> does not exist
-   *   and <code>createParent</code> is false
-   * @throws 

[30/37] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread arp
YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
(Jonathan Yaniv and Ishai Menache via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/156f24ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/156f24ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/156f24ea

Branch: refs/heads/HDFS-7240
Commit: 156f24ead00436faad5d4aeef327a546392cd265
Parents: adcf5dd
Author: ccurino ccur...@ubuntu.gateway.2wire.net
Authored: Sat Jul 25 07:39:47 2015 -0700
Committer: ccurino ccur...@ubuntu.gateway.2wire.net
Committed: Sat Jul 25 07:39:47 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../reservation/AbstractReservationSystem.java  |   2 +
 .../reservation/GreedyReservationAgent.java | 390 -
 .../reservation/InMemoryPlan.java   |  13 +-
 .../InMemoryReservationAllocation.java  |   8 +-
 .../resourcemanager/reservation/Plan.java   |   1 +
 .../reservation/PlanContext.java|   2 +
 .../resourcemanager/reservation/PlanView.java   |  31 +-
 .../resourcemanager/reservation/Planner.java|  47 --
 .../RLESparseResourceAllocation.java|  55 +-
 .../reservation/ReservationAgent.java   |  72 --
 .../ReservationSchedulerConfiguration.java  |   6 +-
 .../reservation/ReservationSystem.java  |   5 +-
 .../reservation/ReservationSystemUtil.java  |   6 +-
 .../reservation/SimpleCapacityReplanner.java| 113 ---
 .../planning/AlignedPlannerWithGreedy.java  | 123 +++
 .../planning/GreedyReservationAgent.java|  97 +++
 .../reservation/planning/IterativePlanner.java  | 338 
 .../reservation/planning/Planner.java   |  49 ++
 .../reservation/planning/PlanningAlgorithm.java | 207 +
 .../reservation/planning/ReservationAgent.java  |  73 ++
 .../planning/SimpleCapacityReplanner.java   | 118 +++
 .../reservation/planning/StageAllocator.java|  55 ++
 .../planning/StageAllocatorGreedy.java  | 152 
 .../planning/StageAllocatorLowCostAligned.java  | 360 
 .../planning/StageEarliestStart.java|  46 ++
 .../planning/StageEarliestStartByDemand.java| 106 +++
 .../StageEarliestStartByJobArrival.java |  39 +
 .../planning/TryManyReservationAgents.java  | 114 +++
 .../reservation/ReservationSystemTestUtil.java  |   5 +-
 .../reservation/TestCapacityOverTimePolicy.java |   2 +-
 .../TestCapacitySchedulerPlanFollower.java  |   1 +
 .../reservation/TestFairReservationSystem.java  |   1 -
 .../TestFairSchedulerPlanFollower.java  |   1 +
 .../reservation/TestGreedyReservationAgent.java | 604 --
 .../reservation/TestInMemoryPlan.java   |   2 +
 .../reservation/TestNoOverCommitPolicy.java |   1 +
 .../TestRLESparseResourceAllocation.java|  51 +-
 .../TestSchedulerPlanFollowerBase.java  |   1 +
 .../TestSimpleCapacityReplanner.java| 162 
 .../planning/TestAlignedPlanner.java| 820 +++
 .../planning/TestGreedyReservationAgent.java| 611 ++
 .../planning/TestSimpleCapacityReplanner.java   | 170 
 43 files changed, 3634 insertions(+), 1429 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/156f24ea/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 55258a6..883d009 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -147,6 +147,9 @@ Release 2.8.0 - UNRELEASED
 YARN-2019. Retrospect on decision of making RM crashed if any exception 
throw 
 in ZKRMStateStore. (Jian He via junping_du)
 
+YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. 
+(Jonathan Yaniv and Ishai Menache via curino)
+
   IMPROVEMENTS
 
 YARN-644. Basic null check is not performed on passed in arguments before

http://git-wip-us.apache.org/repos/asf/hadoop/blob/156f24ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/AbstractReservationSystem.java
index 8a15ac6..d2603c1 100644
--- 

[20/37] hadoop git commit: YARN-3026. Move application-specific container allocation logic from LeafQueue to FiCaSchedulerApp. Contributed by Wangda Tan

2015-07-27 Thread arp
YARN-3026. Move application-specific container allocation logic from LeafQueue 
to FiCaSchedulerApp. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83fe34ac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83fe34ac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83fe34ac

Branch: refs/heads/HDFS-7240
Commit: 83fe34ac0896cee0918bbfad7bd51231e4aec39b
Parents: fc42fa8
Author: Jian He jia...@apache.org
Authored: Fri Jul 24 14:00:25 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Jul 24 14:00:25 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../server/resourcemanager/RMContextImpl.java   |   3 +-
 .../scheduler/ResourceLimits.java   |  19 +-
 .../scheduler/capacity/AbstractCSQueue.java |  27 +-
 .../scheduler/capacity/CSAssignment.java|  12 +-
 .../capacity/CapacityHeadroomProvider.java  |  16 +-
 .../scheduler/capacity/CapacityScheduler.java   |  14 -
 .../scheduler/capacity/LeafQueue.java   | 833 +++
 .../scheduler/capacity/ParentQueue.java |  16 +-
 .../scheduler/common/fica/FiCaSchedulerApp.java | 721 +++-
 .../capacity/TestApplicationLimits.java |  15 +-
 .../capacity/TestCapacityScheduler.java |   3 +-
 .../capacity/TestContainerAllocation.java   |  85 +-
 .../scheduler/capacity/TestLeafQueue.java   | 191 +
 .../scheduler/capacity/TestReservations.java| 111 +--
 .../scheduler/capacity/TestUtils.java   |  25 +-
 16 files changed, 1048 insertions(+), 1046 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/83fe34ac/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d1546b2..cf00fe5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -345,6 +345,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3844. Make hadoop-yarn-project Native code -Wall-clean (Alan Burlison
 via Colin P. McCabe)
 
+YARN-3026. Move application-specific container allocation logic from
+LeafQueue to FiCaSchedulerApp. (Wangda Tan via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83fe34ac/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
index 2f9209c..8cadc3b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMContextImpl.java
@@ -292,7 +292,8 @@ public class RMContextImpl implements RMContext {
 activeServiceContext.setNMTokenSecretManager(nmTokenSecretManager);
   }
 
-  void setScheduler(ResourceScheduler scheduler) {
+  @VisibleForTesting
+  public void setScheduler(ResourceScheduler scheduler) {
 activeServiceContext.setScheduler(scheduler);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/83fe34ac/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
index 8074794..c545e9e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
@@ -26,20 +26,25 @@ 

[28/37] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread arp
http://git-wip-us.apache.org/repos/asf/hadoop/blob/156f24ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestGreedyReservationAgent.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestGreedyReservationAgent.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestGreedyReservationAgent.java
deleted file mode 100644
index de94dcd..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/TestGreedyReservationAgent.java
+++ /dev/null
@@ -1,604 +0,0 @@
-/***
- *   Licensed to the Apache Software Foundation (ASF) under one
- *   or more contributor license agreements.  See the NOTICE file
- *   distributed with this work for additional information
- *   regarding copyright ownership.  The ASF licenses this file
- *   to you under the Apache License, Version 2.0 (the
- *   "License"); you may not use this file except in compliance
- *   with the License.  You may obtain a copy of the License at
- *  
- *   http://www.apache.org/licenses/LICENSE-2.0
- *  
- *   Unless required by applicable law or agreed to in writing, software
- *   distributed under the License is distributed on an "AS IS" BASIS,
- *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *   See the License for the specific language governing permissions and
- *   limitations under the License.
- 
***/
-package org.apache.hadoop.yarn.server.resourcemanager.reservation;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.mock;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.List;
-import java.util.Map;
-import java.util.Random;
-
-import org.apache.hadoop.yarn.api.records.ReservationDefinition;
-import org.apache.hadoop.yarn.api.records.ReservationId;
-import org.apache.hadoop.yarn.api.records.ReservationRequest;
-import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
-import org.apache.hadoop.yarn.api.records.ReservationRequests;
-import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.api.records.impl.pb.ReservationDefinitionPBImpl;
-import org.apache.hadoop.yarn.api.records.impl.pb.ReservationRequestsPBImpl;
-import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
-import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
-import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
-import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
-import org.apache.hadoop.yarn.util.resource.Resources;
-import org.junit.Before;
-import org.junit.Test;
-import org.mortbay.log.Log;
-
-public class TestGreedyReservationAgent {
-
-  ReservationAgent agent;
-  InMemoryPlan plan;
-  Resource minAlloc = Resource.newInstance(1024, 1);
-  ResourceCalculator res = new DefaultResourceCalculator();
-  Resource maxAlloc = Resource.newInstance(1024 * 8, 8);
-  Random rand = new Random();
-  long step;
-
-  @Before
-  public void setup() throws Exception {
-
-long seed = rand.nextLong();
-rand.setSeed(seed);
-Log.info("Running with seed: " + seed);
-
-// setting completely loose quotas
-long timeWindow = 100L;
-Resource clusterCapacity = Resource.newInstance(100 * 1024, 100);
-step = 1000L;
-ReservationSystemTestUtil testUtil = new ReservationSystemTestUtil();
-String reservationQ = testUtil.getFullReservationQueueName();
-
-float instConstraint = 100;
-float avgConstraint = 100;
-
-ReservationSchedulerConfiguration conf =
-ReservationSystemTestUtil.createConf(reservationQ, timeWindow,
-instConstraint, avgConstraint);
-CapacityOverTimePolicy policy = new CapacityOverTimePolicy();
-policy.init(reservationQ, conf);
-agent = new GreedyReservationAgent();
-
-QueueMetrics queueMetrics = mock(QueueMetrics.class);
-
-plan = new InMemoryPlan(queueMetrics, policy, agent, clusterCapacity, step,
-res, minAlloc, maxAlloc, "dedicated", null, true);
-  }
-
-  @SuppressWarnings("javadoc")
-  @Test
-  public void testSimple() throws PlanningException {
-
-prepareBasicPlan();
-
-// create a request 

[09/37] hadoop git commit: YARN-3969. Allow jobs to be submitted to reservation that is active but does not have any allocations. (subru via curino)

2015-07-27 Thread arp
YARN-3969. Allow jobs to be submitted to reservation that is active but does 
not have any allocations. (subru via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0fcb4a8c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0fcb4a8c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0fcb4a8c

Branch: refs/heads/HDFS-7240
Commit: 0fcb4a8cf2add3f112907ff4e833e2f04947b53e
Parents: 206d493
Author: carlo curino Carlo Curino
Authored: Thu Jul 23 19:33:59 2015 -0700
Committer: carlo curino Carlo Curino
Committed: Thu Jul 23 19:33:59 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../scheduler/capacity/ReservationQueue.java|  4 ---
 .../capacity/TestReservationQueue.java  | 26 +++-
 3 files changed, 17 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fcb4a8c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f23853b..8bc9e4c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -864,6 +864,9 @@ Release 2.7.1 - 2015-07-06
 YARN-3850. NM fails to read files from full disks which can lead to
 container logs being lost and other issues (Varun Saxena via jlowe)
 
+YARN-3969. Allow jobs to be submitted to reservation that is active 
+but does not have any allocations. (subru via curino)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fcb4a8c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
index 4790cc7..976cf8c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ReservationQueue.java
@@ -39,12 +39,9 @@ public class ReservationQueue extends LeafQueue {
 
   private PlanQueue parent;
 
-  private int maxSystemApps;
-
   public ReservationQueue(CapacitySchedulerContext cs, String queueName,
   PlanQueue parent) throws IOException {
 super(cs, queueName, parent, null);
-maxSystemApps = cs.getConfiguration().getMaximumSystemApplications();
 // the following parameters are common to all reservation in the plan
 updateQuotas(parent.getUserLimitForReservation(),
 parent.getUserLimitFactor(),
@@ -89,7 +86,6 @@ public class ReservationQueue extends LeafQueue {
 }
 setCapacity(capacity);
 setAbsoluteCapacity(getParent().getAbsoluteCapacity() * getCapacity());
-setMaxApplications((int) (maxSystemApps * getAbsoluteCapacity()));
 // note: we currently set maxCapacity to capacity
 // this might be revised later
 setMaxCapacity(entitlement.getMaxCapacity());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0fcb4a8c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
index 4e6c73d..e23e93c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservationQueue.java
@@ -18,6 +18,7 @@
 
 package 

[27/37] hadoop git commit: YARN-3656. LowCost: A Cost-Based Placement Agent for YARN Reservations. (Jonathan Yaniv and Ishai Menache via curino)

2015-07-27 Thread arp
http://git-wip-us.apache.org/repos/asf/hadoop/blob/156f24ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestGreedyReservationAgent.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestGreedyReservationAgent.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestGreedyReservationAgent.java
new file mode 100644
index 000..bd18a2f
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/reservation/planning/TestGreedyReservationAgent.java
@@ -0,0 +1,611 @@
+/***
+ *   Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   License); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *  
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *  
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an AS IS BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ 
***/
+package org.apache.hadoop.yarn.server.resourcemanager.reservation.planning;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.mock;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+import org.apache.hadoop.yarn.api.records.ReservationDefinition;
+import org.apache.hadoop.yarn.api.records.ReservationId;
+import org.apache.hadoop.yarn.api.records.ReservationRequest;
+import org.apache.hadoop.yarn.api.records.ReservationRequestInterpreter;
+import org.apache.hadoop.yarn.api.records.ReservationRequests;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.impl.pb.ReservationDefinitionPBImpl;
+import org.apache.hadoop.yarn.api.records.impl.pb.ReservationRequestsPBImpl;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy;
+import org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryPlan;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.InMemoryReservationAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationAllocation;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationInterval;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSchedulerConfiguration;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationSystemTestUtil;
+import 
org.apache.hadoop.yarn.server.resourcemanager.reservation.exceptions.PlanningException;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.Resources;
+import org.junit.Before;
+import org.junit.Test;
+import org.mortbay.log.Log;
+
+public class TestGreedyReservationAgent {
+
+  ReservationAgent agent;
+  InMemoryPlan plan;
+  Resource minAlloc = Resource.newInstance(1024, 1);
+  ResourceCalculator res = new DefaultResourceCalculator();
+  Resource maxAlloc = Resource.newInstance(1024 * 8, 8);
+  Random rand = new Random();
+  long step;
+
+  @Before
+  public void setup() throws Exception {
+
+long seed = rand.nextLong();
+rand.setSeed(seed);
+Log.info("Running with seed: " + seed);
+
+// setting completely loose quotas
+long timeWindow = 100L;
+Resource clusterCapacity = Resource.newInstance(100 * 1024, 100);
+step = 1000L;
+ReservationSystemTestUtil testUtil = new ReservationSystemTestUtil();
+String reservationQ = testUtil.getFullReservationQueueName();
+
+float instConstraint = 100;
+float 

[26/37] hadoop git commit: HADOOP-12237. releasedocmaker.py doesn't work behind a proxy (Tsuyoshi Ozawa via aw)

2015-07-27 Thread arp
HADOOP-12237. releasedocmaker.py doesn't work behind a proxy (Tsuyoshi Ozawa 
via aw)

(cherry picked from commit b41fe3111ae37478cbace2a07e6ac35a676ef978)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/adcf5dd9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/adcf5dd9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/adcf5dd9

Branch: refs/heads/HDFS-7240
Commit: adcf5dd94052481f66deaf402ac4ace1ffc06f49
Parents: d769783
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 20 09:47:46 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Fri Jul 24 18:31:48 2015 -0700

--
 dev-support/releasedocmaker.py | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/adcf5dd9/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 409d8e3..d2e5dda 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -24,6 +24,7 @@ import os
 import re
 import sys
 import urllib
+import urllib2
 try:
   import json
 except ImportError:
@@ -125,7 +126,7 @@ class GetVersions:
 versions.sort()
 print "Looking for %s through %s"%(versions[0],versions[-1])
 for p in projects:
-  resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/project/%s/versions"%p)
+  resp = 
urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/project/%s/versions"%p)
   data = json.loads(resp.read())
   for d in data:
+if d['name'][0].isdigit and versions[0] <= d['name'] and d['name'] <= 
versions[-1]:
@@ -288,7 +289,7 @@ class JiraIter:
 self.projects = projects
 v=str(version).replace("-SNAPSHOT","")
 
-resp = urllib.urlopen("https://issues.apache.org/jira/rest/api/2/field")
+resp = urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/field")
 data = json.loads(resp.read())
 
 self.fieldIdMap = {}
@@ -301,7 +302,7 @@ class JiraIter:
 count=100
 while (at < end):
   params = urllib.urlencode({'jql': "project in ('"+"' , 
'".join(projects)+"') and fixVersion in ('"+v+"') and resolution = Fixed", 
'startAt':at, 'maxResults':count})
-  resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
+  resp = 
urllib2.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
   data = json.loads(resp.read())
   if (data.has_key('errorMessages')):
 raise Exception(data['errorMessages'])
@@ -407,6 +408,10 @@ def main():
   if (len(options.versions) <= 0):
 parser.error(At least one version needs to be supplied)
 
+  proxy = urllib2.ProxyHandler()
+  opener = urllib2.build_opener(proxy)
+  urllib2.install_opener(opener)
+
   projects = options.projects
 
   if (options.range is True):
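
The patch above makes releasedocmaker pick up the environment's proxy settings by installing a default urllib2 ProxyHandler before any JIRA REST calls are made. For readers working in Java, a rough, hypothetical analogue of the same idea, asking the JVM's configured proxy selector which proxy to use before opening a URL, could look like the sketch below; the JIRA URL is only an example and none of this code is part of the patch.

    import java.io.InputStream;
    import java.net.Proxy;
    import java.net.ProxySelector;
    import java.net.URI;
    import java.net.URLConnection;
    import java.util.List;

    public class ProxyAwareFetch {
      public static void main(String[] args) throws Exception {
        URI uri = URI.create("https://issues.apache.org/jira/rest/api/2/field");
        // The default ProxySelector honours the JVM proxy system properties
        // (http.proxyHost, https.proxyHost, ...); pick the first candidate.
        List<Proxy> proxies = ProxySelector.getDefault().select(uri);
        Proxy proxy = proxies.isEmpty() ? Proxy.NO_PROXY : proxies.get(0);
        URLConnection conn = uri.toURL().openConnection(proxy);
        try (InputStream in = conn.getInputStream()) {
          System.out.println("Connected via " + proxy + ", content type: "
              + conn.getContentType());
        }
      }
    }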



[32/37] hadoop git commit: HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class. Contributed by Surendra Singh Lilhore.

2015-07-27 Thread arp
HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class. 
Contributed by Surendra Singh Lilhore.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1df78688
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1df78688
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1df78688

Branch: refs/heads/HDFS-7240
Commit: 1df78688c69476f89d16f93bc74a4f05d0b1a3da
Parents: 42d4e0a
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Jul 27 13:17:24 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Jul 27 13:17:24 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java   | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1df78688/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3614e01..1ddf7da 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1081,6 +1081,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8773. Few FSNamesystem metrics are not documented in the Metrics page.
 (Rakesh R via cnauroth)
 
+HDFS-8810. Correct assertions in TestDFSInotifyEventInputStream class.
+(Surendra Singh Lilhore via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1df78688/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
index 65569d0..e7bbcac 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSInotifyEventInputStream.java
@@ -164,7 +164,7 @@ public class TestDFSInotifyEventInputStream {
   Event.RenameEvent re2 = (Event.RenameEvent) batch.getEvents()[0];
   Assert.assertTrue(re2.getDstPath().equals("/file2"));
   Assert.assertTrue(re2.getSrcPath().equals("/file4"));
-  Assert.assertTrue(re.getTimestamp() > 0);
+  Assert.assertTrue(re2.getTimestamp() > 0);
   LOG.info(re2.toString());
 
   // AddOp with overwrite
@@ -378,7 +378,7 @@ public class TestDFSInotifyEventInputStream {
   Event.RenameEvent re3 = (Event.RenameEvent) batch.getEvents()[0];
   Assert.assertTrue(re3.getDstPath().equals("/dir/file5"));
   Assert.assertTrue(re3.getSrcPath().equals("/file5"));
-  Assert.assertTrue(re.getTimestamp() > 0);
+  Assert.assertTrue(re3.getTimestamp() > 0);
   LOG.info(re3.toString());
 
   // TruncateOp



[24/37] hadoop git commit: HADOOP-12135. cleanup releasedocmaker

2015-07-27 Thread arp
HADOOP-12135. cleanup releasedocmaker

(cherry picked from commit 3fee9f8d18dd60d83da674b3cfbefe666915fad8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e8b62d11
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e8b62d11
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e8b62d11

Branch: refs/heads/HDFS-7240
Commit: e8b62d11d460e9706e48df92a0b0a72f4a02d3f5
Parents: 098ba45
Author: Allen Wittenauer a...@apache.org
Authored: Mon Jul 6 15:49:03 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Fri Jul 24 18:31:30 2015 -0700

--
 dev-support/releasedocmaker.py | 384 +++-
 1 file changed, 207 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e8b62d11/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 8e68b3c..6e01260 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -19,6 +19,7 @@
 from glob import glob
 from optparse import OptionParser
 from time import gmtime, strftime
+import pprint
 import os
 import re
 import sys
@@ -99,23 +100,44 @@ def mstr(obj):
    return ""
   return unicode(obj)
 
-def buildindex(master):
+def buildindex(title,license):
   versions=reversed(sorted(glob("[0-9]*.[0-9]*.[0-9]*")))
   with open("index.md","w") as indexfile:
+if license is True:
+  indexfile.write(asflicense)
 for v in versions:
-  indexfile.write("* Apache Hadoop v%s\n" % (v))
+  indexfile.write("* %s v%s\n" % (title,v))
   for k in ("Changes","Release Notes"):
-indexfile.write("*  %s\n" %(k))
-indexfile.write("* [Combined %s](%s/%s.%s.html)\n" \
+indexfile.write("* %s (%s/%s.%s.html)\n" \
   % (k,v,k.upper().replace(" ",""),v))
-if not master:
-  indexfile.write("* [Hadoop Common 
%s](%s/%s.HADOOP.%s.html)\n" \
-% (k,v,k.upper().replace(" ",""),v))
-  for p in ("HDFS","MapReduce","YARN"):
-indexfile.write("* [%s %s](%s/%s.%s.%s.html)\n" \
-  % (p,k,v,k.upper().replace(" ",""),p.upper(),v))
   indexfile.close()
 
+class GetVersions:
+  """ yo """
+  def __init__(self,versions, projects):
+versions = versions
+projects = projects
+self.newversions = []
+pp = pprint.PrettyPrinter(indent=4)
+at=0
+end=1
+count=100
+versions.sort()
+print "Looking for %s through %s"%(versions[0],versions[-1])
+for p in projects:
+  resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/project/%s/versions"%p)
+  data = json.loads(resp.read())
+  for d in data:
+if d['name'][0].isdigit and versions[0] <= d['name'] and d['name'] <= 
versions[-1]:
+  print "Adding %s to the list" % d['name']
+  self.newversions.append(d['name'])
+newlist=list(set(self.newversions))
+self.newversions=newlist
+
+  def getlist(self):
+  pp = pprint.PrettyPrinter(indent=4)
+  return(self.newversions)
+
 class Version:
   """Represents a version number"""
   def __init__(self, data):
@@ -261,8 +283,10 @@ class Jira:
 class JiraIter:
   """An Iterator of JIRAs"""
 
-  def __init__(self, versions):
-self.versions = versions
+  def __init__(self, version, projects):
+self.version = version
+self.projects = projects
+v=str(version).replace("-SNAPSHOT","")
 
 resp = urllib.urlopen("https://issues.apache.org/jira/rest/api/2/field")
 data = json.loads(resp.read())
@@ -276,7 +300,7 @@ class JiraIter:
 end=1
 count=100
 while (at < end):
-  params = urllib.urlencode({'jql': "project in 
(HADOOP,HDFS,MAPREDUCE,YARN) and fixVersion in ('"+"' , 
'".join([str(v).replace("-SNAPSHOT","") for v in versions])+"') and resolution 
= Fixed", 'startAt':at, 'maxResults':count})
+  params = urllib.urlencode({'jql': "project in ('"+"' , 
'".join(projects)+"') and fixVersion in ('"+v+"') and resolution = Fixed", 
'startAt':at, 'maxResults':count})
   resp = 
urllib.urlopen("https://issues.apache.org/jira/rest/api/2/search?%s"%params)
   data = json.loads(resp.read())
   if (data.has_key('errorMessages')):
@@ -286,10 +310,8 @@ class JiraIter:
   self.jiras.extend(data['issues'])
 
   needaversion=False
-  for j in versions:
-v=str(j).replace("-SNAPSHOT","")
-if v not in releaseVersion:
-  needaversion=True
+  if v not in releaseVersion:
+needaversion=True
 
   if needaversion is True:
 for i in range(len(data['issues'])):
@@ -351,21 +373,29 @@ class Outputs:
   self.writeKeyRaw(jira.getProject(), line)
 
 def main():
-  parser = OptionParser(usage="usage: %prog --version VERSION [--version 
VERSION2 ...]",
+  parser = OptionParser(usage="usage: %prog 

[04/37] hadoop git commit: HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. (Contributed by Brahma Reddy Battula)

2015-07-27 Thread arp
HADOOP-12161. Add getStoragePolicy API to the FileSystem interface. 
(Contributed by Brahma Reddy Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/adfa34ff
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/adfa34ff
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/adfa34ff

Branch: refs/heads/HDFS-7240
Commit: adfa34ff9992295a6d2496b259d8c483ed90b566
Parents: 3bba180
Author: Arpit Agarwal a...@apache.org
Authored: Thu Jul 23 10:13:04 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Thu Jul 23 10:13:04 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../apache/hadoop/fs/AbstractFileSystem.java| 13 +
 .../java/org/apache/hadoop/fs/FileContext.java  | 20 
 .../java/org/apache/hadoop/fs/FileSystem.java   | 13 +
 .../org/apache/hadoop/fs/FilterFileSystem.java  |  6 ++
 .../java/org/apache/hadoop/fs/FilterFs.java |  6 ++
 .../org/apache/hadoop/fs/viewfs/ChRootedFs.java |  6 ++
 .../org/apache/hadoop/fs/viewfs/ViewFs.java | 15 +++
 .../org/apache/hadoop/fs/TestHarFileSystem.java |  3 +++
 .../main/java/org/apache/hadoop/fs/Hdfs.java|  5 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 18 ++
 .../hadoop/hdfs/DistributedFileSystem.java  | 19 +++
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 17 +
 13 files changed, 144 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/adfa34ff/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index ff7d2ad..f1a3bc9 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -716,6 +716,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12184. Remove unused Linux-specific constants in NativeIO (Martin
 Walsh via Colin P. McCabe)
 
+HADOOP-12161. Add getStoragePolicy API to the FileSystem interface.
+(Brahma Reddy Battula via Arpit Agarwal)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/adfa34ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
index cb3fb86..2bc3859 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java
@@ -1237,6 +1237,19 @@ public abstract class AbstractFileSystem {
   }
 
   /**
+   * Retrieve the storage policy for a given file or directory.
+   *
+   * @param src file or directory path.
+   * @return storage policy for the given file.
+   * @throws IOException
+   */
+  public BlockStoragePolicySpi getStoragePolicy(final Path src)
+  throws IOException {
+throw new UnsupportedOperationException(getClass().getSimpleName()
++ " doesn't support getStoragePolicy");
+  }
+
+  /**
* Retrieve all the storage policies supported by this file system.
*
* @return all storage policies supported by this filesystem.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/adfa34ff/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
index 0f21a61..a98d662 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
@@ -49,6 +49,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_DEFAULT;
+
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.ipc.RpcClientException;
 import org.apache.hadoop.ipc.RpcServerException;
@@ -2692,6 +2693,25 @@ public class FileContext {
   }
 
   /**
+   * Query the effective storage policy ID for the given file or directory.
+   *
+ 
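
As a quick illustration of how the new getStoragePolicy() call added by this patch can be used from client code, here is a minimal, hypothetical sketch; the configuration and path are examples only and are not taken from the patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockStoragePolicySpi;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StoragePolicyLookup {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/example.txt"); // example path only
        // getStoragePolicy() is the API introduced here; DistributedFileSystem
        // resolves the effective policy, while file systems without storage
        // policy support throw UnsupportedOperationException.
        BlockStoragePolicySpi policy = fs.getStoragePolicy(file);
        System.out.println("Policy for " + file + ": " + policy.getName());
      }
    }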

[07/37] hadoop git commit: YARN-3900. Protobuf layout of yarn_security_token causes errors in other protos that include it (adhoot via rkanter)

2015-07-27 Thread arp
YARN-3900. Protobuf layout of yarn_security_token causes errors in other protos 
that include it (adhoot via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d3026e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d3026e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d3026e7

Branch: refs/heads/HDFS-7240
Commit: 1d3026e7b3cf2f3a8a544b66ff14783cc590bdac
Parents: 6736a1a
Author: Robert Kanter rkan...@apache.org
Authored: Thu Jul 23 14:42:49 2015 -0700
Committer: Robert Kanter rkan...@apache.org
Committed: Thu Jul 23 14:46:54 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../hadoop-yarn/hadoop-yarn-common/pom.xml  |  2 +-
 .../main/proto/server/yarn_security_token.proto | 70 
 .../src/main/proto/yarn_security_token.proto| 70 
 .../pom.xml |  2 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml  |  2 +-
 .../resourcemanager/recovery/TestProtos.java| 36 ++
 7 files changed, 112 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d3026e7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9416cd6..3d41ba7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -666,6 +666,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
 
+YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
+that include it (adhoot via rkanter)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d3026e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index 602fcd7..3b47cdd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -253,7 +253,7 @@
 param${basedir}/src/main/proto/param
   /imports
   source
-directory${basedir}/src/main/proto/server/directory
+directory${basedir}/src/main/proto/directory
 includes
   includeyarn_security_token.proto/include
 /includes

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d3026e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
deleted file mode 100644
index 339e99e..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/proto/server/yarn_security_token.proto
+++ /dev/null
@@ -1,70 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-option java_package = "org.apache.hadoop.yarn.proto";
-option java_outer_classname = "YarnSecurityTokenProtos";
-option java_generic_services = true;
-option java_generate_equals_and_hash = true;
-package hadoop.yarn;
-
-import "yarn_protos.proto";
-
-// None of the following records are supposed to be exposed to users.
-
-message NMTokenIdentifierProto {
-  optional ApplicationAttemptIdProto appAttemptId = 1;
-  optional NodeIdProto nodeId = 2;
-  optional string appSubmitter = 3;
-  optional int32 keyId = 4 [default = -1];
-}
-
-message AMRMTokenIdentifierProto {
-  optional ApplicationAttemptIdProto appAttemptId = 1;
-  optional int32 keyId = 2 [default = -1];
-}
-
-message 

[25/37] hadoop git commit: HADOOP-12202. releasedocmaker drops missing component and assignee entries (aw)

2015-07-27 Thread arp
HADOOP-12202. releasedocmaker drops missing component and assignee entries (aw)

(cherry picked from commit adbacf7010373dbe6df239688b4cebd4a93a69e4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d7697831
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d7697831
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d7697831

Branch: refs/heads/HDFS-7240
Commit: d7697831e3b24bec149990feed819e7d96f78184
Parents: e8b62d1
Author: Allen Wittenauer a...@apache.org
Authored: Tue Jul 7 14:30:32 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Fri Jul 24 18:31:44 2015 -0700

--
 dev-support/releasedocmaker.py | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7697831/dev-support/releasedocmaker.py
--
diff --git a/dev-support/releasedocmaker.py b/dev-support/releasedocmaker.py
index 6e01260..409d8e3 100755
--- a/dev-support/releasedocmaker.py
+++ b/dev-support/releasedocmaker.py
@@ -420,6 +420,8 @@ def main():
   else:
 title=options.title
 
+  haderrors=False
+
   for v in versions:
 vstr=str(v)
 jlist = JiraIter(vstr,projects)
@@ -468,14 +470,6 @@ def main():
 for jira in sorted(jlist):
   if jira.getIncompatibleChange():
 incompatlist.append(jira)
-if (len(jira.getReleaseNote())==0):
-warningCount+=1
-
-  if jira.checkVersionString():
- warningCount+=1
-
-  if jira.checkMissingComponent() or jira.checkMissingAssignee():
-errorCount+=1
   elif jira.getType() == Bug:
 buglist.append(jira)
   elif jira.getType() == Improvement:
@@ -496,6 +490,7 @@ def main():
  notableclean(jira.getSummary()))
 
   if (jira.getIncompatibleChange()) and (len(jira.getReleaseNote())==0):
+warningCount+=1
 reloutputs.writeKeyRaw(jira.getProject(),"\n---\n\n")
 reloutputs.writeKeyRaw(jira.getProject(), line)
 line ='\n**WARNING: No release note provided for this incompatible 
change.**\n\n'
@@ -503,9 +498,11 @@ def main():
 reloutputs.writeKeyRaw(jira.getProject(), line)
 
   if jira.checkVersionString():
+  warningCount+=1
   lintMessage += "\nWARNING: Version string problem for %s " % 
jira.getId()
 
   if (jira.checkMissingComponent() or jira.checkMissingAssignee()):
+  errorCount+=1
   errorMessage=[]
   jira.checkMissingComponent() and errorMessage.append("component")
   jira.checkMissingAssignee() and errorMessage.append("assignee")
@@ -520,11 +517,11 @@ def main():
 if (options.lint is True):
 print lintMessage
 print ===
-print "Error:%d, Warning:%d \n" % (errorCount, warningCount)
-
+print "%s: Error:%d, Warning:%d \n" % (vstr, errorCount, warningCount)
 if (errorCount>0):
-cleanOutputDir(version)
-sys.exit(1)
+   haderrors=True
+   cleanOutputDir(vstr)
+   continue
 
 reloutputs.writeAll("\n\n")
 reloutputs.close()
@@ -571,5 +568,8 @@ def main():
   if options.index:
 buildindex(title,options.license)
 
+  if haderrors is True:
+sys.exit(1)
+
 if __name__ == "__main__":
   main()



[22/37] hadoop git commit: YARN-3973. Recent changes to application priority management break reservation system from YARN-1051 (Carlo Curino via wangda)

2015-07-27 Thread arp
YARN-3973. Recent changes to application priority management break reservation 
system from YARN-1051 (Carlo Curino via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a3bd7b4a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a3bd7b4a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a3bd7b4a

Branch: refs/heads/HDFS-7240
Commit: a3bd7b4a59b3664273dc424f240356838213d4e7
Parents: ff9c13e
Author: Wangda Tan wan...@apache.org
Authored: Fri Jul 24 16:44:18 2015 -0700
Committer: Wangda Tan wan...@apache.org
Committed: Fri Jul 24 16:44:18 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt| 6 +-
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java  | 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3bd7b4a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c295784..55258a6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -667,7 +667,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun Vasudev 
via wangda)
 
-YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
+YARN-3941. Proportional Preemption policy should try to avoid sending 
duplicate 
+PREEMPT_CONTAINER event to scheduler. (Sunil G via wangda)
 
 YARN-3900. Protobuf layout of yarn_security_token causes errors in other 
protos
 that include it (adhoot via rkanter)
@@ -678,6 +679,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3957. FairScheduler NPE In FairSchedulerQueueInfo causing scheduler 
page to 
 return 500. (Anubhav Dhoot via kasha)
 
+YARN-3973. Recent changes to application priority management break 
+reservation system from YARN-1051. (Carlo Curino via wangda)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3bd7b4a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 68e608a..0b39d35 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1867,7 +1867,7 @@ public class CapacityScheduler extends
 
   private Priority getDefaultPriorityForQueue(String queueName) {
 Queue queue = getQueue(queueName);
-if (null == queue) {
+if (null == queue || null == queue.getDefaultApplicationPriority()) {
   // Return with default application priority
   return Priority.newInstance(CapacitySchedulerConfiguration
   .DEFAULT_CONFIGURATION_APPLICATION_PRIORITY);



[01/37] hadoop git commit: HDFS-8797. WebHdfsFileSystem creates too many connections for pread. Contributed by Jing Zhao.

2015-07-27 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7240 ef128ee4b - 2ebe8c7cb


HDFS-8797. WebHdfsFileSystem creates too many connections for pread. 
Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e91ccfad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e91ccfad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e91ccfad

Branch: refs/heads/HDFS-7240
Commit: e91ccfad07ec5b5674a84009772dd31a82b4e4de
Parents: 06e5dd2
Author: Jing Zhao ji...@apache.org
Authored: Wed Jul 22 17:42:31 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Wed Jul 22 17:42:31 2015 -0700

--
 .../hadoop/hdfs/web/ByteRangeInputStream.java   | 57 +---
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../hdfs/web/TestByteRangeInputStream.java  | 35 ++--
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java | 41 ++
 4 files changed, 113 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e91ccfad/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
index 395c9f6..bb581db 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hdfs.web;
 
+import java.io.EOFException;
 import java.io.IOException;
 import java.io.InputStream;
 import java.net.HttpURLConnection;
@@ -65,6 +66,16 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 final boolean resolved) throws IOException;
   }
 
+  static class InputStreamAndFileLength {
+final Long length;
+final InputStream in;
+
+InputStreamAndFileLength(Long length, InputStream in) {
+  this.length = length;
+  this.in = in;
+}
+  }
+
   enum StreamStatus {
 NORMAL, SEEK, CLOSED
   }
@@ -101,7 +112,9 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 if (in != null) {
   in.close();
 }
-in = openInputStream();
+InputStreamAndFileLength fin = openInputStream(startPos);
+in = fin.in;
+fileLength = fin.length;
 status = StreamStatus.NORMAL;
 break;
   case CLOSED:
@@ -111,20 +124,22 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
   }
 
   @VisibleForTesting
-  protected InputStream openInputStream() throws IOException {
+  protected InputStreamAndFileLength openInputStream(long startOffset)
+  throws IOException {
 // Use the original url if no resolved url exists, eg. if
 // it's the first time a request is made.
 final boolean resolved = resolvedURL.getURL() != null;
 final URLOpener opener = resolved? resolvedURL: originalURL;
 
-final HttpURLConnection connection = opener.connect(startPos, resolved);
+final HttpURLConnection connection = opener.connect(startOffset, resolved);
 resolvedURL.setURL(getResolvedUrl(connection));
 
 InputStream in = connection.getInputStream();
+final Long length;
 final Map<String, List<String>> headers = connection.getHeaderFields();
 if (isChunkedTransferEncoding(headers)) {
   // file length is not known
-  fileLength = null;
+  length = null;
 } else {
   // for non-chunked transfer-encoding, get content-length
   final String cl = connection.getHeaderField(HttpHeaders.CONTENT_LENGTH);
@@ -133,14 +148,14 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 + headers);
   }
   final long streamlength = Long.parseLong(cl);
-  fileLength = startPos + streamlength;
+  length = startOffset + streamlength;
 
   // Java has a bug with 2GB request streams.  It won't bounds check
   // the reads so the transfer blocks until the server times out
   in = new BoundedInputStream(in, streamlength);
 }
 
-return in;
+return new InputStreamAndFileLength(length, in);
   }
 
   private static boolean isChunkedTransferEncoding(
@@ -204,6 +219,36 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 }
   }
 
+  @Override
+  public int read(long position, byte[] buffer, int offset, int length)
+  throws IOException {
+try (InputStream in = openInputStream(position).in) {
+  return in.read(buffer, offset, length);
+}
+  }
+
+  @Override
+  
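
The positional read() override added above is what lets a WebHDFS pread be served without disturbing the main stream. A minimal, hypothetical client-side sketch of a positional read follows; the URI, path, and offset are placeholders, not values from the patch.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PreadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode:50070/"), conf);
        byte[] buf = new byte[4096];
        try (FSDataInputStream in = fs.open(new Path("/data/example.bin"))) {
          // Positional read: fetch bytes at a given offset without moving the
          // stream's current position; with this change the WebHDFS stream
          // serves the request from one connection for the requested range.
          int n = in.read(1024L * 1024L, buf, 0, buf.length);
          System.out.println("Read " + n + " bytes at offset 1 MB");
        }
      }
    }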

[17/37] hadoop git commit: HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be removed (Alan Burlison via Colin P. McCabe)

2015-07-27 Thread arp
HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be removed 
(Alan Burlison via Colin P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4b0c744
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4b0c744
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4b0c744

Branch: refs/heads/HDFS-7240
Commit: e4b0c74434b82c25256a59b03d62b1a66bb8ac69
Parents: d19d187
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Fri Jul 24 13:03:31 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Fri Jul 24 13:03:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../hadoop-common/src/JNIFlags.cmake| 124 ---
 2 files changed, 3 insertions(+), 124 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4b0c744/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index d6d43f2..0da6194 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -727,6 +727,9 @@ Release 2.8.0 - UNRELEASED
 
 HADOOP-12259. Utility to Dynamic port allocation (brahmareddy via rkanter)
 
+HADOOP-12170. hadoop-common's JNIFlags.cmake is redundant and can be
+removed (Alan Burlison via Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4b0c744/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
--
diff --git a/hadoop-common-project/hadoop-common/src/JNIFlags.cmake 
b/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
deleted file mode 100644
index c558fe8..000
--- a/hadoop-common-project/hadoop-common/src/JNIFlags.cmake
+++ /dev/null
@@ -1,124 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# License); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an AS IS BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-cmake_minimum_required(VERSION 2.6 FATAL_ERROR)
-
-# If JVM_ARCH_DATA_MODEL is 32, compile all binaries as 32-bit.
-# This variable is set by maven.
-if (JVM_ARCH_DATA_MODEL EQUAL 32)
-# Force 32-bit code generation on amd64/x86_64, ppc64, sparc64
-if (CMAKE_COMPILER_IS_GNUCC AND CMAKE_SYSTEM_PROCESSOR MATCHES .*64)
-set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32")
-set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")
-set(CMAKE_LD_FLAGS "${CMAKE_LD_FLAGS} -m32")
-endif ()
-if (CMAKE_SYSTEM_PROCESSOR STREQUAL x86_64 OR CMAKE_SYSTEM_PROCESSOR 
STREQUAL amd64)
-# Set CMAKE_SYSTEM_PROCESSOR to ensure that find_package(JNI) will use
-# the 32-bit version of libjvm.so.
-set(CMAKE_SYSTEM_PROCESSOR i686)
-endif ()
-endif (JVM_ARCH_DATA_MODEL EQUAL 32)
-
-# Determine float ABI of JVM on ARM Linux
-if (CMAKE_SYSTEM_PROCESSOR MATCHES ^arm AND CMAKE_SYSTEM_NAME STREQUAL 
Linux)
-find_program(READELF readelf)
-if (READELF MATCHES NOTFOUND)
-message(WARNING "readelf not found; JVM float ABI detection disabled")
-else (READELF MATCHES NOTFOUND)
-execute_process(
-COMMAND ${READELF} -A ${JAVA_JVM_LIBRARY}
-OUTPUT_VARIABLE JVM_ELF_ARCH
-ERROR_QUIET)
-if (NOT JVM_ELF_ARCH MATCHES "Tag_ABI_VFP_args: VFP registers")
-message("Soft-float JVM detected")
-
-# Test compilation with -mfloat-abi=softfp using an arbitrary libc 
function
-# (typically fails with fatal error: bits/predefs.h: No such file 
or directory
-# if soft-float dev libraries are not installed)
-include(CMakePushCheckState)
-cmake_push_check_state()
-set(CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS} 
-mfloat-abi=softfp")
-include(CheckSymbolExists)
-check_symbol_exists(exit stdlib.h SOFTFP_AVAILABLE)
-if (NOT SOFTFP_AVAILABLE)
-message(FATAL_ERROR 

[06/37] hadoop git commit: HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop nearly impossible. Contributed by Zhihai Xu.

2015-07-27 Thread arp
HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements drop 
nearly impossible. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6736a1ab
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6736a1ab
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6736a1ab

Branch: refs/heads/HDFS-7240
Commit: 6736a1ab7033523ed5f304fdfed46d7f348665b4
Parents: 813cf89
Author: Andrew Wang w...@apache.org
Authored: Thu Jul 23 14:42:35 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Thu Jul 23 14:42:35 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../org/apache/hadoop/ipc/CallQueueManager.java | 27 +---
 .../apache/hadoop/ipc/TestCallQueueManager.java |  6 ++---
 3 files changed, 24 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6736a1ab/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index f1a3bc9..6c18add 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -719,6 +719,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12161. Add getStoragePolicy API to the FileSystem interface.
 (Brahma Reddy Battula via Arpit Agarwal)
 
+HADOOP-12189. Improve CallQueueManager#swapQueue to make queue elements
+drop nearly impossible. (Zhihai Xu via wang)
+
   OPTIMIZATIONS
 
 HADOOP-11785. Reduce the number of listStatus operation in distcp

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6736a1ab/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
index 1568bd6..c10f839 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
@@ -32,11 +32,15 @@ import org.apache.hadoop.conf.Configuration;
  */
 public class CallQueueManager<E> {
   public static final Log LOG = LogFactory.getLog(CallQueueManager.class);
+  // Number of checkpoints for empty queue.
+  private static final int CHECKPOINT_NUM = 20;
+  // Interval to check empty queue.
+  private static final long CHECKPOINT_INTERVAL_MS = 10;
 
   @SuppressWarnings("unchecked")
   static <E> Class<? extends BlockingQueue<E>> convertQueueClass(
-  Class<?> queneClass, Class<E> elementClass) {
-return (Class<? extends BlockingQueue<E>>)queneClass;
+  Class<?> queueClass, Class<E> elementClass) {
+return (Class<? extends BlockingQueue<E>>)queueClass;
   }
   private final boolean clientBackOffEnabled;
 
@@ -159,18 +163,23 @@ public class CallQueueManager<E> {
   }
 
   /**
-   * Checks if queue is empty by checking at two points in time.
+   * Checks if queue is empty by checking at CHECKPOINT_NUM points with
+   * CHECKPOINT_INTERVAL_MS interval.
* This doesn't mean the queue might not fill up at some point later, but
* it should decrease the probability that we lose a call this way.
*/
   private boolean queueIsReallyEmpty(BlockingQueue<?> q) {
-boolean wasEmpty = q.isEmpty();
-try {
-  Thread.sleep(10);
-} catch (InterruptedException ie) {
-  return false;
+for (int i = 0; i < CHECKPOINT_NUM; i++) {
+  try {
+Thread.sleep(CHECKPOINT_INTERVAL_MS);
+  } catch (InterruptedException ie) {
+return false;
+  }
+  if (!q.isEmpty()) {
+return false;
+  }
 }
-return q.isEmpty() && wasEmpty;
+return true;
   }
 
   private String stringRepr(Object o) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6736a1ab/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
index 6e1838e..51a9750 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java
@@ -165,7 +165,7 @@ public class TestCallQueueManager {
 HashMap<Runnable, Thread> threads = new HashMap<Runnable, Thread>();
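
The essence of the change is to poll the drained queue at several checkpoints instead of trusting a single isEmpty() observation. A self-contained sketch of that technique follows; the class and constant names here are illustrative, not the Hadoop ones.

    import java.util.concurrent.BlockingQueue;

    public final class QueueDrainCheck {
      private static final int CHECKPOINT_NUM = 20;          // number of empty checks
      private static final long CHECKPOINT_INTERVAL_MS = 10; // pause between checks

      private QueueDrainCheck() {}

      // Returns true only if the queue stayed empty across every checkpoint.
      // A single isEmpty() call can race with a late producer; repeating the
      // check shrinks the window in which a queued element could be dropped.
      public static boolean isReallyEmpty(BlockingQueue<?> q) {
        for (int i = 0; i < CHECKPOINT_NUM; i++) {
          try {
            Thread.sleep(CHECKPOINT_INTERVAL_MS);
          } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return false;
          }
          if (!q.isEmpty()) {
            return false;
          }
        }
        return true;
      }
    }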
 
 

[37/37] hadoop git commit: Merge remote-tracking branch 'apache/trunk' into HDFS-7240

2015-07-27 Thread arp
Merge remote-tracking branch 'apache/trunk' into HDFS-7240


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ebe8c7c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ebe8c7c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ebe8c7c

Branch: refs/heads/HDFS-7240
Commit: 2ebe8c7cb2aee1d2779183c1364dc14ad0baa0df
Parents: ef128ee 3e6fce9
Author: Arpit Agarwal a...@apache.org
Authored: Mon Jul 27 14:57:03 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Mon Jul 27 14:57:03 2015 -0700

--
 dev-support/releasedocmaker.py  | 405 +
 hadoop-common-project/hadoop-common/CHANGES.txt |  16 +
 .../hadoop-common/src/JNIFlags.cmake| 124 ---
 .../apache/hadoop/fs/AbstractFileSystem.java|  13 +
 .../java/org/apache/hadoop/fs/FileContext.java  |  20 +
 .../java/org/apache/hadoop/fs/FileSystem.java   |  30 +-
 .../org/apache/hadoop/fs/FilterFileSystem.java  |   6 +
 .../java/org/apache/hadoop/fs/FilterFs.java |   6 +
 .../org/apache/hadoop/fs/viewfs/ChRootedFs.java |   6 +
 .../org/apache/hadoop/fs/viewfs/ViewFs.java |  15 +
 .../org/apache/hadoop/ipc/CallQueueManager.java |  27 +-
 .../hadoop-common/src/site/markdown/Metrics.md  |   1 +
 .../src/site/markdown/filesystem/filesystem.md  |   4 +
 .../hadoop/fs/FileSystemContractBaseTest.java   |  11 +-
 .../org/apache/hadoop/fs/TestHarFileSystem.java |   3 +
 .../apache/hadoop/ipc/TestCallQueueManager.java |   6 +-
 .../org/apache/hadoop/net/ServerSocketUtil.java |  63 ++
 .../org/apache/hadoop/hdfs/inotify/Event.java   |  95 +++
 .../hadoop/hdfs/protocol/ClientProtocol.java| 306 ---
 .../hadoop/hdfs/web/ByteRangeInputStream.java   |  57 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  19 +
 .../main/java/org/apache/hadoop/fs/Hdfs.java|   5 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  18 +
 .../hadoop/hdfs/DistributedFileSystem.java  |  19 +
 .../server/blockmanagement/BlockManager.java|   4 +
 .../blockmanagement/UnderReplicatedBlocks.java  |  36 +-
 .../hdfs/server/namenode/FSNamesystem.java  |   9 +-
 .../hadoop/hdfs/TestBlockStoragePolicy.java |  17 +
 .../hdfs/TestDFSInotifyEventInputStream.java|  30 +-
 .../hadoop/hdfs/TestDistributedFileSystem.java  |  13 +-
 .../TestUnderReplicatedBlocks.java  |  48 ++
 .../hdfs/web/TestByteRangeInputStream.java  |  35 +-
 .../org/apache/hadoop/hdfs/web/TestWebHDFS.java |  41 +
 hadoop-yarn-project/CHANGES.txt |  44 +
 .../hadoop-yarn/hadoop-yarn-api/pom.xml |  34 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  11 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 +++
 .../hadoop-yarn/hadoop-yarn-common/pom.xml  |   2 +-
 .../yarn/webapp/view/TwoColumnLayout.java   |   2 +-
 .../main/proto/server/yarn_security_token.proto |  70 --
 .../src/main/proto/yarn_security_token.proto|  70 ++
 .../src/main/resources/yarn-default.xml |  16 +
 .../yarn/conf/TestYarnConfigurationFields.java  | 136 ---
 .../pom.xml |   2 +-
 .../server/nodemanager/ContainerExecutor.java   |  23 +-
 .../nodemanager/DefaultContainerExecutor.java   |   2 +-
 .../nodemanager/DockerContainerExecutor.java|   2 +-
 .../nodemanager/LinuxContainerExecutor.java | 222 +++--
 .../nodemanager/LocalDirsHandlerService.java|  35 +-
 .../launcher/ContainerLaunch.java   |  15 +
 .../linux/privileged/PrivilegedOperation.java   |  46 +-
 .../PrivilegedOperationException.java   |  30 +-
 .../privileged/PrivilegedOperationExecutor.java |  30 +-
 .../linux/resources/CGroupsHandler.java |   8 +
 .../linux/resources/CGroupsHandlerImpl.java |  12 +-
 .../runtime/DefaultLinuxContainerRuntime.java   | 148 
 .../DelegatingLinuxContainerRuntime.java| 110 +++
 .../runtime/DockerLinuxContainerRuntime.java| 273 ++
 .../linux/runtime/LinuxContainerRuntime.java|  38 +
 .../runtime/LinuxContainerRuntimeConstants.java |  69 ++
 .../linux/runtime/docker/DockerClient.java  |  82 ++
 .../linux/runtime/docker/DockerCommand.java |  66 ++
 .../linux/runtime/docker/DockerLoadCommand.java |  30 +
 .../linux/runtime/docker/DockerRunCommand.java  | 107 +++
 .../runtime/ContainerExecutionException.java|  85 ++
 .../runtime/ContainerRuntime.java   |  50 ++
 .../runtime/ContainerRuntimeConstants.java  |  33 +
 .../runtime/ContainerRuntimeContext.java| 105 +++
 .../executor/ContainerLivenessContext.java  |  13 +
 .../executor/ContainerReacquisitionContext.java |  13 +
 .../executor/ContainerSignalContext.java|  13 +
 .../executor/ContainerStartContext.java |  23 +-
 .../container-executor/impl/configuration.c |  17 +-
 .../container-executor/impl/configuration.h |   2 +
 .../impl/container-executor.c  

[16/37] hadoop git commit: YARN-3957. FairScheduler NPE In FairSchedulerQueueInfo causing scheduler page to return 500. (Anubhav Dhoot via kasha)

2015-07-27 Thread arp
YARN-3957. FairScheduler NPE In FairSchedulerQueueInfo causing scheduler page 
to return 500. (Anubhav Dhoot via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d19d1877
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d19d1877
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d19d1877

Branch: refs/heads/HDFS-7240
Commit: d19d18775368f5aaa254881165acc1299837072b
Parents: f8f6091
Author: Karthik Kambatla ka...@apache.org
Authored: Fri Jul 24 11:44:37 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Fri Jul 24 11:44:37 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../webapp/dao/FairSchedulerQueueInfo.java  |  4 +-
 .../webapp/dao/TestFairSchedulerQueueInfo.java  | 59 
 3 files changed, 65 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d19d1877/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a25387d..44e5510 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -672,6 +672,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3845. Scheduler page does not render RGBA color combinations in IE11. 
 (Contributed by Mohammad Shahid Khan)
 
+YARN-3957. FairScheduler NPE In FairSchedulerQueueInfo causing scheduler 
page to 
+return 500. (Anubhav Dhoot via kasha)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d19d1877/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
index 9b297a2..7ba0988 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/FairSchedulerQueueInfo.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.yarn.server.resourcemanager.webapp.dao;
 
 
+import java.util.ArrayList;
 import java.util.Collection;
 
 import javax.xml.bind.annotation.XmlAccessType;
@@ -204,6 +205,7 @@ public class FairSchedulerQueueInfo {
   }
 
   public Collection<FairSchedulerQueueInfo> getChildQueues() {
-return childQueues.getQueueInfoList();
+return childQueues != null ? childQueues.getQueueInfoList() :
+new ArrayList<FairSchedulerQueueInfo>();
   }
 }
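
The crux of the YARN-3957 fix is the null guard above: for a leaf queue the
childQueues field may legitimately be null, and the old getter dereferenced it,
which is what produced the 500 on the scheduler page. A minimal, self-contained
sketch of the same null-safe getter pattern (class and field names here are
illustrative, not the actual RM DAO):

    import java.util.ArrayList;
    import java.util.Collection;

    class QueueInfoSketch {
      private Collection<QueueInfoSketch> childQueues; // may be null for leaf queues

      public Collection<QueueInfoSketch> getChildQueues() {
        // Return an empty collection rather than null so that JSON/XML
        // serialization and the web UI can iterate without an NPE.
        return childQueues != null ? childQueues : new ArrayList<QueueInfoSketch>();
      }
    }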

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d19d1877/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/TestFairSchedulerQueueInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/TestFairSchedulerQueueInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/TestFairSchedulerQueueInfo.java
new file mode 100644
index 000..973afcf
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/TestFairSchedulerQueueInfo.java
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR 
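
The body of the new test file is cut off above. Purely as a hypothetical sketch
(JUnit 4; the class and method names below are assumed for illustration and are
not taken from the truncated file), a regression test for this NPE would call
the getter on a queue whose child-queue collection was never populated and
assert a non-null, empty result:

    import static org.junit.Assert.assertNotNull;
    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.Collection;

    import org.junit.Test;

    public class TestNullSafeChildQueues {

      // Minimal stand-in for the DAO under test; only the null-safe getter matters here.
      static class QueueInfoDao {
        private Collection<QueueInfoDao> childQueues; // never set for a leaf queue

        Collection<QueueInfoDao> getChildQueues() {
          return childQueues != null ? childQueues : new ArrayList<QueueInfoDao>();
        }
      }

      @Test
      public void testChildQueuesNeverNull() {
        Collection<QueueInfoDao> children = new QueueInfoDao().getChildQueues();
        assertNotNull("child queue list should never be null", children);
        assertTrue("leaf queue should report no children", children.isEmpty());
      }
    }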

hadoop git commit: YARN-3846. RM Web UI queue filter is not working for sub queue. Contributed by Mohammad Shahid Khan

2015-07-27 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3e6fce91a - 3572ebd73


YARN-3846. RM Web UI queue filter is not working for sub queue. Contributed by 
Mohammad Shahid Khan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3572ebd7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3572ebd7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3572ebd7

Branch: refs/heads/trunk
Commit: 3572ebd738aa5fa8b0906d75fb12cc6cbb991573
Parents: 3e6fce9
Author: Jian He jia...@apache.org
Authored: Mon Jul 27 16:57:11 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Mon Jul 27 17:12:05 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../server/resourcemanager/webapp/CapacitySchedulerPage.java| 5 -
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3572ebd7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 534c55a..4f8484a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -695,6 +695,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api
 module. (Varun Saxena via aajisaka)
 
+YARN-3846. RM Web UI queue filter is not working for sub queue.
+(Mohammad Shahid Khan via jianhe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3572ebd7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
index 12a3013..d8971b7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
@@ -516,7 +516,10 @@ class CapacitySchedulerPage extends RmView {
 "$('#cs').bind('select_node.jstree', function(e, data) {",
   "var q = $('.q', data.rslt.obj).first().text();",
   "if (q == 'Queue: root') q = '';",
-  "else q = '^' + q.substr(q.lastIndexOf(':') + 2) + '$';",
+  "else {",
+    "q = q.substr(q.lastIndexOf(':') + 2);",
+    "q = '^' + q.substr(q.lastIndexOf('.') + 1) + '$';",
+  "}",
   "$('#apps').dataTable().fnFilter(q, 4, true);",
 "});",
 "$('#cs').show();",
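
The patched JavaScript above takes the selected tree node's label (for example
"Queue: root.a.b"), keeps only the leaf queue name, and builds an anchored
regular expression for the applications table filter, so that sub-queues now
match. A small Java rendering of the same string manipulation (a sketch under
the label format assumed from the snippet, not the RM code path itself):

    public class QueueFilterRegex {
      static String filterRegexFor(String label) {
        if (label.equals("Queue: root")) {
          return ""; // selecting root clears the filter
        }
        String q = label.substring(label.lastIndexOf(':') + 2); // e.g. "root.a.b"
        q = q.substring(q.lastIndexOf('.') + 1);                // leaf name, e.g. "b"
        return "^" + q + "$";                                   // anchored match
      }

      public static void main(String[] args) {
        System.out.println(filterRegexFor("Queue: root.a.b")); // prints ^b$
        System.out.println(filterRegexFor("Queue: root"));     // prints an empty filter
      }
    }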



hadoop git commit: YARN-3846. RM Web UI queue filter is not working for sub queue. Contributed by Mohammad Shahid Khan (cherry picked from commit 3572ebd738aa5fa8b0906d75fb12cc6cbb991573)

2015-07-27 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9da487e0f - 7c123accd


YARN-3846. RM Web UI queue filter is not working for sub queue. Contributed by 
Mohammad Shahid Khan
(cherry picked from commit 3572ebd738aa5fa8b0906d75fb12cc6cbb991573)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7c123acc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7c123acc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7c123acc

Branch: refs/heads/branch-2
Commit: 7c123accdac370db77831d3d64a9c9ccdc07fe74
Parents: 9da487e
Author: Jian He jia...@apache.org
Authored: Mon Jul 27 16:57:11 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Mon Jul 27 17:12:24 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../server/resourcemanager/webapp/CapacitySchedulerPage.java| 5 -
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c123acc/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 724ddd0..4c6ccb7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -643,6 +643,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3958. TestYarnConfigurationFields should be moved to hadoop-yarn-api
 module. (Varun Saxena via aajisaka)
 
+YARN-3846. RM Web UI queue filter is not working for sub queue.
+(Mohammad Shahid Khan via jianhe)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c123acc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
index 12a3013..d8971b7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/CapacitySchedulerPage.java
@@ -516,7 +516,10 @@ class CapacitySchedulerPage extends RmView {
 "$('#cs').bind('select_node.jstree', function(e, data) {",
   "var q = $('.q', data.rslt.obj).first().text();",
   "if (q == 'Queue: root') q = '';",
-  "else q = '^' + q.substr(q.lastIndexOf(':') + 2) + '$';",
+  "else {",
+    "q = q.substr(q.lastIndexOf(':') + 2);",
+    "q = '^' + q.substr(q.lastIndexOf('.') + 1) + '$';",
+  "}",
   "$('#apps').dataTable().fnFilter(q, 4, true);",
 "});",
 "$('#cs').show();",



hadoop git commit: HADOOP-12245. References to misspelled REMAINING_QUATA in FileSystemShell.md. Contributed by Gabor Liptak.

2015-07-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3572ebd73 - e21dde501


HADOOP-12245. References to misspelled REMAINING_QUATA in FileSystemShell.md. 
Contributed by Gabor Liptak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e21dde50
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e21dde50
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e21dde50

Branch: refs/heads/trunk
Commit: e21dde501aa9323b7f34b4bc4ba9d282ec4f2707
Parents: 3572ebd
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Jul 28 11:33:10 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Jul 28 11:33:10 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../hadoop-common/src/site/markdown/FileSystemShell.md| 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e21dde50/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index baf39e3..aeaa5b9 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1017,6 +1017,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12239. StorageException complaining " no lease ID" when updating
 FolderLastModifiedTime in WASB. (Duo Xu via cnauroth)
 
+HADOOP-12245. References to misspelled REMAINING_QUATA in
+FileSystemShell.md. (Gabor Liptak via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e21dde50/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
index 144cb73..fb89ca1 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
@@ -174,7 +174,7 @@ Usage: `hadoop fs -count [-q] [-h] [-v] <paths> `
 
 Count the number of directories, files and bytes under the paths that match 
the specified file pattern. The output columns with -count are: DIR\_COUNT, 
FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
-The output columns with -count -q are: QUOTA, REMAINING\_QUATA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
+The output columns with -count -q are: QUOTA, REMAINING\_QUOTA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
 The -h option shows sizes in human readable format.
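
For readers who want the same numbers programmatically, the quota columns that
`hadoop fs -count -q` prints map onto org.apache.hadoop.fs.ContentSummary. A
hedged sketch (the path and the way remaining quota is derived are illustrative
assumptions, not taken from the patch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class QuotaColumns {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        ContentSummary cs = fs.getContentSummary(new Path("/user/example")); // hypothetical path

        // QUOTA / REMAINING_QUOTA: the name quota counts files plus directories.
        System.out.println("QUOTA=" + cs.getQuota()
            + " REMAINING_QUOTA=" + (cs.getQuota() - cs.getFileCount() - cs.getDirectoryCount()));

        // SPACE_QUOTA / REMAINING_SPACE_QUOTA.
        System.out.println("SPACE_QUOTA=" + cs.getSpaceQuota()
            + " REMAINING_SPACE_QUOTA=" + (cs.getSpaceQuota() - cs.getSpaceConsumed()));

        // DIR_COUNT, FILE_COUNT, CONTENT_SIZE.
        System.out.println("DIR_COUNT=" + cs.getDirectoryCount()
            + " FILE_COUNT=" + cs.getFileCount()
            + " CONTENT_SIZE=" + cs.getLength());
      }
    }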
 



hadoop git commit: HADOOP-12245. References to misspelled REMAINING_QUATA in FileSystemShell.md. Contributed by Gabor Liptak.

2015-07-27 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7c123accd - c650ab003


HADOOP-12245. References to misspelled REMAINING_QUATA in FileSystemShell.md. 
Contributed by Gabor Liptak.

(cherry picked from commit e21dde501aa9323b7f34b4bc4ba9d282ec4f2707)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c650ab00
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c650ab00
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c650ab00

Branch: refs/heads/branch-2
Commit: c650ab00370704760c8de77dc5531808c7e0804b
Parents: 7c123ac
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Jul 28 11:33:10 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Jul 28 11:34:16 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../hadoop-common/src/site/markdown/FileSystemShell.md| 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c650ab00/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2f954f6..b7e32ca 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -523,6 +523,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-12239. StorageException complaining " no lease ID" when updating
 FolderLastModifiedTime in WASB. (Duo Xu via cnauroth)
 
+HADOOP-12245. References to misspelled REMAINING_QUATA in
+FileSystemShell.md. (Gabor Liptak via aajisaka)
+
 Release 2.7.2 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c650ab00/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
--
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md 
b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
index ae2b0ef..6fa81eb 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
@@ -174,7 +174,7 @@ Usage: `hadoop fs -count [-q] [-h] [-v] <paths> `
 
 Count the number of directories, files and bytes under the paths that match 
the specified file pattern. The output columns with -count are: DIR\_COUNT, 
FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
-The output columns with -count -q are: QUOTA, REMAINING\_QUATA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
+The output columns with -count -q are: QUOTA, REMAINING\_QUOTA, SPACE\_QUOTA, 
REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
 
 The -h option shows sizes in human readable format.