hadoop git commit: HDFS-11064. Mention the default NN rpc ports in hdfs-default.xml. Contributed by Yiqun Lin.

2016-10-27 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk b62bc2bbd -> 57187fdb9


HDFS-11064. Mention the default NN rpc ports in hdfs-default.xml. Contributed 
by Yiqun Lin.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/57187fdb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/57187fdb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/57187fdb

Branch: refs/heads/trunk
Commit: 57187fdb93c8f59372c980eb3d86073a3c8045b9
Parents: b62bc2b
Author: Xiao Chen 
Authored: Thu Oct 27 18:13:06 2016 -0700
Committer: Xiao Chen 
Committed: Thu Oct 27 18:13:06 2016 -0700

--
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/57187fdb/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 061d078..e28dc54 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -37,7 +37,7 @@
 RPC address that handles all clients requests. In the case of 
HA/Federation where multiple namenodes exist,
 the name service id is added to the name e.g. dfs.namenode.rpc-address.ns1
 dfs.namenode.rpc-address.EXAMPLENAMESERVICE
-The value of this property will take the form of nn-host1:rpc-port.
+The value of this property will take the form of nn-host1:rpc-port. The 
NameNode's default RPC port is 9820.
   
 
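For context, a minimal client-side sketch (not part of this patch) of how the property resolves once the default port is documented; it assumes an hdfs-site.xml on the classpath and uses 9820 only as the fallback when the configured value omits a port:

    import java.net.InetSocketAddress;

    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.net.NetUtils;

    public class NnRpcAddressExample {
      public static void main(String[] args) {
        // HdfsConfiguration pulls in hdfs-default.xml and hdfs-site.xml.
        HdfsConfiguration conf = new HdfsConfiguration();
        // The value may be "host" or "host:port"; fall back to the default
        // NameNode RPC port (9820) mentioned in the updated description.
        String addr = conf.get("dfs.namenode.rpc-address", "localhost");
        InetSocketAddress rpcAddr = NetUtils.createSocketAddr(addr, 9820);
        System.out.println("NameNode RPC endpoint: " + rpcAddr);
      }
    }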
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HADOOP-13763. KMS REST API Documentation Decrypt URL typo. Contributed by Jeffrey E Rodriguez.

2016-10-27 Thread xiao
Repository: hadoop
Updated Branches:
  refs/heads/trunk 4df8ed63e -> b62bc2bbd


HADOOP-13763. KMS REST API Documentation Decrypt URL typo. Contributed by 
Jeffrey E Rodriguez.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b62bc2bb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b62bc2bb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b62bc2bb

Branch: refs/heads/trunk
Commit: b62bc2bbd80bb751348f0c1f655d5e456624663e
Parents: 4df8ed6
Author: Xiao Chen 
Authored: Thu Oct 27 18:05:40 2016 -0700
Committer: Xiao Chen 
Committed: Thu Oct 27 18:05:40 2016 -0700

--
 hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b62bc2bb/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
--
diff --git a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm 
b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
index 0c6d0b2..69eb1dd 100644
--- a/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
+++ b/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm
@@ -896,7 +896,7 @@ $H4 Decrypt Encrypted Key
 
 *REQUEST:*
 
-POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?ee_op=decrypt
+POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt
 Content-Type: application/json
 
 {

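Since only the query parameter changed, here is a hedged sketch of issuing the corrected request with plain java.net; the host, port, key/version names and the JSON body are placeholders, not values from this patch:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class KmsDecryptRequestSketch {
      public static void main(String[] args) throws Exception {
        // Corrected parameter from the patch: eek_op=decrypt (was ee_op).
        String versionName = "key1@0";
        URL url = new URL("http://kms-host:16000/kms/v1/keyversion/"
            + versionName + "/_eek?eek_op=decrypt");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Illustrative body only; see the full documentation page for the
        // exact fields the decrypt operation expects.
        String body = "{\"name\" : \"key1\", \"iv\" : \"<base64>\", "
            + "\"material\" : \"<base64 encrypted material>\"}";
        try (OutputStream out = conn.getOutputStream()) {
          out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
      }
    }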

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4743. FairSharePolicy breaks TimSort assumption. (Zephyr Guo and Yufei Gu via kasha)

2016-10-27 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 334fd9e83 -> 950bfed1d


YARN-4743. FairSharePolicy breaks TimSort assumption. (Zephyr Guo and Yufei Gu 
via kasha)

(cherry picked from commit 4df8ed63ed93f2542e4b48f521b0cc6624ab59c1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/950bfed1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/950bfed1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/950bfed1

Branch: refs/heads/branch-2
Commit: 950bfed1d31f07b989a3659121c29a0a80dc4a49
Parents: 334fd9e
Author: Karthik Kambatla 
Authored: Thu Oct 27 17:42:44 2016 -0700
Committer: Karthik Kambatla 
Committed: Thu Oct 27 17:48:24 2016 -0700

--
 .../fair/policies/FairSharePolicy.java  |  31 ++-
 .../scheduler/fair/TestSchedulingPolicy.java| 228 +++
 2 files changed, 254 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/950bfed1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index 6aa8405..f120f0f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -63,7 +63,11 @@ public class FairSharePolicy extends SchedulingPolicy {
* 
* Schedulables above their min share are compared by (runningTasks / 
weight).
* If all weights are equal, slots are given to the job with the fewest 
tasks;
-   * otherwise, jobs with more weight get proportionally more slots.
+   * otherwise, jobs with more weight get proportionally more slots. If weight
+   * equals to 0, we can't compare Schedulables by (resource usage/weight).
+   * There are two situations: 1)All weights equal to 0, slots are given
+   * to one with less resource usage. 2)Only one of weight equals to 0, slots
+   * are given to the one with non-zero weight.
*/
  private static class FairShareComparator implements Comparator<Schedulable>,
   Serializable {
@@ -74,6 +78,7 @@ public class FairSharePolicy extends SchedulingPolicy {
 public int compare(Schedulable s1, Schedulable s2) {
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
+  double weight1, weight2;
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
   s1.getMinShare(), s1.getDemand());
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
@@ -86,10 +91,26 @@ public class FairSharePolicy extends SchedulingPolicy {
   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
ONE).getMemorySize();
   minShareRatio2 = (double) s2.getResourceUsage().getMemorySize()
   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
ONE).getMemorySize();
-  useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
-  s1.getWeights().getWeight(ResourceType.MEMORY);
-  useToWeightRatio2 = s2.getResourceUsage().getMemorySize() /
-  s2.getWeights().getWeight(ResourceType.MEMORY);
+
+  weight1 = s1.getWeights().getWeight(ResourceType.MEMORY);
+  weight2 = s2.getWeights().getWeight(ResourceType.MEMORY);
+  if (weight1 > 0.0 && weight2 > 0.0) {
+useToWeightRatio1 = s1.getResourceUsage().getMemorySize() / weight1;
+useToWeightRatio2 = s2.getResourceUsage().getMemorySize() / weight2;
+  } else { // Either weight1 or weight2 equals to 0
+if (weight1 == weight2) {
+  // If they have same weight, just compare usage
+  useToWeightRatio1 = s1.getResourceUsage().getMemorySize();
+  useToWeightRatio2 = s2.getResourceUsage().getMemorySize();
+} else {
+  // By setting useToWeightRatios to negative weights, we give the
+  // zero-weight one less priority, so the non-zero weight one will
+  // be given slots.
+  useToWeightRatio1 = -weight1;
+  useToWeightRatio2 = -weight2;
+}
+  }
+
   int res 

hadoop git commit: YARN-2306. Add test for leakage of reservation metrics in fair scheduler. (Hong Zhiguo and Yufei Gu via subru).

2016-10-27 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1455f56b6 -> 334fd9e83


YARN-2306. Add test for leakage of reservation metrics in fair scheduler. (Hong 
Zhiguo and Yufei Gu via subru).

(cherry picked from commit b2c4f24c31e73faa8f71d44db5de3aa91e3b7d5e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/334fd9e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/334fd9e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/334fd9e8

Branch: refs/heads/branch-2
Commit: 334fd9e83fedceaec82d91b126841ff62ab8b9a8
Parents: 1455f56
Author: Subru Krishnan 
Authored: Thu Oct 27 17:43:13 2016 -0700
Committer: Subru Krishnan 
Committed: Thu Oct 27 17:44:05 2016 -0700

--
 .../scheduler/fair/TestFairScheduler.java   | 52 
 1 file changed, 52 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/334fd9e8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
index 1fa120a..9221b1d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
@@ -4761,4 +4761,56 @@ public class TestFairScheduler extends 
FairSchedulerTestBase {
 rm1.stop();
 rm2.stop();
   }
+
+  @Test
+  public void testReservationMetrics() throws IOException {
+scheduler.init(conf);
+scheduler.start();
+scheduler.reinitialize(conf, resourceManager.getRMContext());
+QueueMetrics metrics = scheduler.getRootQueueMetrics();
+
+RMNode node1 =
+MockNodes
+.newNodeInfo(1, Resources.createResource(4096, 4), 1, "127.0.0.1");
+NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node1);
+scheduler.handle(nodeEvent);
+
+ApplicationAttemptId appAttemptId = createAppAttemptId(1, 1);
+createApplicationWithAMResource(appAttemptId, "default", "user1", null);
+
+NodeUpdateSchedulerEvent updateEvent = new NodeUpdateSchedulerEvent(node1);
+scheduler.update();
+scheduler.handle(updateEvent);
+
+createSchedulingRequestExistingApplication(1024, 1, 1, appAttemptId);
+scheduler.handle(updateEvent);
+
+// no reservation yet
+assertEquals(0, metrics.getReservedContainers());
+assertEquals(0, metrics.getReservedMB());
+assertEquals(0, metrics.getReservedVirtualCores());
+
+// create reservation of {4096, 4}
+createSchedulingRequestExistingApplication(4096, 4, 1, appAttemptId);
+scheduler.update();
+scheduler.handle(updateEvent);
+
+// reservation created
+assertEquals(1, metrics.getReservedContainers());
+assertEquals(4096, metrics.getReservedMB());
+assertEquals(4, metrics.getReservedVirtualCores());
+
+// remove AppAttempt
+AppAttemptRemovedSchedulerEvent attRemoveEvent =
+new AppAttemptRemovedSchedulerEvent(
+appAttemptId,
+RMAppAttemptState.KILLED,
+false);
+scheduler.handle(attRemoveEvent);
+
+// The reservation metrics should be subtracted
+assertEquals(0, metrics.getReservedContainers());
+assertEquals(0, metrics.getReservedMB());
+assertEquals(0, metrics.getReservedVirtualCores());
+  }
 }


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4743. FairSharePolicy breaks TimSort assumption. (Zephyr Guo and Yufei Gu via kasha)

2016-10-27 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk b2c4f24c3 -> 4df8ed63e


YARN-4743. FairSharePolicy breaks TimSort assumption. (Zephyr Guo and Yufei Gu 
via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4df8ed63
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4df8ed63
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4df8ed63

Branch: refs/heads/trunk
Commit: 4df8ed63ed93f2542e4b48f521b0cc6624ab59c1
Parents: b2c4f24
Author: Karthik Kambatla 
Authored: Thu Oct 27 17:42:44 2016 -0700
Committer: Karthik Kambatla 
Committed: Thu Oct 27 17:45:48 2016 -0700

--
 .../fair/policies/FairSharePolicy.java  |  31 ++-
 .../scheduler/fair/TestSchedulingPolicy.java| 228 +++
 2 files changed, 254 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4df8ed63/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
index 6aa8405..f120f0f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java
@@ -63,7 +63,11 @@ public class FairSharePolicy extends SchedulingPolicy {
* 
* Schedulables above their min share are compared by (runningTasks / 
weight).
* If all weights are equal, slots are given to the job with the fewest 
tasks;
-   * otherwise, jobs with more weight get proportionally more slots.
+   * otherwise, jobs with more weight get proportionally more slots. If weight
+   * equals to 0, we can't compare Schedulables by (resource usage/weight).
+   * There are two situations: 1)All weights equal to 0, slots are given
+   * to one with less resource usage. 2)Only one of weight equals to 0, slots
+   * are given to the one with non-zero weight.
*/
  private static class FairShareComparator implements Comparator<Schedulable>,
   Serializable {
@@ -74,6 +78,7 @@ public class FairSharePolicy extends SchedulingPolicy {
 public int compare(Schedulable s1, Schedulable s2) {
   double minShareRatio1, minShareRatio2;
   double useToWeightRatio1, useToWeightRatio2;
+  double weight1, weight2;
   Resource minShare1 = Resources.min(RESOURCE_CALCULATOR, null,
   s1.getMinShare(), s1.getDemand());
   Resource minShare2 = Resources.min(RESOURCE_CALCULATOR, null,
@@ -86,10 +91,26 @@ public class FairSharePolicy extends SchedulingPolicy {
   / Resources.max(RESOURCE_CALCULATOR, null, minShare1, 
ONE).getMemorySize();
   minShareRatio2 = (double) s2.getResourceUsage().getMemorySize()
   / Resources.max(RESOURCE_CALCULATOR, null, minShare2, 
ONE).getMemorySize();
-  useToWeightRatio1 = s1.getResourceUsage().getMemorySize() /
-  s1.getWeights().getWeight(ResourceType.MEMORY);
-  useToWeightRatio2 = s2.getResourceUsage().getMemorySize() /
-  s2.getWeights().getWeight(ResourceType.MEMORY);
+
+  weight1 = s1.getWeights().getWeight(ResourceType.MEMORY);
+  weight2 = s2.getWeights().getWeight(ResourceType.MEMORY);
+  if (weight1 > 0.0 && weight2 > 0.0) {
+useToWeightRatio1 = s1.getResourceUsage().getMemorySize() / weight1;
+useToWeightRatio2 = s2.getResourceUsage().getMemorySize() / weight2;
+  } else { // Either weight1 or weight2 equals to 0
+if (weight1 == weight2) {
+  // If they have same weight, just compare usage
+  useToWeightRatio1 = s1.getResourceUsage().getMemorySize();
+  useToWeightRatio2 = s2.getResourceUsage().getMemorySize();
+} else {
+  // By setting useToWeightRatios to negative weights, we give the
+  // zero-weight one less priority, so the non-zero weight one will
+  // be given slots.
+  useToWeightRatio1 = -weight1;
+  useToWeightRatio2 = -weight2;
+}
+  }
+
   int res = 0;
   if (s1Needy && !s2Needy)
 res = -1;

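The TimSort failure comes from the old ratio computation: with a zero weight the division yields Infinity or NaN, and NaN is neither less than, greater than, nor equal to anything, so the comparator stops being a total order and java.util sorting can fail with "Comparison method violates its general contract!". A toy sketch (not the Hadoop classes) of the broken ratio and of the fixed branching used above:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class ZeroWeightComparatorSketch {
      // Old style: usage / weight, which is NaN when both are zero.
      static double ratio(long usage, double weight) {
        return usage / weight;
      }

      public static void main(String[] args) {
        double a = ratio(0, 0.0);      // NaN
        double b = ratio(1024, 1.0);   // 1024.0
        System.out.println(a < b);     // false
        System.out.println(a > b);     // false -> no consistent ordering

        // Fixed style, mirroring the patch: never divide by a zero weight.
        Comparator<double[]> fixed = (s1, s2) -> {
          double w1 = s1[1], w2 = s2[1];
          double r1, r2;
          if (w1 > 0.0 && w2 > 0.0) {
            r1 = s1[0] / w1;
            r2 = s2[0] / w2;
          } else if (w1 == w2) {       // both zero: compare raw usage
            r1 = s1[0];
            r2 = s2[0];
          } else {                     // one zero: favor the non-zero weight
            r1 = -w1;
            r2 = -w2;
          }
          return Double.compare(r1, r2);
        };

        List<double[]> schedulables = new ArrayList<>();
        schedulables.add(new double[] {0, 0.0});
        schedulables.add(new double[] {1024, 1.0});
        schedulables.add(new double[] {2048, 0.0});
        Collections.sort(schedulables, fixed); // total order, TimSort is happy
      }
    }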

hadoop git commit: YARN-2306. Add test for leakage of reservation metrics in fair scheduler. (Hong Zhiguo and Yufei Gu via subru).

2016-10-27 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/trunk 28660f51a -> b2c4f24c3


YARN-2306. Add test for leakage of reservation metrics in fair scheduler. (Hong 
Zhiguo and Yufei Gu via subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b2c4f24c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b2c4f24c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b2c4f24c

Branch: refs/heads/trunk
Commit: b2c4f24c31e73faa8f71d44db5de3aa91e3b7d5e
Parents: 28660f5
Author: Subru Krishnan 
Authored: Thu Oct 27 17:43:13 2016 -0700
Committer: Subru Krishnan 
Committed: Thu Oct 27 17:43:13 2016 -0700

--
 .../scheduler/fair/TestFairScheduler.java   | 52 
 1 file changed, 52 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2c4f24c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
index e28b35a..f17726c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
@@ -4761,4 +4761,56 @@ public class TestFairScheduler extends 
FairSchedulerTestBase {
 rm1.stop();
 rm2.stop();
   }
+
+  @Test
+  public void testReservationMetrics() throws IOException {
+scheduler.init(conf);
+scheduler.start();
+scheduler.reinitialize(conf, resourceManager.getRMContext());
+QueueMetrics metrics = scheduler.getRootQueueMetrics();
+
+RMNode node1 =
+MockNodes
+.newNodeInfo(1, Resources.createResource(4096, 4), 1, "127.0.0.1");
+NodeAddedSchedulerEvent nodeEvent = new NodeAddedSchedulerEvent(node1);
+scheduler.handle(nodeEvent);
+
+ApplicationAttemptId appAttemptId = createAppAttemptId(1, 1);
+createApplicationWithAMResource(appAttemptId, "default", "user1", null);
+
+NodeUpdateSchedulerEvent updateEvent = new NodeUpdateSchedulerEvent(node1);
+scheduler.update();
+scheduler.handle(updateEvent);
+
+createSchedulingRequestExistingApplication(1024, 1, 1, appAttemptId);
+scheduler.handle(updateEvent);
+
+// no reservation yet
+assertEquals(0, metrics.getReservedContainers());
+assertEquals(0, metrics.getReservedMB());
+assertEquals(0, metrics.getReservedVirtualCores());
+
+// create reservation of {4096, 4}
+createSchedulingRequestExistingApplication(4096, 4, 1, appAttemptId);
+scheduler.update();
+scheduler.handle(updateEvent);
+
+// reservation created
+assertEquals(1, metrics.getReservedContainers());
+assertEquals(4096, metrics.getReservedMB());
+assertEquals(4, metrics.getReservedVirtualCores());
+
+// remove AppAttempt
+AppAttemptRemovedSchedulerEvent attRemoveEvent =
+new AppAttemptRemovedSchedulerEvent(
+appAttemptId,
+RMAppAttemptState.KILLED,
+false);
+scheduler.handle(attRemoveEvent);
+
+// The reservation metrics should be subtracted
+assertEquals(0, metrics.getReservedContainers());
+assertEquals(0, metrics.getReservedMB());
+assertEquals(0, metrics.getReservedVirtualCores());
+  }
 }


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: MAPREDUCE-2631. Potential resource leaks in BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.

2016-10-27 Thread naganarasimha_gr
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 2ec90a1a4 -> 8a664eba7


MAPREDUCE-2631. Potential resource leaks in 
BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.

(cherry picked from commit 28660f51af161a9fa301523d96a6f8ae4ebd6edd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a664eba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a664eba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a664eba

Branch: refs/heads/branch-2.8
Commit: 8a664eba7d421ef96b71103f043fe04f923008e9
Parents: 2ec90a1
Author: Naganarasimha 
Authored: Fri Oct 28 05:50:13 2016 +0530
Committer: Naganarasimha 
Committed: Fri Oct 28 06:04:03 2016 +0530

--
 .../apache/hadoop/mapred/IFileOutputStream.java |  8 ++--
 .../hadoop/mapred/pipes/BinaryProtocol.java | 14 +-
 .../apache/hadoop/mapred/TestIFileStreams.java  | 20 
 3 files changed, 35 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a664eba/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
index 8f25ba7..08bcd24 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
@@ -24,6 +24,7 @@ import java.io.FilterOutputStream;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.DataChecksum;
 /**
  * A Checksum output stream.
@@ -60,8 +61,11 @@ public class IFileOutputStream extends FilterOutputStream {
   return;
 }
 closed = true;
-finish();
-out.close();
+try {
+  finish();
+} finally {
+  IOUtils.closeStream(out);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a664eba/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
index ebfb184..5a3ed5b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
@@ -36,6 +36,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableComparable;
@@ -200,8 +201,8 @@ class BinaryProtocol

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a664eba/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
index 2b97d3b..a815b28 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
@@ -22,7 +22,13 @@ import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.io.DataInputBuffer;
 import 

hadoop git commit: MAPREDUCE-2631. Potential resource leaks in BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.

2016-10-27 Thread naganarasimha_gr
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ce8ace372 -> 1455f56b6


MAPREDUCE-2631. Potential resource leaks in 
BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.

(cherry picked from commit 28660f51af161a9fa301523d96a6f8ae4ebd6edd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1455f56b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1455f56b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1455f56b

Branch: refs/heads/branch-2
Commit: 1455f56b63ccd54bd97b9f1dadc22f377d2b36ba
Parents: ce8ace3
Author: Naganarasimha 
Authored: Fri Oct 28 05:50:13 2016 +0530
Committer: Naganarasimha 
Committed: Fri Oct 28 06:00:29 2016 +0530

--
 .../apache/hadoop/mapred/IFileOutputStream.java |  8 ++--
 .../hadoop/mapred/pipes/BinaryProtocol.java | 14 +-
 .../apache/hadoop/mapred/TestIFileStreams.java  | 20 
 3 files changed, 35 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1455f56b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
index 8f25ba7..08bcd24 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
@@ -24,6 +24,7 @@ import java.io.FilterOutputStream;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.DataChecksum;
 /**
  * A Checksum output stream.
@@ -60,8 +61,11 @@ public class IFileOutputStream extends FilterOutputStream {
   return;
 }
 closed = true;
-finish();
-out.close();
+try {
+  finish();
+} finally {
+  IOUtils.closeStream(out);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1455f56b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
index ebfb184..5a3ed5b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
@@ -36,6 +36,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableComparable;
@@ -200,8 +201,8 @@ class BinaryProtocol

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1455f56b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
index 2b97d3b..a815b28 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
@@ -22,7 +22,13 @@ import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.io.DataInputBuffer;
 import 

hadoop git commit: MAPREDUCE-2631. Potential resource leaks in BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.

2016-10-27 Thread naganarasimha_gr
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5877f20f9 -> 28660f51a


MAPREDUCE-2631. Potential resource leaks in 
BinaryProtocol$TeeOutputStream.java. Contributed by Sunil G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/28660f51
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/28660f51
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/28660f51

Branch: refs/heads/trunk
Commit: 28660f51af161a9fa301523d96a6f8ae4ebd6edd
Parents: 5877f20
Author: Naganarasimha 
Authored: Fri Oct 28 05:50:13 2016 +0530
Committer: Naganarasimha 
Committed: Fri Oct 28 05:50:13 2016 +0530

--
 .../apache/hadoop/mapred/IFileOutputStream.java |  8 ++--
 .../hadoop/mapred/pipes/BinaryProtocol.java | 14 +-
 .../apache/hadoop/mapred/TestIFileStreams.java  | 20 
 3 files changed, 35 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/28660f51/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
index 8f25ba7..08bcd24 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFileOutputStream.java
@@ -24,6 +24,7 @@ import java.io.FilterOutputStream;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.util.DataChecksum;
 /**
  * A Checksum output stream.
@@ -60,8 +61,11 @@ public class IFileOutputStream extends FilterOutputStream {
   return;
 }
 closed = true;
-finish();
-out.close();
+try {
+  finish();
+} finally {
+  IOUtils.closeStream(out);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28660f51/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
index ebfb184..5a3ed5b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/BinaryProtocol.java
@@ -36,6 +36,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.io.Writable;
 import org.apache.hadoop.io.WritableComparable;
@@ -200,8 +201,8 @@ class BinaryProtocol

http://git-wip-us.apache.org/repos/asf/hadoop/blob/28660f51/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
index 2b97d3b..a815b28 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFileStreams.java
@@ -22,7 +22,13 @@ import org.apache.hadoop.fs.ChecksumException;
 import org.apache.hadoop.io.DataInputBuffer;
 import org.apache.hadoop.io.DataOutputBuffer;
 import org.junit.Test;
+import 

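The TeeOutputStream hunk is cut off in this archive, but the IFileOutputStream hunk above shows the pattern the patch applies. Below is a hypothetical tee stream sketching that same try/finally plus IOUtils.closeStream approach (not the actual BinaryProtocol inner class):

    import java.io.IOException;
    import java.io.OutputStream;

    import org.apache.hadoop.io.IOUtils;

    class TeeOutputStreamSketch extends OutputStream {
      private final OutputStream file;
      private final OutputStream out;

      TeeOutputStreamSketch(OutputStream file, OutputStream out) {
        this.file = file;
        this.out = out;
      }

      @Override
      public void write(int b) throws IOException {
        file.write(b);
        out.write(b);
      }

      @Override
      public void flush() throws IOException {
        file.flush();
        out.flush();
      }

      @Override
      public void close() throws IOException {
        try {
          file.close();
        } finally {
          // Release the second stream even if the first close throws,
          // instead of leaking it.
          IOUtils.closeStream(out);
        }
      }
    }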
hadoop git commit: HDFS-11005. Ozone: TestBlockPoolManager fails in ozone branch. Contributed by Chen Liang.

2016-10-27 Thread aengineer
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7240 59dfa7482 -> eb8f2b2ca


HDFS-11005. Ozone: TestBlockPoolManager fails in ozone branch. Contributed by 
Chen Liang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eb8f2b2c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eb8f2b2c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eb8f2b2c

Branch: refs/heads/HDFS-7240
Commit: eb8f2b2ca0835d756132e56c854685c9c4fa4eab
Parents: 59dfa74
Author: Anu Engineer 
Authored: Thu Oct 27 16:35:45 2016 -0700
Committer: Anu Engineer 
Committed: Thu Oct 27 16:35:45 2016 -0700

--
 .../hadoop/hdfs/server/datanode/BlockPoolManager.java| 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb8f2b2c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
index 34629c4..8202f73 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
@@ -150,15 +150,16 @@ class BlockPoolManager {
 LOG.info("Refresh request received for nameservices: " + conf.get
 (DFSConfigKeys.DFS_NAMESERVICES));
 
-final Map<String, Map<String, InetSocketAddress>> newAddressMap =
+Map<String, Map<String, InetSocketAddress>> newAddressMap =
 new HashMap<>();
-final Map<String, Map<String, InetSocketAddress>> newLifelineAddressMap =
+Map<String, Map<String, InetSocketAddress>> newLifelineAddressMap =
 new HashMap<>();
 
 try {
-  newAddressMap.putAll(DFSUtil.getNNServiceRpcAddressesForCluster(conf));
-  newLifelineAddressMap.putAll(
-  DFSUtil.getNNLifelineRpcAddressesForCluster(conf));
+  newAddressMap =
+  DFSUtil.getNNServiceRpcAddressesForCluster(conf);
+  newLifelineAddressMap =
+  DFSUtil.getNNLifelineRpcAddressesForCluster(conf);
 } catch (IOException ioe) {
   LOG.warn("Unable to get NameNode addresses. " +
   "This may be an Ozone-only cluster.");

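In short, the hunk replaces putAll() into pre-built HashMaps with direct assignment of the DFSUtil results, so the refresh keeps whatever maps DFSUtil returns and only falls back to empty maps when the lookup throws. A condensed sketch of that control flow, with a stand-in for the DFSUtil call (hypothetical names, not the actual class):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.util.HashMap;
    import java.util.Map;

    public class RefreshNamenodesSketch {
      /** Stand-in for DFSUtil.getNNServiceRpcAddressesForCluster(conf). */
      interface AddressLookup {
        Map<String, Map<String, InetSocketAddress>> get() throws IOException;
      }

      static Map<String, Map<String, InetSocketAddress>> resolve(
          AddressLookup lookup) {
        // Start from an empty map and overwrite it with the lookup result;
        // a failed lookup leaves a usable (empty) map behind.
        Map<String, Map<String, InetSocketAddress>> addresses = new HashMap<>();
        try {
          addresses = lookup.get();
        } catch (IOException ioe) {
          System.err.println("Unable to get NameNode addresses: " + ioe);
        }
        return addresses;
      }
    }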

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. Contributed by Erik Krogen.

2016-10-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 edaa37177 -> d002e4d10


HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. 
Contributed by Erik Krogen.

(cherry picked from commit f3ac1f41b8fa82a0ac87a207d7afa2061d90a9bd)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d002e4d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d002e4d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d002e4d1

Branch: refs/heads/branch-2.7
Commit: d002e4d10b91371d6f898cc6b17f86eb4bb0e87e
Parents: edaa371
Author: Erik Krogen 
Authored: Thu Oct 27 15:14:21 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Thu Oct 27 16:10:26 2016 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/DatanodeManager.java | 19 ++
 .../blockmanagement/TestDatanodeManager.java| 37 
 3 files changed, 53 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d002e4d1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 582b146..6ce72ae 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -181,6 +181,9 @@ Release 2.7.4 - UNRELEASED
 
 HDFS-11015. Enforce timeout in balancer. (kihwal via zhz)
 
+HDFS-9500. Fix software version counts for DataNodes during rolling 
upgrade.
+(Erik Krogen via shv)
+
 Release 2.7.3 - 2016-08-25
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d002e4d1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index c75bf58..5d82186 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -657,19 +657,26 @@ public class DatanodeManager {
 }
   }
 
+  /**
+   * Will return true for all Datanodes which have a non-null software
+   * version and are considered alive (by {@link 
DatanodeDescriptor#isAlive()}),
+   * indicating the node has not yet been removed. Use {@code isAlive}
+   * rather than {@link DatanodeManager#isDatanodeDead(DatanodeDescriptor)}
+   * to ensure that the version is decremented even if the datanode
+   * hasn't issued a heartbeat recently.
+   *
+   * @param node The datanode in question
+   * @return True iff its version count should be decremented
+   */
   private boolean shouldCountVersion(DatanodeDescriptor node) {
-return node.getSoftwareVersion() != null && node.isAlive &&
-  !isDatanodeDead(node);
+return node.getSoftwareVersion() != null && node.isAlive;
   }
 
   private void countSoftwareVersions() {
 synchronized(datanodeMap) {
   HashMap<String, Integer> versionCount = new HashMap<String, Integer>();
   for(DatanodeDescriptor dn: datanodeMap.values()) {
-// Check isAlive too because right after removeDatanode(),
-// isDatanodeDead() is still true 
-if(shouldCountVersion(dn))
-{
+if (shouldCountVersion(dn)) {
   Integer num = versionCount.get(dn.getSoftwareVersion());
   num = num == null ? 1 : num+1;
   versionCount.put(dn.getSoftwareVersion(), num);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d002e4d1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
index 4530ef8..c3bb0dd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
@@ -78,6 +78,43 @@ public class TestDatanodeManager {
   }
 
   /**
+   * This test checks that if a node is re-registered with a new software
+   * version after 

[02/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
index cbe360a..e77785b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesFairScheduler.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.fail;
 
 import javax.ws.rs.core.MediaType;
 
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
@@ -91,7 +92,8 @@ public class TestRMWebServicesFairScheduler extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("cluster")
 .path("scheduler").accept(MediaType.APPLICATION_JSON)
 .get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 verifyClusterScheduler(json);
   }
@@ -102,7 +104,8 @@ public class TestRMWebServicesFairScheduler extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("cluster")
 .path("scheduler/").accept(MediaType.APPLICATION_JSON)
 .get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 verifyClusterScheduler(json);
   }
@@ -120,7 +123,8 @@ public class TestRMWebServicesFairScheduler extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("cluster")
 .path("scheduler").accept(MediaType.APPLICATION_JSON)
 .get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 JSONArray subQueueInfo = json.getJSONObject("scheduler")
 .getJSONObject("schedulerInfo").getJSONObject("rootQueue")

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
index 29a38d9..c286186 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesForCSWithPartitions.java
@@ -29,6 +29,7 @@ import javax.ws.rs.core.MediaType;
 import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;
 
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.yarn.api.records.NodeLabel;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
@@ -194,7 +195,8 @@ public class TestRMWebServicesForCSWithPartitions extends 
JerseyTestBase {
 ClientResponse response =
 r.path("ws").path("v1").path("cluster").path("scheduler")
 

[06/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js.gz
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js.gz
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js.gz
deleted file mode 100644
index abdb4b1..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js.gz
 and /dev/null differ


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[07/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js
new file mode 100644
index 000..aa7a923
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-ui-1.9.1.custom.min.js
@@ -0,0 +1,6 @@
+/*! jQuery UI - v1.9.1 - 2012-10-25
+* http://jqueryui.com
+* Includes: jquery.ui.core.js, jquery.ui.widget.js, jquery.ui.mouse.js, 
jquery.ui.position.js, jquery.ui.accordion.js, jquery.ui.autocomplete.js, 
jquery.ui.button.js, jquery.ui.datepicker.js, jquery.ui.dialog.js, 
jquery.ui.draggable.js, jquery.ui.droppable.js, jquery.ui.effect.js, 
jquery.ui.effect-blind.js, jquery.ui.effect-bounce.js, 
jquery.ui.effect-clip.js, jquery.ui.effect-drop.js, 
jquery.ui.effect-explode.js, jquery.ui.effect-fade.js, 
jquery.ui.effect-fold.js, jquery.ui.effect-highlight.js, 
jquery.ui.effect-pulsate.js, jquery.ui.effect-scale.js, 
jquery.ui.effect-shake.js, jquery.ui.effect-slide.js, 
jquery.ui.effect-transfer.js, jquery.ui.menu.js, jquery.ui.progressbar.js, 
jquery.ui.resizable.js, jquery.ui.selectable.js, jquery.ui.slider.js, 
jquery.ui.sortable.js, jquery.ui.spinner.js, jquery.ui.tabs.js, 
jquery.ui.tooltip.js
+* Copyright (c) 2012 jQuery Foundation and other contributors Licensed MIT */
+
+(function(e,t){function i(t,n){var 
r,i,o,u=t.nodeName.toLowerCase();return"area"===u?(r=t.parentNode,i=r.name,!t.href||!i||r.nodeName.toLowerCase()!=="map"?!1:(o=e("img[usemap=#"+i+"]")[0],!!o&(o))):(/input|select|textarea|button|object/.test(u)?!t.disabled:"a"===u?t.href||n:n)&(t)}function
 s(t){return 
e.expr.filters.visible(t)&&!e(t).parents().andSelf().filter(function(){return 
e.css(this,"visibility")==="hidden"}).length}var 
n=0,r=/^ui-id-\d+$/;e.ui=e.ui||{};if(e.ui.version)return;e.extend(e.ui,{version:"1.9.1",keyCode:{BACKSPACE:8,COMMA:188,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,LEFT:37,NUMPAD_ADD:107,NUMPAD_DECIMAL:110,NUMPAD_DIVIDE:111,NUMPAD_ENTER:108,NUMPAD_MULTIPLY:106,NUMPAD_SUBTRACT:109,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SPACE:32,TAB:9,UP:38}}),e.fn.extend({_focus:e.fn.focus,focus:function(t,n){return
 typeof t=="number"?this.each(function(){var 
r=this;setTimeout(function(){e(r).focus(),n&(r)},t)}):this._focus.apply(this,arguments)},scrollPa
 rent:function(){var t;return 
e.ui.ie&&/(static|relative)/.test(this.css("position"))||/absolute/.test(this.css("position"))?t=this.parents().filter(function(){return/(relative|absolute|fixed)/.test(e.css(this,"position"))&&/(auto|scroll)/.test(e.css(this,"overflow")+e.css(this,"overflow-y")+e.css(this,"overflow-x"))}).eq(0):t=this.parents().filter(function(){return/(auto|scroll)/.test(e.css(this,"overflow")+e.css(this,"overflow-y")+e.css(this,"overflow-x"))}).eq(0),/fixed/.test(this.css("position"))||!t.length?e(document):t},zIndex:function(n){if(n!==t)return
 this.css("zIndex",n);if(this.length){var 
r=e(this[0]),i,s;while(r.length&[0]!==document){i=r.css("position");if(i==="absolute"||i==="relative"||i==="fixed"){s=parseInt(r.css("zIndex"),10);if(!isNaN(s)&!==0)return
 s}r=r.parent()}}return 0},uniqueId:function(){return 
this.each(function(){this.id||(this.id="ui-id-"+ 
++n)})},removeUniqueId:function(){return 
this.each(function(){r.test(this.id)&(this).removeAttr("id")})}}),e("
 ").outerWidth(1).jquery||e.each(["Width","Height"],function(n,r){function 
u(t,n,r,s){return 
e.each(i,function(){n-=parseFloat(e.css(t,"padding"+this))||0,r&&(n-=parseFloat(e.css(t,"border"+this+"Width"))||0),s&&(n-=parseFloat(e.css(t,"margin"+this))||0)}),n}var
 
i=r==="Width"?["Left","Right"]:["Top","Bottom"],s=r.toLowerCase(),o={innerWidth:e.fn.innerWidth,innerHeight:e.fn.innerHeight,outerWidth:e.fn.outerWidth,outerHeight:e.fn.outerHeight};e.fn["inner"+r]=function(n){return
 
n===t?o["inner"+r].call(this):this.each(function(){e(this).css(s,u(this,n)+"px")})},e.fn["outer"+r]=function(t,n){return
 typeof 
t!="number"?o["outer"+r].call(this,t):this.each(function(){e(this).css(s,u(this,t,!0,n)+"px")})}}),e.extend(e.expr[":"],{data:e.expr.createPseudo?e.expr.createPseudo(function(t){return
 
function(n){return!!e.data(n,t)}}):function(t,n,r){return!!e.data(t,r[3])},focusable:function(t){return
 i(t,!isNaN(e.attr(t,"tabindex")))},tabbable:function(t){var 
n=e.attr(t,"tabindex"),r=isNaN(n);retu
 rn(r||n>=0)&(t,!r)}}),e(function(){var 
t=document.body,n=t.appendChild(n=document.createElement("div"));n.offsetHeight,e.extend(n.style,{minHeight:"100px",height:"auto",padding:0,borderWidth:0}),e.support.minHeight=n.offsetHeight===100,e.support.selectstart="onselectstart"in
 

[12/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
index cbf8a55..cbdbeaa 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
@@ -34,6 +34,7 @@ import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.mapreduce.v2.api.records.JobId;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
@@ -127,7 +128,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("mapreduce")
 .path("jobs").path(jobId).path("tasks").path(tid).path("attempts")
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 verifyAMTaskAttempts(json, task);
   }
@@ -146,7 +148,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("mapreduce")
 .path("jobs").path(jobId).path("tasks").path(tid).path("attempts/")
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 verifyAMTaskAttempts(json, task);
   }
@@ -165,7 +168,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("mapreduce")
 .path("jobs").path(jobId).path("tasks").path(tid).path("attempts")
 .get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 verifyAMTaskAttempts(json, task);
   }
@@ -185,7 +189,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
 .path("jobs").path(jobId).path("tasks").path(tid).path("attempts")
 .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
 
-assertEquals(MediaType.APPLICATION_XML_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_XML_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 String xml = response.getEntity(String.class);
 DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
 DocumentBuilder db = dbf.newDocumentBuilder();
@@ -220,7 +225,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
   .path("jobs").path(jobId).path("tasks").path(tid)
   .path("attempts").path(attid).accept(MediaType.APPLICATION_JSON)
   .get(ClientResponse.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+  assertEquals(MediaType.APPLICATION_JSON_TYPE + "; "
+  + JettyUtils.UTF_8, response.getType().toString());
   JSONObject json = response.getEntity(JSONObject.class);
   assertEquals("incorrect number of elements", 1, json.length());
   JSONObject info = json.getJSONObject("taskAttempt");
@@ -249,7 +255,8 @@ public class TestAMWebServicesAttempts extends 
JerseyTestBase {
   .path("jobs").path(jobId).path("tasks").path(tid)
   .path("attempts").path(attid + "/")
   .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+  

[04/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js.gz
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js.gz
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js.gz
deleted file mode 100644
index 2aac85f..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js.gz
 and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
index f37b01a..74623a4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/MyTestWebService.java
@@ -27,12 +27,13 @@ import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlRootElement;
 
 import com.google.inject.Singleton;
+import org.apache.hadoop.http.JettyUtils;
 
 @Singleton
 @Path("/ws/v1/test")
 public class MyTestWebService {
   @GET
-  @Produces({ MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public MyInfo get() {
 return new MyInfo();
   }
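
The hunk above shows the recurring pattern of this patch: with the Jetty 9 upgrade the response Content-Type carries an explicit charset, so JAX-RS resources append it to their @Produces media type and the web-service tests compare the full content-type string (for example assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8, response.getType().toString())) instead of the bare MediaType constant. A minimal standalone sketch of the resource side, assuming JettyUtils.UTF_8 is the string "charset=utf-8" (the constant is inlined here so the example does not depend on Hadoop classes):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/ws/v1/example")
    public class ExampleResource {
      // Stand-in for JettyUtils.UTF_8; assumed to be "charset=utf-8".
      private static final String UTF_8 = "charset=utf-8";

      @GET
      // Concatenation of compile-time String constants is legal in an
      // annotation, so the charset can be declared on the media type itself.
      @Produces(MediaType.APPLICATION_JSON + "; " + UTF_8)
      public String get() {
        return "{\"ok\":true}";
      }
    }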

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index f772c77..1474c19 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -38,7 +38,7 @@
   
 
       <groupId>javax.servlet</groupId>
-      <artifactId>servlet-api</artifactId>
+      <artifactId>javax.servlet-api</artifactId>
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
index fd63787..6e6e98b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryServer.java
@@ -56,8 +56,8 @@ import 
org.apache.hadoop.yarn.server.timeline.webapp.CrossOriginFilterInitialize
 import org.apache.hadoop.yarn.webapp.WebApp;
 import org.apache.hadoop.yarn.webapp.WebApps;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
-import org.mortbay.jetty.servlet.FilterHolder;
-import org.mortbay.jetty.webapp.WebAppContext;
+import org.eclipse.jetty.servlet.FilterHolder;
+import org.eclipse.jetty.webapp.WebAppContext;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -316,27 +316,31 @@ public class ApplicationHistoryServer extends 
CompositeService {
YarnConfiguration.TIMELINE_SERVICE_UI_NAMES);
WebAppContext webAppContext = httpServer.getWebAppContext();
 
-   for (String name : names) {
- String webPath = conf.get(
- YarnConfiguration.TIMELINE_SERVICE_UI_WEB_PATH_PREFIX + name);
- String onDiskPath = conf.get(
- YarnConfiguration.TIMELINE_SERVICE_UI_ON_DISK_PATH_PREFIX + name);
- WebAppContext uiWebAppContext = new WebAppContext();
- uiWebAppContext.setContextPath(webPath);

[11/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
index c452fd9..e77cfb1 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java
@@ -32,6 +32,7 @@ import java.util.Map;
 import javax.ws.rs.core.MediaType;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.mapreduce.v2.api.records.JobId;
 import org.apache.hadoop.mapreduce.v2.api.records.JobState;
 import org.apache.hadoop.mapreduce.v2.app.AppContext;
@@ -130,7 +131,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 .path("mapreduce").path("jobs").queryParam("state", 
notInUse.toString())
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
 
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 assertEquals("incorrect number of elements", 1, json.length());
 assertEquals("jobs is not empty",
@@ -152,7 +154,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("history")
 .path("mapreduce").path("jobs").queryParam("state", queryState)
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 assertEquals("incorrect number of elements", 1, json.length());
 JSONObject jobs = json.getJSONObject("jobs");
@@ -172,7 +175,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
 
 assertResponseStatusCode(Status.BAD_REQUEST, response.getStatusInfo());
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject msg = response.getEntity(JSONObject.class);
 JSONObject exception = msg.getJSONObject("RemoteException");
 assertEquals("incorrect number of elements", 3, exception.length());
@@ -197,7 +201,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("history")
 .path("mapreduce").path("jobs").queryParam("user", "bogus")
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 assertEquals("incorrect number of elements", 1, json.length());
 assertEquals("jobs is not empty",
@@ -210,7 +215,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("history")
 .path("mapreduce").path("jobs").queryParam("user", "mock")
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 JSONObject json = response.getEntity(JSONObject.class);
 System.out.println(json.toString());
 
@@ -230,7 +236,8 @@ public class TestHsWebServicesJobsQuery extends 
JerseyTestBase {
 ClientResponse response = r.path("ws").path("v1").path("history")
 .path("mapreduce").path("jobs").queryParam("limit", "2")
 .accept(MediaType.APPLICATION_JSON).get(ClientResponse.class);
-assertEquals(MediaType.APPLICATION_JSON_TYPE, response.getType());
+assertEquals(MediaType.APPLICATION_JSON_TYPE + "; " + JettyUtils.UTF_8,
+response.getType().toString());
 

[13/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
--
diff --git 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
index d8755ec..271c339 100644
--- 
a/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
+++ 
b/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
 import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
 import 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;
 import org.apache.hadoop.crypto.key.kms.KMSRESTConstants;
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.crypto.key.kms.KMSClientProvider;
@@ -101,7 +102,7 @@ public class KMS {
   @POST
   @Path(KMSRESTConstants.KEYS_RESOURCE)
   @Consumes(MediaType.APPLICATION_JSON)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   @SuppressWarnings("unchecked")
   public Response createKey(Map jsonKey) throws Exception {
 try{
@@ -204,7 +205,7 @@ public class KMS {
   @POST
   @Path(KMSRESTConstants.KEY_RESOURCE + "/{name:.*}")
   @Consumes(MediaType.APPLICATION_JSON)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response rolloverKey(@PathParam("name") final String name,
   Map jsonMaterial) throws Exception {
 try {
@@ -254,7 +255,7 @@ public class KMS {
 
   @GET
   @Path(KMSRESTConstants.KEYS_METADATA_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getKeysMetadata(@QueryParam(KMSRESTConstants.KEY)
   List keyNamesList) throws Exception {
 try {
@@ -287,7 +288,7 @@ public class KMS {
 
   @GET
   @Path(KMSRESTConstants.KEYS_NAMES_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getKeyNames() throws Exception {
 try {
   LOG.trace("Entering getKeyNames method.");
@@ -332,7 +333,7 @@ public class KMS {
   @GET
   @Path(KMSRESTConstants.KEY_RESOURCE + "/{name:.*}/" +
   KMSRESTConstants.METADATA_SUB_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getMetadata(@PathParam("name") final String name)
   throws Exception {
 try {
@@ -366,7 +367,7 @@ public class KMS {
   @GET
   @Path(KMSRESTConstants.KEY_RESOURCE + "/{name:.*}/" +
   KMSRESTConstants.CURRENT_VERSION_SUB_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getCurrentVersion(@PathParam("name") final String name)
   throws Exception {
 try {
@@ -399,7 +400,7 @@ public class KMS {
 
   @GET
   @Path(KMSRESTConstants.KEY_VERSION_RESOURCE + "/{versionName:.*}")
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getKeyVersion(
   @PathParam("versionName") final String versionName) throws Exception {
 try {
@@ -436,7 +437,7 @@ public class KMS {
   @GET
   @Path(KMSRESTConstants.KEY_RESOURCE + "/{name:.*}/" +
   KMSRESTConstants.EEK_SUB_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response generateEncryptedKeys(
   @PathParam("name") final String name,
   @QueryParam(KMSRESTConstants.EEK_OP) String edekOp,
@@ -508,7 +509,7 @@ public class KMS {
   @POST
   @Path(KMSRESTConstants.KEY_VERSION_RESOURCE + "/{versionName:.*}/" +
   KMSRESTConstants.EEK_SUB_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response decryptEncryptedKey(
   @PathParam("versionName") final String versionName,
   @QueryParam(KMSRESTConstants.EEK_OP) String eekOp,
@@ -577,7 +578,7 @@ public class KMS {
   @GET
   @Path(KMSRESTConstants.KEY_RESOURCE + "/{name:.*}/" +
   KMSRESTConstants.VERSIONS_SUB_RESOURCE)
-  @Produces(MediaType.APPLICATION_JSON)
+  @Produces(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8)
   public Response getKeyVersions(@PathParam("name") final String name)
   throws Exception {
 try {


[01/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9e03ee527 -> 5877f20f9


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
index a2a9c28..4bd37f8 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWebServices.java
@@ -35,6 +35,7 @@ import javax.ws.rs.core.MediaType;
 
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineAbout;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
@@ -184,7 +185,8 @@ public class TestTimelineReaderWebServices {
   "timeline/clusters/cluster1/apps/app1/entities/app/id_1");
   ClientResponse resp = getResponse(client, uri);
   TimelineEntity entity = resp.getEntity(TimelineEntity.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entity);
   assertEquals("id_1", entity.getId());
   assertEquals("app", entity.getType());
@@ -207,7 +209,8 @@ public class TestTimelineReaderWebServices {
   "userid=user1=flow1=1");
   ClientResponse resp = getResponse(client, uri);
   TimelineEntity entity = resp.getEntity(TimelineEntity.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entity);
   assertEquals("id_1", entity.getId());
   assertEquals("app", entity.getType());
@@ -227,7 +230,8 @@ public class TestTimelineReaderWebServices {
   "fields=CONFIGS,Metrics,info");
   ClientResponse resp = getResponse(client, uri);
   TimelineEntity entity = resp.getEntity(TimelineEntity.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entity);
   assertEquals("id_1", entity.getId());
   assertEquals("app", entity.getType());
@@ -253,7 +257,8 @@ public class TestTimelineReaderWebServices {
   "fields=ALL");
   ClientResponse resp = getResponse(client, uri);
   TimelineEntity entity = resp.getEntity(TimelineEntity.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entity);
   assertEquals("id_1", entity.getId());
   assertEquals("app", entity.getType());
@@ -289,7 +294,8 @@ public class TestTimelineReaderWebServices {
   "timeline/apps/app1/entities/app/id_1");
   ClientResponse resp = getResponse(client, uri);
   TimelineEntity entity = resp.getEntity(TimelineEntity.class);
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entity);
   assertEquals("id_1", entity.getId());
   assertEquals("app", entity.getType());
@@ -299,7 +305,8 @@ public class TestTimelineReaderWebServices {
   resp = getResponse(client, uri);
      Set<TimelineEntity> entities =
          resp.getEntity(new GenericType<Set<TimelineEntity>>(){});
-  assertEquals(MediaType.APPLICATION_JSON_TYPE, resp.getType());
+  assertEquals(MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  resp.getType().toString());
   assertNotNull(entities);
   assertEquals(4, entities.size());
 } finally {
@@ -316,7 +323,8 @@ public class TestTimelineReaderWebServices {
   ClientResponse resp = getResponse(client, uri);
      Set<TimelineEntity> entities =
          resp.getEntity(new GenericType<Set<TimelineEntity>>(){});
-  

[08/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js.gz
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js.gz
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js.gz
deleted file mode 100644
index d2e3ec8..000
Binary files 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js.gz
 and /dev/null differ





[09/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js
new file mode 100644
index 000..bc3fbc8
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery/jquery-1.8.2.min.js
@@ -0,0 +1,2 @@
+/*! jQuery v1.8.2 jquery.com | jquery.org/license */

[05/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
new file mode 100644
index 000..d4d8985
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js
@@ -0,0 +1,4544 @@
+/*
+ * jsTree 1.0-rc3
+ * http://jstree.com/
+ *
+ * Copyright (c) 2010 Ivan Bozhanov (vakata.com)
+ *
+ * Licensed same as jquery - under the terms of either the MIT License or the 
GPL Version 2 License
+ *   http://www.opensource.org/licenses/mit-license.php
+ *   http://www.gnu.org/licenses/gpl.html
+ *
+ * $Date$
+ * $Revision$
+ */
+
+/*jslint browser: true, onevar: true, undef: true, bitwise: true, strict: true 
*/
+/*global window : false, clearInterval: false, clearTimeout: false, document: 
false, setInterval: false, setTimeout: false, jQuery: false, navigator: false, 
XSLTProcessor: false, DOMParser: false, XMLSerializer: false*/
+
+"use strict";
+
+// top wrapper to prevent multiple inclusion (is this OK?)
+(function () { if(jQuery && jQuery.jstree) { return; }
+  var is_ie6 = false, is_ie7 = false, is_ff2 = false;
+
+/*
+ * jsTree core
+ */
+(function ($) {
+  // Common functions not related to jsTree
+  // decided to move them to a `vakata` "namespace"
+  $.vakata = {};
+  // CSS related functions
+  $.vakata.css = {
+get_css : function(rule_name, delete_flag, sheet) {
+  rule_name = rule_name.toLowerCase();
+  var css_rules = sheet.cssRules || sheet.rules,
+j = 0;
+  do {
+if(css_rules.length && j > css_rules.length + 5) { return false; }
+if(css_rules[j].selectorText && 
css_rules[j].selectorText.toLowerCase() == rule_name) {
+  if(delete_flag === true) {
+if(sheet.removeRule) { sheet.removeRule(j); }
+if(sheet.deleteRule) { sheet.deleteRule(j); }
+return true;
+  }
+  else { return css_rules[j]; }
+}
+  }
+  while (css_rules[++j]);
+  return false;
+},
+add_css : function(rule_name, sheet) {
+  if($.jstree.css.get_css(rule_name, false, sheet)) { return false; }
+  if(sheet.insertRule) { sheet.insertRule(rule_name + ' { }', 0); } else { 
sheet.addRule(rule_name, null, 0); }
+  return $.vakata.css.get_css(rule_name);
+},
+remove_css : function(rule_name, sheet) {
+  return $.vakata.css.get_css(rule_name, true, sheet);
+},
+add_sheet : function(opts) {
+  var tmp = false, is_new = true;
+  if(opts.str) {
+if(opts.title) { tmp = $("style[id='" + opts.title + 
"-stylesheet']")[0]; }
+if(tmp) { is_new = false; }
+else {
+  tmp = document.createElement("style");
+  tmp.setAttribute('type',"text/css");
+  if(opts.title) { tmp.setAttribute("id", opts.title + "-stylesheet"); 
}
+}
+if(tmp.styleSheet) {
+  if(is_new) {
+document.getElementsByTagName("head")[0].appendChild(tmp);
+tmp.styleSheet.cssText = opts.str;
+  }
+  else {
+tmp.styleSheet.cssText = tmp.styleSheet.cssText + " " + opts.str;
+  }
+}
+else {
+  tmp.appendChild(document.createTextNode(opts.str));
+  document.getElementsByTagName("head")[0].appendChild(tmp);
+}
+return tmp.sheet || tmp.styleSheet;
+  }
+  if(opts.url) {
+if(document.createStyleSheet) {
+  try { tmp = document.createStyleSheet(opts.url); } catch (e) { }
+}
+else {
+  tmp  = document.createElement('link');
+  tmp.rel= 'stylesheet';
+  tmp.type  = 'text/css';
+  tmp.media  = "all";
+  tmp.href  = opts.url;
+  document.getElementsByTagName("head")[0].appendChild(tmp);
+  return tmp.styleSheet;
+}
+  }
+}
+  };
+
+  // private variables
+  var instances = [],  // instance array (used by 
$.jstree.reference/create/focused)
+focused_instance = -1,  // the index in the instance array of the 
currently focused instance
+plugins = {},  // list of included plugins
+prepared_move = {};// for the move_node function
+
+  // jQuery plugin wrapper (thanks to jquery UI widget function)
+  $.fn.jstree = function (settings) {
+var isMethodCall = (typeof settings == 'string'), // is this a method call 
like $().jstree("open_node")
+  args = Array.prototype.slice.call(arguments, 1),
+  returnValue = this;
+
+// if a method call execute the method on all selected instances
+if(isMethodCall) {
+  if(settings.substring(0, 1) == '_') { return 

[14/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
HADOOP-10075. Update jetty dependency to version 9 (rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5877f20f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5877f20f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5877f20f

Branch: refs/heads/trunk
Commit: 5877f20f9c3f6f0afa505715e9a2ee312475af17
Parents: 9e03ee5
Author: Robert Kanter 
Authored: Thu Oct 27 16:01:23 2016 -0700
Committer: Robert Kanter 
Committed: Thu Oct 27 16:09:00 2016 -0700

--
 hadoop-client/pom.xml   |   20 +-
 .../hadoop-auth-examples/pom.xml|2 +-
 .../examples/RequestLoggerFilter.java   |   12 +
 hadoop-common-project/hadoop-auth/pom.xml   |   13 +-
 .../client/AuthenticatorTestCase.java   |   29 +-
 hadoop-common-project/hadoop-common/pom.xml |   34 +-
 .../hadoop/http/AdminAuthorizedServlet.java |2 +-
 .../org/apache/hadoop/http/HttpRequestLog.java  |4 +-
 .../org/apache/hadoop/http/HttpServer2.java |  305 +-
 .../java/org/apache/hadoop/http/JettyUtils.java |   35 +
 .../ssl/SslSelectChannelConnectorSecure.java|   58 -
 .../org/apache/hadoop/conf/TestConfServlet.java |2 +-
 .../hadoop/fs/FSMainOperationsBaseTest.java |4 +-
 .../fs/viewfs/ViewFileSystemTestSetup.java  |   10 +-
 .../hadoop/fs/viewfs/ViewFsTestSetup.java   |   10 +-
 .../http/TestAuthenticationSessionCookie.java   |   11 +-
 .../apache/hadoop/http/TestHttpRequestLog.java  |4 +-
 .../org/apache/hadoop/http/TestHttpServer.java  |   22 +-
 .../apache/hadoop/http/TestServletFilter.java   |7 +-
 .../hadoop/http/resource/JerseyResource.java|5 +-
 .../delegation/web/TestWebDelegationToken.java  |   64 +-
 hadoop-common-project/hadoop-kms/pom.xml|   20 +-
 .../hadoop/crypto/key/kms/server/KMS.java   |   21 +-
 .../key/kms/server/KMSAuthenticationFilter.java |   12 +
 .../crypto/key/kms/server/KMSJSONWriter.java|3 +-
 .../hadoop/crypto/key/kms/server/MiniKMS.java   |   63 +-
 hadoop-common-project/hadoop-nfs/pom.xml|2 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml  |   26 +-
 .../hadoop/fs/http/server/HttpFSServer.java |   12 +-
 .../apache/hadoop/lib/wsrs/JSONMapProvider.java |3 +-
 .../apache/hadoop/lib/wsrs/JSONProvider.java|3 +-
 .../fs/http/client/BaseTestHttpFSWith.java  |6 +-
 .../hadoop/fs/http/server/TestHttpFSServer.java |6 +-
 .../fs/http/server/TestHttpFSServerNoACLs.java  |6 +-
 .../http/server/TestHttpFSServerNoXAttrs.java   |6 +-
 .../fs/http/server/TestHttpFSWithKerberos.java  |6 +-
 .../org/apache/hadoop/test/TestHFSTestCase.java |8 +-
 .../org/apache/hadoop/test/TestHTestCase.java   |8 +-
 .../org/apache/hadoop/test/TestJettyHelper.java |   56 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml |8 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml |   25 +-
 .../hdfs/qjournal/server/JournalNode.java   |2 +-
 .../hadoop/hdfs/server/datanode/DataNode.java   |2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |2 +-
 .../hadoop/hdfs/server/namenode/NNStorage.java  |2 +-
 .../hdfs/server/namenode/TransferFsImage.java   |2 +-
 .../web/resources/NamenodeWebHdfsMethods.java   |   39 +-
 .../apache/hadoop/hdfs/TestDecommission.java|2 +-
 .../qjournal/server/TestJournalNodeMXBean.java  |2 +-
 .../blockmanagement/TestBlockStatsMXBean.java   |2 +-
 .../server/datanode/TestDataNodeMXBean.java |2 +-
 .../server/namenode/TestFSNamesystemMBean.java  |2 +-
 .../server/namenode/TestNameNodeMXBean.java |2 +-
 .../namenode/TestStartupProgressServlet.java|2 +-
 .../server/namenode/TestTransferFsImage.java|2 +-
 .../hadoop/hdfs/web/TestWebHDFSForHA.java   |2 +-
 .../hadoop/test/MiniDFSClusterManager.java  |2 +-
 .../hadoop/mapreduce/v2/app/JobEndNotifier.java |   37 +-
 .../mapreduce/v2/app/webapp/AMWebServices.java  |   49 +-
 .../v2/app/webapp/TestAMWebServices.java|   31 +-
 .../v2/app/webapp/TestAMWebServicesAttempt.java |   13 +-
 .../app/webapp/TestAMWebServicesAttempts.java   |   34 +-
 .../v2/app/webapp/TestAMWebServicesJobConf.java |   13 +-
 .../v2/app/webapp/TestAMWebServicesJobs.java|   64 +-
 .../v2/app/webapp/TestAMWebServicesTasks.java   |   61 +-
 .../mapreduce/v2/hs/webapp/HsWebServices.java   |   40 +-
 .../v2/hs/webapp/TestHsWebServices.java |   25 +-
 .../v2/hs/webapp/TestHsWebServicesAttempts.java |   34 +-
 .../v2/hs/webapp/TestHsWebServicesJobConf.java  |   13 +-
 .../v2/hs/webapp/TestHsWebServicesJobs.java |   67 +-
 .../hs/webapp/TestHsWebServicesJobsQuery.java   |   76 +-
 .../v2/hs/webapp/TestHsWebServicesTasks.java|   61 +-
 .../hadoop/mapred/NotificationTestCase.java |   12 +-
 

[03/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMNMInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMNMInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMNMInfo.java
index f6af030..1b7ddd3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMNMInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMNMInfo.java
@@ -33,7 +33,7 @@ import org.apache.hadoop.metrics2.util.MBeans;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNodeReport;
-import org.mortbay.util.ajax.JSON;
+import org.eclipse.jetty.util.ajax.JSON;
 
 /**
  * JMX bean listing statuses of all node managers.
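
Only the package changes in this file; RMNMInfo keeps using the same static JSON helper. A small hedged sketch of that utility under Jetty 9, assuming org.eclipse.jetty.util.ajax.JSON still exposes the static toString(Object) serializer the old org.mortbay class did (the field names below are illustrative, not the real RMNMInfo output):

    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.eclipse.jetty.util.ajax.JSON;

    public class JettyJsonSketch {
      public static void main(String[] args) {
        Map<String, Object> nodeInfo = new LinkedHashMap<>();
        nodeInfo.put("HostName", "example-host");  // illustrative fields only
        nodeInfo.put("NumContainers", 4);
        // JSON.toString serializes maps, lists and primitives, which is how
        // RMNMInfo renders its per-node report for JMX.
        System.out.println(JSON.toString(nodeInfo));
      }
    }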

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 99440a8..2c61339 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -60,6 +60,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.http.JettyUtils;
 import org.apache.hadoop.io.DataOutputBuffer;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.security.Credentials;
@@ -257,14 +258,16 @@ public class RMWebServices extends WebServices {
   }
 
   @GET
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public ClusterInfo get() {
 return getClusterInfo();
   }
 
   @GET
   @Path("/info")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public ClusterInfo getClusterInfo() {
 init();
 return new ClusterInfo(this.rm);
@@ -272,7 +275,8 @@ public class RMWebServices extends WebServices {
 
   @GET
   @Path("/metrics")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public ClusterMetricsInfo getClusterMetricsInfo() {
 init();
 return new ClusterMetricsInfo(this.rm);
@@ -280,7 +284,8 @@ public class RMWebServices extends WebServices {
 
   @GET
   @Path("/scheduler")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public SchedulerTypeInfo getSchedulerInfo() {
 init();
 ResourceScheduler rs = rm.getResourceScheduler();
@@ -303,7 +308,8 @@ public class RMWebServices extends WebServices {
 
   @POST
   @Path("/scheduler/logs")
-  @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
+  @Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
+  MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
   public String dumpSchedulerLogs(@FormParam("time") String time,
   @Context HttpServletRequest hsr) throws IOException {
 init();
@@ -340,7 +346,8 @@ public class RMWebServices extends WebServices {
*/
   @GET
   @Path("/nodes")
-  @Produces({ MediaType.APPLICATION_JSON, 

[10/14] hadoop git commit: HADOOP-10075. Update jetty dependency to version 9 (rkanter)

2016-10-27 Thread rkanter
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5877f20f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/js/jquery.dataTables.min.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/js/jquery.dataTables.min.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/js/jquery.dataTables.min.js
new file mode 100644
index 000..61acb9b
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.9.4/js/jquery.dataTables.min.js
@@ -0,0 +1,157 @@
+/*
+ * File:jquery.dataTables.min.js
+ * Version: 1.9.4
+ * Author:  Allan Jardine (www.sprymedia.co.uk)
+ * Info:www.datatables.net
+ *
+ * Copyright 2008-2012 Allan Jardine, all rights reserved.
+ *
+ * This source file is free software, under either the GPL v2 license or a
+ * BSD style license, available at:
+ *   http://datatables.net/license_gpl2
+ *   http://datatables.net/license_bsd
+ *
+ * This source file is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the license files for details.
+ */

hadoop git commit: HDFS-11047. Remove deep copies of FinalizedReplica to alleviate heap consumption on DataNode. Contributed by Xiaobing Zhou

2016-10-27 Thread liuml07
Repository: hadoop
Updated Branches:
  refs/heads/trunk f3ac1f41b -> 9e03ee527


HDFS-11047. Remove deep copies of FinalizedReplica to alleviate heap 
consumption on DataNode. Contributed by Xiaobing Zhou


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9e03ee52
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9e03ee52
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9e03ee52

Branch: refs/heads/trunk
Commit: 9e03ee527988ff85af7f2c224c5570b69d09279a
Parents: f3ac1f4
Author: Mingliang Liu 
Authored: Thu Oct 27 15:58:09 2016 -0700
Committer: Mingliang Liu 
Committed: Thu Oct 27 16:00:27 2016 -0700

--
 .../hdfs/server/datanode/DirectoryScanner.java  | 14 +++---
 .../server/datanode/fsdataset/FsDatasetSpi.java | 11 ++-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  | 16 +++-
 3 files changed, 28 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e03ee52/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 58071dc..e2baf32 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -22,6 +22,7 @@ import java.io.File;
 import java.io.FilenameFilter;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.LinkedList;
 import java.util.List;
@@ -398,14 +399,13 @@ public class DirectoryScanner implements Runnable {
 diffs.put(bpid, diffRecord);
 
 statsRecord.totalBlocks = blockpoolReport.length;
-List bl = dataset.getFinalizedBlocks(bpid);
-ReplicaInfo[] memReport = bl.toArray(new ReplicaInfo[bl.size()]);
-Arrays.sort(memReport); // Sort based on blockId
+final List bl = dataset.getFinalizedBlocks(bpid);
+Collections.sort(bl); // Sort based on blockId
   
 int d = 0; // index for blockpoolReport
 int m = 0; // index for memReprot
-while (m < memReport.length && d < blockpoolReport.length) {
-  ReplicaInfo memBlock = memReport[m];
+while (m < bl.size() && d < blockpoolReport.length) {
+  ReplicaInfo memBlock = bl.get(m);
   ScanInfo info = blockpoolReport[d];
   if (info.getBlockId() < memBlock.getBlockId()) {
 if (!dataset.isDeletingBlock(bpid, info.getBlockId())) {
@@ -452,8 +452,8 @@ public class DirectoryScanner implements Runnable {
 ++m;
   }
 }
-while (m < memReport.length) {
-  ReplicaInfo current = memReport[m++];
+while (m < bl.size()) {
+  ReplicaInfo current = bl.get(m++);
   addDifference(diffRecord, statsRecord,
 current.getBlockId(), current.getVolume());
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9e03ee52/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index f2ffa83..e113212 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -229,7 +229,16 @@ public interface FsDatasetSpi 
extends FSDatasetMBean {
*/
   VolumeFailureSummary getVolumeFailureSummary();
 
-  /** @return a list of finalized blocks for the given block pool. */
+  /**
+   * Gets a list of references to the finalized blocks for the given block 
pool.
+   * 
+   * Callers of this function should call
+   * {@link FsDatasetSpi#acquireDatasetLock} to avoid blocks' status being
+   * changed during list iteration.
+   * 
+   * @return a list of references to the finalized blocks for the given block
+   * pool.
+   */
   List getFinalizedBlocks(String bpid);
 
   /** @return a list of finalized blocks for the given block pool. */
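
The new javadoc above asks callers to hold the dataset lock while iterating, since the returned list now holds live references rather than deep copies. A minimal sketch of that caller pattern, assuming FsDatasetSpi#acquireDatasetLock() returns an org.apache.hadoop.util.AutoCloseableLock and that getFinalizedBlocks() yields ReplicaInfo references, as the DirectoryScanner change above suggests; the class and method names here are illustrative only:

    import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
    import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
    import org.apache.hadoop.util.AutoCloseableLock;

    class FinalizedBlockScan {
      long totalFinalizedBytes(FsDatasetSpi<?> dataset, String bpid) {
        long bytes = 0;
        // Hold the dataset lock for the whole iteration so replica states
        // cannot change underneath the live references in the list.
        try (AutoCloseableLock lock = dataset.acquireDatasetLock()) {
          for (ReplicaInfo replica : dataset.getFinalizedBlocks(bpid)) {
            bytes += replica.getNumBytes();
          }
        }
        return bytes;
      }
    }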


hadoop git commit: HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. Contributed by Erik Krogen.

2016-10-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 b32f14d80 -> 2ec90a1a4


HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. 
Contributed by Erik Krogen.

(cherry picked from commit f3ac1f41b8fa82a0ac87a207d7afa2061d90a9bd)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ec90a1a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ec90a1a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ec90a1a

Branch: refs/heads/branch-2.8
Commit: 2ec90a1a4b51e0c9716fddd2371011d2efa39ab2
Parents: b32f14d
Author: Erik Krogen 
Authored: Thu Oct 27 15:14:21 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Thu Oct 27 16:04:58 2016 -0700

--
 .../server/blockmanagement/DatanodeManager.java | 19 ++
 .../blockmanagement/TestDatanodeManager.java| 37 
 2 files changed, 50 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ec90a1a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index cc69622..e245ee2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -672,19 +672,26 @@ public class DatanodeManager {
 }
   }
 
+  /**
+   * Will return true for all Datanodes which have a non-null software
+   * version and are considered alive (by {@link 
DatanodeDescriptor#isAlive()}),
+   * indicating the node has not yet been removed. Use {@code isAlive}
+   * rather than {@link DatanodeManager#isDatanodeDead(DatanodeDescriptor)}
+   * to ensure that the version is decremented even if the datanode
+   * hasn't issued a heartbeat recently.
+   *
+   * @param node The datanode in question
+   * @return True iff its version count should be decremented
+   */
   private boolean shouldCountVersion(DatanodeDescriptor node) {
-return node.getSoftwareVersion() != null && node.isAlive() &&
-  !isDatanodeDead(node);
+return node.getSoftwareVersion() != null && node.isAlive();
   }
 
   private void countSoftwareVersions() {
 synchronized(datanodeMap) {
      HashMap<String, Integer> versionCount = new HashMap<>();
   for(DatanodeDescriptor dn: datanodeMap.values()) {
-// Check isAlive too because right after removeDatanode(),
-// isDatanodeDead() is still true 
-if(shouldCountVersion(dn))
-{
+if (shouldCountVersion(dn)) {
   Integer num = versionCount.get(dn.getSoftwareVersion());
   num = num == null ? 1 : num+1;
   versionCount.put(dn.getSoftwareVersion(), num);
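
The distinction the new javadoc draws is between a time-based check (isDatanodeDead(), true as soon as the heartbeat is stale) and a state-based one (isAlive(), false only once the node has actually been removed). A standalone, purely illustrative sketch of why the fix keys the version decrement on the state-based check; this is not the DatanodeManager code:

    public class LivenessSketch {
      static class Node {
        long lastHeartbeatMs;      // updated on every heartbeat
        boolean removed = false;   // set only when the manager removes the node
        String softwareVersion = "2.8.0";
      }

      // Time-based: true as soon as the last heartbeat is older than the expiry.
      static boolean heartbeatExpired(Node n, long nowMs, long expiryMs) {
        return nowMs - n.lastHeartbeatMs > expiryMs;
      }

      // State-based: stays true until the node is actually removed, so a
      // re-registering node with an expired heartbeat still gets its old
      // version counted down.
      static boolean stillCounted(Node n) {
        return !n.removed && n.softwareVersion != null;
      }

      public static void main(String[] args) {
        Node n = new Node();
        n.lastHeartbeatMs = 0L;
        long now = 60_000L;
        System.out.println(heartbeatExpired(n, now, 30_000L)); // true: heartbeat stale
        System.out.println(stillCounted(n));                   // true: not yet removed
      }
    }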

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ec90a1a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
index b55a716..a8a1db9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
@@ -86,6 +86,43 @@ public class TestDatanodeManager {
   }
 
   /**
+   * This test checks that if a node is re-registered with a new software
+   * version after the heartbeat expiry interval but before the 
HeartbeatManager
+   * has a chance to detect this and remove it, the node's version will still
+   * be correctly decremented.
+   */
+  @Test
+  public void testNumVersionsCorrectAfterReregister()
+  throws IOException, InterruptedException {
+//Create the DatanodeManager which will be tested
+FSNamesystem fsn = Mockito.mock(FSNamesystem.class);
+Mockito.when(fsn.hasWriteLock()).thenReturn(true);
+Configuration conf = new Configuration();
+conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 0);
+conf.setLong(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 
10);
+DatanodeManager dm = mockDatanodeManager(fsn, conf);
+
+String storageID = 

hadoop git commit: HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. Contributed by Erik Krogen.

2016-10-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2b0fd1f4e -> ce8ace372


HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. 
Contributed by Erik Krogen.

(cherry picked from commit f3ac1f41b8fa82a0ac87a207d7afa2061d90a9bd)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce8ace37
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce8ace37
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce8ace37

Branch: refs/heads/branch-2
Commit: ce8ace372c4e4433bbb1846b4ff7e699a635bfce
Parents: 2b0fd1f
Author: Erik Krogen 
Authored: Thu Oct 27 15:14:21 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Thu Oct 27 16:00:39 2016 -0700

--
 .../server/blockmanagement/DatanodeManager.java | 16 ++---
 .../blockmanagement/TestDatanodeManager.java| 37 
 2 files changed, 49 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce8ace37/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 2675a04..27a11bf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -685,17 +685,25 @@ public class DatanodeManager {
 }
   }
 
+  /**
+   * Will return true for all Datanodes which have a non-null software
+   * version and are considered alive (by {@link 
DatanodeDescriptor#isAlive()}),
+   * indicating the node has not yet been removed. Use {@code isAlive}
+   * rather than {@link DatanodeManager#isDatanodeDead(DatanodeDescriptor)}
+   * to ensure that the version is decremented even if the datanode
+   * hasn't issued a heartbeat recently.
+   *
+   * @param node The datanode in question
+   * @return True iff its version count should be decremented
+   */
   private boolean shouldCountVersion(DatanodeDescriptor node) {
-return node.getSoftwareVersion() != null && node.isAlive() &&
-  !isDatanodeDead(node);
+return node.getSoftwareVersion() != null && node.isAlive();
   }
 
   private void countSoftwareVersions() {
 synchronized(this) {
   datanodesSoftwareVersions.clear();
   for(DatanodeDescriptor dn: datanodeMap.values()) {
-// Check isAlive too because right after removeDatanode(),
-// isDatanodeDead() is still true 
 if (shouldCountVersion(dn)) {
   Integer num = datanodesSoftwareVersions.get(dn.getSoftwareVersion());
   num = num == null ? 1 : num+1;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce8ace37/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
index be8a0f0..30e2aaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
@@ -86,6 +86,43 @@ public class TestDatanodeManager {
   }
 
   /**
+   * This test checks that if a node is re-registered with a new software
+   * version after the heartbeat expiry interval but before the 
HeartbeatManager
+   * has a chance to detect this and remove it, the node's version will still
+   * be correctly decremented.
+   */
+  @Test
+  public void testNumVersionsCorrectAfterReregister()
+  throws IOException, InterruptedException {
+//Create the DatanodeManager which will be tested
+FSNamesystem fsn = Mockito.mock(FSNamesystem.class);
+Mockito.when(fsn.hasWriteLock()).thenReturn(true);
+Configuration conf = new Configuration();
+conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 0);
+conf.setLong(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 
10);
+DatanodeManager dm = mockDatanodeManager(fsn, conf);
+
+String storageID = "someStorageID1";
+String ip = "someIP" + storageID;
+
+// Register then reregister the same node but with a different version
+

hadoop git commit: HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. Contributed by Erik Krogen.

2016-10-27 Thread shv
Repository: hadoop
Updated Branches:
  refs/heads/trunk 022bf783a -> f3ac1f41b


HDFS-9500. Fix software version counts for DataNodes during rolling upgrade. 
Contributed by Erik Krogen.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f3ac1f41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f3ac1f41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f3ac1f41

Branch: refs/heads/trunk
Commit: f3ac1f41b8fa82a0ac87a207d7afa2061d90a9bd
Parents: 022bf78
Author: Erik Krogen 
Authored: Thu Oct 27 15:14:21 2016 -0700
Committer: Konstantin V Shvachko 
Committed: Thu Oct 27 15:58:25 2016 -0700

--
 .../server/blockmanagement/DatanodeManager.java | 16 ++---
 .../blockmanagement/TestDatanodeManager.java| 37 
 2 files changed, 49 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3ac1f41/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index 1a47835..47f15c4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -759,17 +759,25 @@ public class DatanodeManager {
 }
   }
 
+  /**
+   * Will return true for all Datanodes which have a non-null software
+   * version and are considered alive (by {@link 
DatanodeDescriptor#isAlive()}),
+   * indicating the node has not yet been removed. Use {@code isAlive}
+   * rather than {@link DatanodeManager#isDatanodeDead(DatanodeDescriptor)}
+   * to ensure that the version is decremented even if the datanode
+   * hasn't issued a heartbeat recently.
+   *
+   * @param node The datanode in question
+   * @return True iff its version count should be decremented
+   */
   private boolean shouldCountVersion(DatanodeDescriptor node) {
-return node.getSoftwareVersion() != null && node.isAlive() &&
-  !isDatanodeDead(node);
+return node.getSoftwareVersion() != null && node.isAlive();
   }
 
   private void countSoftwareVersions() {
 synchronized(this) {
   datanodesSoftwareVersions.clear();
   for(DatanodeDescriptor dn: datanodeMap.values()) {
-// Check isAlive too because right after removeDatanode(),
-// isDatanodeDead() is still true 
 if (shouldCountVersion(dn)) {
   Integer num = datanodesSoftwareVersions.get(dn.getSoftwareVersion());
   num = num == null ? 1 : num+1;
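
For readers skimming the diff: the rule above means a DataNode keeps contributing to the per-version tally as long as it reports a non-null software version and is still marked alive, even if its last heartbeat is stale. A minimal standalone sketch of that counting rule (hypothetical, simplified types; not the actual DatanodeManager code):

    import java.util.HashMap;
    import java.util.Map;

    class VersionTally {
      // Simplified stand-in for DatanodeDescriptor (hypothetical).
      static class Node {
        String softwareVersion;
        boolean alive;
        Node(String version, boolean alive) {
          this.softwareVersion = version;
          this.alive = alive;
        }
      }

      // Mirrors the patched rule: count iff version != null && isAlive.
      static boolean shouldCount(Node n) {
        return n.softwareVersion != null && n.alive;
      }

      static Map<String, Integer> count(Iterable<Node> nodes) {
        Map<String, Integer> tally = new HashMap<>();
        for (Node n : nodes) {
          if (shouldCount(n)) {
            tally.merge(n.softwareVersion, 1, Integer::sum);
          }
        }
        return tally;
      }
    }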

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f3ac1f41/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
index be8a0f0..30e2aaf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
@@ -86,6 +86,43 @@ public class TestDatanodeManager {
   }
 
   /**
+   * This test checks that if a node is re-registered with a new software
+   * version after the heartbeat expiry interval but before the HeartbeatManager
+   * has a chance to detect this and remove it, the node's version will still
+   * be correctly decremented.
+   */
+  @Test
+  public void testNumVersionsCorrectAfterReregister()
+  throws IOException, InterruptedException {
+//Create the DatanodeManager which will be tested
+FSNamesystem fsn = Mockito.mock(FSNamesystem.class);
+Mockito.when(fsn.hasWriteLock()).thenReturn(true);
+Configuration conf = new Configuration();
+conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 0);
+conf.setLong(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 10);
+DatanodeManager dm = mockDatanodeManager(fsn, conf);
+
+String storageID = "someStorageID1";
+String ip = "someIP" + storageID;
+
+// Register then reregister the same node but with a different version
+for (int i = 0; i <= 1; i++) {
+  dm.registerDatanode(new 

[2/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread sjlee
http://git-wip-us.apache.org/repos/asf/hadoop/blob/022bf783/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
new file mode 100644
index 000..e70198a
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
@@ -0,0 +1,1849 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumn;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumnPrefix;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationTable;
+import 

[4/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread sjlee
YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

(cherry picked from commit 513dcf6817dd76fde8096ff04cd888d7c908461d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/022bf783
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/022bf783
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/022bf783

Branch: refs/heads/trunk
Commit: 022bf783aa89c1c81374ebef5dba2df95b7563b5
Parents: 221582c
Author: Vrushali Channapattan 
Authored: Thu Oct 27 14:37:50 2016 -0700
Committer: Sangjin Lee 
Committed: Thu Oct 27 15:37:36 2016 -0700

--
 .../storage/DataGeneratorForTest.java   |  381 ++
 .../storage/TestHBaseTimelineStorage.java   | 3751 --
 .../storage/TestHBaseTimelineStorageApps.java   | 1849 +
 .../TestHBaseTimelineStorageEntities.java   | 1675 
 4 files changed, 3905 insertions(+), 3751 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/022bf783/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
new file mode 100644
index 000..0938e9e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
@@ -0,0 +1,381 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+
+final class DataGeneratorForTest {
+  static void loadApps(HBaseTestingUtility util) throws IOException {
+TimelineEntities te = new TimelineEntities();
+TimelineEntity entity = new TimelineEntity();
+String id = "application_11_";
+entity.setId(id);
+entity.setType(TimelineEntityType.YARN_APPLICATION.toString());
+Long cTime = 1425016502000L;
+entity.setCreatedTime(cTime);
+// add the info map in Timeline Entity
+Map infoMap = new HashMap<>();
+infoMap.put("infoMapKey1", "infoMapValue2");
+infoMap.put("infoMapKey2", 20);
+infoMap.put("infoMapKey3", 85.85);
+entity.addInfo(infoMap);
+// add the isRelatedToEntity info
+Set isRelatedToSet = new HashSet<>();
+isRelatedToSet.add("relatedto1");
+Map isRelatedTo = new HashMap<>();
+isRelatedTo.put("task", isRelatedToSet);
+entity.setIsRelatedToEntities(isRelatedTo);
+// add the relatesTo info
+Set relatesToSet = new HashSet<>();
+relatesToSet.add("relatesto1");
+relatesToSet.add("relatesto3");
+Map relatesTo = new HashMap<>();
+relatesTo.put("container", relatesToSet);
+Set relatesToSet11 = new HashSet<>();
+relatesToSet11.add("relatesto4");
+
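
The message is truncated by the archive at this point. As a rough sketch only (reusing the record-builder calls that appear above; the id and values are illustrative, and the assembled batch is what these tests hand to the HBase writer), the data generation looks roughly like:

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;

    final class TimelineTestDataSketch {
      static TimelineEntities buildSampleApp() {
        TimelineEntity entity = new TimelineEntity();
        entity.setId("application_1425016501000_0001");   // illustrative id only
        entity.setType(TimelineEntityType.YARN_APPLICATION.toString());
        entity.setCreatedTime(1425016502000L);

        // Generics shown here for clarity; the archive strips them above.
        Map<String, Object> info = new HashMap<>();
        info.put("infoMapKey1", "infoMapValue2");
        info.put("infoMapKey2", 20);
        entity.addInfo(info);

        Set<String> related = new HashSet<>();
        related.add("relatedto1");
        Map<String, Set<String>> isRelatedTo = new HashMap<>();
        isRelatedTo.put("task", related);
        entity.setIsRelatedToEntities(isRelatedTo);

        TimelineEntities batch = new TimelineEntities();
        batch.addEntity(entity);
        return batch;
      }
    }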

[1/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread sjlee
Repository: hadoop
Updated Branches:
  refs/heads/trunk 221582c4a -> 022bf783a


http://git-wip-us.apache.org/repos/asf/hadoop/blob/022bf783/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
new file mode 100644
index 000..3076709
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
@@ -0,0 +1,1675 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.EventColumnName;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.EventColumnNameConverter;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.KeyConverter;
+import org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.StringKeyConverter;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityColumn;

[3/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread sjlee
http://git-wip-us.apache.org/repos/asf/hadoop/blob/022bf783/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
deleted file mode 100644
index e37865f..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
+++ /dev/null
@@ -1,3751 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.storage;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Map;
-import java.util.NavigableMap;
-import java.util.NavigableSet;
-import java.util.Set;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
-import 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation;
-import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumn;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumnPrefix;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey;
-import 

hadoop git commit: YARN-4668. Reuse objectMapper instance in Yarn. (Yiqun Lin via gtcarrera9)

2016-10-27 Thread gtcarrera9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 58ac40b55 -> 2b0fd1f4e


YARN-4668. Reuse objectMapper instance in Yarn. (Yiqun Lin via gtcarrera9)

(cherry picked from commit 221582c4ab0ff1d5936f754f23da140aac656654)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2b0fd1f4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2b0fd1f4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2b0fd1f4

Branch: refs/heads/branch-2
Commit: 2b0fd1f4e28f667d670e9e9b6514e1ec1f70c832
Parents: 58ac40b
Author: Li Lu 
Authored: Thu Oct 27 15:19:59 2016 -0700
Committer: Li Lu 
Committed: Thu Oct 27 15:26:05 2016 -0700

--
 .../hadoop/yarn/client/api/impl/TimelineClientImpl.java  | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2b0fd1f4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
index c433854..d4c68f7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
@@ -85,6 +85,7 @@ public class TimelineClientImpl extends TimelineClient {
 
   private static final Log LOG = LogFactory.getLog(TimelineClientImpl.class);
   private static final String RESOURCE_URI_STR = "/ws/v1/timeline/";
+  private static final ObjectMapper MAPPER = new ObjectMapper();
   private static final Joiner JOINER = Joiner.on("");
   public final static int DEFAULT_SOCKET_TIMEOUT = 1 * 60 * 1000; // 1 minute
 
@@ -563,15 +564,14 @@ public class TimelineClientImpl extends TimelineClient {
   LOG.error("File [" + jsonFile.getAbsolutePath() + "] doesn't exist");
   return;
 }
-ObjectMapper mapper = new ObjectMapper();
-YarnJacksonJaxbJsonProvider.configObjectMapper(mapper);
+YarnJacksonJaxbJsonProvider.configObjectMapper(MAPPER);
 TimelineEntities entities = null;
 TimelineDomains domains = null;
 try {
   if (type.equals(ENTITY_DATA_TYPE)) {
-entities = mapper.readValue(jsonFile, TimelineEntities.class);
+entities = MAPPER.readValue(jsonFile, TimelineEntities.class);
   } else if (type.equals(DOMAIN_DATA_TYPE)){
-domains = mapper.readValue(jsonFile, TimelineDomains.class);
+domains = MAPPER.readValue(jsonFile, TimelineDomains.class);
   }
 } catch (Exception e) {
   LOG.error("Error when reading  " + e.getMessage());
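
The underlying idea, sketched in generic Jackson terms (this is not the Hadoop code, and which Jackson line the project depends on is not shown here): ObjectMapper construction is comparatively expensive, while a fully configured instance is safe to share across threads for read and write calls, so a single static instance avoids rebuilding one on every file.

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.File;
    import java.io.IOException;
    import java.util.Map;

    final class JsonSupport {
      // One shared, pre-configured instance; safe for concurrent use
      // once configuration is complete.
      private static final ObjectMapper MAPPER = new ObjectMapper();

      static Map<?, ?> readMap(File jsonFile) throws IOException {
        return MAPPER.readValue(jsonFile, Map.class);
      }
    }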


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)

2016-10-27 Thread curino
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 55ba22072 -> b32f14d80


YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)

(cherry picked from commit 892a8348fceb42069ea9877251c413fe33415e16)
(cherry picked from commit 58ac40b55296834a8e3f3375caddc03bee901e9a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b32f14d8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b32f14d8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b32f14d8

Branch: refs/heads/branch-2.8
Commit: b32f14d8061cf1c201a253b32e689016df709954
Parents: 55ba220
Author: Carlo Curino 
Authored: Thu Oct 27 15:12:10 2016 -0700
Committer: Carlo Curino 
Committed: Thu Oct 27 15:23:04 2016 -0700

--
 .../resourcemanager/webapp/dao/ClusterMetricsInfo.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b32f14d8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
index 3012d0d..d441658 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.ClusterMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 
 @XmlRootElement(name = "clusterMetrics")
 @XmlAccessorType(XmlAccessType.FIELD)
@@ -87,8 +88,14 @@ public class ClusterMetricsInfo {
 this.containersPending = metrics.getPendingContainers();
 this.containersReserved = metrics.getReservedContainers();
 
-this.totalMB = availableMB + allocatedMB;
-this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+if (rs instanceof CapacityScheduler) {
+  this.totalMB = availableMB + allocatedMB + reservedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores
+  + containersReserved;
+} else {
+  this.totalMB = availableMB + allocatedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+}
 this.activeNodes = clusterMetrics.getNumActiveNMs();
 this.lostNodes = clusterMetrics.getNumLostNMs();
 this.unhealthyNodes = clusterMetrics.getUnhealthyNMs();
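
With made-up numbers: availableMB = 4096, allocatedMB = 2048 and reservedMB = 1024 on the CapacityScheduler now yield totalMB = 7168, where the old formula reported 6144 because reserved memory is counted in neither available nor allocated. A tiny illustration:

    public class TotalMemoryExample {
      public static void main(String[] args) {
        // Illustrative numbers only (MB).
        long availableMB = 4096, allocatedMB = 2048, reservedMB = 1024;
        long totalOnCapacityScheduler = availableMB + allocatedMB + reservedMB; // 7168
        long totalOtherwise = availableMB + allocatedMB;                        // 6144
        System.out.println(totalOnCapacityScheduler + " vs " + totalOtherwise);
      }
    }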


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).

2016-10-27 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 bc2f429c0 -> 55ba22072


YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).

(cherry picked from commit 79ae78dcbec183ab53b26de408b4517e5a151878)
(cherry picked from commit 4274600b9527706259ec5df62a13a5a66adc3ff2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/55ba2207
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/55ba2207
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/55ba2207

Branch: refs/heads/branch-2.8
Commit: 55ba22072194680563ae95adc8c05fcab706aa76
Parents: bc2f429
Author: Subru Krishnan 
Authored: Thu Oct 27 15:11:12 2016 -0700
Committer: Subru Krishnan 
Committed: Thu Oct 27 15:19:01 2016 -0700

--
 .../yarn/server/resourcemanager/security/TestAMRMTokens.java | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/55ba2207/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
index 4488ad6..bcf239d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
@@ -113,6 +113,8 @@ public class TestAMRMTokens {
 DEFAULT_RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
 YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
@@ -230,6 +232,8 @@ public class TestAMRMTokens {
   YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS,
   rolling_interval_sec);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, am_expire_ms);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
 new MockRMWithAMS(conf, containerManager);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)

2016-10-27 Thread curino
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 4274600b9 -> 58ac40b55


YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)

(cherry picked from commit 892a8348fceb42069ea9877251c413fe33415e16)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/58ac40b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/58ac40b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/58ac40b5

Branch: refs/heads/branch-2
Commit: 58ac40b55296834a8e3f3375caddc03bee901e9a
Parents: 4274600
Author: Carlo Curino 
Authored: Thu Oct 27 15:12:10 2016 -0700
Committer: Carlo Curino 
Committed: Thu Oct 27 15:22:04 2016 -0700

--
 .../resourcemanager/webapp/dao/ClusterMetricsInfo.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/58ac40b5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
index 1789e09..f083b05 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.ClusterMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 
 @XmlRootElement(name = "clusterMetrics")
 @XmlAccessorType(XmlAccessType.FIELD)
@@ -87,8 +88,14 @@ public class ClusterMetricsInfo {
 this.containersPending = metrics.getPendingContainers();
 this.containersReserved = metrics.getReservedContainers();
 
-this.totalMB = availableMB + allocatedMB;
-this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+if (rs instanceof CapacityScheduler) {
+  this.totalMB = availableMB + allocatedMB + reservedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores
+  + containersReserved;
+} else {
+  this.totalMB = availableMB + allocatedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+}
 this.activeNodes = clusterMetrics.getNumActiveNMs();
 this.lostNodes = clusterMetrics.getNumLostNMs();
 this.unhealthyNodes = clusterMetrics.getUnhealthyNMs();


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4668. Reuse objectMapper instance in Yarn. (Yiqun Lin via gtcarrera9)

2016-10-27 Thread gtcarrera9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 892a8348f -> 221582c4a


YARN-4668. Reuse objectMapper instance in Yarn. (Yiqun Lin via gtcarrera9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/221582c4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/221582c4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/221582c4

Branch: refs/heads/trunk
Commit: 221582c4ab0ff1d5936f754f23da140aac656654
Parents: 892a834
Author: Li Lu 
Authored: Thu Oct 27 15:19:59 2016 -0700
Committer: Li Lu 
Committed: Thu Oct 27 15:20:17 2016 -0700

--
 .../hadoop/yarn/client/api/impl/TimelineClientImpl.java  | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/221582c4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
index dc4d3e6..d969c59 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java
@@ -98,6 +98,7 @@ import com.sun.jersey.core.util.MultivaluedMapImpl;
 public class TimelineClientImpl extends TimelineClient {
 
   private static final Log LOG = LogFactory.getLog(TimelineClientImpl.class);
+  private static final ObjectMapper MAPPER = new ObjectMapper();
   private static final String RESOURCE_URI_STR_V1 = "/ws/v1/timeline/";
   private static final String RESOURCE_URI_STR_V2 = "/ws/v2/timeline/";
   private static final Joiner JOINER = Joiner.on("");
@@ -765,15 +766,14 @@ public class TimelineClientImpl extends TimelineClient {
   LOG.error("File [" + jsonFile.getAbsolutePath() + "] doesn't exist");
   return;
 }
-ObjectMapper mapper = new ObjectMapper();
-YarnJacksonJaxbJsonProvider.configObjectMapper(mapper);
+YarnJacksonJaxbJsonProvider.configObjectMapper(MAPPER);
 TimelineEntities entities = null;
 TimelineDomains domains = null;
 try {
   if (type.equals(ENTITY_DATA_TYPE)) {
-entities = mapper.readValue(jsonFile, TimelineEntities.class);
+entities = MAPPER.readValue(jsonFile, TimelineEntities.class);
   } else if (type.equals(DOMAIN_DATA_TYPE)){
-domains = mapper.readValue(jsonFile, TimelineDomains.class);
+domains = MAPPER.readValue(jsonFile, TimelineDomains.class);
   }
 } catch (Exception e) {
   LOG.error("Error when reading  " + e.getMessage());


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)

2016-10-27 Thread curino
Repository: hadoop
Updated Branches:
  refs/heads/trunk 79ae78dcb -> 892a8348f


YARN-3432. Cluster metrics have wrong Total Memory when there is reserved memory on CS. (Brahma Reddy Battula via curino)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/892a8348
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/892a8348
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/892a8348

Branch: refs/heads/trunk
Commit: 892a8348fceb42069ea9877251c413fe33415e16
Parents: 79ae78d
Author: Carlo Curino 
Authored: Thu Oct 27 15:12:10 2016 -0700
Committer: Carlo Curino 
Committed: Thu Oct 27 15:15:49 2016 -0700

--
 .../resourcemanager/webapp/dao/ClusterMetricsInfo.java   | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/892a8348/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
index 1789e09..f083b05 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterMetricsInfo.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.ClusterMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
 
 @XmlRootElement(name = "clusterMetrics")
 @XmlAccessorType(XmlAccessType.FIELD)
@@ -87,8 +88,14 @@ public class ClusterMetricsInfo {
 this.containersPending = metrics.getPendingContainers();
 this.containersReserved = metrics.getReservedContainers();
 
-this.totalMB = availableMB + allocatedMB;
-this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+if (rs instanceof CapacityScheduler) {
+  this.totalMB = availableMB + allocatedMB + reservedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores
+  + containersReserved;
+} else {
+  this.totalMB = availableMB + allocatedMB;
+  this.totalVirtualCores = availableVirtualCores + allocatedVirtualCores;
+}
 this.activeNodes = clusterMetrics.getNumActiveNMs();
 this.lostNodes = clusterMetrics.getNumLostNMs();
 this.unhealthyNodes = clusterMetrics.getUnhealthyNMs();


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).

2016-10-27 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f57244880 -> 4274600b9


YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).

(cherry picked from commit 79ae78dcbec183ab53b26de408b4517e5a151878)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4274600b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4274600b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4274600b

Branch: refs/heads/branch-2
Commit: 4274600b9527706259ec5df62a13a5a66adc3ff2
Parents: f572448
Author: Subru Krishnan 
Authored: Thu Oct 27 15:11:12 2016 -0700
Committer: Subru Krishnan 
Committed: Thu Oct 27 15:13:01 2016 -0700

--
 .../yarn/server/resourcemanager/security/TestAMRMTokens.java | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4274600b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
index 4488ad6..bcf239d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
@@ -113,6 +113,8 @@ public class TestAMRMTokens {
 DEFAULT_RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
 YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
@@ -230,6 +232,8 @@ public class TestAMRMTokens {
   YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS,
   rolling_interval_sec);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, am_expire_ms);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
 new MockRMWithAMS(conf, containerManager);


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).

2016-10-27 Thread subru
Repository: hadoop
Updated Branches:
  refs/heads/trunk b98fc8249 -> 79ae78dcb


YARN-3568. TestAMRMTokens should use some random port. (Takashi Ohnishi via Subru).


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79ae78dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79ae78dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79ae78dc

Branch: refs/heads/trunk
Commit: 79ae78dcbec183ab53b26de408b4517e5a151878
Parents: b98fc82
Author: Subru Krishnan 
Authored: Thu Oct 27 15:11:12 2016 -0700
Committer: Subru Krishnan 
Committed: Thu Oct 27 15:11:12 2016 -0700

--
 .../yarn/server/resourcemanager/security/TestAMRMTokens.java | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79ae78dc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
index 4488ad6..bcf239d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestAMRMTokens.java
@@ -113,6 +113,8 @@ public class TestAMRMTokens {
 DEFAULT_RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS,
 YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
@@ -230,6 +232,8 @@ public class TestAMRMTokens {
   YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS,
   rolling_interval_sec);
 conf.setLong(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, am_expire_ms);
+conf.set(YarnConfiguration.RM_SCHEDULER_ADDRESS,
+"0.0.0.0:0");
 MyContainerManager containerManager = new MyContainerManager();
 final MockRMWithAMS rm =
 new MockRMWithAMS(conf, containerManager);
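
Port 0 asks the kernel for any free ephemeral port, which is what lets the test run even when the default scheduler port is already bound. The same mechanism in plain Java, independent of YARN:

    import java.io.IOException;
    import java.net.ServerSocket;

    public final class EphemeralPortDemo {
      public static void main(String[] args) throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {   // 0 = let the OS pick a free port
          System.out.println("Bound to free port " + socket.getLocalPort());
        }
      }
    }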


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4328. Fix findbugs warnings in resourcemanager (Akira Ajisaka via Varun Saxena)

2016-10-27 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.6 82f5d6350 -> 8325ce6e4


YARN-4328. Fix findbugs warnings in resourcemanager (Akira Ajisaka via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8325ce6e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8325ce6e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8325ce6e

Branch: refs/heads/branch-2.6
Commit: 8325ce6e43922230949b350c5b9da92e5961bac4
Parents: 82f5d63
Author: Varun Saxena 
Authored: Fri Oct 28 03:39:10 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 03:39:10 2016 +0530

--
 .../hadoop-yarn/dev-support/findbugs-exclude.xml| 5 +
 .../yarn/server/resourcemanager/recovery/ZKRMStateStore.java| 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8325ce6e/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index ece5cff..6e1c393 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -394,4 +394,9 @@
 
   
 
+  
+
+
+
+  
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8325ce6e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index a8acab8..5343a8b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -113,7 +113,7 @@ public class ZKRMStateStore extends RMStateStore {
   private List zkAcl;
   private List zkAuths;
 
-  class ZKSyncOperationCallback implements AsyncCallback.VoidCallback {
+  static class ZKSyncOperationCallback implements AsyncCallback.VoidCallback {
 @Override
 public void processResult(int rc, String path, Object ctx){
   if (rc == Code.OK.intValue()) {
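
Background on the warning (assumed here to be findbugs' SIC_INNER_SHOULD_BE_STATIC; the report itself isn't quoted): a non-static inner class keeps an implicit reference to its enclosing instance even when it never uses it, which can keep the outer object alive longer than intended. A generic illustration:

    public class Outer {
      // Non-static inner class: every Callback instance pins the Outer instance.
      class Callback {
        void run() { System.out.println("done"); }
      }

      // Static nested class: no hidden reference to Outer, same behaviour here.
      static class StaticCallback {
        void run() { System.out.println("done"); }
      }
    }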


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4710. Reduce logging application reserved debug info in FSAppAttempt#assignContainer (Contributed by Yiqun Lin via Daniel Templeton)

2016-10-27 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9449519a2 -> b98fc8249


YARN-4710. Reduce logging application reserved debug info in FSAppAttempt#assignContainer (Contributed by Yiqun Lin via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b98fc824
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b98fc824
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b98fc824

Branch: refs/heads/trunk
Commit: b98fc8249f0576e7b4e230ffc3cec5a20eefc543
Parents: 9449519
Author: Daniel Templeton 
Authored: Thu Oct 27 14:35:38 2016 -0700
Committer: Daniel Templeton 
Committed: Thu Oct 27 14:42:19 2016 -0700

--
 .../yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b98fc824/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index 3555faa..cef4387 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -771,8 +771,8 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
   }
 
   private Resource assignContainer(FSSchedulerNode node, boolean reserved) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Node offered to app: " + getName() + " reserved: " + reserved);
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Node offered to app: " + getName() + " reserved: " + reserved);
 }
 
 Collection keysToTry = (reserved) ?
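
The isTraceEnabled() guard is kept so the message string is only built when trace logging is actually on; the change just demotes this very chatty per-node-offer message from debug to trace. A minimal sketch of the guarded-logging pattern with commons-logging, the API visible in this diff:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    class OfferLogger {
      private static final Log LOG = LogFactory.getLog(OfferLogger.class);

      void logOffer(String appName, boolean reserved) {
        if (LOG.isTraceEnabled()) {
          // Concatenation is skipped entirely unless trace is enabled.
          LOG.trace("Node offered to app: " + appName + " reserved: " + reserved);
        }
      }
    }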


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4710. Reduce logging application reserved debug info in FSAppAttempt#assignContainer (Contributed by Yiqun Lin via Daniel Templeton)

2016-10-27 Thread templedf
Repository: hadoop
Updated Branches:
  refs/heads/official [created] bfa6891e7


YARN-4710. Reduce logging application reserved debug info in FSAppAttempt#assignContainer (Contributed by Yiqun Lin via Daniel Templeton)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bfa6891e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bfa6891e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bfa6891e

Branch: refs/heads/official
Commit: bfa6891e7d6bfb1798a19c19c1b0b28cb5f47e27
Parents: 9449519
Author: Daniel Templeton 
Authored: Thu Oct 27 14:51:36 2016 -0700
Committer: Daniel Templeton 
Committed: Thu Oct 27 14:51:36 2016 -0700

--
 .../yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bfa6891e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
index 3555faa..cef4387 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
@@ -771,8 +771,8 @@ public class FSAppAttempt extends SchedulerApplicationAttempt
   }
 
   private Resource assignContainer(FSSchedulerNode node, boolean reserved) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Node offered to app: " + getName() + " reserved: " + reserved);
+if (LOG.isTraceEnabled()) {
+  LOG.trace("Node offered to app: " + getName() + " reserved: " + reserved);
 }
 
 Collection keysToTry = (reserved) ?


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4328. Fix findbugs warnings in resourcemanager (Akira Ajisaka via Varun Saxena)

2016-10-27 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 a49510f69 -> edaa37177


YARN-4328. Fix findbugs warnings in resourcemanager (Akira Ajisaka via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edaa3717
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edaa3717
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edaa3717

Branch: refs/heads/branch-2.7
Commit: edaa37177b69526639927da6038b717ccae6884f
Parents: a49510f
Author: Varun Saxena 
Authored: Fri Oct 28 03:08:59 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 03:08:59 2016 +0530

--
 .../yarn/server/resourcemanager/recovery/ZKRMStateStore.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edaa3717/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
index 70ccd8c..9e4eec2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/ZKRMStateStore.java
@@ -115,7 +115,7 @@ public class ZKRMStateStore extends RMStateStore {
   private List zkAcl;
   private List zkAuths;
 
-  class ZKSyncOperationCallback implements AsyncCallback.VoidCallback {
+  static class ZKSyncOperationCallback implements AsyncCallback.VoidCallback {
 @Override
 public void processResult(int rc, String path, Object ctx){
   if (rc == Code.OK.intValue()) {

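The warning being fixed here is findbugs' "inner class should be static" check: a non-static inner class carries an implicit reference to its enclosing instance even if it never reads any outer state, so a callback that only looks at its own arguments can safely be declared static. A generic, hedged illustration (hypothetical names, not the ZooKeeper types):

    public class Outer {
      // Before: implicitly holds a reference to the enclosing Outer instance.
      class NonStaticCallback {
        void processResult(int rc) { /* handle result */ }
      }

      // After: same behaviour, no hidden reference to Outer, which is
      // exactly what the findbugs check asks for.
      static class StaticCallback {
        void processResult(int rc) { /* handle result */ }
      }
    }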

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[3/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread vrushali
http://git-wip-us.apache.org/repos/asf/hadoop/blob/513dcf68/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
deleted file mode 100644
index e37865f..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorage.java
+++ /dev/null
@@ -1,3751 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.storage;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
-
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.EnumSet;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.Map;
-import java.util.NavigableMap;
-import java.util.NavigableSet;
-import java.util.Set;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.client.Connection;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
-import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
-import org.apache.hadoop.hbase.client.Scan;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
-import 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation;
-import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumn;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumnPrefix;
-import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey;
-import 

[4/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread vrushali
YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun 
Saxena via Vrushali C)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/513dcf68
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/513dcf68
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/513dcf68

Branch: refs/heads/YARN-5355
Commit: 513dcf6817dd76fde8096ff04cd888d7c908461d
Parents: f37288c
Author: Vrushali Channapattan 
Authored: Thu Oct 27 14:37:50 2016 -0700
Committer: Vrushali Channapattan 
Committed: Thu Oct 27 14:37:50 2016 -0700

--
 .../storage/DataGeneratorForTest.java   |  381 ++
 .../storage/TestHBaseTimelineStorage.java   | 3751 --
 .../storage/TestHBaseTimelineStorageApps.java   | 1849 +
 .../TestHBaseTimelineStorageEntities.java   | 1675 
 4 files changed, 3905 insertions(+), 3751 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/513dcf68/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
new file mode 100644
index 000..0938e9e
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/DataGeneratorForTest.java
@@ -0,0 +1,381 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+
+final class DataGeneratorForTest {
+  static void loadApps(HBaseTestingUtility util) throws IOException {
+TimelineEntities te = new TimelineEntities();
+TimelineEntity entity = new TimelineEntity();
+String id = "application_11_";
+entity.setId(id);
+entity.setType(TimelineEntityType.YARN_APPLICATION.toString());
+Long cTime = 1425016502000L;
+entity.setCreatedTime(cTime);
+// add the info map in Timeline Entity
+Map<String, Object> infoMap = new HashMap<>();
+infoMap.put("infoMapKey1", "infoMapValue2");
+infoMap.put("infoMapKey2", 20);
+infoMap.put("infoMapKey3", 85.85);
+entity.addInfo(infoMap);
+// add the isRelatedToEntity info
+Set<String> isRelatedToSet = new HashSet<>();
+isRelatedToSet.add("relatedto1");
+Map<String, Set<String>> isRelatedTo = new HashMap<>();
+isRelatedTo.put("task", isRelatedToSet);
+entity.setIsRelatedToEntities(isRelatedTo);
+// add the relatesTo info
+Set<String> relatesToSet = new HashSet<>();
+relatesToSet.add("relatesto1");
+relatesToSet.add("relatesto3");
+Map<String, Set<String>> relatesTo = new HashMap<>();
+relatesTo.put("container", relatesToSet);
+Set<String> relatesToSet11 = new HashSet<>();
+relatesToSet11.add("relatesto4");
+relatesTo.put("container1", relatesToSet11);
+

[2/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread vrushali
http://git-wip-us.apache.org/repos/asf/hadoop/blob/513dcf68/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
new file mode 100644
index 000..e70198a
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageApps.java
@@ -0,0 +1,1849 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntityType;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import 
org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetricOperation;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumn;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationColumnPrefix;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationRowKey;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.application.ApplicationTable;
+import 

[1/4] hadoop git commit: YARN-4765 Split TestHBaseTimelineStorage into multiple test classes (Varun Saxena via Vrushali C)

2016-10-27 Thread vrushali
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5355 f37288c7e -> 513dcf681


http://git-wip-us.apache.org/repos/asf/hadoop/blob/513dcf68/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
new file mode 100644
index 000..3076709
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/storage/TestHBaseTimelineStorageEntities.java
@@ -0,0 +1,1675 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.storage;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.timelineservice.ApplicationEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric.Type;
+import org.apache.hadoop.yarn.server.metrics.ApplicationMetricsConstants;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineDataToRetrieve;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineEntityFilters;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.TimelineReaderContext;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineCompareOp;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineExistsFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineFilterList.Operator;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValueFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelineKeyValuesFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.reader.filter.TimelinePrefixFilter;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineReader.Field;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.EventColumnName;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.EventColumnNameConverter;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.KeyConverter;
+import org.apache.hadoop.yarn.server.timelineservice.storage.common.Separator;
+import 
org.apache.hadoop.yarn.server.timelineservice.storage.common.StringKeyConverter;
+import 

hadoop git commit: YARN-5776. Checkstyle: MonitoringThread.Run method length is too long (miklos.szeg...@cloudera.com via rkanter)

2016-10-27 Thread rkanter
Repository: hadoop
Updated Branches:
  refs/heads/trunk dd4ed6a58 -> 9449519a2


YARN-5776. Checkstyle: MonitoringThread.Run method length is too long 
(miklos.szeg...@cloudera.com via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9449519a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9449519a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9449519a

Branch: refs/heads/trunk
Commit: 9449519a2503c55d9eac8fd7519df28aa0760059
Parents: dd4ed6a
Author: Robert Kanter 
Authored: Thu Oct 27 14:36:27 2016 -0700
Committer: Robert Kanter 
Committed: Thu Oct 27 14:36:38 2016 -0700

--
 .../monitor/ContainersMonitorImpl.java  | 460 +++
 1 file changed, 279 insertions(+), 181 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9449519a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
index a04a914..cd9d6af 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
@@ -48,10 +48,14 @@ import java.util.Map;
 import java.util.Map.Entry;
 import java.util.concurrent.ConcurrentHashMap;
 
+/**
+ * Monitors containers collecting resource usage and preempting the container
+ * if it exceeds its limits.
+ */
 public class ContainersMonitorImpl extends AbstractService implements
 ContainersMonitor {
 
-  final static Log LOG = LogFactory
+  private final static Log LOG = LogFactory
   .getLog(ContainersMonitorImpl.class);
 
   private long monitoringInterval;
@@ -66,7 +70,7 @@ public class ContainersMonitorImpl extends AbstractService 
implements
 
   private final ContainerExecutor containerExecutor;
   private final Dispatcher eventDispatcher;
-  protected final Context context;
+  private final Context context;
   private ResourceCalculatorPlugin resourceCalculatorPlugin;
   private Configuration conf;
   private static float vmemRatio;
@@ -84,15 +88,18 @@ public class ContainersMonitorImpl extends AbstractService 
implements
   private static final long UNKNOWN_MEMORY_LIMIT = -1L;
   private int nodeCpuPercentageForYARN;
 
+  /**
+   * Type of container metric.
+   */
   @Private
-  public static enum ContainerMetric {
+  public enum ContainerMetric {
 CPU, MEMORY
   }
 
   private ResourceUtilization containersUtilization;
   // Tracks the aggregated allocation of the currently allocated containers
   // when queuing of containers at the NMs is enabled.
-  private ResourceUtilization containersAllocation;
+  private final ResourceUtilization containersAllocation;
 
   private volatile boolean stopped = false;
 
@@ -111,44 +118,47 @@ public class ContainersMonitorImpl extends 
AbstractService implements
   }
 
   @Override
-  protected void serviceInit(Configuration conf) throws Exception {
+  protected void serviceInit(Configuration myConf) throws Exception {
+this.conf = myConf;
 this.monitoringInterval =
-conf.getLong(YarnConfiguration.NM_CONTAINER_MON_INTERVAL_MS,
-conf.getLong(YarnConfiguration.NM_RESOURCE_MON_INTERVAL_MS,
+this.conf.getLong(YarnConfiguration.NM_CONTAINER_MON_INTERVAL_MS,
+this.conf.getLong(YarnConfiguration.NM_RESOURCE_MON_INTERVAL_MS,
 YarnConfiguration.DEFAULT_NM_RESOURCE_MON_INTERVAL_MS));
 
 Class<? extends ResourceCalculatorPlugin> clazz =
-conf.getClass(YarnConfiguration.NM_CONTAINER_MON_RESOURCE_CALCULATOR,
-conf.getClass(
+this.conf.getClass(YarnConfiguration
+.NM_CONTAINER_MON_RESOURCE_CALCULATOR,
+this.conf.getClass(
 YarnConfiguration.NM_MON_RESOURCE_CALCULATOR, null,
 ResourceCalculatorPlugin.class),
 ResourceCalculatorPlugin.class);
 this.resourceCalculatorPlugin =
-ResourceCalculatorPlugin.getResourceCalculatorPlugin(clazz, conf);
+

hadoop git commit: YARN-5715. Introduce entity prefix for return and sort order. Contributed by Rohith Sharma K S.

2016-10-27 Thread sjlee
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5355-branch-2 c4d097bfb -> 30e209cd8


YARN-5715. Introduce entity prefix for return and sort order. Contributed by 
Rohith Sharma K S.

(cherry picked from commit f37288c7e0127b564645e978c7aab2a186fa6be6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30e209cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30e209cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30e209cd

Branch: refs/heads/YARN-5355-branch-2
Commit: 30e209cd88c5634858906f2b5c3861e0aace1d5a
Parents: c4d097b
Author: Sangjin Lee 
Authored: Thu Oct 27 13:06:18 2016 -0700
Committer: Sangjin Lee 
Committed: Thu Oct 27 13:57:17 2016 -0700

--
 .../records/timelineservice/TimelineEntity.java | 36 
 .../hadoop/yarn/util/TimelineServiceHelper.java |  8 +
 .../storage/HBaseTimelineWriterImpl.java|  2 +-
 .../storage/entity/EntityRowKey.java| 26 ++
 .../storage/entity/EntityRowKeyPrefix.java  |  6 ++--
 .../storage/entity/EntityTable.java |  4 +--
 .../storage/reader/GenericEntityReader.java |  4 ++-
 .../storage/common/TestRowKeys.java | 24 +++--
 8 files changed, 88 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e209cd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
index 9c0a983..7a289b9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
@@ -54,6 +54,7 @@ import org.codehaus.jackson.annotate.JsonSetter;
 @InterfaceStability.Unstable
 public class TimelineEntity implements Comparable<TimelineEntity> {
   protected final static String SYSTEM_INFO_KEY_PREFIX = "SYSTEM_INFO_";
+  public final static long DEFAULT_ENTITY_PREFIX = 0L;
 
   /**
* Identifier of timeline entity(entity id + entity type).
@@ -145,6 +146,7 @@ public class TimelineEntity implements 
Comparable<TimelineEntity> {
   private HashMap<String, Set<String>> isRelatedToEntities = new HashMap<>();
   private HashMap<String, Set<String>> relatesToEntities = new HashMap<>();
   private Long createdTime;
+  private long idPrefix;
 
   public TimelineEntity() {
 identifier = new Identifier();
@@ -581,4 +583,38 @@ public class TimelineEntity implements 
Comparable<TimelineEntity> {
   return real.toString();
 }
   }
+
+  @XmlElement(name = "idprefix")
+  public long getIdPrefix() {
+if (real == null) {
+  return idPrefix;
+} else {
+  return real.getIdPrefix();
+}
+  }
+
+  /**
+   * Sets idPrefix for an entity.
+   * 
+   * Note: Entities will be stored in the order of idPrefix specified.
+   * If users decide to set idPrefix for an entity, they MUST provide
+   * the same prefix for every update of this entity.
+   * 
+   * Example: 
+   * TimelineEntity entity = new TimelineEntity();
+   * entity.setIdPrefix(value);
+   * 
+   * Users can use {@link TimelineServiceHelper#invertLong(long)} to invert
+   * the prefix if necessary.
+   *
+   * @param entityIdPrefix prefix for an entity.
+   */
+  @JsonSetter("idprefix")
+  public void setIdPrefix(long entityIdPrefix) {
+if (real == null) {
+  this.idPrefix = entityIdPrefix;
+} else {
+  real.setIdPrefix(entityIdPrefix);
+}
+  }
 }
\ No newline at end of file

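The new setIdPrefix javadoc above already shows the bare call; a slightly fuller, hedged sketch of how a writer might pick the prefix, assuming TimelineServiceHelper.invertLong simply inverts the value (e.g. Long.MAX_VALUE - key) so that larger timestamps sort first in the row key:

    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
    import org.apache.hadoop.yarn.util.TimelineServiceHelper;

    public class IdPrefixExample {
      public static void main(String[] args) {
        long createdTime = System.currentTimeMillis();

        TimelineEntity entity = new TimelineEntity();
        entity.setId("attempt_1_0001_01");          // illustrative id
        entity.setType("EXAMPLE_ENTITY");           // illustrative type
        entity.setCreatedTime(createdTime);

        // Use the inverted creation time as the prefix so newer entities
        // sort first; the same prefix must be supplied on every later
        // update of this entity, as the javadoc warns.
        entity.setIdPrefix(TimelineServiceHelper.invertLong(createdTime));
      }
    }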
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e209cd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
index e0268a6..65ed18a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
@@ -46,4 +46,12 @@ public final class TimelineServiceHelper {
 (HashMap) originalMap : new 

[2/2] hadoop git commit: YARN-4388. Cleanup mapreduce.job.hdfs-servers from yarn-default.xml (Junping Du via Varun Saxena)

2016-10-27 Thread varunsaxena
YARN-4388. Cleanup mapreduce.job.hdfs-servers from yarn-default.xml (Junping Du 
via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f5724488
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f5724488
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f5724488

Branch: refs/heads/branch-2
Commit: f572448809b9299ac766bd4e750890bc8f4329c6
Parents: c34bc3d
Author: Varun Saxena 
Authored: Fri Oct 28 02:24:24 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 02:24:24 2016 +0530

--
 .../src/main/resources/mapred-default.xml   | 5 +
 .../apache/hadoop/yarn/conf/TestYarnConfigurationFields.java| 4 
 .../hadoop-yarn-common/src/main/resources/yarn-default.xml  | 5 -
 3 files changed, 5 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5724488/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index 84f054e..c70d6df 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -24,6 +24,11 @@
 
 
 <property>
+  <name>mapreduce.job.hdfs-servers</name>
+  <value>${fs.defaultFS}</value>
+</property>
+
+<property>
   <name>mapreduce.job.committer.setup.cleanup.needed</name>
   <value>true</value>
   <description>true, if job needs job-setup and job-cleanup.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5724488/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index f6f03e2..818f4e0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -132,10 +132,6 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 xmlPropsToSkipCompare = new HashSet<String>();
 xmlPrefixToSkipCompare = new HashSet<String>();
 
-// Should probably be moved from yarn-default.xml to mapred-default.xml
-xmlPropsToSkipCompare.add("mapreduce.job.hdfs-servers");
-xmlPropsToSkipCompare.add("mapreduce.job.jar");
-
 // Possibly obsolete, but unable to verify 100%
 
xmlPropsToSkipCompare.add("yarn.nodemanager.aux-services.mapreduce_shuffle.class");
 
xmlPropsToSkipCompare.add("yarn.resourcemanager.container.liveness-monitor.interval-ms");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f5724488/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index a1eff52..59597f6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1798,11 +1798,6 @@
   
 
   <property>
-    <name>mapreduce.job.hdfs-servers</name>
-    <value>${fs.defaultFS}</value>
-  </property>
-
-  <property>
     <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[1/2] hadoop git commit: YARN-5308. FairScheduler: Move continuous scheduling related tests to TestContinuousScheduling (Kai Sasaki via Varun Saxena)

2016-10-27 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 e50215306 -> f57244880


YARN-5308. FairScheduler: Move continuous scheduling related tests to 
TestContinuousScheduling (Kai Sasaki via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c34bc3d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c34bc3d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c34bc3d6

Branch: refs/heads/branch-2
Commit: c34bc3d661ef813e0c344514eb9478e21cdced78
Parents: e502153
Author: Varun Saxena 
Authored: Fri Oct 28 00:35:40 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 02:23:25 2016 +0530

--
 .../fair/TestContinuousScheduling.java  | 189 ++-
 .../scheduler/fair/TestFairScheduler.java   | 157 ---
 2 files changed, 187 insertions(+), 159 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c34bc3d6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
index 6188246..5964d2f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
@@ -22,20 +22,32 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.apache.hadoop.yarn.event.AsyncDispatcher;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.resourcemanager.MockNodes;
 import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerRequestKey;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeAddedSchedulerEvent;
 
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeRemovedSchedulerEvent;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent;
 import org.apache.hadoop.yarn.util.ControlledClock;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
 import org.junit.Assert;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.isA;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.spy;
+
 import org.junit.Before;
 import org.junit.Test;
 
@@ -43,18 +55,22 @@ import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 public class TestContinuousScheduling extends FairSchedulerTestBase {
   private ControlledClock mockClock;
+  private static int delayThresholdTimeMs = 1000;
 
   @Override
   public Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 conf.setBoolean(
 FairSchedulerConfiguration.CONTINUOUS_SCHEDULING_ENABLED, true);
-conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_NODE_MS, 100);
-conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_RACK_MS, 100);
+conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_NODE_MS,
+delayThresholdTimeMs);
+conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_RACK_MS,
+delayThresholdTimeMs);
 return conf;
   }
 
@@ -167,6 +183,175 @@ public class TestContinuousScheduling extends 
FairSchedulerTestBase {
 

hadoop git commit: YARN-5715. Introduce entity prefix for return and sort order. Contributed by Rohith Sharma K S.

2016-10-27 Thread sjlee
Repository: hadoop
Updated Branches:
  refs/heads/YARN-5355 d1e04e9ae -> f37288c7e


YARN-5715. Introduce entity prefix for return and sort order. Contributed by 
Rohith Sharma K S.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f37288c7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f37288c7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f37288c7

Branch: refs/heads/YARN-5355
Commit: f37288c7e0127b564645e978c7aab2a186fa6be6
Parents: d1e04e9
Author: Sangjin Lee 
Authored: Thu Oct 27 13:06:18 2016 -0700
Committer: Sangjin Lee 
Committed: Thu Oct 27 13:06:18 2016 -0700

--
 .../records/timelineservice/TimelineEntity.java | 36 
 .../hadoop/yarn/util/TimelineServiceHelper.java |  8 +
 .../storage/HBaseTimelineWriterImpl.java|  2 +-
 .../storage/entity/EntityRowKey.java| 26 ++
 .../storage/entity/EntityRowKeyPrefix.java  |  6 ++--
 .../storage/entity/EntityTable.java |  4 +--
 .../storage/reader/GenericEntityReader.java |  4 ++-
 .../storage/common/TestRowKeys.java | 24 +++--
 8 files changed, 88 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f37288c7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
index 9c0a983..7a289b9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineEntity.java
@@ -54,6 +54,7 @@ import org.codehaus.jackson.annotate.JsonSetter;
 @InterfaceStability.Unstable
 public class TimelineEntity implements Comparable<TimelineEntity> {
   protected final static String SYSTEM_INFO_KEY_PREFIX = "SYSTEM_INFO_";
+  public final static long DEFAULT_ENTITY_PREFIX = 0L;
 
   /**
* Identifier of timeline entity(entity id + entity type).
@@ -145,6 +146,7 @@ public class TimelineEntity implements 
Comparable<TimelineEntity> {
   private HashMap<String, Set<String>> isRelatedToEntities = new HashMap<>();
   private HashMap<String, Set<String>> relatesToEntities = new HashMap<>();
   private Long createdTime;
+  private long idPrefix;
 
   public TimelineEntity() {
 identifier = new Identifier();
@@ -581,4 +583,38 @@ public class TimelineEntity implements 
Comparable<TimelineEntity> {
   return real.toString();
 }
   }
+
+  @XmlElement(name = "idprefix")
+  public long getIdPrefix() {
+if (real == null) {
+  return idPrefix;
+} else {
+  return real.getIdPrefix();
+}
+  }
+
+  /**
+   * Sets idPrefix for an entity.
+   * 
+   * Note: Entities will be stored in the order of idPrefix specified.
+   * If users decide to set idPrefix for an entity, they MUST provide
+   * the same prefix for every update of this entity.
+   * 
+   * Example: 
+   * TimelineEntity entity = new TimelineEntity();
+   * entity.setIdPrefix(value);
+   * 
+   * Users can use {@link TimelineServiceHelper#invertLong(long)} to invert
+   * the prefix if necessary.
+   *
+   * @param entityIdPrefix prefix for an entity.
+   */
+  @JsonSetter("idprefix")
+  public void setIdPrefix(long entityIdPrefix) {
+if (real == null) {
+  this.idPrefix = entityIdPrefix;
+} else {
+  real.setIdPrefix(entityIdPrefix);
+}
+  }
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f37288c7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
index e0268a6..65ed18a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/TimelineServiceHelper.java
@@ -46,4 +46,12 @@ public final class TimelineServiceHelper {
 (HashMap) originalMap : new HashMap(originalMap);
   }
 
+  /**
+   * Inverts the given key.
+   * @param key value to 

hadoop git commit: YARN-4388. Cleanup mapreduce.job.hdfs-servers from yarn-default.xml (Junping Du via Varun Saxena)

2016-10-27 Thread varunsaxena
Repository: hadoop
Updated Branches:
  refs/heads/trunk 7e3c327d3 -> dd4ed6a58


YARN-4388. Cleanup mapreduce.job.hdfs-servers from yarn-default.xml (Junping Du 
via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd4ed6a5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd4ed6a5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd4ed6a5

Branch: refs/heads/trunk
Commit: dd4ed6a587bf9cc57eb38d7957d8a907901a1cac
Parents: 7e3c327
Author: Varun Saxena 
Authored: Fri Oct 28 01:41:14 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 02:22:25 2016 +0530

--
 .../src/main/resources/mapred-default.xml   | 5 +
 .../apache/hadoop/yarn/conf/TestYarnConfigurationFields.java| 4 
 .../hadoop-yarn-common/src/main/resources/yarn-default.xml  | 5 -
 3 files changed, 5 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd4ed6a5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index fe29212..2b834bd 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -24,6 +24,11 @@
 
 
 <property>
+  <name>mapreduce.job.hdfs-servers</name>
+  <value>${fs.defaultFS}</value>
+</property>
+
+<property>
   <name>mapreduce.job.committer.setup.cleanup.needed</name>
   <value>true</value>
   <description>true, if job needs job-setup and job-cleanup.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd4ed6a5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
index 668821d..0c40fa9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java
@@ -135,10 +135,6 @@ public class TestYarnConfigurationFields extends 
TestConfigurationFieldsBase {
 xmlPropsToSkipCompare = new HashSet<String>();
 xmlPrefixToSkipCompare = new HashSet<String>();
 
-// Should probably be moved from yarn-default.xml to mapred-default.xml
-xmlPropsToSkipCompare.add("mapreduce.job.hdfs-servers");
-xmlPropsToSkipCompare.add("mapreduce.job.jar");
-
 // Possibly obsolete, but unable to verify 100%
 
xmlPropsToSkipCompare.add("yarn.nodemanager.aux-services.mapreduce_shuffle.class");
 
xmlPropsToSkipCompare.add("yarn.resourcemanager.container.liveness-monitor.interval-ms");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd4ed6a5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index c7076e5..6c247b0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -1801,11 +1801,6 @@
   
 
   <property>
-    <name>mapreduce.job.hdfs-servers</name>
-    <value>${fs.defaultFS}</value>
-  </property>
-
-  <property>
     <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>

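Worth noting for anyone consuming the moved property: its default value stays ${fs.defaultFS}, which Hadoop's Configuration expands at read time against the other loaded properties. A small hedged sketch of that expansion (host name and values are made up):

    import org.apache.hadoop.conf.Configuration;

    public class ConfExpansionExample {
      public static void main(String[] args) {
        // Start from an empty Configuration so the example is self-contained.
        Configuration conf = new Configuration(false);
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        conf.set("mapreduce.job.hdfs-servers", "${fs.defaultFS}");

        // Variable expansion resolves ${fs.defaultFS} when the value is read,
        // so this prints hdfs://namenode.example.com:8020.
        System.out.println(conf.get("mapreduce.job.hdfs-servers"));
      }
    }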

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. Contributed by Wei-Chiu Chuang.

2016-10-27 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 d21dc8aa6 -> bc2f429c0


HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. 
Contributed by Wei-Chiu Chuang.

(cherry picked from commit 31ff42b51037632ec871f29efc0fa894e1b738d0)
(cherry picked from commit 2cf3138d72a1392d13f6225b4a9fca4fe7ced132)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc2f429c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc2f429c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc2f429c

Branch: refs/heads/branch-2.8
Commit: bc2f429c0e112d7123ad0dc4612954a6bbca0b7e
Parents: d21dc8a
Author: Wei-Chiu Chuang 
Authored: Thu Oct 27 13:37:00 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Oct 27 13:47:26 2016 -0700

--
 .../src/test/resources/default-log4j.properties  | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc2f429c/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
index 7517512..45a8412 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
@@ -18,5 +18,9 @@ log4j.appender.test.File=${test.dir}/test.log
 log4j.appender.test.Append=true
 log4j.appender.test.layout=org.apache.log4j.PatternLayout
 log4j.appender.test.layout.ConversionPattern=%d{ISO8601} %5p %20c{1}: %4L - 
%m%n
-log4j.rootLogger=ALL, test
-
+log4j.rootLogger=INFO, stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.Target=System.out
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
+log4j.logger.com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator=OFF

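For readers who find the properties syntax hard to scan, a rough programmatic equivalent of the new test logging setup using the plain log4j 1.x API (this is only a sketch of what the properties express, not code added by the patch):

    import org.apache.log4j.ConsoleAppender;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;

    public class HttpfsTestLoggingSketch {
      static void configure() {
        Logger root = Logger.getRootLogger();
        root.removeAllAppenders();
        root.setLevel(Level.INFO);
        // Console appender with the same conversion pattern as the properties.
        root.addAppender(new ConsoleAppender(
            new PatternLayout("%d{ISO8601} %-5p %c{1} - %m%n")));

        // Silence the very chatty Jersey WADL generator, as the properties
        // file now does with level OFF.
        Logger.getLogger("com.sun.jersey.server.wadl.generators"
            + ".AbstractWadlGeneratorGrammarGenerator").setLevel(Level.OFF);
      }
    }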

-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4831. Recovered containers will be killed after NM stateful restart. Contributed by Siqi Li (cherry picked from commit 7e3c327d316b33d6a09bfd4e65e7e5384943bb1d)

2016-10-27 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 40dbf2a18 -> d21dc8aa6


YARN-4831. Recovered containers will be killed after NM stateful restart. 
Contributed by Siqi Li
(cherry picked from commit 7e3c327d316b33d6a09bfd4e65e7e5384943bb1d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d21dc8aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d21dc8aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d21dc8aa

Branch: refs/heads/branch-2.8
Commit: d21dc8aa6c7baa6f401623d76b48f79d748aec8d
Parents: 40dbf2a
Author: Jason Lowe 
Authored: Thu Oct 27 20:41:43 2016 +
Committer: Jason Lowe 
Committed: Thu Oct 27 20:45:18 2016 +

--
 .../container/ContainerImpl.java| 24 
 1 file changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d21dc8aa/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 5891682..052fe8d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1010,16 +1010,20 @@ public class ContainerImpl implements Container {
   static class KillOnNewTransition extends ContainerDoneTransition {
 @Override
 public void transition(ContainerImpl container, ContainerEvent event) {
-  ContainerKillEvent killEvent = (ContainerKillEvent) event;
-  container.exitCode = killEvent.getContainerExitStatus();
-  container.addDiagnostics(killEvent.getDiagnostic(), "\n");
-  container.addDiagnostics("Container is killed before being launched.\n");
-  container.metrics.killedContainer();
-  NMAuditLogger.logSuccess(container.user,
-  AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
-  container.containerId.getApplicationAttemptId().getApplicationId(),
-  container.containerId);
-  super.transition(container, event);
+  if (container.recoveredStatus == RecoveredContainerStatus.COMPLETED) {
+container.sendFinishedEvents();
+  } else {
+ContainerKillEvent killEvent = (ContainerKillEvent) event;
+container.exitCode = killEvent.getContainerExitStatus();
+container.addDiagnostics(killEvent.getDiagnostic(), "\n");
+container.addDiagnostics("Container is killed before being 
launched.\n");
+container.metrics.killedContainer();
+NMAuditLogger.logSuccess(container.user,
+AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
+container.containerId.getApplicationAttemptId().getApplicationId(),
+container.containerId);
+super.transition(container, event);
+  }
 }
   }
 
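The essence of the fix reads more clearly outside the state-machine plumbing: when a kill arrives for a container that recovery already recorded as completed, only the completion events should be replayed, instead of stamping a kill exit code onto a container that actually finished. A stripped-down, hypothetical sketch of that guard (names below are illustrative, not the NM's real types):

    public class RecoveredKillSketch {
      enum RecoveredStatus { REQUESTED, LAUNCHED, COMPLETED }

      private final RecoveredStatus recoveredStatus;

      RecoveredKillSketch(RecoveredStatus recoveredStatus) {
        this.recoveredStatus = recoveredStatus;
      }

      void onKillRequest() {
        if (recoveredStatus == RecoveredStatus.COMPLETED) {
          // The container finished before the NM restart: just publish
          // its final status instead of treating the kill as a failure.
          sendFinishedEvents();
        } else {
          // Genuinely killed before launch: record diagnostics and metrics.
          recordKilledBeforeLaunch();
        }
      }

      void sendFinishedEvents() { /* replay completion to the application */ }
      void recordKilledBeforeLaunch() { /* audit log, metrics, exit code */ }
    }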


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4831. Recovered containers will be killed after NM stateful restart. Contributed by Siqi Li (cherry picked from commit 7e3c327d316b33d6a09bfd4e65e7e5384943bb1d)

2016-10-27 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2cf3138d7 -> e50215306


YARN-4831. Recovered containers will be killed after NM stateful restart. 
Contributed by Siqi Li
(cherry picked from commit 7e3c327d316b33d6a09bfd4e65e7e5384943bb1d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5021530
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5021530
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5021530

Branch: refs/heads/branch-2
Commit: e50215306d4483ac9159669e68febd67c7bce2cf
Parents: 2cf3138
Author: Jason Lowe 
Authored: Thu Oct 27 20:41:43 2016 +
Committer: Jason Lowe 
Committed: Thu Oct 27 20:44:17 2016 +

--
 .../container/ContainerImpl.java| 24 
 1 file changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5021530/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 6631d3a..6caa418 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1496,16 +1496,20 @@ public class ContainerImpl implements Container {
   static class KillOnNewTransition extends ContainerDoneTransition {
 @Override
 public void transition(ContainerImpl container, ContainerEvent event) {
-  ContainerKillEvent killEvent = (ContainerKillEvent) event;
-  container.exitCode = killEvent.getContainerExitStatus();
-  container.addDiagnostics(killEvent.getDiagnostic(), "\n");
-  container.addDiagnostics("Container is killed before being launched.\n");
-  container.metrics.killedContainer();
-  NMAuditLogger.logSuccess(container.user,
-  AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
-  container.containerId.getApplicationAttemptId().getApplicationId(),
-  container.containerId);
-  super.transition(container, event);
+  if (container.recoveredStatus == RecoveredContainerStatus.COMPLETED) {
+container.sendFinishedEvents();
+  } else {
+ContainerKillEvent killEvent = (ContainerKillEvent) event;
+container.exitCode = killEvent.getContainerExitStatus();
+container.addDiagnostics(killEvent.getDiagnostic(), "\n");
+container.addDiagnostics("Container is killed before being 
launched.\n");
+container.metrics.killedContainer();
+NMAuditLogger.logSuccess(container.user,
+AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
+container.containerId.getApplicationAttemptId().getApplicationId(),
+container.containerId);
+super.transition(container, event);
+  }
 }
   }
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. Contributed by Wei-Chiu Chuang.

2016-10-27 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 8935b2370 -> 2cf3138d7


HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. Contributed by Wei-Chiu Chuang.

(cherry picked from commit 31ff42b51037632ec871f29efc0fa894e1b738d0)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cf3138d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cf3138d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cf3138d

Branch: refs/heads/branch-2
Commit: 2cf3138d72a1392d13f6225b4a9fca4fe7ced132
Parents: 8935b23
Author: Wei-Chiu Chuang 
Authored: Thu Oct 27 13:37:00 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Oct 27 13:38:02 2016 -0700

--
 .../src/test/resources/default-log4j.properties  | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cf3138d/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
index 7517512..45a8412 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
@@ -18,5 +18,9 @@ log4j.appender.test.File=${test.dir}/test.log
 log4j.appender.test.Append=true
 log4j.appender.test.layout=org.apache.log4j.PatternLayout
 log4j.appender.test.layout.ConversionPattern=%d{ISO8601} %5p %20c{1}: %4L - %m%n
-log4j.rootLogger=ALL, test
-
+log4j.rootLogger=INFO, stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.Target=System.out
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
+log4j.logger.com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator=OFF


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4831. Recovered containers will be killed after NM stateful restart. Contributed by Siqi Li

2016-10-27 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 31ff42b51 -> 7e3c327d3


YARN-4831. Recovered containers will be killed after NM stateful restart. 
Contributed by Siqi Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7e3c327d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7e3c327d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7e3c327d

Branch: refs/heads/trunk
Commit: 7e3c327d316b33d6a09bfd4e65e7e5384943bb1d
Parents: 31ff42b
Author: Jason Lowe 
Authored: Thu Oct 27 20:41:43 2016 +
Committer: Jason Lowe 
Committed: Thu Oct 27 20:42:52 2016 +

--
 .../container/ContainerImpl.java| 24 
 1 file changed, 14 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e3c327d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
index 4bc0a0f..e6b9d9f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
@@ -1506,16 +1506,20 @@ public class ContainerImpl implements Container {
   static class KillOnNewTransition extends ContainerDoneTransition {
 @Override
 public void transition(ContainerImpl container, ContainerEvent event) {
-  ContainerKillEvent killEvent = (ContainerKillEvent) event;
-  container.exitCode = killEvent.getContainerExitStatus();
-  container.addDiagnostics(killEvent.getDiagnostic(), "\n");
-  container.addDiagnostics("Container is killed before being launched.\n");
-  container.metrics.killedContainer();
-  NMAuditLogger.logSuccess(container.user,
-  AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
-  container.containerId.getApplicationAttemptId().getApplicationId(),
-  container.containerId);
-  super.transition(container, event);
+  if (container.recoveredStatus == RecoveredContainerStatus.COMPLETED) {
+container.sendFinishedEvents();
+  } else {
+ContainerKillEvent killEvent = (ContainerKillEvent) event;
+container.exitCode = killEvent.getContainerExitStatus();
+container.addDiagnostics(killEvent.getDiagnostic(), "\n");
+container.addDiagnostics("Container is killed before being 
launched.\n");
+container.metrics.killedContainer();
+NMAuditLogger.logSuccess(container.user,
+AuditConstants.FINISH_KILLED_CONTAINER, "ContainerImpl",
+container.containerId.getApplicationAttemptId().getApplicationId(),
+container.containerId);
+super.transition(container, event);
+  }
 }
   }
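
The practical effect of the patch: before it, any container still in the NEW state that received a kill event was recorded as a killed container, even when the NodeManager had merely recovered a container that already completed before the restart. The following is a minimal, hedged Java sketch of the new branch; RecoveredStatus, Container and the method names are simplified stand-ins, not the real NodeManager classes.

public class KillOnNewSketch {

  enum RecoveredStatus { REQUESTED, LAUNCHED, COMPLETED }

  static class Container {
    RecoveredStatus recoveredStatus = RecoveredStatus.REQUESTED;
    int exitCode;
    final StringBuilder diagnostics = new StringBuilder();
    int killedCount;   // stands in for the killedContainer() metric

    void sendFinishedEvents() {
      System.out.println("reporting container as finished");
    }
  }

  /**
   * Mirrors the patched KillOnNewTransition: a container recovered as
   * COMPLETED only re-sends its finished events; everything else is still
   * recorded as killed before launch.
   */
  static void killWhileNew(Container c, int killExitStatus, String reason) {
    if (c.recoveredStatus == RecoveredStatus.COMPLETED) {
      c.sendFinishedEvents();
    } else {
      c.exitCode = killExitStatus;
      c.diagnostics.append(reason).append('\n')
          .append("Container is killed before being launched.\n");
      c.killedCount++;
      c.sendFinishedEvents();
    }
  }

  public static void main(String[] args) {
    Container recovered = new Container();
    recovered.recoveredStatus = RecoveredStatus.COMPLETED;
    killWhileNew(recovered, -105, "NM restarted");
    // killed metric stays at 0: an already-finished recovered container
    // is no longer counted or reported as killed.
    System.out.println("killed metric: " + recovered.killedCount);
  }
}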
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. Contributed by Wei-Chiu Chuang.

2016-10-27 Thread weichiu
Repository: hadoop
Updated Branches:
  refs/heads/trunk b4a8fbcbb -> 31ff42b51


HDFS-11055. Update default-log4j.properties for httpfs to improve test logging. Contributed by Wei-Chiu Chuang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/31ff42b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/31ff42b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/31ff42b5

Branch: refs/heads/trunk
Commit: 31ff42b51037632ec871f29efc0fa894e1b738d0
Parents: b4a8fbc
Author: Wei-Chiu Chuang 
Authored: Thu Oct 27 13:37:00 2016 -0700
Committer: Wei-Chiu Chuang 
Committed: Thu Oct 27 13:37:00 2016 -0700

--
 .../src/test/resources/default-log4j.properties  | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31ff42b5/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
index 7517512..45a8412 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/resources/default-log4j.properties
@@ -18,5 +18,9 @@ log4j.appender.test.File=${test.dir}/test.log
 log4j.appender.test.Append=true
 log4j.appender.test.layout=org.apache.log4j.PatternLayout
 log4j.appender.test.layout.ConversionPattern=%d{ISO8601} %5p %20c{1}: %4L - %m%n
-log4j.rootLogger=ALL, test
-
+log4j.rootLogger=INFO, stdout
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.Target=System.out
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
+log4j.logger.com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator=OFF


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-5172. Update yarn daemonlog documentation due to HADOOP-12847. Contributed by Wei-Chiu Chuang (cherry picked from commit b4a8fbcbbc5ea4ab3087ecf913839a53f32be113)

2016-10-27 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b7f7d42c2 -> 8935b2370


YARN-5172. Update yarn daemonlog documentation due to HADOOP-12847. Contributed 
by Wei-Chiu Chuang
(cherry picked from commit b4a8fbcbbc5ea4ab3087ecf913839a53f32be113)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8935b237
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8935b237
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8935b237

Branch: refs/heads/branch-2
Commit: 8935b23706b7e26685e32c790a2f27d46302a492
Parents: b7f7d42
Author: Jason Lowe 
Authored: Thu Oct 27 19:41:43 2016 +
Committer: Jason Lowe 
Committed: Thu Oct 27 19:45:00 2016 +

--
 .../src/site/markdown/YarnCommands.md  | 17 ++---
 1 file changed, 2 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8935b237/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
index ed9f7b8..0c038f4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
@@ -169,21 +169,8 @@ Commands useful for administrators of a Hadoop cluster.
 
 ### `daemonlog`
 
-Usage:
-
-```
-   yarn daemonlog -getlevel <host:httpport> <classname>
-   yarn daemonlog -setlevel <host:httpport> <classname> <level>
-```
-
-| COMMAND\_OPTIONS | Description |
-|:---- |:---- |
-| -getlevel `<host:httpport>` `<classname>` | Prints the log level of the log identified by a qualified `<classname>`, in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>` |
-| -setlevel `<host:httpport>` `<classname>` `<level>` | Sets the log level of the log identified by a qualified `<classname>` in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>&level=<level>` |
-
-Get/Set the log level for a Log identified by a qualified class name in the daemon.
-
-Example: `$ bin/yarn daemonlog -setlevel 127.0.0.1:8088 org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl DEBUG`
+Get/Set the log level for a Log identified by a qualified class name in the daemon dynamically.
+See the Hadoop [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#daemonlog) for more information.
 
 ### `nodemanager`
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-5172. Update yarn daemonlog documentation due to HADOOP-12847. Contributed by Wei-Chiu Chuang

2016-10-27 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6fbfb501f -> b4a8fbcbb


YARN-5172. Update yarn daemonlog documentation due to HADOOP-12847. Contributed 
by Wei-Chiu Chuang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b4a8fbcb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b4a8fbcb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b4a8fbcb

Branch: refs/heads/trunk
Commit: b4a8fbcbbc5ea4ab3087ecf913839a53f32be113
Parents: 6fbfb50
Author: Jason Lowe 
Authored: Thu Oct 27 19:41:43 2016 +
Committer: Jason Lowe 
Committed: Thu Oct 27 19:43:02 2016 +

--
 .../src/site/markdown/YarnCommands.md  | 17 ++---
 1 file changed, 2 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4a8fbcb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
index 2c38967..8f954ff 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
@@ -173,21 +173,8 @@ Commands useful for administrators of a Hadoop cluster.
 
 ### `daemonlog`
 
-Usage:
-
-```
-   yarn daemonlog -getlevel <host:httpport> <classname>
-   yarn daemonlog -setlevel <host:httpport> <classname> <level>
-```
-
-| COMMAND\_OPTIONS | Description |
-|:---- |:---- |
-| -getlevel `<host:httpport>` `<classname>` | Prints the log level of the log identified by a qualified `<classname>`, in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>` |
-| -setlevel `<host:httpport>` `<classname>` `<level>` | Sets the log level of the log identified by a qualified `<classname>` in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>&level=<level>` |
-
-Get/Set the log level for a Log identified by a qualified class name in the daemon.
-
-Example: `$ bin/yarn daemonlog -setlevel 127.0.0.1:8088 org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl DEBUG`
+Get/Set the log level for a Log identified by a qualified class name in the daemon dynamically.
+See the Hadoop [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#daemonlog) for more information.
 
 ### `nodemanager`
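
Although the usage table moves to the Commands Manual, the removed text still documents the mechanism: the command is a thin client over the daemon's /logLevel servlet. Below is a hedged Java sketch of that request, built only from the URL pattern in the removed lines; the host, class name and level are illustrative values, not taken from the commit.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Sketch of what `yarn daemonlog -setlevel` does under the hood:
 *  an HTTP GET against the daemon's /logLevel servlet. */
public class DaemonLogSketch {
  public static void main(String[] args) throws Exception {
    String host = "127.0.0.1:8088";   // example daemon http address
    String clazz = "org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl";
    String level = "DEBUG";

    URL url = new URL("http://" + host + "/logLevel?log=" + clazz + "&level=" + level);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // the servlet echoes the effective log level
      }
    }
  }
}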
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4456. Clean up Lint warnings in nodemanager (templedf via rkanter)

2016-10-27 Thread rkanter
Repository: hadoop
Updated Branches:
  refs/heads/trunk ae48c496d -> 6fbfb501f


YARN-4456. Clean up Lint warnings in nodemanager (templedf via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6fbfb501
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6fbfb501
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6fbfb501

Branch: refs/heads/trunk
Commit: 6fbfb501f2e27045da5ce8f48dde881b29328b4a
Parents: ae48c49
Author: Robert Kanter 
Authored: Thu Oct 27 12:37:01 2016 -0700
Committer: Robert Kanter 
Committed: Thu Oct 27 12:37:01 2016 -0700

--
 .../containermanager/logaggregation/TestLogAggregationService.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fbfb501/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index 1edb841..b9d18a3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -2138,7 +2138,7 @@ public class TestLogAggregationService extends BaseContainerManagerTest {
 ApplicationEventType.APPLICATION_LOG_HANDLING_INITED) };
 checkEvents(appEventHandler, expectedInitEvents, false, "getType",
 "getApplicationID");
-reset(appEventHandler);
+reset(new EventHandler[] {appEventHandler});
 
 logAggregationService.handle(new LogHandlerAppFinishedEvent(appId));
 logAggregationService.stop();
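
The one-line change looks cosmetic but addresses a real javac lint warning: calling a generic varargs method such as Mockito's reset(T... mocks) with a single generically typed argument triggers an "unchecked generic array creation" warning at the call site. A hedged, self-contained sketch of the pattern follows; Handler<T> is a stand-in for YARN's EventHandler and Mockito is assumed to be on the classpath.

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.reset;

/** Sketch of the lint issue behind the one-line change; only the
 *  varargs call pattern matters here. */
public class VarargsLintSketch {
  interface Handler<T> { void handle(T event); }

  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    Handler<String> handler = mock(Handler.class);

    // reset(handler) compiles, but javac -Xlint warns about unchecked
    // generic array creation for the varargs parameter of reset(T...).
    // Wrapping the mock in an explicit array removes the warning:
    reset(new Handler[] {handler});
  }
}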


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: YARN-4456. Clean up Lint warnings in nodemanager (templedf via rkanter)

2016-10-27 Thread rkanter
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5057abc39 -> b7f7d42c2


YARN-4456. Clean up Lint warnings in nodemanager (templedf via rkanter)

(cherry picked from commit 6fbfb501f2e27045da5ce8f48dde881b29328b4a)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b7f7d42c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b7f7d42c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b7f7d42c

Branch: refs/heads/branch-2
Commit: b7f7d42c273622cad2d69be6bc0ae65c1715b530
Parents: 5057abc
Author: Robert Kanter 
Authored: Thu Oct 27 12:37:01 2016 -0700
Committer: Robert Kanter 
Committed: Thu Oct 27 12:37:20 2016 -0700

--
 .../containermanager/logaggregation/TestLogAggregationService.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b7f7d42c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index 1edb841..b9d18a3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -2138,7 +2138,7 @@ public class TestLogAggregationService extends BaseContainerManagerTest {
 ApplicationEventType.APPLICATION_LOG_HANDLING_INITED) };
 checkEvents(appEventHandler, expectedInitEvents, false, "getType",
 "getApplicationID");
-reset(appEventHandler);
+reset(new EventHandler[] {appEventHandler});
 
 logAggregationService.handle(new LogHandlerAppFinishedEvent(appId));
 logAggregationService.stop();


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee Updated CHANGES.txt (cherry picked from commit ae48c496dce8d0eae4571fc64e6850d602bae688)

2016-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 616d63452 -> a49510f69


HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee
Updated CHANGES.txt
(cherry picked from commit ae48c496dce8d0eae4571fc64e6850d602bae688)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a49510f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a49510f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a49510f6

Branch: refs/heads/branch-2.7
Commit: a49510f697c7f64861e4429c3b8bd23b062e6905
Parents: 616d634
Author: Kihwal Lee 
Authored: Thu Oct 27 14:23:20 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 27 14:23:20 2016 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt| 2 ++
 .../java/org/apache/hadoop/hdfs/server/datanode/DataNode.java  | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a49510f6/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bf7199b..582b146 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -57,6 +57,8 @@ Release 2.7.4 - UNRELEASED
 
 HDFS-11053. Unnecessary superuser check in versionRequest() (kihwal)
 
+HDFS-11069. Tighten the authorization in datanode RPC. (kihwal)
+
   OPTIMIZATIONS
 
 HDFS-10896. Move lock logging logic from FSNamesystem into FSNamesystemLock.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a49510f6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index eb159eb..9ba64b4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -839,7 +839,7 @@ public class DataNode extends ReconfigurableBase
 
 // Is this by the DN user itself?
 assert dnUserName != null;
-if (callerUgi.getShortUserName().equals(dnUserName)) {
+if (callerUgi.getUserName().equals(dnUserName)) {
   return;
 }
 
@@ -1135,7 +1135,7 @@ public class DataNode extends ReconfigurableBase
 this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
 
 // Login is done by now. Set the DN user name.
-dnUserName = UserGroupInformation.getCurrentUser().getShortUserName();
+dnUserName = UserGroupInformation.getCurrentUser().getUserName();
 LOG.info("dnUserName = " + dnUserName);
 LOG.info("supergroup = " + supergroup);
 initIpcServer(conf);
@@ -3256,4 +3256,4 @@ public class DataNode extends ReconfigurableBase
   void setBlockScanner(BlockScanner blockScanner) {
 this.blockScanner = blockScanner;
   }
-}
\ No newline at end of file
+}


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee

2016-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 5b0d32b4a -> 40dbf2a18


HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee

(cherry picked from commit ae48c496dce8d0eae4571fc64e6850d602bae688)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/40dbf2a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/40dbf2a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/40dbf2a1

Branch: refs/heads/branch-2.8
Commit: 40dbf2a18d1da2c5553619feef31a49e35c69256
Parents: 5b0d32b
Author: Kihwal Lee 
Authored: Thu Oct 27 14:18:48 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 27 14:18:48 2016 -0500

--
 .../java/org/apache/hadoop/hdfs/server/datanode/DataNode.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/40dbf2a1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index c31e5fd..55e68f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -982,7 +982,7 @@ public class DataNode extends ReconfigurableBase
 
 // Is this by the DN user itself?
 assert dnUserName != null;
-if (callerUgi.getShortUserName().equals(dnUserName)) {
+if (callerUgi.getUserName().equals(dnUserName)) {
   return;
 }
 
@@ -1300,7 +1300,7 @@ public class DataNode extends ReconfigurableBase
 this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
 
 // Login is done by now. Set the DN user name.
-dnUserName = UserGroupInformation.getCurrentUser().getShortUserName();
+dnUserName = UserGroupInformation.getCurrentUser().getUserName();
 LOG.info("dnUserName = " + dnUserName);
 LOG.info("supergroup = " + supergroup);
 initIpcServer(conf);
@@ -3307,4 +3307,4 @@ public class DataNode extends ReconfigurableBase
   void setBlockScanner(BlockScanner blockScanner) {
 this.blockScanner = blockScanner;
   }
-}
\ No newline at end of file
+}


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[25/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to 
mvn, and fix licenses. (wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3b550eb6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3b550eb6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3b550eb6

Branch: refs/heads/YARN-3368
Commit: 3b550eb6542ede65f4084df07e67c92580a93a8e
Parents: 185a219
Author: Wangda Tan 
Authored: Mon Mar 21 14:03:13 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .gitignore  |  13 +
 BUILDING.txt|   4 +-
 LICENSE.txt |  80 +
 dev-support/create-release.sh   | 144 +
 dev-support/docker/Dockerfile   |   5 +
 .../src/site/markdown/YarnUI2.md|  43 +++
 .../hadoop-yarn/hadoop-yarn-ui/.bowerrc |   4 -
 .../hadoop-yarn/hadoop-yarn-ui/.editorconfig|  34 ---
 .../hadoop-yarn/hadoop-yarn-ui/.ember-cli   |  11 -
 .../hadoop-yarn/hadoop-yarn-ui/.gitignore   |  17 --
 .../hadoop-yarn/hadoop-yarn-ui/.jshintrc|  32 --
 .../hadoop-yarn/hadoop-yarn-ui/.travis.yml  |  23 --
 .../hadoop-yarn/hadoop-yarn-ui/.watchmanconfig  |   3 -
 .../hadoop-yarn/hadoop-yarn-ui/README.md|  24 --
 .../hadoop-yarn-ui/app/adapters/cluster-info.js |  20 --
 .../app/adapters/cluster-metric.js  |  20 --
 .../app/adapters/yarn-app-attempt.js|  32 --
 .../hadoop-yarn-ui/app/adapters/yarn-app.js |  26 --
 .../app/adapters/yarn-container-log.js  |  74 -
 .../app/adapters/yarn-container.js  |  43 ---
 .../app/adapters/yarn-node-app.js   |  63 
 .../app/adapters/yarn-node-container.js |  64 
 .../hadoop-yarn-ui/app/adapters/yarn-node.js|  40 ---
 .../hadoop-yarn-ui/app/adapters/yarn-queue.js   |  20 --
 .../hadoop-yarn-ui/app/adapters/yarn-rm-node.js |  45 ---
 .../hadoop-yarn/hadoop-yarn-ui/app/app.js   |  20 --
 .../hadoop-yarn-ui/app/components/.gitkeep  |   0
 .../app/components/app-attempt-table.js |   4 -
 .../hadoop-yarn-ui/app/components/app-table.js  |   4 -
 .../hadoop-yarn-ui/app/components/bar-chart.js  | 104 ---
 .../app/components/base-chart-component.js  | 109 ---
 .../app/components/container-table.js   |   4 -
 .../app/components/donut-chart.js   | 148 --
 .../app/components/item-selector.js |  21 --
 .../app/components/queue-configuration-table.js |   4 -
 .../app/components/queue-navigator.js   |   4 -
 .../hadoop-yarn-ui/app/components/queue-view.js | 272 -
 .../app/components/simple-table.js  |  58 
 .../app/components/timeline-view.js | 250 
 .../app/components/tree-selector.js | 257 
 .../hadoop-yarn/hadoop-yarn-ui/app/config.js|  27 --
 .../hadoop-yarn/hadoop-yarn-ui/app/constants.js |  24 --
 .../hadoop-yarn-ui/app/controllers/.gitkeep |   0
 .../app/controllers/application.js  |  55 
 .../app/controllers/cluster-overview.js |   5 -
 .../hadoop-yarn-ui/app/controllers/yarn-apps.js |   4 -
 .../app/controllers/yarn-queue.js   |   6 -
 .../hadoop-yarn-ui/app/helpers/.gitkeep |   0
 .../hadoop-yarn-ui/app/helpers/divide.js|  31 --
 .../app/helpers/log-files-comma.js  |  48 ---
 .../hadoop-yarn-ui/app/helpers/node-link.js |  37 ---
 .../hadoop-yarn-ui/app/helpers/node-menu.js |  66 -
 .../hadoop-yarn/hadoop-yarn-ui/app/index.html   |  25 --
 .../hadoop-yarn-ui/app/models/.gitkeep  |   0
 .../hadoop-yarn-ui/app/models/cluster-info.js   |  13 -
 .../hadoop-yarn-ui/app/models/cluster-metric.js | 115 
 .../app/models/yarn-app-attempt.js  |  44 ---
 .../hadoop-yarn-ui/app/models/yarn-app.js   |  65 -
 .../app/models/yarn-container-log.js|  25 --
 .../hadoop-yarn-ui/app/models/yarn-container.js |  39 ---
 .../hadoop-yarn-ui/app/models/yarn-node-app.js  |  44 ---
 .../app/models/yarn-node-container.js   |  57 
 .../hadoop-yarn-ui/app/models/yarn-node.js  |  33 ---
 .../hadoop-yarn-ui/app/models/yarn-queue.js |  76 -
 .../hadoop-yarn-ui/app/models/yarn-rm-node.js   |  92 --
 .../hadoop-yarn-ui/app/models/yarn-user.js  |   8 -
 .../hadoop-yarn/hadoop-yarn-ui/app/router.js|  29 --
 .../hadoop-yarn-ui/app/routes/.gitkeep  |   0
 .../hadoop-yarn-ui/app/routes/application.js|  38 ---
 .../app/routes/cluster-overview.js  |  11 -
 .../hadoop-yarn-ui/app/routes/index.js  |  29 --
 .../app/routes/yarn-app-attempt.js  |  21 --
 

hadoop git commit: HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee

2016-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6c11a1191 -> 5057abc39


HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee

(cherry picked from commit ae48c496dce8d0eae4571fc64e6850d602bae688)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5057abc3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5057abc3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5057abc3

Branch: refs/heads/branch-2
Commit: 5057abc3901e5460ec455edbb4c078ca92fb5a7e
Parents: 6c11a11
Author: Kihwal Lee 
Authored: Thu Oct 27 14:18:10 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 27 14:18:10 2016 -0500

--
 .../java/org/apache/hadoop/hdfs/server/datanode/DataNode.java  | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5057abc3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 9aec106..f3f0f49 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -996,7 +996,7 @@ public class DataNode extends ReconfigurableBase
 
 // Is this by the DN user itself?
 assert dnUserName != null;
-if (callerUgi.getShortUserName().equals(dnUserName)) {
+if (callerUgi.getUserName().equals(dnUserName)) {
   return;
 }
 
@@ -1315,7 +1315,7 @@ public class DataNode extends ReconfigurableBase
 this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
 
 // Login is done by now. Set the DN user name.
-dnUserName = UserGroupInformation.getCurrentUser().getShortUserName();
+dnUserName = UserGroupInformation.getCurrentUser().getUserName();
 LOG.info("dnUserName = " + dnUserName);
 LOG.info("supergroup = " + supergroup);
 initIpcServer();
@@ -3322,4 +3322,4 @@ public class DataNode extends ReconfigurableBase
   void setBlockScanner(BlockScanner blockScanner) {
 this.blockScanner = blockScanner;
   }
-}
\ No newline at end of file
+}


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



hadoop git commit: HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee

2016-10-27 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk db4196599 -> ae48c496d


HDFS-11069. Tighten the authorization of datanode RPC. Contributed by Kihwal Lee


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ae48c496
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ae48c496
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ae48c496

Branch: refs/heads/trunk
Commit: ae48c496dce8d0eae4571fc64e6850d602bae688
Parents: db41965
Author: Kihwal Lee 
Authored: Thu Oct 27 14:17:16 2016 -0500
Committer: Kihwal Lee 
Committed: Thu Oct 27 14:17:16 2016 -0500

--
 .../java/org/apache/hadoop/hdfs/server/datanode/DataNode.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae48c496/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 416c138..9ceffc2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -989,7 +989,7 @@ public class DataNode extends ReconfigurableBase
 
 // Is this by the DN user itself?
 assert dnUserName != null;
-if (callerUgi.getShortUserName().equals(dnUserName)) {
+if (callerUgi.getUserName().equals(dnUserName)) {
   return;
 }
 
@@ -1348,7 +1348,7 @@ public class DataNode extends ReconfigurableBase
 this.blockPoolTokenSecretManager = new BlockPoolTokenSecretManager();
 
 // Login is done by now. Set the DN user name.
-dnUserName = UserGroupInformation.getCurrentUser().getShortUserName();
+dnUserName = UserGroupInformation.getCurrentUser().getUserName();
 LOG.info("dnUserName = " + dnUserName);
 LOG.info("supergroup = " + supergroup);
 initIpcServer();


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[30/50] [abbrv] hadoop git commit: YARN-5504. [YARN-3368] Fix YARN UI build pom.xml (Sreenath Somarajapuram via Sunil G)

2016-10-27 Thread wangda
YARN-5504. [YARN-3368] Fix YARN UI build pom.xml (Sreenath Somarajapuram via 
Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/46e3a213
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/46e3a213
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/46e3a213

Branch: refs/heads/YARN-3368
Commit: 46e3a2131b092f4982e25737aeb1269089096ad9
Parents: 8c1ed32
Author: sunilg 
Authored: Thu Aug 25 23:21:29 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 59 +---
 .../src/main/webapp/ember-cli-build.js  |  2 +-
 .../hadoop-yarn-ui/src/main/webapp/package.json |  3 +-
 3 files changed, 17 insertions(+), 47 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/46e3a213/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 2933a76..fca8d30 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -35,7 +35,7 @@
 node
 v0.12.2
 2.10.0
-false
+false
   
 
   
@@ -60,19 +60,20 @@
   
 
   
- maven-clean-plugin
- 3.0.0
- 
-false
-
-   
-  
${basedir}/src/main/webapp/bower_components
-   
-   
-  
${basedir}/src/main/webapp/node_modules
-   
-
- 
+maven-clean-plugin
+3.0.0
+
+  ${keep-ui-build-cache}
+  false
+  
+
+  
${basedir}/src/main/webapp/bower_components
+
+
+  ${basedir}/src/main/webapp/node_modules
+
+  
+
   
 
   
@@ -126,21 +127,6 @@
 
   
   
-generate-sources
-bower --allow-root install
-
-  exec
-
-
-  ${webappDir}
-  bower
-  
---allow-root
-install
-  
-
-  
-  
 ember build
 generate-sources
 
@@ -158,21 +144,6 @@
 
   
   
-ember test
-generate-resources
-
-  exec
-
-
-  ${skipTests}
-  ${webappDir}
-  ember
-  
-test
-  
-
-  
-  
 cleanup tmp
 generate-sources
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46e3a213/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
index d21cc3e..7736c75 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
@@ -22,7 +22,7 @@ var EmberApp = require('ember-cli/lib/broccoli/ember-app');
 
 module.exports = function(defaults) {
   var app = new EmberApp(defaults, {
-// Add options here
+hinting: false
   });
 
   
app.import("bower_components/datatables/media/css/jquery.dataTables.min.css");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/46e3a213/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
index baa473a..6a4eb16 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json
@@ -9,8 +9,7 @@
   },
   "scripts": {
 "build": "ember build",
-"start": "ember server",
-"test": "ember test"
+"start": "ember server"
   },
   "repository": "",
   "engines": {


-
To unsubscribe, e-mail: 

[41/50] [abbrv] hadoop git commit: YARN-4514. [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses. (Sunil G via wangda)

2016-10-27 Thread wangda
YARN-4514. [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS 
addresses. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d61d2bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d61d2bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d61d2bd

Branch: refs/heads/YARN-3368
Commit: 3d61d2bdeaa38e542bc93127ffca113fbf9b764f
Parents: 3b550eb
Author: Wangda Tan 
Authored: Sat Apr 16 23:04:45 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../src/main/webapp/app/adapters/abstract.js| 48 +
 .../main/webapp/app/adapters/cluster-info.js| 22 ++
 .../main/webapp/app/adapters/cluster-metric.js  | 22 ++
 .../webapp/app/adapters/yarn-app-attempt.js | 24 ++-
 .../src/main/webapp/app/adapters/yarn-app.js| 27 ++-
 .../webapp/app/adapters/yarn-container-log.js   | 10 ++-
 .../main/webapp/app/adapters/yarn-container.js  | 20 +++---
 .../main/webapp/app/adapters/yarn-node-app.js   | 24 +++
 .../webapp/app/adapters/yarn-node-container.js  | 24 +++
 .../src/main/webapp/app/adapters/yarn-node.js   | 23 +++---
 .../src/main/webapp/app/adapters/yarn-queue.js  | 22 ++
 .../main/webapp/app/adapters/yarn-rm-node.js| 21 ++
 .../hadoop-yarn-ui/src/main/webapp/app/app.js   |  4 +-
 .../src/main/webapp/app/config.js   |  5 +-
 .../src/main/webapp/app/index.html  |  1 +
 .../src/main/webapp/app/initializers/env.js | 29 
 .../src/main/webapp/app/initializers/hosts.js   | 28 
 .../src/main/webapp/app/services/env.js | 59 
 .../src/main/webapp/app/services/hosts.js   | 74 
 .../hadoop-yarn-ui/src/main/webapp/bower.json   | 25 +++
 .../src/main/webapp/config/configs.env  | 48 +
 .../src/main/webapp/config/default-config.js| 32 +
 .../src/main/webapp/config/environment.js   | 11 ++-
 .../src/main/webapp/ember-cli-build.js  | 10 ++-
 .../hadoop-yarn-ui/src/main/webapp/package.json | 35 -
 .../webapp/tests/unit/initializers/env-test.js  | 41 +++
 .../tests/unit/initializers/hosts-test.js   | 41 +++
 .../tests/unit/initializers/jquery-test.js  | 41 +++
 .../main/webapp/tests/unit/services/env-test.js | 30 
 .../webapp/tests/unit/services/hosts-test.js| 30 
 30 files changed, 637 insertions(+), 194 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d61d2bd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
new file mode 100644
index 000..c7e5c36
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/abstract.js
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+import Ember from 'ember';
+
+export default DS.JSONAPIAdapter.extend({
+  address: null, //Must be set by inheriting classes
+  restNameSpace: null, //Must be set by inheriting classes
+  serverName: null, //Must be set by inheriting classes
+
+  headers: {
+Accept: 'application/json'
+  },
+
+  host: Ember.computed("address", function () {
+var address = this.get("address");
+return this.get(`hosts.${address}`);
+  }),
+
+  namespace: Ember.computed("restNameSpace", function () {
+var serverName = this.get("restNameSpace");
+return this.get(`env.app.namespaces.${serverName}`);
+  }),
+
+  ajax: function(url, method, options) {
+options = options || {};
+options.crossDomain = true;
+options.xhrFields = {
+  withCredentials: true
+};
+options.targetServer = this.get('serverName');
+return this._super(url, method, 

[45/50] [abbrv] hadoop git commit: YARN-5698. [YARN-3368] Launch new YARN UI under hadoop web app port. (Sunil G via wangda)

2016-10-27 Thread wangda
YARN-5698. [YARN-3368] Launch new YARN UI under hadoop web app port. (Sunil G 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cb15feb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cb15feb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cb15feb

Branch: refs/heads/YARN-3368
Commit: 2cb15feb625d7612c6b6dadb813eb2362d32e31e
Parents: de183bb
Author: Wangda Tan 
Authored: Wed Oct 12 13:22:20 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop/yarn/conf/YarnConfiguration.java | 21 ++
 .../org/apache/hadoop/yarn/webapp/WebApps.java  |  8 +++
 .../src/main/resources/yarn-default.xml | 20 ++
 .../server/resourcemanager/ResourceManager.java | 68 +++-
 .../src/main/webapp/config/default-config.js|  4 +-
 5 files changed, 55 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cb15feb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 0134465..c16e1ea 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -266,25 +266,12 @@ public class YarnConfiguration extends Configuration {
   /**
* Enable YARN WebApp V2.
*/
-  public static final String RM_WEBAPP_UI2_ENABLE = RM_PREFIX
+  public static final String YARN_WEBAPP_UI2_ENABLE = "yarn."
   + "webapp.ui2.enable";
-  public static final boolean DEFAULT_RM_WEBAPP_UI2_ENABLE = false;
+  public static final boolean DEFAULT_YARN_WEBAPP_UI2_ENABLE = false;
 
-  /** The address of the RM web ui2 application. */
-  public static final String RM_WEBAPP_UI2_ADDRESS = RM_PREFIX
-  + "webapp.ui2.address";
-
-  public static final int DEFAULT_RM_WEBAPP_UI2_PORT = 8288;
-  public static final String DEFAULT_RM_WEBAPP_UI2_ADDRESS = "0.0.0.0:" +
-  DEFAULT_RM_WEBAPP_UI2_PORT;
-  
-  /** The https address of the RM web ui2 application.*/
-  public static final String RM_WEBAPP_UI2_HTTPS_ADDRESS =
-  RM_PREFIX + "webapp.ui2.https.address";
-
-  public static final int DEFAULT_RM_WEBAPP_UI2_HTTPS_PORT = 8290;
-  public static final String DEFAULT_RM_WEBAPP_UI2_HTTPS_ADDRESS = "0.0.0.0:"
-  + DEFAULT_RM_WEBAPP_UI2_HTTPS_PORT;
+  public static final String YARN_WEBAPP_UI2_WARFILE_PATH = "yarn."
+  + "webapp.ui2.war-file-path";
 
   public static final String RM_RESOURCE_TRACKER_ADDRESS =
 RM_PREFIX + "resource-tracker.address";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cb15feb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
index 53cb3ee..d3b37d9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
@@ -43,6 +43,7 @@ import 
org.apache.hadoop.security.http.RestCsrfPreventionFilter;
 import org.apache.hadoop.security.http.XFrameOptionsFilter;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
+import org.mortbay.jetty.webapp.WebAppContext;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -369,8 +370,15 @@ public class WebApps {
 }
 
 public WebApp start(WebApp webapp) {
+  return start(webapp, null);
+}
+
+public WebApp start(WebApp webapp, WebAppContext ui2Context) {
   WebApp webApp = build(webapp);
   HttpServer2 httpServer = webApp.httpServer();
+  if (ui2Context != null) {
+httpServer.addContext(ui2Context, true);
+  }
   try {
 httpServer.start();
 LOG.info("Web app " + name + " started at "

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cb15feb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--

[26/50] [abbrv] hadoop git commit: YARN-5598. [YARN-3368] Fix create-release to be able to generate bits for the new yarn-ui (Wangda Tan via Sunil G)

2016-10-27 Thread wangda
YARN-5598. [YARN-3368] Fix create-release to be able to generate bits for the 
new yarn-ui (Wangda Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f1ec061
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f1ec061
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f1ec061

Branch: refs/heads/YARN-3368
Commit: 7f1ec0616294373485158bcdd1610d19e6102f58
Parents: 9ec92d1
Author: sunilg 
Authored: Tue Sep 6 23:15:59 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 dev-support/bin/create-release |   2 +-
 dev-support/create-release.sh  | 144 
 dev-support/docker/Dockerfile  |   6 +-
 3 files changed, 6 insertions(+), 146 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1ec061/dev-support/bin/create-release
--
diff --git a/dev-support/bin/create-release b/dev-support/bin/create-release
index 0e0ab86..d40fffa 100755
--- a/dev-support/bin/create-release
+++ b/dev-support/bin/create-release
@@ -527,7 +527,7 @@ function makearelease
   # shellcheck disable=SC2046
   run_and_redirect "${LOGDIR}/mvn_install.log" \
 "${MVN}" "${MVN_ARGS[@]}" install \
-  -Pdist,src \
+  -Pdist,src,yarn-ui \
   "${signflags[@]}" \
   -DskipTests -Dtar $(hadoop_native_flags)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1ec061/dev-support/create-release.sh
--
diff --git a/dev-support/create-release.sh b/dev-support/create-release.sh
deleted file mode 100755
index 792a805..000
--- a/dev-support/create-release.sh
+++ /dev/null
@@ -1,144 +0,0 @@
-#!/bin/bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# Function to probe the exit code of the script commands, 
-# and stop in the case of failure with an contextual error 
-# message.
-run() {
-  echo "\$ ${@}"
-  "${@}"
-  exitCode=$?
-  if [[ $exitCode != 0 ]]; then
-echo
-echo "Failed! running ${@} in `pwd`"
-echo
-exit $exitCode
-  fi
-}
-
-doMD5() {
-  MD5CMD="md5sum"
-  which $MD5CMD
-  if [[ $? != 0 ]]; then
-MD5CMD="md5"
-  fi
-  run $MD5CMD ${1} > ${1}.md5
-}
-
-# If provided, the created release artifacts will be tagged with it 
-# (use RC#, i.e: RC0). Do not use a label to create the final release 
-# artifact.
-RC_LABEL=$1
-
-# Extract Hadoop version from POM
-HADOOP_VERSION=`cat pom.xml | grep "<version>" | head -1 | sed 's|^ *<version>||' | sed 's|</version>.*$||'`
-
-# Setup git
-GIT=${GIT:-git}
-
-echo
-echo "*"
-echo
-echo "Hadoop version to create release artifacts: ${HADOOP_VERSION}"
-echo 
-echo "Release Candidate Label: ${RC_LABEL}"
-echo
-echo "*"
-echo
-
-if [[ ! -z ${RC_LABEL} ]]; then
-  RC_LABEL="-${RC_LABEL}"
-fi
-
-# Get Maven command
-if [ -z "$MAVEN_HOME" ]; then
-  MVN=mvn
-else
-  MVN=$MAVEN_HOME/bin/mvn
-fi
-
-ARTIFACTS_DIR="target/artifacts"
-
-# git clean to clear any remnants from previous build
-run ${GIT} clean -xdf
-
-# mvn clean for sanity
-run ${MVN} clean
-
-# Create staging dir for release artifacts
-run mkdir -p ${ARTIFACTS_DIR}
-
-# Create RAT report
-run ${MVN} apache-rat:check
-
-# Create SRC and BIN tarballs for release,
-# Using 'install’ goal instead of 'package' so artifacts are available 
-# in the Maven local cache for the site generation
-run ${MVN} install -Pdist,src,native,yarn-ui -DskipTests -Dtar
-
-# Create site for release
-run ${MVN} site site:stage -Pdist -Psrc
-run mkdir -p target/staging/hadoop-project/hadoop-project-dist/hadoop-yarn
-run mkdir -p target/staging/hadoop-project/hadoop-project-dist/hadoop-mapreduce
-run cp ./hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html 
target/staging/hadoop-project/hadoop-project-dist/hadoop-common/
-run cp ./hadoop-common-project/hadoop-common/CHANGES.txt 

[43/50] [abbrv] hadoop git commit: YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki via wangda)

2016-10-27 Thread wangda
YARN-5509. Build error due to preparing 3.0.0-alpha2 deployment. (Kai Sasaki 
via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ea389310
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ea389310
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ea389310

Branch: refs/heads/YARN-3368
Commit: ea389310df89d17f04e3a541c2cb7bd7f270a69b
Parents: 08e3ab6
Author: Wangda Tan 
Authored: Thu Aug 11 14:59:14 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea389310/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 6d46fda..2933a76 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -20,12 +20,12 @@
   
 hadoop-yarn
 org.apache.hadoop
-3.0.0-alpha1-SNAPSHOT
+3.0.0-alpha2-SNAPSHOT
   
   4.0.0
   org.apache.hadoop
   hadoop-yarn-ui
-  3.0.0-alpha1-SNAPSHOT
+  3.0.0-alpha2-SNAPSHOT
   Apache Hadoop YARN UI
   ${packaging.type}
 


-
To unsubscribe, e-mail: common-commits-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-commits-h...@hadoop.apache.org



[50/50] [abbrv] hadoop git commit: YARN-5488. [YARN-3368] Applications table overflows beyond the page boundary(Harish Jaiprakash via Sunil G)

2016-10-27 Thread wangda
YARN-5488. [YARN-3368] Applications table overflows beyond the page 
boundary(Harish Jaiprakash via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8e9e3c2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8e9e3c2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8e9e3c2

Branch: refs/heads/YARN-3368
Commit: d8e9e3c2a476ece852c7fa4f3f6a55a273e90f71
Parents: ea38931
Author: sunilg 
Authored: Fri Aug 12 14:51:03 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../src/main/webapp/app/styles/app.css  |  4 +
 .../src/main/webapp/app/templates/yarn-app.hbs  | 98 ++--
 2 files changed, 54 insertions(+), 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8e9e3c2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
index a68a0ac..da5b4bf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css
@@ -273,3 +273,7 @@ li a.navigation-link.ember-view {
   right: 20px;
   top: 3px;
 }
+
+.x-scroll {
+  overflow-x: scroll;
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8e9e3c2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
index 49c4bfd..9e92fc1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
@@ -49,55 +49,57 @@
 
   
 Basic Info
-
-  
-
-  Application ID
-  Name
-  User
-  Queue
-  State
-  Final Status
-  Start Time
-  Elapsed Time
-  Finished Time
-  Priority
-  Progress
-  Is Unmanaged AM
-
-  
+
+  
+
+  
+Application ID
+Name
+User
+Queue
+State
+Final Status
+Start Time
+Elapsed Time
+Finished Time
+Priority
+Progress
+Is Unmanaged AM
+  
+
 
-  
-
-  {{model.app.id}}
-  {{model.app.appName}}
-  {{model.app.user}}
-  {{model.app.queue}}
-  {{model.app.state}}
-  
-
-  {{model.app.finalStatus}}
-
-  
-  {{model.app.startTime}}
-  {{model.app.elapsedTime}}
-  {{model.app.validatedFinishedTs}}
-  {{model.app.priority}}
-  
-
-  
-{{model.app.progress}}%
+
+  
+{{model.app.id}}
+{{model.app.appName}}
+{{model.app.user}}
+{{model.app.queue}}
+{{model.app.state}}
+
+  
+{{model.app.finalStatus}}
+  
+
+{{model.app.startTime}}
+{{model.app.elapsedTime}}
+{{model.app.validatedFinishedTs}}
+{{model.app.priority}}
+
+  
+
+  {{model.app.progress}}%
+
   
-
-  
-  {{model.app.unmanagedApplication}}
-
-  
-
+
+{{model.app.unmanagedApplication}}
+  
+
+  
+ 

[22/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
new file mode 100644
index 000..66bf54a
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -0,0 +1,207 @@
+
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd;>
+  
+hadoop-yarn
+org.apache.hadoop
+3.0.0-SNAPSHOT
+  
+  4.0.0
+  org.apache.hadoop
+  hadoop-yarn-ui
+  3.0.0-SNAPSHOT
+  Apache Hadoop YARN UI
+  ${packaging.type}
+
+  
+jar
+src/main/webapp
+node
+v0.12.2
+2.10.0
+false
+  
+
+  
+
+  
+  
+org.apache.rat
+apache-rat-plugin
+
+  
+src/main/webapp/node_modules/**/*
+src/main/webapp/bower_components/**/*
+src/main/webapp/jsconfig.json
+src/main/webapp/bower.json
+src/main/webapp/package.json
+src/main/webapp/testem.json
+src/main/webapp/public/assets/images/**/*
+src/main/webapp/public/robots.txt
+public/crossdomain.xml
+  
+
+  
+
+  
+ maven-clean-plugin
+ 3.0.0
+ 
+false
+
+   
+  
${basedir}/src/main/webapp/bower_components
+   
+   
+  
${basedir}/src/main/webapp/node_modules
+   
+
+ 
+  
+
+  
+
+  
+
+  yarn-ui
+
+  
+false
+  
+
+  
+war
+  
+
+  
+
+  
+  
+exec-maven-plugin
+org.codehaus.mojo
+
+  
+generate-sources
+npm install
+
+  exec
+
+
+  ${webappDir}
+  npm
+  
+install
+  
+
+  
+  
+generate-sources
+bower install
+
+  exec
+
+
+  ${webappDir}
+  bower
+  
+--allow-root
+install
+  
+
+  
+  
+generate-sources
+bower --allow-root install
+
+  exec
+
+
+  ${webappDir}
+  bower
+  
+--allow-root
+install
+  
+
+  
+  
+ember build
+generate-sources
+
+  exec
+
+
+  ${webappDir}
+  ember
+  
+build
+-prod
+--output-path
+${basedir}/target/dist
+  
+
+  
+  
+ember test
+generate-resources
+
+  exec
+
+
+  ${skipTests}
+  ${webappDir}
+  ember
+  
+test
+  
+
+  
+  
+cleanup tmp
+generate-sources
+
+  exec
+
+
+  ${webappDir}
+  rm
+  
+-rf
+tmp
+  
+
+  
+
+  
+
+  
+  
+org.apache.maven.plugins
+maven-war-plugin
+
+  ${basedir}/src/main/webapp/WEB-INF/web.xml
+  ${basedir}/target/dist
+
+  
+
+
+  
+
+  
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
deleted file mode 100644
index f591645..000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/robots.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-# http://www.robotstxt.org
-User-agent: *
-Disallow:


[34/50] [abbrv] hadoop git commit: YARN-5682. [YARN-3368] Fix maven build to keep all generated or downloaded files in target folder (Wangda Tan via Sunil G)

2016-10-27 Thread wangda
YARN-5682. [YARN-3368] Fix maven build to keep all generated or downloaded 
files in target folder (Wangda Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/de183bb9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/de183bb9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/de183bb9

Branch: refs/heads/YARN-3368
Commit: de183bb90e02398114cb54b0b1813f61da8eba1b
Parents: cec96f4
Author: sunilg 
Authored: Tue Oct 4 21:07:42 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 54 
 hadoop-yarn-project/hadoop-yarn/pom.xml |  2 +-
 2 files changed, 34 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/de183bb9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index b750a73..440aca9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -31,7 +31,7 @@
 
   
 war
-src/main/webapp
+${basedir}/target/src/main/webapp
 node
 v0.12.2
 2.10.0
@@ -84,10 +84,10 @@
   false
   
 
-  
${basedir}/src/main/webapp/bower_components
+  ${webappTgtDir}/bower_components
 
 
-  ${basedir}/src/main/webapp/node_modules
+  ${webappTgtDir}/node_modules
 
   
 
@@ -109,6 +109,33 @@
 
   
 
+  
+  
+org.apache.maven.plugins
+maven-antrun-plugin
+
+  
+prepare-source-code
+generate-sources
+
+  run
+
+
+  
+
+  
+
+
+
+  
+
+  
+
+  
+
+  
+
+
   
   
 exec-maven-plugin
@@ -121,7 +148,7 @@
   exec
 
 
-  ${webappDir}
+  ${webappTgtDir}
   npm
   
 install
@@ -135,7 +162,7 @@
   exec
 
 
-  ${webappDir}
+  ${webappTgtDir}
   bower
   
 --allow-root
@@ -150,7 +177,7 @@
   exec
 
 
-  ${webappDir}
+  ${webappTgtDir}
   ember
   
 build
@@ -160,21 +187,6 @@
   
 
   
-  
-cleanup tmp
-generate-sources
-
-  exec
-
-
-  ${webappDir}
-  rm
-  
--rf
-tmp
-  
-
-  
 
   
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/de183bb9/hadoop-yarn-project/hadoop-yarn/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/pom.xml
index ca78ef8..70b68d7 100644
--- a/hadoop-yarn-project/hadoop-yarn/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/pom.xml
@@ -230,7 +230,6 @@
   
 
   
-hadoop-yarn-ui
 hadoop-yarn-api
 hadoop-yarn-common
 hadoop-yarn-server
@@ -238,5 +237,6 @@
 hadoop-yarn-site
 hadoop-yarn-client
 hadoop-yarn-registry
+hadoop-yarn-ui
   
 





[08/50] [abbrv] hadoop git commit: YARN-5500. [YARN-3368] ‘Master node’ link under application tab is broken. (Akhil P B Tan via Sunil G)

2016-10-27 Thread wangda
YARN-5500. [YARN-3368] ‘Master node’ link under application tab is broken. 
(Akhil P B Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ebe87bcd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ebe87bcd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ebe87bcd

Branch: refs/heads/YARN-3368
Commit: ebe87bcd4993117074262c5ce9963aed8ff5ad3e
Parents: b5ef411
Author: sunilg 
Authored: Thu Oct 27 14:19:44 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../src/main/webapp/app/controllers/yarn-app.js | 9 -
 .../src/main/webapp/app/templates/yarn-app.hbs  | 4 ++--
 2 files changed, 10 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebe87bcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
index f6b9404..309c895 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
@@ -33,6 +33,13 @@ export default Ember.Controller.extend({
   routeName: 'yarn-app',
   model: appId
 }];
-  })
+  }),
 
+  amHostHttpAddressFormatted: function() {
+var amHostAddress = this.get('model.app.amHostHttpAddress');
+if (amHostAddress.indexOf('http://') < 0) {
+  amHostAddress = 'http://' + amHostAddress;
+}
+return amHostAddress;
+  }.property('model.app.amHostHttpAddress')
 });
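
For reference, a self-contained sketch of the normalization the new computed property performs; the function name and sample inputs below are illustrative and not part of the commit:

    // Standalone illustration: prepend a scheme only when the AM host
    // address does not already carry one.
    function formatAmHostHttpAddress(amHostAddress) {
      if (amHostAddress.indexOf('http://') < 0) {
        amHostAddress = 'http://' + amHostAddress;
      }
      return amHostAddress;
    }

    console.log(formatAmHostHttpAddress('nm-host:8042'));        // -> http://nm-host:8042
    console.log(formatAmHostHttpAddress('http://nm-host:8042')); // unchanged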

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ebe87bcd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
index 9e92fc1..acf00d1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
@@ -172,8 +172,8 @@
 
   
   
-Link
-Link
+Link
+Link
 {{model.app.amNodeLabelExpression}}
   
   





[01/50] [abbrv] hadoop git commit: YARN-5420. Delete org.apache.hadoop.yarn.server.resourcemanager.resource.Priority as its not necessary. Contributed by Sunil G. [Forced Update!]

2016-10-27 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/YARN-3368 2fa540e86 -> 29d7106f6 (forced update)


YARN-5420. Delete 
org.apache.hadoop.yarn.server.resourcemanager.resource.Priority as its not 
necessary. Contributed by Sunil G.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b3c15e4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b3c15e4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b3c15e4e

Branch: refs/heads/YARN-3368
Commit: b3c15e4ef763ebc4b033c686114fe627350824ac
Parents: 060558c
Author: Naganarasimha 
Authored: Thu Oct 27 18:22:07 2016 +0530
Committer: Naganarasimha 
Committed: Thu Oct 27 18:22:07 2016 +0530

--
 .../resourcemanager/resource/Priority.java  | 31 ---
 .../resourcemanager/TestResourceManager.java|  6 +--
 ...estProportionalCapacityPreemptionPolicy.java |  4 +-
 .../scheduler/TestSchedulerHealth.java  | 16 ++--
 .../capacity/TestCapacityScheduler.java | 42 +---
 .../scheduler/fifo/TestFifoScheduler.java   | 12 ++
 6 files changed, 23 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3c15e4e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Priority.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Priority.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Priority.java
deleted file mode 100644
index f098806..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/Priority.java
+++ /dev/null
@@ -1,31 +0,0 @@
-/**
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-
-package org.apache.hadoop.yarn.server.resourcemanager.resource;
-
-import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
-
-public class Priority {
-  
-  public static org.apache.hadoop.yarn.api.records.Priority create(int prio) {
-org.apache.hadoop.yarn.api.records.Priority priority = 
RecordFactoryProvider.getRecordFactory(null).newRecordInstance(org.apache.hadoop.yarn.api.records.Priority.class);
-priority.setPriority(prio);
-return priority;
-  }
-
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3c15e4e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
index 3b59417..ad8c335 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceManager.java
@@ -117,8 +117,7 @@ public class TestResourceManager {
 // Application resource requirements
 final int memory1 = 1024;
 Resource capability1 = Resources.createResource(memory1, 1);
-Priority priority1 = 
-  
org.apache.hadoop.yarn.server.resourcemanager.resource.Priority.create(1);
+Priority priority1 = Priority.newInstance(1);
 

[21/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
new file mode 100644
index 000..f7ec020
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -0,0 +1,275 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Component.extend({
+  // Map: 
+  map : undefined,
+
+  // Normalized data for d3
+  treeData: undefined,
+
+  // folded queues, folded[] == true means  is folded
+  foldedQueues: { },
+
+  // maxDepth
+  maxDepth: 0,
+
+  // num of leaf queue, folded queue is treated as leaf queue
+  numOfLeafQueue: 0,
+
+  // mainSvg
+  mainSvg: undefined,
+
+  // Init data
+  initData: function() {
+this.map = { };
+this.treeData = { };
+this.maxDepth = 0;
+this.numOfLeafQueue = 0;
+
+this.get("model")
+  .forEach(function(o) {
+this.map[o.id] = o;
+  }.bind(this));
+
+var selected = this.get("selected");
+
+this.initQueue("root", 1, this.treeData);
+  },
+
+  // get Children array of given queue
+  getChildrenNamesArray: function(q) {
+var namesArr = [];
+
+// Folded queue's children is empty
+if (this.foldedQueues[q.get("name")]) {
+  return namesArr;
+}
+
+var names = q.get("children");
+if (names) {
+  names.forEach(function(name) {
+namesArr.push(name);
+  });
+}
+
+return namesArr;
+  },
+
+  // Init queues
+  initQueue: function(queueName, depth, node) {
+if ((!queueName) || (!this.map[queueName])) {
+  // Queue is not existed
+  return;
+}
+
+if (depth > this.maxDepth) {
+  this.maxDepth = this.maxDepth + 1;
+}
+
+var queue = this.map[queueName];
+
+var names = this.getChildrenNamesArray(queue);
+
+node.name = queueName;
+node.parent = queue.get("parent");
+node.queueData = queue;
+
+if (names.length > 0) {
+  node.children = [];
+
+  names.forEach(function(name) {
+var childQueueData = {};
+node.children.push(childQueueData);
+this.initQueue(name, depth + 1, childQueueData);
+  }.bind(this));
+} else {
+  this.numOfLeafQueue = this.numOfLeafQueue + 1;
+}
+  },
+
+  update: function(source, root, tree, diagonal) {
+var duration = 300;
+var i = 0;
+
+// Compute the new tree layout.
+var nodes = tree.nodes(root).reverse();
+var links = tree.links(nodes);
+
+// Normalize for fixed-depth.
+nodes.forEach(function(d) { d.y = d.depth * 200; });
+
+// Update the nodes…
+var node = this.mainSvg.selectAll("g.node")
+  .data(nodes, function(d) { return d.id || (d.id = ++i); });
+
+// Enter any new nodes at the parent's previous position.
+var nodeEnter = node.enter().append("g")
+  .attr("class", "node")
+  .attr("transform", function(d) { return "translate(" + source.y0 + "," + 
source.x0 + ")"; })
+  .on("click", function(d,i){
+if (d.queueData.get("name") != this.get("selected")) {
+document.location.href = "yarnQueue/" + d.queueData.get("name");
+}
+  }.bind(this));
+  // .on("click", click);
+
+nodeEnter.append("circle")
+  .attr("r", 1e-6)
+  .style("fill", function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap <= 60.0) {
+  return "LimeGreen";
+} else if (usedCap <= 100.0) {
+  return "DarkOrange";
+} else {
+  return "LightCoral";
+}
+  });
+
+// append percentage
+nodeEnter.append("text")
+  .attr("x", function(d) { return 0; })
+  .attr("dy", ".35em")
+  .attr("text-anchor", function(d) { return "middle"; })
+  .text(function(d) {
+var usedCap = d.queueData.get("usedCapacity");
+if (usedCap >= 100.0) {
+

[03/50] [abbrv] hadoop git commit: Revert "HADOOP-13514. Upgrade maven surefire plugin to 2.19.1. Contributed by Ewan Higgs."

2016-10-27 Thread wangda
Revert "HADOOP-13514. Upgrade maven surefire plugin to 2.19.1. Contributed by 
Ewan Higgs."

This reverts commit dbd205762ef2cba903b9bd9335bb9a5964d51f74.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b4395175
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b4395175
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b4395175

Branch: refs/heads/YARN-3368
Commit: b43951750254290b0aaec3641cff3061a3927991
Parents: 94e77e9
Author: Steve Loughran 
Authored: Thu Oct 27 15:18:33 2016 +0200
Committer: Steve Loughran 
Committed: Thu Oct 27 15:18:33 2016 +0200

--
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b4395175/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index c7c5a72..f914f92 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -107,7 +107,7 @@
 
 
 -Xmx2048m 
-XX:+HeapDumpOnOutOfMemoryError
-2.19.1
+2.17
 
${maven-surefire-plugin.version}
 
${maven-surefire-plugin.version}
 





[27/50] [abbrv] hadoop git commit: YARN-5019. [YARN-3368] Change urls in new YARN ui from camel casing to hyphens. (Sunil G via wangda)

2016-10-27 Thread wangda
YARN-5019. [YARN-3368] Change urls in new YARN ui from camel casing to hyphens. 
(Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/938e1e0e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/938e1e0e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/938e1e0e

Branch: refs/heads/YARN-3368
Commit: 938e1e0ee63cfbdd3b3ee4dfc49816597d056ec0
Parents: 3d61d2b
Author: Wangda Tan 
Authored: Mon May 9 11:29:59 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../main/webapp/app/components/tree-selector.js |  4 +--
 .../main/webapp/app/controllers/application.js  | 16 +-
 .../main/webapp/app/helpers/log-files-comma.js  |  2 +-
 .../src/main/webapp/app/helpers/node-link.js|  2 +-
 .../src/main/webapp/app/helpers/node-menu.js| 12 
 .../main/webapp/app/models/yarn-app-attempt.js  |  2 +-
 .../src/main/webapp/app/router.js   | 32 ++--
 .../src/main/webapp/app/routes/index.js |  2 +-
 .../main/webapp/app/routes/yarn-app-attempt.js  |  6 ++--
 .../src/main/webapp/app/routes/yarn-app.js  |  4 +--
 .../src/main/webapp/app/routes/yarn-apps.js |  2 +-
 .../webapp/app/routes/yarn-container-log.js |  2 +-
 .../src/main/webapp/app/routes/yarn-node-app.js |  2 +-
 .../main/webapp/app/routes/yarn-node-apps.js|  2 +-
 .../webapp/app/routes/yarn-node-container.js|  2 +-
 .../webapp/app/routes/yarn-node-containers.js   |  2 +-
 .../src/main/webapp/app/routes/yarn-node.js |  4 +--
 .../src/main/webapp/app/routes/yarn-nodes.js|  2 +-
 .../src/main/webapp/app/routes/yarn-queue.js|  6 ++--
 .../main/webapp/app/routes/yarn-queues/index.js |  2 +-
 .../app/routes/yarn-queues/queues-selector.js   |  2 +-
 .../app/templates/components/app-table.hbs  |  4 +--
 .../webapp/app/templates/yarn-container-log.hbs |  2 +-
 .../main/webapp/app/templates/yarn-node-app.hbs |  4 +--
 .../webapp/app/templates/yarn-node-apps.hbs |  4 +--
 .../app/templates/yarn-node-container.hbs   |  2 +-
 .../app/templates/yarn-node-containers.hbs  |  4 +--
 .../src/main/webapp/app/templates/yarn-node.hbs |  2 +-
 28 files changed, 66 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/938e1e0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
index f7ec020..698c253 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -126,7 +126,7 @@ export default Ember.Component.extend({
   .attr("transform", function(d) { return "translate(" + source.y0 + "," + 
source.x0 + ")"; })
   .on("click", function(d,i){
 if (d.queueData.get("name") != this.get("selected")) {
-document.location.href = "yarnQueue/" + d.queueData.get("name");
+document.location.href = "yarn-queue/" + d.queueData.get("name");
 }
   }.bind(this));
   // .on("click", click);
@@ -176,7 +176,7 @@ export default Ember.Component.extend({
   .attr("r", 20)
   .attr("href", 
 function(d) {
-  return "yarnQueues/" + d.queueData.get("name");
+  return "yarn-queues/" + d.queueData.get("name");
 })
   .style("stroke", function(d) {
 if (d.queueData.get("name") == this.get("selected")) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/938e1e0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
index 3c68365..2effb13 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/application.js
@@ -29,25 +29,25 @@ export default Ember.Controller.extend({
   outputMainMenu: function(){
 var path = this.get('currentPath');
 var html = 'Queues' +
+html = html + '>Queues' +
 '(current)
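
To make the tree-selector change above concrete, here is a minimal Ember router sketch using the hyphenated route names; the route list is assumed from the renamed files in this patch, not copied from it:

    // Illustrative router.js fragment: route names now match the hyphenated
    // URLs (yarn-queue/<name> rather than yarnQueue/<name>).
    import Ember from 'ember';
    import config from './config/environment';

    const Router = Ember.Router.extend({
      location: config.locationType
    });

    Router.map(function() {
      this.route('yarn-apps');
      this.route('yarn-nodes');
      this.route('yarn-queue', { path: '/yarn-queue/:queue_name' });
      this.route('yarn-queues', { path: '/yarn-queues/:queue_name' });
    });

    export default Router;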

[06/50] [abbrv] hadoop git commit: YARN-5308. FairScheduler: Move continuous scheduling related tests to TestContinuousScheduling (Kai Sasaki via Varun Saxena)

2016-10-27 Thread wangda
YARN-5308. FairScheduler: Move continuous scheduling related tests to 
TestContinuousScheduling (Kai Sasaki via Varun Saxena)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79aeddc8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79aeddc8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79aeddc8

Branch: refs/heads/YARN-3368
Commit: 79aeddc88f0e71f0031d5f39fded172de0b29a2e
Parents: ac35ee9
Author: Varun Saxena 
Authored: Fri Oct 28 00:29:53 2016 +0530
Committer: Varun Saxena 
Committed: Fri Oct 28 00:34:50 2016 +0530

--
 .../fair/TestContinuousScheduling.java  | 189 ++-
 .../scheduler/fair/TestFairScheduler.java   | 157 ---
 2 files changed, 187 insertions(+), 159 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79aeddc8/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
index 6188246..5964d2f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestContinuousScheduling.java
@@ -22,20 +22,32 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.apache.hadoop.yarn.event.AsyncDispatcher;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.server.resourcemanager.MockNodes;
 import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerRequestKey;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeAddedSchedulerEvent;
 
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeRemovedSchedulerEvent;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.NodeUpdateSchedulerEvent;
 import org.apache.hadoop.yarn.util.ControlledClock;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
 import org.junit.Assert;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.isA;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.spy;
+
 import org.junit.Before;
 import org.junit.Test;
 
@@ -43,18 +55,22 @@ import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 
 public class TestContinuousScheduling extends FairSchedulerTestBase {
   private ControlledClock mockClock;
+  private static int delayThresholdTimeMs = 1000;
 
   @Override
   public Configuration createConfiguration() {
 Configuration conf = super.createConfiguration();
 conf.setBoolean(
 FairSchedulerConfiguration.CONTINUOUS_SCHEDULING_ENABLED, true);
-conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_NODE_MS, 100);
-conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_RACK_MS, 100);
+conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_NODE_MS,
+delayThresholdTimeMs);
+conf.setInt(FairSchedulerConfiguration.LOCALITY_DELAY_RACK_MS,
+delayThresholdTimeMs);
 return conf;
   }
 
@@ -167,6 +183,175 @@ public class TestContinuousScheduling extends 
FairSchedulerTestBase {
 Assert.assertEquals(2, nodes.size());
   }
 
+  @Test
+  public void testWithNodeRemoved() throws 

[47/50] [abbrv] hadoop git commit: YARN-4849. Addendum patch to fix license. (Wangda Tan via Sunil G)

2016-10-27 Thread wangda
YARN-4849. Addendum patch to fix license. (Wangda Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8c1ed324
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8c1ed324
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8c1ed324

Branch: refs/heads/YARN-3368
Commit: 8c1ed324909fd26ce5e7771db8a8a1cc145e5cb0
Parents: a88bb85
Author: sunilg 
Authored: Wed Aug 24 16:28:34 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 LICENSE.txt | 84 ++--
 1 file changed, 51 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8c1ed324/LICENSE.txt
--
diff --git a/LICENSE.txt b/LICENSE.txt
index 8f418af..04d2daa 100644
--- a/LICENSE.txt
+++ b/LICENSE.txt
@@ -2315,35 +2315,53 @@ jamon-runtime 2.3.1
  Your choice of the MPL or the alternative licenses, if any, specified
  by the Initial Developer in the file described in Exhibit A.
 
-For Apache Hadoop YARN Web UI component: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/
--
-The Apache Hadoop YARN Web UI component bundles the following files under the 
MIT License:
-
- - ember v2.2.0 (http://emberjs.com/) - Copyright (c) 2014 Yehuda Katz, Tom 
Dale and Ember.js contributors
- - ember-data v2.1.0 (https://github.com/emberjs/data) - Copyright (C) 
2011-2014 Tilde, Inc. and contributors, Portions Copyright (C) 2011 
LivingSocial Inc.
- - ember-resolver v2.0.3 (https://github.com/ember-cli/ember-resolver) - 
Copyright (c) 2013 Stefan Penner and Ember App Kit Contributors
- - bootstrap v3.3.6 (http://getbootstrap.com) - Copyright (c) 2011-2014 
Twitter, Inc
- - jquery v2.1.4 (http://jquery.org) - Copyright 2005, 2014 jQuery Foundation 
and other contributors
- - jquery-ui v1.11.4 (http://jqueryui.com/) - Copyright 2014 jQuery Foundation 
and other contributors
- - datatables v1.10.8 (https://datatables.net/)
- - moment v2.10.6 (http://momentjs.com/) - Copyright (c) 2011-2015 Tim Wood, 
Iskren Chernev, Moment.js contributors
- - em-helpers v0.5.8 (https://github.com/sreenaths/em-helpers)
- - ember-array-contains-helper v1.0.2 
(https://github.com/bmeurant/ember-array-contains-helper)
- - ember-cli-app-version v0.5.8 
(https://github.com/EmberSherpa/ember-cli-app-version) - Authored by Taras 
Mankovski 
- - ember-cli-babel v5.1.6 (https://github.com/babel/ember-cli-babel) - 
Authored by Stefan Penner 
- - ember-cli-content-security-policy v0.4.0 
(https://github.com/rwjblue/ember-cli-content-security-policy)
- - ember-cli-dependency-checker v1.2.0 
(https://github.com/quaertym/ember-cli-dependency-checker) - Authored by Emre 
Unal
- - ember-cli-htmlbars v1.0.2 (https://github.com/ember-cli/ember-cli-htmlbars) 
- Authored by Robert Jackson 
- - ember-cli-htmlbars-inline-precompile v0.3.1 
(https://github.com/pangratz/ember-cli-htmlbars-inline-precompile) - Authored 
by Clemens Müller 
- - ember-cli-ic-ajax v0.2.1 (https://github.com/rwjblue/ember-cli-ic-ajax) - 
Authored by Robert Jackson 
- - ember-cli-inject-live-reload v1.4.0 
(https://github.com/rwjblue/ember-cli-inject-live-reload) - Authored by Robert 
Jackson 
- - ember-cli-qunit v1.2.1 (https://github.com/ember-cli/ember-cli-qunit) - 
Authored by Robert Jackson 
- - ember-cli-release v0.2.8 (https://github.com/lytics/ember-cli-release) - 
Authored by Robert Jackson 
- - ember-cli-sri v1.2.1 (https://github.com/jonathanKingston/ember-cli-sri) - 
Authored by Jonathan Kingston
- - ember-cli-uglify v1.2.0 (github.com/ember-cli/ember-cli-uglify) - Authored 
by Robert Jackson 
- - ember-d3 v0.1.0 (https://github.com/brzpegasus/ember-d3) - Authored by 
Estelle DeBlois
- - ember-truth-helpers v1.2.0 
(https://github.com/jmurphyau/ember-truth-helpers)
- - select2 v4.0.0 (https://select2.github.io/)
+The binary distribution of this product bundles these dependencies under the
+following license:
+bootstrap v3.3.6
+broccoli-asset-rev v2.4.2
+broccoli-funnel v1.0.1
+datatables v1.10.8
+em-helpers v0.5.13
+em-table v0.1.6
+ember v2.2.0
+ember-array-contains-helper v1.0.2
+ember-bootstrap v0.5.1
+ember-cli v1.13.13
+ember-cli-app-version v1.0.0
+ember-cli-babel v5.1.6
+ember-cli-content-security-policy v0.4.0
+ember-cli-dependency-checker v1.2.0
+ember-cli-htmlbars v1.0.2
+ember-cli-htmlbars-inline-precompile v0.3.1
+ember-cli-ic-ajax v0.2.1
+ember-cli-inject-live-reload v1.4.0
+ember-cli-jquery-ui v0.0.20
+ember-cli-qunit v1.2.1

[44/50] [abbrv] hadoop git commit: YARN-3334. [YARN-3368] Introduce REFRESH button in various UI pages (Sreenath Somarajapuram via Sunil G)

2016-10-27 Thread wangda
YARN-3334. [YARN-3368] Introduce REFRESH button in various UI pages (Sreenath 
Somarajapuram via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/08e3ab69
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/08e3ab69
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/08e3ab69

Branch: refs/heads/YARN-3368
Commit: 08e3ab69f5120221791c137556f2073f171ce46e
Parents: e5a160c
Author: sunilg 
Authored: Wed Aug 10 06:53:13 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../app/components/app-usage-donut-chart.js |  5 ---
 .../src/main/webapp/app/components/bar-chart.js |  4 +-
 .../webapp/app/components/breadcrumb-bar.js | 31 ++
 .../main/webapp/app/components/donut-chart.js   |  8 ++--
 .../app/components/queue-usage-donut-chart.js   |  2 +-
 .../app/controllers/yarn-container-log.js   | 40 ++
 .../webapp/app/controllers/yarn-node-app.js | 36 
 .../src/main/webapp/app/routes/abstract.js  | 32 +++
 .../main/webapp/app/routes/cluster-overview.js  | 12 +-
 .../main/webapp/app/routes/yarn-app-attempt.js  |  9 +++-
 .../main/webapp/app/routes/yarn-app-attempts.js |  8 +++-
 .../src/main/webapp/app/routes/yarn-app.js  | 11 -
 .../src/main/webapp/app/routes/yarn-apps.js |  9 +++-
 .../webapp/app/routes/yarn-container-log.js | 10 -
 .../src/main/webapp/app/routes/yarn-node-app.js |  8 +++-
 .../main/webapp/app/routes/yarn-node-apps.js|  8 +++-
 .../webapp/app/routes/yarn-node-container.js|  8 +++-
 .../webapp/app/routes/yarn-node-containers.js   |  8 +++-
 .../src/main/webapp/app/routes/yarn-node.js |  9 +++-
 .../src/main/webapp/app/routes/yarn-nodes.js|  9 +++-
 .../main/webapp/app/routes/yarn-queue-apps.js   | 12 --
 .../src/main/webapp/app/routes/yarn-queue.js| 14 ---
 .../src/main/webapp/app/routes/yarn-queues.js   | 14 ---
 .../src/main/webapp/app/styles/app.css  |  6 +++
 .../webapp/app/templates/cluster-overview.hbs   |  4 +-
 .../app/templates/components/breadcrumb-bar.hbs | 22 ++
 .../webapp/app/templates/yarn-app-attempt.hbs   |  4 +-
 .../webapp/app/templates/yarn-app-attempts.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-app.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-apps.hbs |  4 +-
 .../webapp/app/templates/yarn-container-log.hbs |  2 +
 .../main/webapp/app/templates/yarn-node-app.hbs |  2 +
 .../webapp/app/templates/yarn-node-apps.hbs |  4 +-
 .../app/templates/yarn-node-container.hbs   |  4 +-
 .../app/templates/yarn-node-containers.hbs  |  4 +-
 .../src/main/webapp/app/templates/yarn-node.hbs |  4 +-
 .../main/webapp/app/templates/yarn-nodes.hbs|  4 +-
 .../webapp/app/templates/yarn-queue-apps.hbs|  4 +-
 .../main/webapp/app/templates/yarn-queue.hbs|  4 +-
 .../main/webapp/app/templates/yarn-queues.hbs   |  4 +-
 .../components/breadcrumb-bar-test.js   | 43 
 .../unit/controllers/yarn-container-log-test.js | 30 ++
 .../unit/controllers/yarn-node-app-test.js  | 30 ++
 43 files changed, 417 insertions(+), 77 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/08e3ab69/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
index 0baf630..90f41fc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/app-usage-donut-chart.js
@@ -26,7 +26,6 @@ export default BaseUsageDonutChart.extend({
   colors: d3.scale.category20().range(),
 
   draw: function() {
-this.initChart();
 var usageByApps = [];
 var avail = 100;
 
@@ -60,8 +59,4 @@ export default BaseUsageDonutChart.extend({
 this.renderDonutChart(usageByApps, this.get("title"), 
this.get("showLabels"),
   this.get("middleLabel"), "100%", "%");
   },
-
-  didInsertElement: function() {
-this.draw();
-  },
 })
\ No newline at end of file
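
The donut-chart components above stop drawing themselves in didInsertElement because rendering is now driven by the routes; below is a hypothetical sketch of the route-level refresh action this patch introduces (see the routes/abstract.js entry in the file list), with names assumed rather than copied:

    // Sketch only: a shared route whose refresh action re-runs the model hooks,
    // so charts and tables backed by the model are redrawn with fresh data.
    import Ember from 'ember';

    export default Ember.Route.extend({
      actions: {
        refresh() {
          this.refresh(); // Ember.Route#refresh reloads the route's model
        }
      }
    });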

http://git-wip-us.apache.org/repos/asf/hadoop/blob/08e3ab69/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/bar-chart.js
 

[15/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/error.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/error.hbs 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/error.hbs
new file mode 100644
index 000..c546bf7
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/error.hbs
@@ -0,0 +1,19 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+Sorry, Error Occured.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/notfound.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/notfound.hbs 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/notfound.hbs
new file mode 100644
index 000..588ea44
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/notfound.hbs
@@ -0,0 +1,20 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}
+
+404, Not Found
+Please Check your URL

http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-apps.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-apps.hbs 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-apps.hbs
index e58d6bd..3a79080 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-apps.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-apps.hbs
@@ -1,3 +1,3 @@
 {{app-table table-id="apps-table" arr=model}}
-{{simple-table table-id="apps-table" bFilter=true colTypes="elapsed-time" 
colTargets="7"}}
-{{outlet}}
\ No newline at end of file
+{{simple-table table-id="apps-table" bFilter=true colsOrder="0,desc" 
colTypes="natural elapsed-time" colTargets="0 7"}}
+{{outlet}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-container-log.hbs
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-container-log.hbs
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-container-log.hbs
new file mode 100644
index 000..9cc3b0f
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/templates/yarn-container-log.hbs
@@ -0,0 +1,36 @@
+{{!--
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+--}}

[17/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
deleted file mode 100644
index 5877589..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-node-test.js
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-node', 'Unit | Model | Node', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.totalVmemAllocatedContainersMB);
-  assert.ok(model.vmemCheckEnabled);
-  assert.ok(model.pmemCheckEnabled);
-  assert.ok(model.nodeHealthy);
-  assert.ok(model.lastNodeUpdateTime);
-  assert.ok(model.healthReport);
-  assert.ok(model.nmStartupTime);
-  assert.ok(model.nodeManagerBuildVersion);
-  assert.ok(model.hadoopBuildVersion);
-});
-
-test('test fields', function(assert) {
-  let model = this.subject();
-
-  assert.expect(4);
-  Ember.run(function () {
-model.set("totalVmemAllocatedContainersMB", 4096);
-model.set("totalPmemAllocatedContainersMB", 2048);
-model.set("totalVCoresAllocatedContainers", 4);
-model.set("hadoopBuildVersion", "3.0.0-SNAPSHOT");
-assert.equal(model.get("totalVmemAllocatedContainersMB"), 4096);
-assert.equal(model.get("totalPmemAllocatedContainersMB"), 2048);
-assert.equal(model.get("totalVCoresAllocatedContainers"), 4);
-assert.equal(model.get("hadoopBuildVersion"), "3.0.0-SNAPSHOT");
-  });
-});
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
deleted file mode 100644
index 4fd2517..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/models/yarn-rm-node-test.js
+++ /dev/null
@@ -1,95 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-import { moduleForModel, test } from 'ember-qunit';
-
-moduleForModel('yarn-rm-node', 'Unit | Model | RMNode', {
-  // Specify the other units that are required for this test.
-  needs: []
-});
-
-test('Basic creation test', function(assert) {
-  let model = this.subject();
-
-  assert.ok(model);
-  assert.ok(model._notifyProperties);
-  assert.ok(model.didLoad);
-  assert.ok(model.rack);
-  assert.ok(model.state);
-  assert.ok(model.nodeHostName);
-  assert.ok(model.nodeHTTPAddress);
-  assert.ok(model.lastHealthUpdate);
-  assert.ok(model.healthReport);
-  assert.ok(model.numContainers);
-  assert.ok(model.usedMemoryMB);
-  assert.ok(model.availMemoryMB);
-  assert.ok(model.usedVirtualCores);
-  assert.ok(model.availableVirtualCores);
-  assert.ok(model.version);
-  assert.ok(model.nodeLabels);
-  

[14/50] [abbrv] hadoop git commit: YARN-4517. Add nodes page and fix bunch of license issues. (Varun Saxena via wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
new file mode 100644
index 000..21a715c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-app-test.js
@@ -0,0 +1,102 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import { moduleFor, test } from 'ember-qunit';
+
+moduleFor('serializer:yarn-node-app', 'Unit | Serializer | NodeApp', {
+});
+
+test('Basic creation test', function(assert) {
+  let serializer = this.subject();
+
+  assert.ok(serializer);
+  assert.ok(serializer.normalizeSingleResponse);
+  assert.ok(serializer.normalizeArrayResponse);
+  assert.ok(serializer.internalNormalizeSingleResponse);
+});
+
+test('normalizeArrayResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+apps: {
+  app: [{
+id:"application_1456251210105_0001", state:"FINISHED", user:"root"
+  },{
+id:"application_1456251210105_0002", state:"RUNNING",user:"root",
+containerids:["container_e38_1456251210105_0002_01_01",
+"container_e38_1456251210105_0002_01_02"]
+  }]
+}
+  };
+  assert.expect(15);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 2);
+  assert.equal(response.data[0].attributes.containers, undefined);
+  assert.equal(response.data[1].attributes.containers.length, 2);
+  assert.deepEqual(response.data[1].attributes.containers,
+  payload.apps.app[1].containerids);
+  for (var i = 0; i < 2; i++) {
+assert.equal(response.data[i].type, modelClass.modelName);
+assert.equal(response.data[i].id, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.appId, payload.apps.app[i].id);
+assert.equal(response.data[i].attributes.state, payload.apps.app[i].state);
+assert.equal(response.data[i].attributes.user, payload.apps.app[i].user);
+  }
+});
+
+test('normalizeArrayResponse no apps test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = { apps: null };
+  assert.expect(5);
+  var response =
+  serializer.normalizeArrayResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(response.data.length, 1);
+  assert.equal(response.data[0].type, modelClass.modelName);
+  assert.equal(response.data[0].id, "dummy");
+  assert.equal(response.data[0].attributes.appId, undefined);
+});
+
+test('normalizeSingleResponse test', function(assert) {
+  let serializer = this.subject(),
+  modelClass = {
+modelName: "yarn-node-app"
+  },
+  payload = {
+app: {id:"application_1456251210105_0001", state:"FINISHED", user:"root"}
+  };
+  assert.expect(7);
+  var response =
+  serializer.normalizeSingleResponse({}, modelClass, payload, null, null);
+  assert.ok(response.data);
+  assert.equal(payload.app.id, response.data.id);
+  assert.equal(modelClass.modelName, response.data.type);
+  assert.equal(payload.app.id, response.data.attributes.appId);
+  assert.equal(payload.app.state, response.data.attributes.state);
+  assert.equal(payload.app.user, response.data.attributes.user);
+  assert.equal(response.data.attributes.containers, undefined);
+});
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/185a2197/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/tests/unit/serializers/yarn-node-container-test.js
new file mode 100644
index 000..1f08467
--- 

[42/50] [abbrv] hadoop git commit: YARN-5490. [YARN-3368] Fix various alignment issues and broken breadcrumb link in Node page. (Akhil P B Tan via Sunil G)

2016-10-27 Thread wangda
YARN-5490. [YARN-3368] Fix various alignment issues and broken breadcrumb link 
in Node page. (Akhil P B Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/29d7106f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/29d7106f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/29d7106f

Branch: refs/heads/YARN-3368
Commit: 29d7106f6d86b005788f2931934c3539d906843d
Parents: de225da
Author: sunilg 
Authored: Thu Oct 27 21:04:56 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../src/main/webapp/app/controllers/yarn-node-apps.js| 2 +-
 .../src/main/webapp/app/controllers/yarn-node-containers.js  | 2 +-
 .../hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js  | 2 +-
 .../src/main/webapp/app/controllers/yarn-nodes-heatmap.js| 2 +-
 .../hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes.js | 2 +-
 .../hadoop-yarn-ui/src/main/webapp/app/templates/yarn-node.hbs   | 4 ++--
 .../hadoop-yarn-ui/src/main/webapp/app/templates/yarn-nodes.hbs  | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d7106f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
index 4bfe9d0..6e67ab0 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-apps.js
@@ -27,7 +27,7 @@ export default Ember.Controller.extend({
   routeName: 'application'
 },{
   text: "Nodes",
-  routeName: 'yarn-nodes'
+  routeName: 'yarn-nodes.table'
 }, {
   text: `Node [ ${nodeInfo.id} ]`,
   href: `/#/yarn-node/${nodeInfo.id}/${nodeInfo.addr}`,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d7106f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
index 59c8591..abe4098 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node-containers.js
@@ -27,7 +27,7 @@ export default Ember.Controller.extend({
   routeName: 'application'
 },{
   text: "Nodes",
-  routeName: 'yarn-nodes'
+  routeName: 'yarn-nodes.table'
 }, {
   text: `Node [ ${nodeInfo.id} ]`,
   href: `/#/yarn-node/${nodeInfo.id}/${nodeInfo.addr}`,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d7106f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
index e505022..0661415 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-node.js
@@ -27,7 +27,7 @@ export default Ember.Controller.extend({
   routeName: 'application'
 },{
   text: "Nodes",
-  routeName: 'yarn-nodes'
+  routeName: 'yarn-nodes.table'
 }, {
   text: `Node [ ${nodeInfo.id} ]`,
   href: `/#/yarn-node/${nodeInfo.id}/${nodeInfo.addr}`,
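
The same one-line breadcrumb fix is applied in each controller above. For context, a minimal illustrative sketch (not part of this patch) of the nested router layout the change assumes, in the style of the yarn-nodes/table and yarn-nodes/heatmap routes added elsewhere on this YARN-3368 branch; with nesting, the plain 'yarn-nodes' name no longer resolves to a linkable leaf, so the breadcrumb has to target 'yarn-nodes.table':

// Illustrative sketch only -- a trimmed router.js showing why the
// breadcrumb route name becomes 'yarn-nodes.table'.
import Ember from 'ember';
import config from './config/environment';

const Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.route('yarn-nodes', function() {
    this.route('table');    // route name 'yarn-nodes.table'
    this.route('heatmap');  // route name 'yarn-nodes.heatmap'
  });
  this.route('yarn-node', { path: '/yarn-node/:node_id/:node_addr' });
});

export default Router;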

http://git-wip-us.apache.org/repos/asf/hadoop/blob/29d7106f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
index fbe77fa..a38d8c5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-nodes-heatmap.js
@@ 

[23/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
deleted file mode 100644
index c5394d0..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app-attempt.js
+++ /dev/null
@@ -1,49 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  
-  if (payload.appAttempt) {
-payload = payload.appAttempt;  
-  }
-  
-  var fixedPayload = {
-id: payload.appAttemptId,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  startTime: Converter.timeStampToDate(payload.startTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  containerId: payload.containerId,
-  nodeHttpAddress: payload.nodeHttpAddress,
-  nodeId: payload.nodeId,
-  state: payload.nodeId,
-  logsLink: payload.logsLink
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  // return expected is { data: [ {}, {} ] }
-  var normalizedArrayResponse = {};
-
-  // payload has apps : { app: [ {},{},{} ]  }
-  // need some error handling for ex apps or app may not be defined.
-  normalizedArrayResponse.data = 
payload.appAttempts.appAttempt.map(singleApp => {
-return this.internalNormalizeSingleResponse(store, primaryModelClass,
-  singleApp, singleApp.id, requestType);
-  }, this);
-  return normalizedArrayResponse;
-}
-});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
deleted file mode 100644
index a038fff..000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/app/serializers/yarn-app.js
+++ /dev/null
@@ -1,66 +0,0 @@
-import DS from 'ember-data';
-import Converter from 'yarn-ui/utils/converter';
-
-export default DS.JSONAPISerializer.extend({
-internalNormalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  if (payload.app) {
-payload = payload.app;  
-  }
-  
-  var fixedPayload = {
-id: id,
-type: primaryModelClass.modelName, // yarn-app
-attributes: {
-  appName: payload.name,
-  user: payload.user,
-  queue: payload.queue,
-  state: payload.state,
-  startTime: Converter.timeStampToDate(payload.startedTime),
-  elapsedTime: Converter.msToElapsedTime(payload.elapsedTime),
-  finishedTime: Converter.timeStampToDate(payload.finishedTime),
-  finalStatus: payload.finalStatus,
-  progress: payload.progress,
-  diagnostics: payload.diagnostics,
-  amContainerLogs: payload.amContainerLogs,
-  amHostHttpAddress: payload.amHostHttpAddress,
-  logAggregationStatus: payload.logAggregationStatus,
-  unmanagedApplication: payload.unmanagedApplication,
-  amNodeLabelExpression: payload.amNodeLabelExpression,
-  priority: payload.priority,
-  allocatedMB: payload.allocatedMB,
-  allocatedVCores: payload.allocatedVCores,
-  runningContainers: payload.runningContainers,
-  memorySeconds: payload.memorySeconds,
-  vcoreSeconds: payload.vcoreSeconds,
-  preemptedResourceMB: payload.preemptedResourceMB,
-  preemptedResourceVCores: payload.preemptedResourceVCores,
-  numNonAMContainerPreempted: payload.numNonAMContainerPreempted,
-  numAMContainerPreempted: payload.numAMContainerPreempted
-}
-  };
-
-  return fixedPayload;
-},
-
-normalizeSingleResponse(store, primaryModelClass, payload, id,
-  requestType) {
-  var p = this.internalNormalizeSingleResponse(store, 
-primaryModelClass, payload, id, requestType);
-  return { data: p };
-},
-
-normalizeArrayResponse(store, primaryModelClass, payload, id,
-  requestType) {

[13/50] [abbrv] hadoop git commit: YARN-5183. [YARN-3368] Support for responsive navbar when window is resized. (Kai Sasaki via Sunil G)

2016-10-27 Thread wangda
YARN-5183. [YARN-3368] Support for responsive navbar when window is resized. 
(Kai Sasaki via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7959d112
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7959d112
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7959d112

Branch: refs/heads/YARN-3368
Commit: 7959d11298a881fad300b0b90781ea25f3115e9c
Parents: 2de5680
Author: Sunil 
Authored: Fri Jun 10 10:33:41 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7959d112/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
index bce18ce..d21cc3e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
@@ -32,6 +32,9 @@ module.exports = function(defaults) {
   app.import("bower_components/select2/dist/js/select2.min.js");
   app.import('bower_components/jquery-ui/jquery-ui.js');
   app.import('bower_components/more-js/dist/more.js');
+  app.import('bower_components/bootstrap/dist/css/bootstrap.css');
+  app.import('bower_components/bootstrap/dist/css/bootstrap-theme.css');
+  app.import('bower_components/bootstrap/dist/js/bootstrap.min.js');
 
   // Use `app.import` to add additional libraries to the generated
   // output files.





[39/50] [abbrv] hadoop git commit: YARN-5503. [YARN-3368] Add missing hidden files in webapp folder for deployment (Sreenath Somarajapuram via Sunil G)

2016-10-27 Thread wangda
YARN-5503. [YARN-3368] Add missing hidden files in webapp folder for deployment 
(Sreenath Somarajapuram via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/84e7c444
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/84e7c444
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/84e7c444

Branch: refs/heads/YARN-3368
Commit: 84e7c444205ca9e0c031a5a77563d971a9dc59d7
Parents: 6c52317
Author: sunilg 
Authored: Tue Aug 30 20:58:35 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 19 ++-
 .../hadoop-yarn-ui/src/main/webapp/.bowerrc |  4 +++
 .../src/main/webapp/.editorconfig   | 34 
 .../hadoop-yarn-ui/src/main/webapp/.ember-cli   |  9 ++
 .../hadoop-yarn-ui/src/main/webapp/.jshintrc| 32 ++
 .../src/main/webapp/.watchmanconfig |  3 ++
 6 files changed, 100 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/84e7c444/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index fca8d30..b750a73 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -30,7 +30,7 @@
   ${packaging.type}
 
   
-jar
+war
 src/main/webapp
 node
 v0.12.2
@@ -52,9 +52,26 @@
 src/main/webapp/bower.json
 src/main/webapp/package.json
 src/main/webapp/testem.json
+
+src/main/webapp/dist/**/*
+src/main/webapp/tmp/**/*
 src/main/webapp/public/assets/images/**/*
+src/main/webapp/public/assets/images/*
 src/main/webapp/public/robots.txt
+
+public/assets/images/**/*
 public/crossdomain.xml
+
+src/main/webapp/.tmp/**/*
+src/main/webapp/.bowerrc
+src/main/webapp/.editorconfig
+src/main/webapp/.ember-cli
+src/main/webapp/.gitignore
+src/main/webapp/.jshintrc
+src/main/webapp/.travis.yml
+src/main/webapp/.watchmanconfig
+src/main/webapp/tests/.jshintrc
+src/main/webapp/blueprints/.jshintrc
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/84e7c444/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
new file mode 100644
index 000..959e169
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
@@ -0,0 +1,4 @@
+{
+  "directory": "bower_components",
+  "analytics": false
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/84e7c444/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
new file mode 100644
index 000..47c5438
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
@@ -0,0 +1,34 @@
+# EditorConfig helps developers define and maintain consistent
+# coding styles between different editors and IDEs
+# editorconfig.org
+
+root = true
+
+
+[*]
+end_of_line = lf
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+indent_style = space
+indent_size = 2
+
+[*.js]
+indent_style = space
+indent_size = 2
+
+[*.hbs]
+insert_final_newline = false
+indent_style = space
+indent_size = 2
+
+[*.css]
+indent_style = space
+indent_size = 2
+
+[*.html]
+indent_style = space
+indent_size = 2
+
+[*.{diff,md}]
+trim_trailing_whitespace = false

http://git-wip-us.apache.org/repos/asf/hadoop/blob/84e7c444/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
new file mode 100644
index 000..ee64cfe
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.ember-cli
@@ -0,0 +1,9 @@
+{
+  /**
+Ember CLI sends analytics information by default. The data is 

[36/50] [abbrv] hadoop git commit: YARN-4849. Addendum patch to remove unwanted files from rat exclusions. (Wangda Tan via Sunil G)

2016-10-27 Thread wangda
YARN-4849. Addendum patch to remove unwanted files from rat exclusions. (Wangda 
Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/313a19e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/313a19e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/313a19e7

Branch: refs/heads/YARN-3368
Commit: 313a19e72a92c0c63a8ecce4b2e3731eed7bd99b
Parents: 2cb15fe
Author: sunilg 
Authored: Fri Oct 14 18:23:04 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn/hadoop-yarn-ui/pom.xml  | 14 
 .../src/main/webapp/.editorconfig   | 34 
 2 files changed, 48 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/313a19e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
index 440aca9..b427713 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
@@ -46,32 +46,18 @@
 apache-rat-plugin
 
   
-src/main/webapp/node_modules/**/*
-src/main/webapp/bower_components/**/*
 src/main/webapp/jsconfig.json
 src/main/webapp/bower.json
 src/main/webapp/package.json
 src/main/webapp/testem.json
-
-src/main/webapp/dist/**/*
-src/main/webapp/tmp/**/*
 src/main/webapp/public/assets/images/**/*
 src/main/webapp/public/assets/images/*
 src/main/webapp/public/robots.txt
-
-public/assets/images/**/*
 public/crossdomain.xml
-
-src/main/webapp/.tmp/**/*
 src/main/webapp/.bowerrc
-src/main/webapp/.editorconfig
 src/main/webapp/.ember-cli
-src/main/webapp/.gitignore
 src/main/webapp/.jshintrc
-src/main/webapp/.travis.yml
 src/main/webapp/.watchmanconfig
-src/main/webapp/tests/.jshintrc
-src/main/webapp/blueprints/.jshintrc
   
 
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/313a19e7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
deleted file mode 100644
index 47c5438..000
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.editorconfig
+++ /dev/null
@@ -1,34 +0,0 @@
-# EditorConfig helps developers define and maintain consistent
-# coding styles between different editors and IDEs
-# editorconfig.org
-
-root = true
-
-
-[*]
-end_of_line = lf
-charset = utf-8
-trim_trailing_whitespace = true
-insert_final_newline = true
-indent_style = space
-indent_size = 2
-
-[*.js]
-indent_style = space
-indent_size = 2
-
-[*.hbs]
-insert_final_newline = false
-indent_style = space
-indent_size = 2
-
-[*.css]
-indent_style = space
-indent_size = 2
-
-[*.html]
-indent_style = space
-indent_size = 2
-
-[*.{diff,md}]
-trim_trailing_whitespace = false





[05/50] [abbrv] hadoop git commit: HDFS-10455. Logging the username when deny the setOwner operation. Contributed by Tianyin Xiu

2016-10-27 Thread wangda
HDFS-10455. Logging the username when deny the setOwner operation. Contributed 
by Tianyin Xiu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ac35ee93
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ac35ee93
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ac35ee93

Branch: refs/heads/YARN-3368
Commit: ac35ee9393e0afce9fede1d2052e7bf4032312fd
Parents: 0c837db
Author: Brahma Reddy Battula 
Authored: Thu Oct 27 20:20:56 2016 +0530
Committer: Brahma Reddy Battula 
Committed: Thu Oct 27 20:20:56 2016 +0530

--
 .../org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ac35ee93/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
index 417ce01..488c600 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAttrOp.java
@@ -84,10 +84,12 @@ public class FSDirAttrOp {
   fsd.checkOwner(pc, iip);
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + pc.getUser()
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + pc.getUser() + " does not belong to " + group);
 }
   }
   unprotectedSetOwner(fsd, iip, username, group);





[48/50] [abbrv] hadoop git commit: YARN-4849. Addendum patch to fix javadocs. (Sunil G via wangda)

2016-10-27 Thread wangda
 YARN-4849. Addendum patch to fix javadocs. (Sunil G via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cec96f4d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cec96f4d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cec96f4d

Branch: refs/heads/YARN-3368
Commit: cec96f4d839ec30631a03720c6738d6db7deca70
Parents: 7f1ec06
Author: Wangda Tan 
Authored: Fri Sep 9 10:54:37 2016 -0700
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop/yarn/server/resourcemanager/ResourceManager.java| 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cec96f4d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index d32f649..f739e31 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -916,6 +916,12 @@ public class ResourceManager extends CompositeService 
implements Recoverable {
* Return a HttpServer.Builder that the journalnode / namenode / secondary
* namenode can use to initialize their HTTP / HTTPS server.
*
+   * @param conf configuration object
+   * @param httpAddr HTTP address
+   * @param httpsAddr HTTPS address
+   * @param name  Name of the server
+   * @throws IOException from Builder
+   * @return builder object
*/
   public static HttpServer2.Builder httpServerTemplateForRM(Configuration conf,
   final InetSocketAddress httpAddr, final InetSocketAddress httpsAddr,





[38/50] [abbrv] hadoop git commit: YARN-5497. [YARN-3368] Use different color for Undefined and Succeeded for Final State in applications page. (Akhil P B Tan via Sunil G)

2016-10-27 Thread wangda
YARN-5497. [YARN-3368] Use different color for Undefined and Succeeded for 
Final State in applications page. (Akhil P B Tan via Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/de225daf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/de225daf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/de225daf

Branch: refs/heads/YARN-3368
Commit: de225daf31eda4478d8b0c52a79a2591915aea4a
Parents: ebe87bc
Author: sunilg 
Authored: Thu Oct 27 14:45:23 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/de225daf/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
index 0a5df87..8b5474f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-app.js
@@ -86,14 +86,17 @@ export default DS.Model.extend({
   }.property("progress"),
 
   finalStatusStyle: function() {
-var style = "default";
 var finalStatus = this.get("finalStatus");
+var style = "";
+
 if (finalStatus == "KILLED") {
   style = "warning";
 } else if (finalStatus == "FAILED") {
   style = "danger";
-} else {
+} else if (finalStatus == "SUCCEEDED") {
   style = "success";
+} else {
+  style = "default";
 }
 
 return "label label-" + style;
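
The net effect of the patch is a four-way mapping from final status to Bootstrap label classes, with UNDEFINED (and any unknown value) now falling through to the grey "default" style instead of green. Expressed as a plain stand-alone function for illustration (hypothetical, not part of the patch):

// Illustrative version of the mapping the computed property above
// implements; runnable with plain Node.js.
function finalStatusStyle(finalStatus) {
  if (finalStatus === "KILLED")    return "label label-warning";
  if (finalStatus === "FAILED")    return "label label-danger";
  if (finalStatus === "SUCCEEDED") return "label label-success";
  return "label label-default";    // UNDEFINED and anything unexpected
}

console.log(finalStatusStyle("SUCCEEDED")); // label label-success
console.log(finalStatusStyle("UNDEFINED")); // label label-default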





[32/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and que

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5a160c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
index ff49403..b945451 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps.js
@@ -20,7 +20,9 @@ import Ember from 'ember';
 
 export default Ember.Route.extend({
   model() {
-var apps = this.store.findAll('yarn-app');
-return apps;
+return Ember.RSVP.hash({
+  apps: this.store.findAll('yarn-app'),
+  clusterMetrics: this.store.findAll('ClusterMetric'),
+});
   }
 });
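
The route now resolves two independent fetches before rendering. Ember.RSVP.hash behaves like Promise.all keyed by name; a small self-contained analogue (illustrative only, with stand-in data in place of the real store calls) is:

// Plain-JS analogue of Ember.RSVP.hash as used above: resolve several
// named promises and return one object with the same keys.
function hash(promises) {
  const keys = Object.keys(promises);
  return Promise.all(keys.map(k => promises[k]))
    .then(values => keys.reduce((out, k, i) => {
      out[k] = values[i];
      return out;
    }, {}));
}

hash({
  apps: Promise.resolve(['application_1', 'application_2']),  // stand-in for store.findAll('yarn-app')
  clusterMetrics: Promise.resolve({ activeNodes: 3 })          // stand-in for store.findAll('ClusterMetric')
}).then(model => console.log(model.apps.length, model.clusterMetrics.activeNodes));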

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5a160c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
new file mode 100644
index 000..8719170
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/apps.js
@@ -0,0 +1,22 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5a160c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
new file mode 100644
index 000..8719170
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
@@ -0,0 +1,22 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+});

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5a160c9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
index 6e57388..64a1b3e 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-node.js
@@ -22,6 +22,7 @@ export default Ember.Route.extend({
   model(param) {
 // Fetches data from both NM and RM. RM is queried to get node usage info.
 return Ember.RSVP.hash({
+  nodeInfo: { id: param.node_id, addr: param.node_addr },
   node: this.store.findRecord('yarn-node', param.node_addr),
   rmNode: this.store.findRecord('yarn-rm-node', param.node_id)
 });


[20/50] [abbrv] hadoop git commit: YARN-4849. [YARN-3368] cleanup code base, integrate web UI related build to mvn, and fix licenses. (wangda)

2016-10-27 Thread wangda
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
new file mode 100644
index 000..89858bf
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queue.js
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import Ember from 'ember';
+
+export default Ember.Route.extend({
+  model(param) {
+return Ember.RSVP.hash({
+  selected : param.queue_name,
+  queues: this.store.findAll('yarnQueue'),
+  selectedQueue : undefined,
+  apps: undefined, // apps of selected queue
+});
+  },
+
+  afterModel(model) {
+model.selectedQueue = this.store.peekRecord('yarnQueue', model.selected);
+model.apps = this.store.findAll('yarnApp');
+model.apps.forEach(function(o) {
+  console.log(o);
+})
+  }
+});
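
The pattern here is findAll in model() to load every queue, then peekRecord in afterModel() to pick the selected one out of the local store without a second request. A brief illustrative sketch of those two Ember Data calls (bare store reference and a hypothetical queue id, not part of the patch):

// Illustrative only: findAll is asynchronous and hits the REST adapter,
// while peekRecord is synchronous and reads whatever is already cached.
store.findAll('yarnQueue').then(function(queues) {
  console.log('loaded', queues.get('length'), 'queues');
  const selected = store.peekRecord('yarnQueue', 'root.default'); // hypothetical queue name
  console.log(selected ? selected.get('name') : 'queue not in store');
});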

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
new file mode 100644
index 000..7da6f6d
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/index.js
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+export default Ember.Route.extend({
+  beforeModel() {
+this.transitionTo('yarnQueues.root');
+  }
+});
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b550eb6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
new file mode 100644
index 000..3686c83
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-queues/queues-selector.js
@@ -0,0 +1,25 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+

[33/50] [abbrv] hadoop git commit: YARN-5321. [YARN-3368] Add resource usage for application by node managers (Wangda Tan via Sunil G) YARN-5320. [YARN-3368] Add resource usage by applications and que

2016-10-27 Thread wangda
YARN-5321. [YARN-3368] Add resource usage for application by node managers 
(Wangda Tan via Sunil G)
YARN-5320. [YARN-3368] Add resource usage by applications and queues to cluster 
overview page  (Wangda Tan via Sunil G)
YARN-5322. [YARN-3368] Add a node heat chart map (Wangda Tan via Sunil G)
YARN-5347. [YARN-3368] Applications page improvements (Sreenath Somarajapuram 
via Sunil G)
YARN-5348. [YARN-3368] Node details page improvements (Sreenath Somarajapuram 
via Sunil G)
YARN-5346. [YARN-3368] Queues page improvements (Sreenath Somarajapuram via 
Sunil G)
YARN-5345. [YARN-3368] Cluster overview page improvements (Sreenath 
Somarajapuram via Sunil G)
YARN-5344. [YARN-3368] Generic UI improvements (Sreenath Somarajapuram via 
Sunil G)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5a160c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5a160c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5a160c9

Branch: refs/heads/YARN-3368
Commit: e5a160c925b3307be6106a399c21dee2e200e95d
Parents: 842b320
Author: Sunil 
Authored: Fri Jul 15 21:16:06 2016 +0530
Committer: Wangda Tan 
Committed: Thu Oct 27 12:10:57 2016 -0700

--
 .../src/main/webapp/app/adapters/yarn-app.js|  14 +
 .../app/components/app-usage-donut-chart.js |  67 
 .../src/main/webapp/app/components/bar-chart.js |   5 +
 .../app/components/base-chart-component.js  |  55 ++-
 .../app/components/base-usage-donut-chart.js|  43 +++
 .../main/webapp/app/components/donut-chart.js   |  55 ++-
 .../main/webapp/app/components/nodes-heatmap.js | 209 +++
 ...er-app-memusage-by-nodes-stacked-barchart.js |  88 +
 ...app-ncontainers-by-nodes-stacked-barchart.js |  67 
 .../app/components/queue-usage-donut-chart.js   |  69 
 .../main/webapp/app/components/queue-view.js|   3 +-
 .../main/webapp/app/components/simple-table.js  |   9 +-
 .../webapp/app/components/stacked-barchart.js   | 198 +++
 .../main/webapp/app/components/timeline-view.js |   2 +-
 .../main/webapp/app/components/tree-selector.js |  43 ++-
 .../webapp/app/controllers/cluster-overview.js  |   9 +
 .../webapp/app/controllers/yarn-app-attempt.js  |  40 +++
 .../webapp/app/controllers/yarn-app-attempts.js |  40 +++
 .../src/main/webapp/app/controllers/yarn-app.js |  38 ++
 .../main/webapp/app/controllers/yarn-apps.js|   9 +
 .../webapp/app/controllers/yarn-node-apps.js|  39 +++
 .../app/controllers/yarn-node-containers.js |  39 +++
 .../main/webapp/app/controllers/yarn-node.js|  37 ++
 .../app/controllers/yarn-nodes-heatmap.js   |  36 ++
 .../main/webapp/app/controllers/yarn-nodes.js   |  33 ++
 .../webapp/app/controllers/yarn-queue-apps.js   |  46 +++
 .../main/webapp/app/controllers/yarn-queue.js   |  20 ++
 .../main/webapp/app/controllers/yarn-queues.js  |  34 ++
 .../webapp/app/controllers/yarn-services.js |  34 ++
 .../main/webapp/app/models/cluster-metric.js|   2 +-
 .../main/webapp/app/models/yarn-app-attempt.js  |  11 +
 .../src/main/webapp/app/models/yarn-app.js  |   4 +
 .../src/main/webapp/app/models/yarn-rm-node.js  |   7 +
 .../src/main/webapp/app/router.js   |  15 +-
 .../src/main/webapp/app/routes/application.js   |   2 +
 .../main/webapp/app/routes/cluster-overview.js  |   9 +-
 .../main/webapp/app/routes/yarn-app-attempts.js |  30 ++
 .../src/main/webapp/app/routes/yarn-app.js  |  17 +-
 .../src/main/webapp/app/routes/yarn-apps.js |   6 +-
 .../main/webapp/app/routes/yarn-apps/apps.js|  22 ++
 .../webapp/app/routes/yarn-apps/services.js |  22 ++
 .../src/main/webapp/app/routes/yarn-node.js |   1 +
 .../src/main/webapp/app/routes/yarn-nodes.js|   5 +-
 .../webapp/app/routes/yarn-nodes/heatmap.js |  22 ++
 .../main/webapp/app/routes/yarn-nodes/table.js  |  22 ++
 .../main/webapp/app/routes/yarn-queue-apps.js   |  36 ++
 .../src/main/webapp/app/routes/yarn-queues.js   |  38 ++
 .../webapp/app/serializers/yarn-app-attempt.js  |  19 +-
 .../src/main/webapp/app/serializers/yarn-app.js |   8 +-
 .../webapp/app/serializers/yarn-container.js|  20 +-
 .../src/main/webapp/app/styles/app.css  | 139 ++--
 .../main/webapp/app/templates/application.hbs   |  99 --
 .../webapp/app/templates/cluster-overview.hbs   | 168 ++---
 .../app/templates/components/app-table.hbs  |  10 +-
 .../templates/components/node-menu-panel.hbs|   2 +-
 .../app/templates/components/nodes-heatmap.hbs  |  27 ++
 .../components/queue-configuration-table.hbs|   4 -
 .../templates/components/queue-navigator.hbs|  14 +-
 .../app/templates/components/timeline-view.hbs  |   3 +-
 .../webapp/app/templates/yarn-app-attempt.hbs   |  13 +-
 .../webapp/app/templates/yarn-app-attempts.hbs  |  57 +++
 .../src/main/webapp/app/templates/yarn-app.hbs  | 346 ---
 

  1   2   >