hadoop git commit: YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

2016-04-01 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 82d1f858e -> 608223b51


YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

(cherry picked from commit b1394d6307425a41d388c71e39f0880babb2c7a9)
(cherry picked from commit 92a3dbe44ffb113dead560fc179c2bc15fd6fce8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/608223b5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/608223b5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/608223b5

Branch: refs/heads/branch-2.8
Commit: 608223b51b3c958ccb887aeb8ed532bf3cf17069
Parents: 82d1f85
Author: Allen Wittenauer 
Authored: Thu Mar 24 08:15:58 2016 -0700
Committer: Karthik Kambatla 
Committed: Fri Apr 1 16:58:00 2016 -0700

--
 .../src/test/resources/test-fair-scheduler.xml  | 34 ++--
 1 file changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/608223b5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
index db160c9..f7934c4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
@@ -1,21 +1,21 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
 
+
 
   
 
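For context on YARN-4850's root cause: the test resource began with a Java-style `/** ... */` license header, but XML forbids arbitrary content before the root element, while an XML comment `<!-- ... -->` is legal in the prolog. A minimal check of that difference (the class name and sample strings are illustrative, not part of the patch):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;

public class XmlHeaderCheck {

  // Returns true iff the string is well-formed XML.
  static boolean parses(String xml) {
    try {
      DocumentBuilderFactory.newInstance().newDocumentBuilder()
          .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
      return true;
    } catch (Exception e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // A Java-style header before the root element is illegal content in the prolog.
    String broken = "/** Licensed to the ASF */\n<allocations/>";
    // An XML comment is allowed before the root element.
    String fixed = "<!-- Licensed to the ASF -->\n<allocations/>";
    System.out.println(parses(broken)); // false
    System.out.println(parses(fixed));  // true
  }
}
```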



hadoop git commit: YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

2016-04-01 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 856a131c6 -> 353f37916


YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

(cherry picked from commit b1394d6307425a41d388c71e39f0880babb2c7a9)
(cherry picked from commit 92a3dbe44ffb113dead560fc179c2bc15fd6fce8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/353f3791
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/353f3791
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/353f3791

Branch: refs/heads/branch-2.7
Commit: 353f3791687fe884fae781d081170010f39c3f0c
Parents: 856a131
Author: Allen Wittenauer 
Authored: Thu Mar 24 08:15:58 2016 -0700
Committer: Karthik Kambatla 
Committed: Fri Apr 1 16:58:33 2016 -0700

--
 .../src/test/resources/test-fair-scheduler.xml  | 34 ++--
 1 file changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/353f3791/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
index db160c9..f7934c4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
@@ -1,21 +1,21 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
 
+
 
   
 



hadoop git commit: YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

2016-04-01 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ecfd25dc6 -> 92a3dbe44


YARN-4850. test-fair-scheduler.xml isn't valid xml (Yufei Gu via aw)

(cherry picked from commit b1394d6307425a41d388c71e39f0880babb2c7a9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/92a3dbe4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/92a3dbe4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/92a3dbe4

Branch: refs/heads/branch-2
Commit: 92a3dbe44ffb113dead560fc179c2bc15fd6fce8
Parents: ecfd25d
Author: Allen Wittenauer 
Authored: Thu Mar 24 08:15:58 2016 -0700
Committer: Karthik Kambatla 
Committed: Fri Apr 1 16:57:31 2016 -0700

--
 .../src/test/resources/test-fair-scheduler.xml  | 34 ++--
 1 file changed, 17 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/92a3dbe4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
index db160c9..f7934c4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/test-fair-scheduler.xml
@@ -1,21 +1,21 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
 
+
 
   
 



hadoop git commit: YARN-4657. Javadoc comment is broken for Resources.multiplyByAndAddTo(). (Daniel Templeton via kasha)

2016-04-01 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 f56266fcc -> ecfd25dc6


YARN-4657. Javadoc comment is broken for Resources.multiplyByAndAddTo(). 
(Daniel Templeton via kasha)

(cherry picked from commit 81d04cae41182808ace5d86cdac7e4d71871eb1e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecfd25dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecfd25dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecfd25dc

Branch: refs/heads/branch-2
Commit: ecfd25dc656d701480d1e3be4a795715e243b120
Parents: f56266f
Author: Karthik Kambatla 
Authored: Fri Apr 1 16:19:54 2016 -0700
Committer: Karthik Kambatla 
Committed: Fri Apr 1 16:20:18 2016 -0700

--
 .../main/java/org/apache/hadoop/yarn/util/resource/Resources.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecfd25dc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
index b05d021..558f96c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
@@ -152,7 +152,7 @@ public class Resources {
   }
 
   /**
-   * Multiply @param rhs by @param by, and add the result to @param lhs
+   * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs}
    * without creating any new {@link Resource} object
    */
   public static Resource multiplyAndAddTo(



hadoop git commit: YARN-4657. Javadoc comment is broken for Resources.multiplyByAndAddTo(). (Daniel Templeton via kasha)

2016-04-01 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5686caa9f -> 81d04cae4


YARN-4657. Javadoc comment is broken for Resources.multiplyByAndAddTo(). 
(Daniel Templeton via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/81d04cae
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/81d04cae
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/81d04cae

Branch: refs/heads/trunk
Commit: 81d04cae41182808ace5d86cdac7e4d71871eb1e
Parents: 5686caa
Author: Karthik Kambatla 
Authored: Fri Apr 1 16:19:54 2016 -0700
Committer: Karthik Kambatla 
Committed: Fri Apr 1 16:20:00 2016 -0700

--
 .../main/java/org/apache/hadoop/yarn/util/resource/Resources.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/81d04cae/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
index b05d021..558f96c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
@@ -152,7 +152,7 @@ public class Resources {
   }
 
   /**
-   * Multiply @param rhs by @param by, and add the result to @param lhs
+   * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs}
    * without creating any new {@link Resource} object
    */
   public static Resource multiplyAndAddTo(
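For context on why the Javadoc was broken: `@param` is a block tag, so it may only introduce entries in a method's tag section; used inline in the description it is not interpreted, whereas `{@code ...}` is an inline tag that renders its argument in code font anywhere. A before/after sketch of just the comment (illustrative, not the full Hadoop method):

```java
/**
 * Broken: @param is a block tag, so the inline uses below are not
 * rendered as parameter references and produce malformed output.
 *
 * Multiply @param rhs by @param by, and add the result to @param lhs
 */

/**
 * Fixed: {@code ...} is an inline tag and may appear anywhere in the
 * description.
 *
 * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs}
 */
```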



hadoop git commit: Missing file for YARN-4895.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 9def4d4d3 -> 82d1f858e


Missing file for YARN-4895.

(cherry picked from commit 5686caa9fcb59759c9286385575f31e407a97c16)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82d1f858
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82d1f858
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82d1f858

Branch: refs/heads/branch-2.8
Commit: 82d1f858eea0588c2f81285187b70e29d80e3729
Parents: 9def4d4
Author: Arun Suresh 
Authored: Fri Apr 1 15:58:13 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 15:58:54 2016 -0700

--
 .../api/records/TestResourceUtilization.java| 63 
 1 file changed, 63 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/82d1f858/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
new file mode 100644
index 000..5934846
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestResourceUtilization {
+
+  @Test
+  public void testResourceUtilization() {
+    ResourceUtilization u1 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u2 = ResourceUtilization.newInstance(u1);
+    ResourceUtilization u3 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u4 = ResourceUtilization.newInstance(20, 20, 0.5f);
+    ResourceUtilization u5 = ResourceUtilization.newInstance(30, 40, 0.8f);
+
+    Assert.assertEquals(u1, u2);
+    Assert.assertEquals(u1, u3);
+    Assert.assertNotEquals(u1, u4);
+    Assert.assertNotEquals(u2, u5);
+    Assert.assertNotEquals(u4, u5);
+
+    Assert.assertTrue(u1.hashCode() == u2.hashCode());
+    Assert.assertTrue(u1.hashCode() == u3.hashCode());
+    Assert.assertFalse(u1.hashCode() == u4.hashCode());
+    Assert.assertFalse(u2.hashCode() == u5.hashCode());
+    Assert.assertFalse(u4.hashCode() == u5.hashCode());
+
+    Assert.assertTrue(u1.getPhysicalMemory() == 10);
+    Assert.assertFalse(u1.getVirtualMemory() == 10);
+    Assert.assertTrue(u1.getCPU() == 0.5f);
+
+    Assert.assertEquals("", u1.toString());
+
+    u1.addTo(10, 0, 0.0f);
+    Assert.assertNotEquals(u1, u2);
+    Assert.assertEquals(u1, u4);
+    u1.addTo(10, 20, 0.3f);
+    Assert.assertEquals(u1, u5);
+    u1.subtractFrom(10, 20, 0.3f);
+    Assert.assertEquals(u1, u4);
+    u1.subtractFrom(10, 0, 0.0f);
+    Assert.assertEquals(u1, u3);
+  }
+}



hadoop git commit: Missing file for YARN-4895.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 24ce9ea45 -> f56266fcc


Missing file for YARN-4895.

(cherry picked from commit 5686caa9fcb59759c9286385575f31e407a97c16)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f56266fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f56266fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f56266fc

Branch: refs/heads/branch-2
Commit: f56266fccc7aff881457ddf9cf63878a34cb8eff
Parents: 24ce9ea
Author: Arun Suresh 
Authored: Fri Apr 1 15:58:13 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 15:58:38 2016 -0700

--
 .../api/records/TestResourceUtilization.java| 63 
 1 file changed, 63 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f56266fc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
new file mode 100644
index 000..5934846
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestResourceUtilization {
+
+  @Test
+  public void testResourceUtilization() {
+    ResourceUtilization u1 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u2 = ResourceUtilization.newInstance(u1);
+    ResourceUtilization u3 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u4 = ResourceUtilization.newInstance(20, 20, 0.5f);
+    ResourceUtilization u5 = ResourceUtilization.newInstance(30, 40, 0.8f);
+
+    Assert.assertEquals(u1, u2);
+    Assert.assertEquals(u1, u3);
+    Assert.assertNotEquals(u1, u4);
+    Assert.assertNotEquals(u2, u5);
+    Assert.assertNotEquals(u4, u5);
+
+    Assert.assertTrue(u1.hashCode() == u2.hashCode());
+    Assert.assertTrue(u1.hashCode() == u3.hashCode());
+    Assert.assertFalse(u1.hashCode() == u4.hashCode());
+    Assert.assertFalse(u2.hashCode() == u5.hashCode());
+    Assert.assertFalse(u4.hashCode() == u5.hashCode());
+
+    Assert.assertTrue(u1.getPhysicalMemory() == 10);
+    Assert.assertFalse(u1.getVirtualMemory() == 10);
+    Assert.assertTrue(u1.getCPU() == 0.5f);
+
+    Assert.assertEquals("", u1.toString());
+
+    u1.addTo(10, 0, 0.0f);
+    Assert.assertNotEquals(u1, u2);
+    Assert.assertEquals(u1, u4);
+    u1.addTo(10, 20, 0.3f);
+    Assert.assertEquals(u1, u5);
+    u1.subtractFrom(10, 20, 0.3f);
+    Assert.assertEquals(u1, u4);
+    u1.subtractFrom(10, 0, 0.0f);
+    Assert.assertEquals(u1, u3);
+  }
+}



hadoop git commit: Missing file for YARN-4895.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 82621e38a -> 5686caa9f


Missing file for YARN-4895.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5686caa9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5686caa9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5686caa9

Branch: refs/heads/trunk
Commit: 5686caa9fcb59759c9286385575f31e407a97c16
Parents: 82621e3
Author: Arun Suresh 
Authored: Fri Apr 1 15:58:13 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 15:58:13 2016 -0700

--
 .../api/records/TestResourceUtilization.java| 63 
 1 file changed, 63 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5686caa9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
new file mode 100644
index 000..5934846
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java
@@ -0,0 +1,63 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestResourceUtilization {
+
+  @Test
+  public void testResourceUtilization() {
+    ResourceUtilization u1 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u2 = ResourceUtilization.newInstance(u1);
+    ResourceUtilization u3 = ResourceUtilization.newInstance(10, 20, 0.5f);
+    ResourceUtilization u4 = ResourceUtilization.newInstance(20, 20, 0.5f);
+    ResourceUtilization u5 = ResourceUtilization.newInstance(30, 40, 0.8f);
+
+    Assert.assertEquals(u1, u2);
+    Assert.assertEquals(u1, u3);
+    Assert.assertNotEquals(u1, u4);
+    Assert.assertNotEquals(u2, u5);
+    Assert.assertNotEquals(u4, u5);
+
+    Assert.assertTrue(u1.hashCode() == u2.hashCode());
+    Assert.assertTrue(u1.hashCode() == u3.hashCode());
+    Assert.assertFalse(u1.hashCode() == u4.hashCode());
+    Assert.assertFalse(u2.hashCode() == u5.hashCode());
+    Assert.assertFalse(u4.hashCode() == u5.hashCode());
+
+    Assert.assertTrue(u1.getPhysicalMemory() == 10);
+    Assert.assertFalse(u1.getVirtualMemory() == 10);
+    Assert.assertTrue(u1.getCPU() == 0.5f);
+
+    Assert.assertEquals("", u1.toString());
+
+    u1.addTo(10, 0, 0.0f);
+    Assert.assertNotEquals(u1, u2);
+    Assert.assertEquals(u1, u4);
+    u1.addTo(10, 20, 0.3f);
+    Assert.assertEquals(u1, u5);
+    u1.subtractFrom(10, 20, 0.3f);
+    Assert.assertEquals(u1, u4);
+    u1.subtractFrom(10, 0, 0.0f);
+    Assert.assertEquals(u1, u3);
+  }
+}



hadoop git commit: YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by Konstantinos Karanasos.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 7c5b55d4e -> 24ce9ea45


YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by 
Konstantinos Karanasos.

(cherry picked from commit 82621e38a0445832998bc00693279e23a98605c1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24ce9ea4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24ce9ea4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24ce9ea4

Branch: refs/heads/branch-2
Commit: 24ce9ea456f23eb562dabc0f23e79575fcc0ce6f
Parents: 7c5b55d
Author: Arun Suresh 
Authored: Fri Apr 1 14:57:06 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 15:00:02 2016 -0700

--
 .../yarn/api/records/ResourceUtilization.java   | 22 
 1 file changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24ce9ea4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
index 5f52f85..2ae4872 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
@@ -44,6 +44,14 @@ public abstract class ResourceUtilization implements
     return utilization;
   }
 
+  @Public
+  @Unstable
+  public static ResourceUtilization newInstance(
+      ResourceUtilization resourceUtil) {
+    return newInstance(resourceUtil.getPhysicalMemory(),
+        resourceUtil.getVirtualMemory(), resourceUtil.getCPU());
+  }
+
   /**
    * Get used virtual memory.
    *
@@ -147,4 +155,18 @@ public abstract class ResourceUtilization implements
     this.setVirtualMemory(this.getVirtualMemory() + vmem);
     this.setCPU(this.getCPU() + cpu);
   }
+
+  /**
+   * Subtract utilization from the current one.
+   * @param pmem Physical memory to be subtracted.
+   * @param vmem Virtual memory to be subtracted.
+   * @param cpu CPU utilization to be subtracted.
+   */
+  @Public
+  @Unstable
+  public void subtractFrom(int pmem, int vmem, float cpu) {
+    this.setPhysicalMemory(this.getPhysicalMemory() - pmem);
+    this.setVirtualMemory(this.getVirtualMemory() - vmem);
+    this.setCPU(this.getCPU() - cpu);
+  }
 }
\ No newline at end of file



hadoop git commit: YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by Konstantinos Karanasos.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 719c131c4 -> 9def4d4d3


YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by 
Konstantinos Karanasos.

(cherry picked from commit 82621e38a0445832998bc00693279e23a98605c1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9def4d4d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9def4d4d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9def4d4d

Branch: refs/heads/branch-2.8
Commit: 9def4d4d390e4a86f9588dea5db4557a03b032a4
Parents: 719c131
Author: Arun Suresh 
Authored: Fri Apr 1 14:57:06 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 15:00:30 2016 -0700

--
 .../yarn/api/records/ResourceUtilization.java   | 22 
 1 file changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9def4d4d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
index 5f52f85..2ae4872 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
@@ -44,6 +44,14 @@ public abstract class ResourceUtilization implements
     return utilization;
   }
 
+  @Public
+  @Unstable
+  public static ResourceUtilization newInstance(
+      ResourceUtilization resourceUtil) {
+    return newInstance(resourceUtil.getPhysicalMemory(),
+        resourceUtil.getVirtualMemory(), resourceUtil.getCPU());
+  }
+
   /**
    * Get used virtual memory.
    *
@@ -147,4 +155,18 @@ public abstract class ResourceUtilization implements
     this.setVirtualMemory(this.getVirtualMemory() + vmem);
     this.setCPU(this.getCPU() + cpu);
   }
+
+  /**
+   * Subtract utilization from the current one.
+   * @param pmem Physical memory to be subtracted.
+   * @param vmem Virtual memory to be subtracted.
+   * @param cpu CPU utilization to be subtracted.
+   */
+  @Public
+  @Unstable
+  public void subtractFrom(int pmem, int vmem, float cpu) {
+    this.setPhysicalMemory(this.getPhysicalMemory() - pmem);
+    this.setVirtualMemory(this.getVirtualMemory() - vmem);
+    this.setCPU(this.getCPU() - cpu);
+  }
 }
\ No newline at end of file



hadoop git commit: YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by Konstantinos Karanasos.

2016-04-01 Thread asuresh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 256c82fe2 -> 82621e38a


YARN-4895. Add subtractFrom method to ResourceUtilization class. Contributed by 
Konstantinos Karanasos.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82621e38
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82621e38
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82621e38

Branch: refs/heads/trunk
Commit: 82621e38a0445832998bc00693279e23a98605c1
Parents: 256c82f
Author: Arun Suresh 
Authored: Fri Apr 1 14:57:06 2016 -0700
Committer: Arun Suresh 
Committed: Fri Apr 1 14:57:06 2016 -0700

--
 .../yarn/api/records/ResourceUtilization.java   | 22 
 1 file changed, 22 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/82621e38/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
index 5f52f85..2ae4872 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java
@@ -44,6 +44,14 @@ public abstract class ResourceUtilization implements
     return utilization;
   }
 
+  @Public
+  @Unstable
+  public static ResourceUtilization newInstance(
+      ResourceUtilization resourceUtil) {
+    return newInstance(resourceUtil.getPhysicalMemory(),
+        resourceUtil.getVirtualMemory(), resourceUtil.getCPU());
+  }
+
   /**
    * Get used virtual memory.
    *
@@ -147,4 +155,18 @@ public abstract class ResourceUtilization implements
     this.setVirtualMemory(this.getVirtualMemory() + vmem);
     this.setCPU(this.getCPU() + cpu);
   }
+
+  /**
+   * Subtract utilization from the current one.
+   * @param pmem Physical memory to be subtracted.
+   * @param vmem Virtual memory to be subtracted.
+   * @param cpu CPU utilization to be subtracted.
+   */
+  @Public
+  @Unstable
+  public void subtractFrom(int pmem, int vmem, float cpu) {
+    this.setPhysicalMemory(this.getPhysicalMemory() - pmem);
+    this.setVirtualMemory(this.getVirtualMemory() - vmem);
+    this.setCPU(this.getCPU() - cpu);
+  }
 }
\ No newline at end of file
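A minimal standalone sketch of the new add/subtract round-trip. Note this is a plain mock of the arithmetic shown in the diff above; the real `ResourceUtilization` is an abstract YARN record with a protobuf-backed implementation, so the class and field names here are illustrative:

```java
// Mock of ResourceUtilization's addTo/subtractFrom arithmetic (illustrative;
// the real class is an abstract record created via newInstance()).
class MockUtilization {
  int pmem, vmem;
  float cpu;

  MockUtilization(int pmem, int vmem, float cpu) {
    this.pmem = pmem; this.vmem = vmem; this.cpu = cpu;
  }

  void addTo(int pmem, int vmem, float cpu) {
    this.pmem += pmem; this.vmem += vmem; this.cpu += cpu;
  }

  // The new method: the exact inverse of addTo, so an add/subtract
  // pair with the same arguments restores the original utilization.
  void subtractFrom(int pmem, int vmem, float cpu) {
    this.pmem -= pmem; this.vmem -= vmem; this.cpu -= cpu;
  }
}

public class SubtractFromDemo {
  public static void main(String[] args) {
    MockUtilization u = new MockUtilization(10, 20, 0.5f);
    u.addTo(10, 0, 0.0f);        // container starts: utilization rises
    u.subtractFrom(10, 0, 0.0f); // container finishes: back to 10/20/0.5
    System.out.println(u.pmem + "/" + u.vmem + "/" + u.cpu);
  }
}
```

This mirrors how TestResourceUtilization exercises the API: every `addTo` in the test is undone by a matching `subtractFrom`, returning `u1` to a value equal to `u3`.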



[2/2] hadoop git commit: HDFS-10238. Ozone : Add chunk persistance. Contributed by Anu Engineer.

2016-04-01 Thread cnauroth
HDFS-10238. Ozone : Add chunk persistance. Contributed by Anu Engineer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ac97b18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ac97b18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ac97b18

Branch: refs/heads/HDFS-7240
Commit: 4ac97b1815ba1caf6477c736c10e433ee7d5bc50
Parents: 55d4ee7
Author: Chris Nauroth 
Authored: Fri Apr 1 10:52:38 2016 -0700
Committer: Chris Nauroth 
Committed: Fri Apr 1 10:52:38 2016 -0700

--
 .../org/apache/hadoop/ozone/OzoneConsts.java|   4 +-
 .../container/common/helpers/ChunkInfo.java | 184 
 .../container/common/helpers/ChunkUtils.java| 299 +++
 .../common/helpers/ContainerUtils.java  |  92 +-
 .../container/common/helpers/Pipeline.java  |  10 +-
 .../container/common/helpers/package-info.java  |   3 +-
 .../container/common/impl/ChunkManagerImpl.java | 149 +
 .../common/impl/ContainerManagerImpl.java   |  38 ++-
 .../ozone/container/common/impl/Dispatcher.java | 193 ++--
 .../common/interfaces/ChunkManager.java |  68 +
 .../common/interfaces/ContainerManager.java |  13 +
 .../container/ozoneimpl/OzoneContainer.java |   5 +
 .../main/proto/DatanodeContainerProtocol.proto  |  45 ++-
 .../ozone/container/ContainerTestHelper.java| 159 +-
 .../common/impl/TestContainerPersistence.java   | 244 ++-
 .../container/ozoneimpl/TestOzoneContainer.java |  54 +++-
 .../transport/server/TestContainerServer.java   |  17 +-
 17 files changed, 1461 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac97b18/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 9e52853..bebbb78 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@ -65,8 +65,8 @@ public final class OzoneConsts {
   public static final String CONTAINER_ROOT_PREFIX = "repository";
 
   public static final String CONTAINER_DB = "container.db";
-
-
+  public static final String FILE_HASH = "SHA-256";
+  public final static String CHUNK_OVERWRITE = "OverWriteRequested";
 
   /**
    * Supports Bucket Versioning.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac97b18/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkInfo.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkInfo.java
new file mode 100644
index 000..7a3d449
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ChunkInfo.java
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.common.helpers;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdfs.ozone.protocol.proto.ContainerProtos;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Java class that represents ChunkInfo ProtoBuf class. This helper class allows
+ * us to convert to and from protobuf to normal java.
+ */
+public class ChunkInfo {
+  private final String chunkName;
+  private final long offset;
+  private final long len;
+  private String checksum;
+  private final Map<String, String> metadata;
+
+
+  /**
+   * Constructs a ChunkInfo.
+   *
+ 
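The helper pattern `ChunkInfo` follows — an immutable Java class convertible to and from a generated protobuf message — can be sketched in miniature. The proto stub below is illustrative only; the real code converts `ContainerProtos.ChunkInfo`, whose API is not reproduced here:

```java
// Illustrative stand-in for a generated protobuf wire type; the real code
// converts to/from ContainerProtos.ChunkInfo instead.
class ChunkInfoProtoStub {
  String chunkName;
  long offset;
  long len;
}

// Immutable helper mirroring the wrapper style used by the Ozone ChunkInfo.
public class ChunkInfoSketch {
  private final String chunkName;
  private final long offset;
  private final long len;

  public ChunkInfoSketch(String chunkName, long offset, long len) {
    this.chunkName = chunkName;
    this.offset = offset;
    this.len = len;
  }

  // from-proto conversion: read each field off the wire type.
  public static ChunkInfoSketch getFromProtoBuf(ChunkInfoProtoStub proto) {
    return new ChunkInfoSketch(proto.chunkName, proto.offset, proto.len);
  }

  // to-proto conversion: write each field back onto the wire type.
  public ChunkInfoProtoStub getProtoBufMessage() {
    ChunkInfoProtoStub proto = new ChunkInfoProtoStub();
    proto.chunkName = chunkName;
    proto.offset = offset;
    proto.len = len;
    return proto;
  }

  public String getChunkName() { return chunkName; }
  public long getOffset() { return offset; }
  public long getLen() { return len; }

  public static void main(String[] args) {
    ChunkInfoProtoStub p = new ChunkInfoProtoStub();
    p.chunkName = "chunk-0";
    p.offset = 0;
    p.len = 1024;
    ChunkInfoSketch info = ChunkInfoSketch.getFromProtoBuf(p);
    System.out.println(info.getChunkName() + " " + info.getLen());  // chunk-0 1024
  }
}
```

Keeping the wrapper immutable means a ChunkInfo handed to another thread or cached in a map cannot drift out of sync with what was persisted.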

[1/2] hadoop git commit: HDFS-10238. Ozone : Add chunk persistance. Contributed by Anu Engineer.

2016-04-01 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7240 55d4ee71e -> 4ac97b181


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ac97b18/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
index 6b134fa..f1d9c02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.ozone.container.common.transport.client.XceiverClient;
 import org.apache.hadoop.ozone.container.common.transport.server.XceiverServer;
 import org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler;
 
+import org.apache.hadoop.ozone.web.utils.OzoneUtils;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -45,11 +46,12 @@ public class TestContainerServer {
   @Test
   public void testPipeline() throws IOException {
     EmbeddedChannel channel = null;
+    String containerName = OzoneUtils.getRequestID();
     try {
       channel = new EmbeddedChannel(new XceiverServerHandler(
           new TestContainerDispatcher()));
       ContainerCommandRequestProto request =
-          ContainerTestHelper.getCreateContainerRequest();
+          ContainerTestHelper.getCreateContainerRequest(containerName);
       channel.writeInbound(request);
       Assert.assertTrue(channel.finish());
       ContainerCommandResponseProto response = channel.readOutbound();
@@ -65,9 +67,10 @@ public class TestContainerServer {
   public void testClientServer() throws Exception {
     XceiverServer server = null;
     XceiverClient client = null;
-
+    String containerName = OzoneUtils.getRequestID();
     try {
-      Pipeline pipeline = ContainerTestHelper.createSingleNodePipeline();
+      Pipeline pipeline = ContainerTestHelper.createSingleNodePipeline
+          (containerName);
       OzoneConfiguration conf = new OzoneConfiguration();
       conf.setInt(OzoneConfigKeys.DFS_OZONE_CONTAINER_IPC_PORT,
           pipeline.getLeader().getContainerPort());
@@ -79,7 +82,7 @@ public class TestContainerServer {
       client.connect();
 
       ContainerCommandRequestProto request =
-          ContainerTestHelper.getCreateContainerRequest();
+          ContainerTestHelper.getCreateContainerRequest(containerName);
       ContainerCommandResponseProto response = client.sendCommand(request);
       Assert.assertTrue(request.getTraceID().equals(response.getTraceID()));
     } finally {
@@ -96,9 +99,11 @@ public class TestContainerServer {
   public void testClientServerWithContainerDispatcher() throws Exception {
     XceiverServer server = null;
     XceiverClient client = null;
+    String containerName = OzoneUtils.getRequestID();
 
     try {
-      Pipeline pipeline = ContainerTestHelper.createSingleNodePipeline();
+      Pipeline pipeline = ContainerTestHelper.createSingleNodePipeline
+          (containerName);
       OzoneConfiguration conf = new OzoneConfiguration();
       conf.setInt(OzoneConfigKeys.DFS_OZONE_CONTAINER_IPC_PORT,
           pipeline.getLeader().getContainerPort());
@@ -111,7 +116,7 @@ public class TestContainerServer {
       client.connect();
 
       ContainerCommandRequestProto request =
-          ContainerTestHelper.getCreateContainerRequest();
+          ContainerTestHelper.getCreateContainerRequest(containerName);
       ContainerCommandResponseProto response = client.sendCommand(request);
       Assert.assertTrue(request.getTraceID().equals(response.getTraceID()));
       Assert.assertEquals(response.getResult(),
           ContainerProtos.Result.SUCCESS);

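The recurring change in this test class is threading a fresh container name from `OzoneUtils.getRequestID()` into each test, so repeated or parallel runs never collide on an already-created container. The naming idea in isolation — where the UUID basis is an assumption for illustration, not the documented behavior of `getRequestID()`:

```java
import java.util.UUID;

public class UniqueNameSketch {
  // Stand-in for OzoneUtils.getRequestID(); a UUID-style unique ID is an
  // assumption here, not a statement about the real implementation.
  public static String getRequestID() {
    return UUID.randomUUID().toString();
  }

  public static void main(String[] args) {
    // Each test case derives its own container name, so a second run of the
    // same test creates a distinct container rather than failing on a clash.
    String containerName = getRequestID();
    String otherTestsName = getRequestID();
    System.out.println(containerName.equals(otherTestsName));  // false
  }
}
```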


hadoop git commit: HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. Contributed by Aaron Peterson and harsh.

2016-04-01 Thread stevel
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.8 7887fab81 -> 719c131c4


HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. 
Contributed by Aaron Peterson and harsh.

(cherry picked from commit 256c82fe2981748cd0befc5490d8118d139908f9)
(cherry picked from commit 7c5b55d4e5f4317abed0259909b89a32297836f8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/719c131c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/719c131c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/719c131c

Branch: refs/heads/branch-2.8
Commit: 719c131c4d9f6a425dd68347203bd7bcaa15461e
Parents: 7887fab
Author: Harsh J 
Authored: Fri Apr 1 14:18:10 2016 +0530
Committer: Steve Loughran 
Committed: Fri Apr 1 14:15:43 2016 +0100

--
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 70 +++-
 .../src/site/markdown/tools/hadoop-aws/index.md |  7 ++
 2 files changed, 76 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/719c131c/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 4cda7cd..33db86e 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -26,6 +26,7 @@ import java.net.URI;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -1186,7 +1187,7 @@ public class S3AFileSystem extends FileSystem {
     }
 
     ObjectMetadata srcom = s3.getObjectMetadata(bucket, srcKey);
-    final ObjectMetadata dstom = srcom.clone();
+    ObjectMetadata dstom = cloneObjectMetadata(srcom);
     if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
       dstom.setSSEAlgorithm(serverSideEncryptionAlgorithm);
     }
@@ -1293,6 +1294,73 @@ public class S3AFileSystem extends FileSystem {
   }
 
   /**
+   * Creates a copy of the passed {@link ObjectMetadata}.
+   * Does so without using the {@link ObjectMetadata#clone()} method,
+   * to avoid copying unnecessary headers.
+   * @param source the {@link ObjectMetadata} to copy
+   * @return a copy of {@link ObjectMetadata} with only relevant attributes
+   */
+  private ObjectMetadata cloneObjectMetadata(ObjectMetadata source) {
+    // This approach may be too brittle, especially if
+    // in future there are new attributes added to ObjectMetadata
+    // that we do not explicitly call to set here
+    ObjectMetadata ret = new ObjectMetadata();
+
+    // Non null attributes
+    ret.setContentLength(source.getContentLength());
+
+    // Possibly null attributes
+    // Allowing nulls to pass breaks it during later use
+    if (source.getCacheControl() != null) {
+      ret.setCacheControl(source.getCacheControl());
+    }
+    if (source.getContentDisposition() != null) {
+      ret.setContentDisposition(source.getContentDisposition());
+    }
+    if (source.getContentEncoding() != null) {
+      ret.setContentEncoding(source.getContentEncoding());
+    }
+    if (source.getContentMD5() != null) {
+      ret.setContentMD5(source.getContentMD5());
+    }
+    if (source.getContentType() != null) {
+      ret.setContentType(source.getContentType());
+    }
+    if (source.getExpirationTime() != null) {
+      ret.setExpirationTime(source.getExpirationTime());
+    }
+    if (source.getExpirationTimeRuleId() != null) {
+      ret.setExpirationTimeRuleId(source.getExpirationTimeRuleId());
+    }
+    if (source.getHttpExpiresDate() != null) {
+      ret.setHttpExpiresDate(source.getHttpExpiresDate());
+    }
+    if (source.getLastModified() != null) {
+      ret.setLastModified(source.getLastModified());
+    }
+    if (source.getOngoingRestore() != null) {
+      ret.setOngoingRestore(source.getOngoingRestore());
+    }
+    if (source.getRestoreExpirationTime() != null) {
+      ret.setRestoreExpirationTime(source.getRestoreExpirationTime());
+    }
+    if (source.getSSEAlgorithm() != null) {
+      ret.setSSEAlgorithm(source.getSSEAlgorithm());
+    }
+    if (source.getSSECustomerAlgorithm() != null) {
+      ret.setSSECustomerAlgorithm(source.getSSECustomerAlgorithm());
+    }
+    if (source.getSSECustomerKeyMd5() != null) {
+      ret.setSSECustomerKeyMd5(source.getSSECustomerKeyMd5());
+    }
+
+    for (Map.Entry<String, String> e : 

svn commit: r1737353 - in /hadoop/common/site/main: author/src/documentation/content/xdocs/ publish/

2016-04-01 Thread naganarasimha_gr
Author: naganarasimha_gr
Date: Fri Apr  1 12:06:23 2016
New Revision: 1737353

URL: http://svn.apache.org/viewvc?rev=1737353&view=rev
Log:
Added my name(Naga) to the commiters list

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
hadoop/common/site/main/publish/bylaws.pdf
hadoop/common/site/main/publish/index.pdf
hadoop/common/site/main/publish/issue_tracking.pdf
hadoop/common/site/main/publish/linkmap.pdf
hadoop/common/site/main/publish/mailing_lists.pdf
hadoop/common/site/main/publish/privacy_policy.pdf
hadoop/common/site/main/publish/releases.pdf
hadoop/common/site/main/publish/version_control.pdf
hadoop/common/site/main/publish/who.html
hadoop/common/site/main/publish/who.pdf

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1737353&r1=1737352&r2=1737353&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml (original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Fri Apr  1 12:06:23 2016
@@ -303,14 +303,6 @@
       <td>-8</td>
     </tr>
 
-    <tr>
-      <td>nagnarasimha_gr</td>
-      <td><a href="http://people.apache.org/~naganarasimha_gr/">Naganarasimha G R</a></td>
-      <td>Huawei</td>
-      <td>YARN</td>
-      <td>+5.5</td>
-    </tr>
-
     <tr>
       <td>nigel</td>
       <td>Nigel Daley</td>
@@ -1052,6 +1044,14 @@
     </tr>
 
     <tr>
+      <td>naganarasimha_gr</td>
+      <td><a href="http://people.apache.org/~naganarasimha_gr/">Naganarasimha G R</a></td>
+      <td>Huawei</td>
+      <td></td>
+      <td>+5.5</td>
+    </tr>
+
+    <tr>
       <td>nigel</td>
       <td><a href="http://people.apache.org/~nigel">Nigel Daley</a></td>
       <td></td>

Modified: hadoop/common/site/main/publish/bylaws.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/bylaws.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/index.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/index.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/issue_tracking.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/issue_tracking.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/linkmap.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/linkmap.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/mailing_lists.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/mailing_lists.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/privacy_policy.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/privacy_policy.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/releases.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/releases.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/version_control.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/version_control.pdf?rev=1737353&r1=1737352&r2=1737353&view=diff
==
Binary files - no diff available.

Modified: hadoop/common/site/main/publish/who.html
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/publish/who.html?rev=1737353&r1=1737352&r2=1737353&view=diff
==
--- hadoop/common/site/main/publish/who.html (original)
+++ hadoop/common/site/main/publish/who.html Fri Apr  1 12:06:23 2016
@@ -644,17 +644,6 @@ document.write("Last Published: " + docu
 
 
 
-   
-
- 
-nagnarasimha_gr
- http://people.apache.org/~naganarasimha_gr/;>Naganarasimha G R
- Huawei
- YARN
- +5.5
-   
-
-

 
  
@@ -990,7 +979,7 @@ document.write("Last Published: " + docu
 
 
 
-
+
 Emeritus Hadoop PMC Members
 
 
@@ -1005,7 +994,7 @@ document.write("Last Published: " + docu
 
 

-
+
 Hadoop Committers
 
 Hadoop's active committers include:
@@ -1674,6 +1663,17 @@ 

svn commit: r1737346 - /hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

2016-04-01 Thread naganarasimha_gr
Author: naganarasimha_gr
Date: Fri Apr  1 11:27:42 2016
New Revision: 1737346

URL: http://svn.apache.org/viewvc?rev=1737346&view=rev
Log:
Added my name(Naga) to the list

Modified:
hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml

Modified: hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml?rev=1737346&r1=1737345&r2=1737346&view=diff
==
--- hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml (original)
+++ hadoop/common/site/main/author/src/documentation/content/xdocs/who.xml Fri Apr  1 11:27:42 2016
@@ -303,6 +303,14 @@
       <td>-8</td>
     </tr>
 
+    <tr>
+      <td>nagnarasimha_gr</td>
+      <td><a href="http://people.apache.org/~naganarasimha_gr/">Naganarasimha G R</a></td>
+      <td>Huawei</td>
+      <td>YARN</td>
+      <td>+5.5</td>
+    </tr>
+
     <tr>
       <td>nigel</td>
       <td>Nigel Daley</td>




hadoop git commit: HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. Contributed by Aaron Peterson and harsh.

2016-04-01 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 10d8f8a39 -> 7c5b55d4e


HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. 
Contributed by Aaron Peterson and harsh.

(cherry picked from commit 256c82fe2981748cd0befc5490d8118d139908f9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7c5b55d4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7c5b55d4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7c5b55d4

Branch: refs/heads/branch-2
Commit: 7c5b55d4e5f4317abed0259909b89a32297836f8
Parents: 10d8f8a
Author: Harsh J 
Authored: Fri Apr 1 14:18:10 2016 +0530
Committer: Harsh J 
Committed: Fri Apr 1 14:35:58 2016 +0530

--
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 70 +++-
 .../src/site/markdown/tools/hadoop-aws/index.md |  7 ++
 2 files changed, 76 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7c5b55d4/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 4cda7cd..33db86e 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -26,6 +26,7 @@ import java.net.URI;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.ThreadFactory;
 import java.util.concurrent.ThreadPoolExecutor;
@@ -1186,7 +1187,7 @@ public class S3AFileSystem extends FileSystem {
     }
 
     ObjectMetadata srcom = s3.getObjectMetadata(bucket, srcKey);
-    final ObjectMetadata dstom = srcom.clone();
+    ObjectMetadata dstom = cloneObjectMetadata(srcom);
     if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
       dstom.setSSEAlgorithm(serverSideEncryptionAlgorithm);
     }
@@ -1293,6 +1294,73 @@ public class S3AFileSystem extends FileSystem {
   }
 
   /**
+   * Creates a copy of the passed {@link ObjectMetadata}.
+   * Does so without using the {@link ObjectMetadata#clone()} method,
+   * to avoid copying unnecessary headers.
+   * @param source the {@link ObjectMetadata} to copy
+   * @return a copy of {@link ObjectMetadata} with only relevant attributes
+   */
+  private ObjectMetadata cloneObjectMetadata(ObjectMetadata source) {
+    // This approach may be too brittle, especially if
+    // in future there are new attributes added to ObjectMetadata
+    // that we do not explicitly call to set here
+    ObjectMetadata ret = new ObjectMetadata();
+
+    // Non null attributes
+    ret.setContentLength(source.getContentLength());
+
+    // Possibly null attributes
+    // Allowing nulls to pass breaks it during later use
+    if (source.getCacheControl() != null) {
+      ret.setCacheControl(source.getCacheControl());
+    }
+    if (source.getContentDisposition() != null) {
+      ret.setContentDisposition(source.getContentDisposition());
+    }
+    if (source.getContentEncoding() != null) {
+      ret.setContentEncoding(source.getContentEncoding());
+    }
+    if (source.getContentMD5() != null) {
+      ret.setContentMD5(source.getContentMD5());
+    }
+    if (source.getContentType() != null) {
+      ret.setContentType(source.getContentType());
+    }
+    if (source.getExpirationTime() != null) {
+      ret.setExpirationTime(source.getExpirationTime());
+    }
+    if (source.getExpirationTimeRuleId() != null) {
+      ret.setExpirationTimeRuleId(source.getExpirationTimeRuleId());
+    }
+    if (source.getHttpExpiresDate() != null) {
+      ret.setHttpExpiresDate(source.getHttpExpiresDate());
+    }
+    if (source.getLastModified() != null) {
+      ret.setLastModified(source.getLastModified());
+    }
+    if (source.getOngoingRestore() != null) {
+      ret.setOngoingRestore(source.getOngoingRestore());
+    }
+    if (source.getRestoreExpirationTime() != null) {
+      ret.setRestoreExpirationTime(source.getRestoreExpirationTime());
+    }
+    if (source.getSSEAlgorithm() != null) {
+      ret.setSSEAlgorithm(source.getSSEAlgorithm());
+    }
+    if (source.getSSECustomerAlgorithm() != null) {
+      ret.setSSECustomerAlgorithm(source.getSSECustomerAlgorithm());
+    }
+    if (source.getSSECustomerKeyMd5() != null) {
+      ret.setSSECustomerKeyMd5(source.getSSECustomerKeyMd5());
+    }
+
+    for (Map.Entry<String, String> e : source.getUserMetadata().entrySet()) {
+      ret.addUserMetadata(e.getKey(), 

hadoop git commit: HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. Contributed by Aaron Peterson and harsh.

2016-04-01 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3488c4f2c -> 256c82fe2


HADOOP-11687. Ignore x-* and response headers when copying an Amazon S3 object. 
Contributed by Aaron Peterson and harsh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/256c82fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/256c82fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/256c82fe

Branch: refs/heads/trunk
Commit: 256c82fe2981748cd0befc5490d8118d139908f9
Parents: 3488c4f
Author: Harsh J 
Authored: Fri Apr 1 14:18:10 2016 +0530
Committer: Harsh J 
Committed: Fri Apr 1 14:18:10 2016 +0530

--
 .../org/apache/hadoop/fs/s3a/S3AFileSystem.java | 70 +++-
 .../src/site/markdown/tools/hadoop-aws/index.md |  7 ++
 2 files changed, 76 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/256c82fe/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
--
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
index 7ab6c79..6afb05d 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
@@ -26,6 +26,7 @@ import java.net.URI;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.List;
+import java.util.Map;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.TimeUnit;
 
@@ -1128,7 +1129,7 @@ public class S3AFileSystem extends FileSystem {
     }
 
     ObjectMetadata srcom = s3.getObjectMetadata(bucket, srcKey);
-    final ObjectMetadata dstom = srcom.clone();
+    ObjectMetadata dstom = cloneObjectMetadata(srcom);
     if (StringUtils.isNotBlank(serverSideEncryptionAlgorithm)) {
       dstom.setSSEAlgorithm(serverSideEncryptionAlgorithm);
     }
@@ -1235,6 +1236,73 @@ public class S3AFileSystem extends FileSystem {
   }
 
   /**
+   * Creates a copy of the passed {@link ObjectMetadata}.
+   * Does so without using the {@link ObjectMetadata#clone()} method,
+   * to avoid copying unnecessary headers.
+   * @param source the {@link ObjectMetadata} to copy
+   * @return a copy of {@link ObjectMetadata} with only relevant attributes
+   */
+  private ObjectMetadata cloneObjectMetadata(ObjectMetadata source) {
+    // This approach may be too brittle, especially if
+    // in future there are new attributes added to ObjectMetadata
+    // that we do not explicitly call to set here
+    ObjectMetadata ret = new ObjectMetadata();
+
+    // Non null attributes
+    ret.setContentLength(source.getContentLength());
+
+    // Possibly null attributes
+    // Allowing nulls to pass breaks it during later use
+    if (source.getCacheControl() != null) {
+      ret.setCacheControl(source.getCacheControl());
+    }
+    if (source.getContentDisposition() != null) {
+      ret.setContentDisposition(source.getContentDisposition());
+    }
+    if (source.getContentEncoding() != null) {
+      ret.setContentEncoding(source.getContentEncoding());
+    }
+    if (source.getContentMD5() != null) {
+      ret.setContentMD5(source.getContentMD5());
+    }
+    if (source.getContentType() != null) {
+      ret.setContentType(source.getContentType());
+    }
+    if (source.getExpirationTime() != null) {
+      ret.setExpirationTime(source.getExpirationTime());
+    }
+    if (source.getExpirationTimeRuleId() != null) {
+      ret.setExpirationTimeRuleId(source.getExpirationTimeRuleId());
+    }
+    if (source.getHttpExpiresDate() != null) {
+      ret.setHttpExpiresDate(source.getHttpExpiresDate());
+    }
+    if (source.getLastModified() != null) {
+      ret.setLastModified(source.getLastModified());
+    }
+    if (source.getOngoingRestore() != null) {
+      ret.setOngoingRestore(source.getOngoingRestore());
+    }
+    if (source.getRestoreExpirationTime() != null) {
+      ret.setRestoreExpirationTime(source.getRestoreExpirationTime());
+    }
+    if (source.getSSEAlgorithm() != null) {
+      ret.setSSEAlgorithm(source.getSSEAlgorithm());
+    }
+    if (source.getSSECustomerAlgorithm() != null) {
+      ret.setSSECustomerAlgorithm(source.getSSECustomerAlgorithm());
+    }
+    if (source.getSSECustomerKeyMd5() != null) {
+      ret.setSSECustomerKeyMd5(source.getSSECustomerKeyMd5());
+    }
+
+    for (Map.Entry<String, String> e : source.getUserMetadata().entrySet()) {
+      ret.addUserMetadata(e.getKey(), e.getValue());
+    }
+    return ret;
+  }
+
+  /**
    * Return the number of bytes that large input files should be optimally
    * be 
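The `cloneObjectMetadata` pattern above — copy a known whitelist of attributes and skip nulls, rather than `clone()` everything — generalizes beyond the AWS SDK. A minimal sketch with plain maps standing in for `ObjectMetadata` (the key names and types here are illustrative, not the SDK API):

```java
import java.util.HashMap;
import java.util.Map;

public class SelectiveCopySketch {
  // Copy only a known whitelist of attributes, skipping nulls, so that
  // response-only headers (x-*, Connection, ...) never reach the copy.
  static Map<String, String> cloneRelevant(Map<String, String> source) {
    Map<String, String> ret = new HashMap<>();
    for (String key : new String[] {"Content-Type", "Content-Encoding", "Cache-Control"}) {
      String v = source.get(key);
      if (v != null) {
        ret.put(key, v);
      }
    }
    return ret;
  }

  public static void main(String[] args) {
    Map<String, String> src = new HashMap<>();
    src.put("Content-Type", "text/plain");
    src.put("x-amz-request-id", "ABC123");   // response header: must not be copied
    System.out.println(cloneRelevant(src));  // {Content-Type=text/plain}
  }
}
```

The trade-off the patch comments call out applies here too: a whitelist silently drops attributes added later, which is exactly why the null-guarded copies are spelled out one by one instead of iterating over everything.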

[2/2] hadoop git commit: Revert "YARN-4857. Add missing default configuration regarding preemption of CapacityScheduler. Contributed by Kai Sasaki."

2016-04-01 Thread vvasudev
Revert "YARN-4857. Add missing default configuration regarding preemption of 
CapacityScheduler. Contributed by Kai Sasaki."

This reverts commit 0064cba169d1bb761f6e81ee86830be598d7c500.

(cherry picked from commit 3488c4f2c9767684eb1007bb00250f474c06d5d8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/10d8f8a3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/10d8f8a3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/10d8f8a3

Branch: refs/heads/branch-2
Commit: 10d8f8a39c51d3ac39e646e016138f67df12fd45
Parents: 80d122d
Author: Varun Vasudev 
Authored: Fri Apr 1 12:20:40 2016 +0530
Committer: Varun Vasudev 
Committed: Fri Apr 1 12:21:49 2016 +0530

--
 .../src/main/resources/yarn-default.xml | 58 
 1 file changed, 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/10d8f8a3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index cb3c73a..506cf3d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -908,64 +908,6 @@
 60
   
 
-  <property>
-    <description>
-    If true, run the policy but do not affect the cluster with preemption and kill events.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.observe_only</name>
-    <value>false</value>
-  </property>
-
-  <property>
-    <description>
-    Time in milliseconds between invocations of this ProportionalCapacityPreemptionPolicy
-    policy.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
-    <value>3000</value>
-  </property>
-
-  <property>
-    <description>
-    Time in milliseconds between requesting a preemption from an application and killing
-    the container.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
-    <value>15000</value>
-  </property>
-
-  <property>
-    <description>
-    Maximum percentage of resources preempted in a single round. By controlling this valueone
-    can throttle the pace at which containers are reclaimed from the cluster. After computing
-    the total desired preemption, the policy scales it back within this limit.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
-    <value>0.1</value>
-  </property>
-
-  <property>
-    <description>
-    Maximum amount of resources above the target capacity ignored for preemption.
-    This defines a deadzone around the target capacity that helps prevent thrashing and
-    oscillations around the computed target balance. High values would slow the time to capacity
-    and (absent natural.completions) it might prevent convergence to guaranteed capacity.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity</name>
-    <value>0.1</value>
-  </property>
-
-  <property>
-    <description>
-    Given a computed preemption target, account for containers naturally expiring and preempt
-    only this percentage of the delta. This determines the rate of geometric convergence into
-    the deadzone (MAX_IGNORED_OVER_CAPACITY). For example, a termination factor of 0.5 will reclaim
-    almost 95% of resources within 5 * #WAIT_TIME_BEFORE_KILL, even absent natural termination.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</name>
-    <value>0.2</value>
-  </property>
-
   
 
   



[1/2] hadoop git commit: Revert "YARN-4857. Add missing default configuration regarding preemption of CapacityScheduler. Contributed by Kai Sasaki."

2016-04-01 Thread vvasudev
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 80d122d7f -> 10d8f8a39
  refs/heads/trunk a8d8b80a2 -> 3488c4f2c


Revert "YARN-4857. Add missing default configuration regarding preemption of 
CapacityScheduler. Contributed by Kai Sasaki."

This reverts commit 0064cba169d1bb761f6e81ee86830be598d7c500.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3488c4f2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3488c4f2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3488c4f2

Branch: refs/heads/trunk
Commit: 3488c4f2c9767684eb1007bb00250f474c06d5d8
Parents: a8d8b80
Author: Varun Vasudev 
Authored: Fri Apr 1 12:20:40 2016 +0530
Committer: Varun Vasudev 
Committed: Fri Apr 1 12:20:40 2016 +0530

--
 .../src/main/resources/yarn-default.xml | 58 
 1 file changed, 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3488c4f2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index cb3c73a..506cf3d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -908,64 +908,6 @@
 60
   
 
-  <property>
-    <description>
-    If true, run the policy but do not affect the cluster with preemption and kill events.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.observe_only</name>
-    <value>false</value>
-  </property>
-
-  <property>
-    <description>
-    Time in milliseconds between invocations of this ProportionalCapacityPreemptionPolicy
-    policy.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
-    <value>3000</value>
-  </property>
-
-  <property>
-    <description>
-    Time in milliseconds between requesting a preemption from an application and killing
-    the container.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</name>
-    <value>15000</value>
-  </property>
-
-  <property>
-    <description>
-    Maximum percentage of resources preempted in a single round. By controlling this valueone
-    can throttle the pace at which containers are reclaimed from the cluster. After computing
-    the total desired preemption, the policy scales it back within this limit.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
-    <value>0.1</value>
-  </property>
-
-  <property>
-    <description>
-    Maximum amount of resources above the target capacity ignored for preemption.
-    This defines a deadzone around the target capacity that helps prevent thrashing and
-    oscillations around the computed target balance. High values would slow the time to capacity
-    and (absent natural.completions) it might prevent convergence to guaranteed capacity.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity</name>
-    <value>0.1</value>
-  </property>
-
-  <property>
-    <description>
-    Given a computed preemption target, account for containers naturally expiring and preempt
-    only this percentage of the delta. This determines the rate of geometric convergence into
-    the deadzone (MAX_IGNORED_OVER_CAPACITY). For example, a termination factor of 0.5 will reclaim
-    almost 95% of resources within 5 * #WAIT_TIME_BEFORE_KILL, even absent natural termination.
-    </description>
-    <name>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</name>
-    <value>0.2</value>
-  </property>
-
   
 
   

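The natural_termination_factor description above quotes a specific convergence figure. As a sanity check on that arithmetic, the geometric series can be sketched in a few lines of Python (`reclaimed_fraction` is a hypothetical helper for illustration, not part of Hadoop):

```python
def reclaimed_fraction(termination_factor, rounds):
    # Each monitoring round preempts `termination_factor` of the still-outstanding
    # preemption delta, so the remainder shrinks geometrically.
    remaining = 1.0
    for _ in range(rounds):
        remaining *= 1.0 - termination_factor
    return 1.0 - remaining

# With a factor of 0.5, five rounds (i.e. 5 * max_wait_before_kill of wall-clock
# time) reclaim 1 - 0.5**5 = 0.96875 of the delta -- the "almost 95%" cited above.
print(reclaimed_fraction(0.5, 5))
```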


[Hadoop Wiki] Update of "Books" by Packt Publishing

2016-04-01 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "Books" page has been changed by Packt Publishing:
https://wiki.apache.org/hadoop/Books?action=diff&rev1=35&rev2=36

  }}}
  
  
+ === Hadoop Real-World Solutions Cookbook - Second Edition ===
+ '''Name:''' [[https://www.packtpub.com/big-data-and-business-intelligence/hadoop-real-world-solutions-cookbook-second-edition|Hadoop Real-World Solutions Cookbook - Second Edition]]
+ 
+ '''Author:''' Tanmay Deshpande
+ 
+ '''Publisher:''' Packt Publishing
+ 
+ '''Date of Publishing:''' March 2016
+ 
+ The book covers recipes based on the latest versions of Apache Hadoop 2.X, YARN, Hive, Pig, Sqoop, Flume, Apache Spark, Mahout, etc.
+ 
  === Hadoop Security: Protecting Your Big Data Platform ===
  
  '''Name:''' [[https://www.gitbook.com/book/steveloughran/kerberos_and_hadoop/details|Hadoop Security: Protecting Your Big Data Platform]]