hadoop git commit: HDFS-8217. During block recovery for truncate Log new Block Id in case of copy-on-truncate is true. (Contributed by Vinayakumar B)

2015-04-24 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/trunk a8898445d -> 262c1bc33


HDFS-8217. During block recovery for truncate Log new Block Id in case of 
copy-on-truncate is true. (Contributed by Vinayakumar B)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/262c1bc3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/262c1bc3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/262c1bc3

Branch: refs/heads/trunk
Commit: 262c1bc3398ce2ede03f9d86fc97c35ca7a8e9db
Parents: a889844
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Apr 24 12:16:41 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Apr 24 12:16:41 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../apache/hadoop/hdfs/server/datanode/DataNode.java|  4 +++-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java   | 12 +++-
 3 files changed, 13 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/262c1bc3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b0a0a50..0e00025 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -545,6 +545,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7993. Provide each Replica details in fsck (J.Andreina via 
vinayakumarb)
 
+HDFS-8217. During block recovery for truncate Log new Block Id in case of
+copy-on-truncate is true. (vinayakumarb)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/262c1bc3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index e81da52..23ab43a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -2840,7 +2840,9 @@ public class DataNode extends ReconfigurableBase
 
 LOG.info(who + " calls recoverBlock(" + block
     + ", targets=[" + Joiner.on(", ").join(targets) + "]"
-    + ", newGenerationStamp=" + rb.getNewGenerationStamp() + ")");
+    + ((rb.getNewBlock() == null) ? ", newGenerationStamp="
+    + rb.getNewGenerationStamp() : ", newBlock=" + rb.getNewBlock())
+    + ")");
   }
 
   @Override // ClientDataNodeProtocol
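
The DataNode hunk above changes the `recoverBlock` log line so that, when copy-on-truncate is in effect (i.e. `rb.getNewBlock()` is non-null), the new block — which carries the new block id — is logged instead of only the new generation stamp. A minimal, self-contained sketch of that formatting logic (the `RecoveringBlock` accessors are replaced here by plain parameters for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class RecoverBlockLogSketch {

  // Hypothetical stand-in for the message built in DataNode#recoverBlock:
  // newBlock is non-null only for copy-on-truncate recovery.
  static String format(String who, String block, List<String> targets,
      long newGenerationStamp, String newBlock) {
    return who + " calls recoverBlock(" + block
        + ", targets=[" + String.join(", ", targets) + "]"
        + ((newBlock == null)
            ? ", newGenerationStamp=" + newGenerationStamp
            : ", newBlock=" + newBlock)
        + ")";
  }

  public static void main(String[] args) {
    List<String> targets = Arrays.asList("dn1:50010", "dn2:50010");
    // Plain truncate: recovery reuses the block, so log the generation stamp.
    System.out.println(
        format("NN", "blk_1073741825_1001", targets, 1002L, null));
    // Copy-on-truncate: a new block id was allocated, so log the new block.
    System.out.println(
        format("NN", "blk_1073741825_1001", targets, 1002L,
            "blk_1073741826_1002"));
  }
}
```

Before this change the log always printed the generation stamp, which is not the interesting value when a brand-new block id has been allocated for the truncated copy.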

http://git-wip-us.apache.org/repos/asf/hadoop/blob/262c1bc3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 3599fad..4477dc4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4229,6 +4229,8 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 String src = "";
 waitForLoadingFSImage();
 writeLock();
+boolean copyTruncate = false;
+BlockInfoContiguousUnderConstruction truncatedBlock = null;
 try {
   checkOperation(OperationCategory.WRITE);
   // If a DN tries to commit to the standby, the recovery will
@@ -4285,11 +4287,10 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 return;
   }
 
-  BlockInfoContiguousUnderConstruction truncatedBlock =
-  (BlockInfoContiguousUnderConstruction) iFile.getLastBlock();
+  truncatedBlock = (BlockInfoContiguousUnderConstruction) iFile
+  .getLastBlock();
   long recoveryId = truncatedBlock.getBlockRecoveryId();
-  boolean copyTruncate =
-  truncatedBlock.getBlockId() != storedBlock.getBlockId();
+  copyTruncate = truncatedBlock.getBlockId() != storedBlock.getBlockId();
   if(recoveryId != newgenerationstamp) {
 throw new IOException("The recovery id " + newgenerationstamp
   + " does not match current recovery id "
@@ -4382,7 +4383,8 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 if (closeFile) {
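
The FSNamesystem hunk hoists `copyTruncate` and `truncatedBlock` above the `try` block and computes whether this recovery is a copy-on-truncate by comparing block ids, after first checking that the reported generation stamp matches the expected recovery id. A hedged, self-contained sketch of that decision (names simplified; the real code works on `BlockInfoContiguousUnderConstruction` and the stored block):

```java
import java.io.IOException;

public class TruncateRecoverySketch {

  // Sketch of the check made in commitBlockSynchronization: reject a stale
  // recovery attempt, then detect copy-on-truncate by block id.
  static boolean isCopyTruncate(long truncatedBlockId, long storedBlockId,
      long recoveryId, long newGenerationStamp) throws IOException {
    if (recoveryId != newGenerationStamp) {
      throw new IOException("The recovery id " + newGenerationStamp
          + " does not match current recovery id " + recoveryId);
    }
    // Copy-on-truncate allocates a fresh block id for the truncated copy,
    // so the file's last block id differs from the stored block's id.
    return truncatedBlockId != storedBlockId;
  }

  public static void main(String[] args) throws IOException {
    // Differing ids: the truncate created a new block (copy-on-truncate).
    System.out.println(isCopyTruncate(1073741826L, 1073741825L, 1002L, 1002L));
    // Same id: in-place truncate of the existing block.
    System.out.println(isCopyTruncate(1073741825L, 1073741825L, 1002L, 1002L));
  }
}
```

Hoisting the two variables out of the `try` keeps them in scope for the later `closeFile` handling (outside the hunk shown in the mail), which is what this refactoring enables.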
 

hadoop git commit: HDFS-8217. During block recovery for truncate Log new Block Id in case of copy-on-truncate is true. (Contributed by Vinayakumar B)

2015-04-24 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3d0385c3c -> 68063cac3


HDFS-8217. During block recovery for truncate Log new Block Id in case of 
copy-on-truncate is true. (Contributed by Vinayakumar B)

(cherry picked from commit 262c1bc3398ce2ede03f9d86fc97c35ca7a8e9db)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/68063cac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/68063cac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/68063cac

Branch: refs/heads/branch-2
Commit: 68063cac3e701d84217cfae8d15ed214af398803
Parents: 3d0385c
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Apr 24 12:16:41 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Apr 24 12:18:04 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../apache/hadoop/hdfs/server/datanode/DataNode.java|  4 +++-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java   | 12 +++-
 3 files changed, 13 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/68063cac/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b3b0607..913040f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -227,6 +227,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7993. Provide each Replica details in fsck (J.Andreina via 
vinayakumarb)
 
+HDFS-8217. During block recovery for truncate Log new Block Id in case of
+copy-on-truncate is true. (vinayakumarb)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/68063cac/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index a13a31f..ba02be2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -2847,7 +2847,9 @@ public class DataNode extends ReconfigurableBase
 
 LOG.info(who + " calls recoverBlock(" + block
     + ", targets=[" + Joiner.on(", ").join(targets) + "]"
-    + ", newGenerationStamp=" + rb.getNewGenerationStamp() + ")");
+    + ((rb.getNewBlock() == null) ? ", newGenerationStamp="
+    + rb.getNewGenerationStamp() : ", newBlock=" + rb.getNewBlock())
+    + ")");
   }
 
   @Override // ClientDataNodeProtocol

http://git-wip-us.apache.org/repos/asf/hadoop/blob/68063cac/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 4249fec..f175301 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4225,6 +4225,8 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 String src = "";
 waitForLoadingFSImage();
 writeLock();
+boolean copyTruncate = false;
+BlockInfoContiguousUnderConstruction truncatedBlock = null;
 try {
   checkOperation(OperationCategory.WRITE);
   // If a DN tries to commit to the standby, the recovery will
@@ -4281,11 +4283,10 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 return;
   }
 
-  BlockInfoContiguousUnderConstruction truncatedBlock =
-  (BlockInfoContiguousUnderConstruction) iFile.getLastBlock();
+  truncatedBlock = (BlockInfoContiguousUnderConstruction) iFile
+  .getLastBlock();
   long recoveryId = truncatedBlock.getBlockRecoveryId();
-  boolean copyTruncate =
-  truncatedBlock.getBlockId() != storedBlock.getBlockId();
+  copyTruncate = truncatedBlock.getBlockId() != storedBlock.getBlockId();
   if(recoveryId != newgenerationstamp) {
 throw new IOException("The recovery id " + newgenerationstamp
   + " does not match current recovery id "
@@ -4378,7 +4379,8 @@ public class 

hadoop git commit: HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from document. Contributed by J.Andreina.

2015-04-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk c8d72907f -> 91b97c21c


HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade' from 
document. Contributed by J.Andreina.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/91b97c21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/91b97c21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/91b97c21

Branch: refs/heads/trunk
Commit: 91b97c21c9271629dae7515a6a58c35d13b777ff
Parents: c8d7290
Author: Akira Ajisaka aajis...@apache.org
Authored: Fri Apr 24 20:32:26 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Fri Apr 24 20:32:55 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +++
 .../src/site/xdoc/HdfsRollingUpgrade.xml| 26 +++-
 2 files changed, 6 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/91b97c21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b442bad..56f8ec3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -315,6 +315,9 @@ Trunk (Unreleased)
 HDFS-4681. 
TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
 fails using IBM java (Ayappan via aw)
 
+HDFS-8110. Remove unsupported 'hdfs namenode -rollingUpgrade downgrade'
+from document. (J.Andreina via aajisaka)
+
 Release 2.8.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/91b97c21/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
index 1c3dc60..f0b0ccf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
@@ -190,14 +190,12 @@
 only if both the namenode layout version and the datenode layout version
 are not changed between these two releases.
   </p>
-
-  <subsection name="Downgrade without Downtime" id="DowngradeWithoutDowntime">
   <p>
     In a HA cluster,
     when a rolling upgrade from an old software release to a new software release is in progress,
     it is possible to downgrade, in a rolling fashion, the upgraded machines back to the old software release.
     Same as before, suppose <em>NN1</em> and <em>NN2</em> are respectively in active and standby states.
-    Below are the steps for rolling downgrade:
+    Below are the steps for rolling downgrade without downtime:
   </p>
   <ol>
   <li>Downgrade <em>DNs</em><ol>
@@ -214,16 +212,12 @@
     </ol></li>
   <li>Downgrade Active and Standby <em>NNs</em><ol>
     <li>Shutdown and downgrade <em>NN2</em>.</li>
-    <li>Start <em>NN2</em> as standby normally. (Note that it is incorrect to use the
-      <a href="#namenode_-rollingUpgrade"><code>-rollingUpgrade downgrade</code></a>
-      option here.)
+    <li>Start <em>NN2</em> as standby normally.
     </li>
     <li>Failover from <em>NN1</em> to <em>NN2</em>
       so that <em>NN2</em> becomes active and <em>NN1</em> becomes standby.</li>
     <li>Shutdown and upgrade <em>NN1</em>.</li>
-    <li>Start <em>NN1</em> as standby normally. (Note that it is incorrect to use the
-      <a href="#namenode_-rollingUpgrade"><code>-rollingUpgrade downgrade</code></a>
-      option here.)
+    <li>Start <em>NN1</em> as standby normally.
     </li>
     </ol></li>
   <li>Finalize Rolling Downgrade<ul>
@@ -236,20 +230,6 @@
     since protocols may be changed in a backward compatible manner but not forward compatible,
     i.e. old datanodes can talk to the new namenodes but not vice versa.
   </p>
-  </subsection>
-  <subsection name="Downgrade with Downtime" id="DowngradeWithDowntime">
-  <p>
-    Administrator may choose to first shutdown the cluster and then downgrade it.
-    The following are the steps:
-  </p>
-  <ol>
-  <li>Shutdown all <em>NNs</em> and <em>DNs</em>.</li>
-  <li>Restore the pre-upgrade release in all machines.</li>
-  <li>Start <em>NNs</em> with the
-    <a href="#namenode_-rollingUpgrade"><code>-rollingUpgrade downgrade</code></a> option.</li>
-  <li>Start <em>DNs</em> normally.</li>
-  </ol>
-  </subsection>
   </section>

   <section name="Rollback" id="Rollback">



hadoop git commit: Fix commit version for YARN-3537

2015-04-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5e093f0d4 -> 78fe6e57c


Fix commit version for YARN-3537


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/78fe6e57
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/78fe6e57
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/78fe6e57

Branch: refs/heads/trunk
Commit: 78fe6e57c7697dae192d74e2f5b91040a3579dfd
Parents: 5e093f0
Author: Jason Lowe jl...@apache.org
Authored: Fri Apr 24 22:07:53 2015 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Apr 24 22:07:53 2015 +

--
 hadoop-yarn-project/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/78fe6e57/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 001396f..6da55b5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -267,6 +267,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3444. Fix typo capabililty. (Gabor Liptak via aajisaka)
 
+YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
+invoked (Brahma Reddy Battula via jlowe)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -290,9 +293,6 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
-YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
-invoked (Brahma Reddy Battula via jlowe)
-
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES



[2/2] hadoop git commit: YARN-2498. Respect labels in preemption policy of capacity scheduler for inter-queue preemption. Contributed by Wangda Tan (cherry picked from commit d497f6ea2be559aa31ed76f37

2015-04-24 Thread jianhe
YARN-2498. Respect labels in preemption policy of capacity scheduler for 
inter-queue preemption. Contributed by Wangda Tan
(cherry picked from commit d497f6ea2be559aa31ed76f37ae949dbfabe2a51)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9bf09b33
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9bf09b33
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9bf09b33

Branch: refs/heads/branch-2
Commit: 9bf09b334d90bc88e0e365774eb0cadc4eed549c
Parents: 932cff6
Author: Jian He jia...@apache.org
Authored: Fri Apr 24 17:03:13 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Apr 24 17:03:57 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |3 +
 .../ProportionalCapacityPreemptionPolicy.java   |  585 +
 .../rmcontainer/RMContainerImpl.java|   28 +-
 .../scheduler/capacity/CapacityScheduler.java   |2 +-
 .../scheduler/capacity/LeafQueue.java   |   70 +-
 .../scheduler/common/AssignmentInformation.java |   31 +-
 ...estProportionalCapacityPreemptionPolicy.java |   94 +-
 ...pacityPreemptionPolicyForNodePartitions.java | 1211 ++
 .../scheduler/capacity/TestChildQueueOrder.java |2 +-
 .../scheduler/capacity/TestLeafQueue.java   |4 +-
 .../TestNodeLabelContainerAllocation.java   |   16 +
 .../scheduler/capacity/TestParentQueue.java |2 +-
 12 files changed, 1750 insertions(+), 298 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bf09b33/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1f486e4..7964807 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -54,6 +54,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3319. Implement a FairOrderingPolicy. (Craig Welch via wangda)
 
+YARN-2498. Respect labels in preemption policy of capacity scheduler for
+inter-queue preemption. (Wangda Tan via jianhe)
+
   IMPROVEMENTS
 
 YARN-1880. Cleanup TestApplicationClientProtocolOnHA

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bf09b33/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 2ab4197..1f47b5f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity;
 
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
@@ -26,11 +27,10 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 import java.util.PriorityQueue;
 import java.util.Set;
+import java.util.TreeSet;
 
-import org.apache.commons.collections.map.HashedMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -40,7 +40,6 @@ import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
-import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
@@ -49,7 +48,9 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerPreemptE
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.PreemptableResourceScheduler;
 import 

[1/2] hadoop git commit: YARN-2498. Respect labels in preemption policy of capacity scheduler for inter-queue preemption. Contributed by Wangda Tan

2015-04-24 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk dcc5455e0 -> d497f6ea2


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d497f6ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
new file mode 100644
index 000..e13320c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
@@ -0,0 +1,1211 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity;
+
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.MONITORING_INTERVAL;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.NATURAL_TERMINATION_FACTOR;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.TOTAL_PREEMPTION_PER_ROUND;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.WAIT_TIME_BEFORE_KILL;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.argThat;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.Container;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.event.EventHandler;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerPreemptEvent;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
+import 

[1/2] hadoop git commit: YARN-2498. Respect labels in preemption policy of capacity scheduler for inter-queue preemption. Contributed by Wangda Tan (cherry picked from commit d497f6ea2be559aa31ed76f37

2015-04-24 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 932cff610 -> 9bf09b334


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9bf09b33/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
new file mode 100644
index 000..e13320c
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyForNodePartitions.java
@@ -0,0 +1,1211 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity;
+
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.MONITORING_INTERVAL;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.NATURAL_TERMINATION_FACTOR;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.TOTAL_PREEMPTION_PER_ROUND;
+import static 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy.WAIT_TIME_BEFORE_KILL;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.argThat;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.Container;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.event.EventHandler;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import 
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
+import 
org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerPreemptEvent;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
+import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;
+import 

hadoop git commit: Moving YARN-3351, YARN-3382, YARN-3472, MAPREDUCE-6238 to the 2.7.1 CHANGES.txt sections given the recent merge into branch-2.7.

2015-04-24 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/trunk d497f6ea2 -> 2f82ae042


Moving YARN-3351, YARN-3382, YARN-3472, MAPREDUCE-6238 to the 2.7.1 CHANGES.txt
sections given the recent merge into branch-2.7.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f82ae04
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f82ae04
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f82ae04

Branch: refs/heads/trunk
Commit: 2f82ae042a6f3110742aaa57c076bb9ebd7888d1
Parents: d497f6e
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Fri Apr 24 17:18:46 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Apr 24 17:18:46 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt |  6 +++---
 hadoop-yarn-project/CHANGES.txt  | 17 +
 2 files changed, 12 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f82ae04/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index e5acd1e..5b26910 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -334,9 +334,6 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6266. Job#getTrackingURL should consistently return a proper URL
 (rchiang via rkanter)
 
-MAPREDUCE-6238. MR2 can't run local jobs with -libjars command options
-which is a regression from MR1 (zxu via rkanter)
-
 MAPREDUCE-6293. Set job classloader on uber-job's LocalContainerLauncher
 event thread. (Sangjin Lee via gera)
 
@@ -360,6 +357,9 @@ Release 2.7.1 - UNRELEASED
 
 MAPREDUCE-6300. Task list sort by task id broken. (Siqi Li via aajisaka)
 
+MAPREDUCE-6238. MR2 can't run local jobs with -libjars command options
+which is a regression from MR1 (zxu via rkanter)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f82ae04/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a830771..a626f82 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -193,8 +193,6 @@ Release 2.8.0 - UNRELEASED
 YARN-3205 FileSystemRMStateStore should disable FileSystem Cache to avoid
 get a Filesystem with an old configuration. (Zhihai Xu via ozawa)
 
-YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via kasha)
-
 YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to 
 fully qualified path. (Xuan Gong via junping_du)
 
@@ -238,12 +236,6 @@ Release 2.8.0 - UNRELEASED
 YARN-3465. Use LinkedHashMap to preserve order of resource requests. 
 (Zhihai Xu via kasha)
 
-YARN-3382. Some of UserMetricsInfo metrics are incorrectly set to root
-queue metrics. (Rohit Agarwal via jianhe)
-
-YARN-3472. Fixed possible leak in DelegationTokenRenewer#allTokens.
-(Rohith Sharmaks via jianhe)
-
 YARN-3266. RMContext#inactiveNodes should have NodeId as map key.
 (Chengbing Liu via jianhe)
 
@@ -287,6 +279,7 @@ Release 2.7.1 - UNRELEASED
   OPTIMIZATIONS
 
   BUG FIXES
+
 YARN-3487. CapacityScheduler scheduler lock obtained unnecessarily when 
 calling getQueue (Jason Lowe via wangda)
 
@@ -299,6 +292,14 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
+YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via kasha)
+
+YARN-3382. Some of UserMetricsInfo metrics are incorrectly set to root
+queue metrics. (Rohit Agarwal via jianhe)
+
+YARN-3472. Fixed possible leak in DelegationTokenRenewer#allTokens.
+(Rohith Sharmaks via jianhe)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES



hadoop git commit: Moving YARN-3351, YARN-3382, YARN-3472, MAPREDUCE-6238 to the 2.7.1 CHANGES.txt sections given the recent merge into branch-2.7.

2015-04-24 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9bf09b334 -> db1a33e95


Moving YARN-3351, YARN-3382, YARN-3472, MAPREDUCE-6238 to the 2.7.1 CHANGES.txt
sections given the recent merge into branch-2.7.

(cherry picked from commit 2f82ae042a6f3110742aaa57c076bb9ebd7888d1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/db1a33e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/db1a33e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/db1a33e9

Branch: refs/heads/branch-2
Commit: db1a33e956675fad2737517e2d9dea6c1cbae2c4
Parents: 9bf09b3
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Fri Apr 24 17:18:46 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Fri Apr 24 17:20:02 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt |  6 +++---
 hadoop-yarn-project/CHANGES.txt  | 17 +
 2 files changed, 12 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/db1a33e9/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b5d1ba6..0a0bd32 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -89,9 +89,6 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6266. Job#getTrackingURL should consistently return a proper URL
 (rchiang via rkanter)
 
-MAPREDUCE-6238. MR2 can't run local jobs with -libjars command options
-which is a regression from MR1 (zxu via rkanter)
-
 MAPREDUCE-6293. Set job classloader on uber-job's LocalContainerLauncher
 event thread. (Sangjin Lee via gera)
 
@@ -115,6 +112,9 @@ Release 2.7.1 - UNRELEASED
 
 MAPREDUCE-6300. Task list sort by task id broken. (Siqi Li via aajisaka)
 
+MAPREDUCE-6238. MR2 can't run local jobs with -libjars command options
+which is a regression from MR1 (zxu via rkanter)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/db1a33e9/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 7964807..d8a8058 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -145,8 +145,6 @@ Release 2.8.0 - UNRELEASED
 YARN-3205 FileSystemRMStateStore should disable FileSystem Cache to avoid
 get a Filesystem with an old configuration. (Zhihai Xu via ozawa)
 
-YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via kasha)
-
 YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to 
 fully qualified path. (Xuan Gong via junping_du)
 
@@ -190,12 +188,6 @@ Release 2.8.0 - UNRELEASED
 YARN-3465. Use LinkedHashMap to preserve order of resource requests. 
 (Zhihai Xu via kasha)
 
-YARN-3382. Some of UserMetricsInfo metrics are incorrectly set to root
-queue metrics. (Rohit Agarwal via jianhe)
-
-YARN-3472. Fixed possible leak in DelegationTokenRenewer#allTokens.
-(Rohith Sharmaks via jianhe)
-
 YARN-3266. RMContext#inactiveNodes should have NodeId as map key.
 (Chengbing Liu via jianhe)
 
@@ -239,6 +231,7 @@ Release 2.7.1 - UNRELEASED
   OPTIMIZATIONS
 
   BUG FIXES
+
 YARN-3487. CapacityScheduler scheduler lock obtained unnecessarily when 
 calling getQueue (Jason Lowe via wangda)
 
@@ -254,6 +247,14 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
+YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via kasha)
+
+YARN-3382. Some of UserMetricsInfo metrics are incorrectly set to root
+queue metrics. (Rohit Agarwal via jianhe)
+
+YARN-3472. Fixed possible leak in DelegationTokenRenewer#allTokens.
+(Rohith Sharmaks via jianhe)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES



[1/2] hadoop git commit: HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu Engineer)

2015-04-24 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 3884948d6 -> 932cff610
  refs/heads/trunk 4a3dabd94 -> dcc5455e0


HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu 
Engineer)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dcc5455e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dcc5455e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dcc5455e

Branch: refs/heads/trunk
Commit: dcc5455e07be75ca44eb6a33d4e706eec11b9905
Parents: 4a3dabd
Author: Arpit Agarwal a...@apache.org
Authored: Fri Apr 24 16:47:48 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Fri Apr 24 16:47:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |  4 +-
 .../hdfs/server/datanode/TestDataNodeUUID.java  | 65 
 3 files changed, 70 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcc5455e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 317211e..a7b5ed3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -560,6 +560,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte.
 (Zhe Zhang via wang)
 
+HDFS-8211. DataNode UUID is always null in the JMX counter. (Anu Engineer
+via Arpit Agarwal)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcc5455e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index 23ab43a..2401d9c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1226,7 +1226,7 @@ public class DataNode extends ReconfigurableBase
*
* @throws IOException
*/
-  private synchronized void checkDatanodeUuid() throws IOException {
+  synchronized void checkDatanodeUuid() throws IOException {
 if (storage.getDatanodeUuid() == null) {
   storage.setDatanodeUuid(generateUuid());
   storage.writeAll();
@@ -3159,7 +3159,7 @@ public class DataNode extends ReconfigurableBase
   }
 
   public String getDatanodeUuid() {
-return id == null ? null : id.getDatanodeUuid();
+return storage == null ? null : storage.getDatanodeUuid();
   }
 
   boolean shouldRun() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dcc5455e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
new file mode 100644
index 000..34e53a3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.junit.Test;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+
+import static 
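The core of the fix above is the one-line change in `getDatanodeUuid()`: the JMX counter was reading the UUID from the registration object `id`, which stays null until the DataNode registers with the NameNode, while `storage` is initialized much earlier in startup. A minimal, self-contained sketch of that pattern, using hypothetical stand-in classes rather than the real HDFS types:

```java
// Sketch of the JMX-counter bug pattern fixed by HDFS-8211.
// DataStorage and DatanodeID here are illustrative stand-ins, not the HDFS classes.
public class UuidSourceSketch {
    static class DataStorage {
        private String datanodeUuid = "uuid-1234";        // set during storage init
        String getDatanodeUuid() { return datanodeUuid; }
    }
    static class DatanodeID {
        // only populated after registration with the NameNode
        String getDatanodeUuid() { return null; }
    }

    private final DataStorage storage = new DataStorage();
    private DatanodeID id = null;                         // null until registration

    // Old behavior: depends on registration having happened, so JMX often saw null.
    String uuidFromId() { return id == null ? null : id.getDatanodeUuid(); }

    // Fixed behavior: storage is initialized early, so the UUID is available.
    String uuidFromStorage() { return storage == null ? null : storage.getDatanodeUuid(); }

    public static void main(String[] args) {
        UuidSourceSketch dn = new UuidSourceSketch();
        System.out.println(dn.uuidFromId());       // null before registration
        System.out.println(dn.uuidFromStorage());  // uuid-1234
    }
}
```

The point is only that the early-initialized source yields a usable UUID where the late-bound one yields null; the patch's visibility change on `checkDatanodeUuid()` (private to package-private) exists so the new test can exercise this directly.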

hadoop git commit: YARN-3390. Reuse TimelineCollectorManager for RM (Zhijie Shen via sjlee)

2015-04-24 Thread sjlee
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 5eeb2b156 -> 582211888


YARN-3390. Reuse TimelineCollectorManager for RM (Zhijie Shen via sjlee)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/58221188
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/58221188
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/58221188

Branch: refs/heads/YARN-2928
Commit: 58221188811e0f61d842dac89e1f4ad4fd8aa182
Parents: 5eeb2b1
Author: Sangjin Lee sj...@apache.org
Authored: Fri Apr 24 16:56:23 2015 -0700
Committer: Sangjin Lee sj...@apache.org
Committed: Fri Apr 24 16:56:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../resourcemanager/RMActiveServiceContext.java |  13 +-
 .../server/resourcemanager/RMAppManager.java|   3 +-
 .../yarn/server/resourcemanager/RMContext.java  |   7 +-
 .../server/resourcemanager/RMContextImpl.java   |  12 +-
 .../server/resourcemanager/ResourceManager.java |  14 +-
 .../server/resourcemanager/rmapp/RMAppImpl.java |  15 ++
 .../timelineservice/RMTimelineCollector.java| 111 
 .../RMTimelineCollectorManager.java |  75 ++
 .../TestTimelineServiceClientIntegration.java   |  12 +-
 .../collector/AppLevelTimelineCollector.java|   2 +-
 .../collector/NodeTimelineCollectorManager.java | 223 
 .../PerNodeTimelineCollectorsAuxService.java|  15 +-
 .../collector/TimelineCollector.java|   2 +-
 .../collector/TimelineCollectorManager.java | 259 +++
 .../collector/TimelineCollectorWebService.java  |  23 +-
 .../TestNMTimelineCollectorManager.java | 160 
 ...TestPerNodeTimelineCollectorsAuxService.java |  24 +-
 .../collector/TestTimelineCollectorManager.java | 160 
 19 files changed, 578 insertions(+), 554 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/58221188/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index a3ca475..408b8e6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -53,6 +53,8 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3391. Clearly define flow ID/ flow run / flow version in API and 
storage.
 (Zhijie Shen via junping_du)
 
+YARN-3390. Reuse TimelineCollectorManager for RM (Zhijie Shen via sjlee)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/58221188/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
index 1d95204..00768ed 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMActiveServiceContext.java
@@ -47,7 +47,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRen
 import org.apache.hadoop.yarn.server.resourcemanager.security.NMTokenSecretManagerInRM;
 import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager;
 import org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager;
-import org.apache.hadoop.yarn.server.resourcemanager.timelineservice.RMTimelineCollector;
+import org.apache.hadoop.yarn.server.resourcemanager.timelineservice.RMTimelineCollectorManager;
 import org.apache.hadoop.yarn.util.Clock;
 import org.apache.hadoop.yarn.util.SystemClock;
 
@@ -95,7 +95,7 @@ public class RMActiveServiceContext {
   private ApplicationMasterService applicationMasterService;
   private RMApplicationHistoryWriter rmApplicationHistoryWriter;
   private SystemMetricsPublisher systemMetricsPublisher;
-  private RMTimelineCollector timelineCollector;
+  private RMTimelineCollectorManager timelineCollectorManager;
 
   private RMNodeLabelsManager nodeLabelManager;
   private long epoch;
@@ -379,14 +379,15 @@ public class RMActiveServiceContext {
 
   @Private
   @Unstable
-  public RMTimelineCollector 

hadoop git commit: MAPREDUCE-6333. TestEvents, TestAMWebServicesTasks, TestAppController are broken due to MAPREDUCE-6297. (Siqi Li via gera)

2015-04-24 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 db1a33e95 -> fa915f73e


MAPREDUCE-6333. TestEvents,TestAMWebServicesTasks,TestAppController are broken 
due to MAPREDUCE-6297. (Siqi Li via gera)

(cherry picked from commit 78c6b462412bbadad4a1a13ed4c597927b0cf188)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa915f73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa915f73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa915f73

Branch: refs/heads/branch-2
Commit: fa915f73e2456a3c75a8e28f70879db9a009ac5e
Parents: db1a33e
Author: Gera Shegalov g...@apache.org
Authored: Fri Apr 24 09:21:44 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Fri Apr 24 17:38:16 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  3 ++
 .../hadoop/mapreduce/jobhistory/TestEvents.java | 29 ++--
 .../v2/app/webapp/TestAMWebServicesTasks.java   | 27 --
 .../v2/app/webapp/TestAppController.java|  9 +++---
 4 files changed, 41 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa915f73/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 0a0bd32..a1d3523 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -98,6 +98,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6330. Fix typo in Task Attempt API's URL in documentations.
 (Ryu Kobayashi via ozawa)
 
+MAPREDUCE-6333. TestEvents,TestAMWebServicesTasks,TestAppController are
+broken due to MAPREDUCE-6297. (Siqi Li via gera)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa915f73/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
index 00be4b8..bb9b56b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
@@ -39,6 +39,7 @@ import org.junit.Test;
 
 public class TestEvents {
 
+  private static final String taskId = "task_1_2_r_3";
   /**
* test a getters of TaskAttemptFinishedEvent and TaskAttemptFinished
* 
@@ -131,7 +132,7 @@ public class TestEvents {
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
@@ -141,42 +142,42 @@ public class TestEvents {
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_STARTED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptStarted) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_FINISHED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptFinished) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_STARTED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptStarted) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_FINISHED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptFinished) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 

hadoop git commit: YARN-3406. Display count of running containers in the RM's Web UI. Contributed by Ryu Kobayashi.

2015-04-24 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk 78fe6e57c -> 4a3dabd94


YARN-3406. Display count of running containers in the RM's Web UI. Contributed 
by Ryu Kobayashi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a3dabd9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a3dabd9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a3dabd9

Branch: refs/heads/trunk
Commit: 4a3dabd94fc39d7604b826065c23859d565f
Parents: 78fe6e5
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Apr 25 07:17:11 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Apr 25 07:17:11 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../hadoop/yarn/server/webapp/WebPageUtils.java | 25 +---
 .../hadoop/yarn/server/webapp/dao/AppInfo.java  |  9 +++
 .../webapp/FairSchedulerAppsBlock.java  |  2 ++
 .../webapp/FairSchedulerPage.java   |  2 +-
 .../resourcemanager/webapp/RMAppsBlock.java |  6 -
 6 files changed, 37 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a3dabd9/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6da55b5..44b87e5 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -168,6 +168,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3511. Add errors and warnings page to ATS. (Varun Vasudev via xgong)
 
+YARN-3406. Display count of running containers in the RM's Web UI.
+(Ryu Kobayashi via ozawa)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a3dabd9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index 5acabf5..6ca5011 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -24,10 +24,11 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
 public class WebPageUtils {
 
   public static String appsTableInit() {
-return appsTableInit(false);
+return appsTableInit(false, true);
   }
 
-  public static String appsTableInit(boolean isFairSchedulerPage) {
+  public static String appsTableInit(
+  boolean isFairSchedulerPage, boolean isResourceManager) {
 // id, user, name, queue, starttime, finishtime, state, status, progress, ui
 // FairSchedulerPage's table is a bit different
 return tableInit()
@@ -35,22 +36,30 @@ public class WebPageUtils {
   .append(", bDeferRender: true")
   .append(", bProcessing: true")
   .append("\n, aoColumnDefs: ")
-  .append(getAppsTableColumnDefs(isFairSchedulerPage))
+  .append(getAppsTableColumnDefs(isFairSchedulerPage, isResourceManager))
   // Sort by id upon page load
   .append(", aaSorting: [[0, 'desc']]}").toString();
   }
 
-  private static String getAppsTableColumnDefs(boolean isFairSchedulerPage) {
+  private static String getAppsTableColumnDefs(
+  boolean isFairSchedulerPage, boolean isResourceManager) {
 StringBuilder sb = new StringBuilder();
-return sb
-  .append("[\n")
+sb.append("[\n")
   .append("{'sType':'string', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
   .append("\n, {'sType':'numeric', 'aTargets': " +
   (isFairSchedulerPage ? "[6, 7]" : "[5, 6]"))
   .append(", 'mRender': renderHadoopDate }")
-  .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets': [9]")
-  .append(", 'mRender': parseHadoopProgress }]").toString();
+  .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
+if (isFairSchedulerPage) {
+  sb.append("[11]");
+} else if (isResourceManager) {
+  sb.append("[10]");
+} else {
+  sb.append("[9]");
+}
+sb.append(", 'mRender': parseHadoopProgress }]");
+return sb.toString();
   }
 
   public static String attemptsTableInit() {
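The `getAppsTableColumnDefs` restructuring above is needed because YARN-3406 adds a running-containers column on ResourceManager pages, which shifts the DataTables target index of the progress column depending on which page renders the table. A small, self-contained sketch of just that index-selection logic (the indices mirror the patch; the class and method names are illustrative):

```java
// Sketch of the per-page aoColumnDefs target selection introduced by YARN-3406:
// the progress column's index depends on which leading columns the page shows.
public class ColumnDefsSketch {
    static String progressTarget(boolean isFairSchedulerPage, boolean isResourceManager) {
        if (isFairSchedulerPage) {
            return "[11]";   // fair-scheduler page shows extra leading columns
        } else if (isResourceManager) {
            return "[10]";   // RM page adds the running-containers column
        } else {
            return "[9]";    // history/ATS pages keep the original layout
        }
    }

    public static void main(String[] args) {
        // RM web UI case: progress moves from target [9] to [10].
        System.out.println(progressTarget(false, true));
    }
}
```

This is why `appsTableInit()` now takes a second `isResourceManager` flag: callers outside the RM keep the old column layout without any change on their side.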


hadoop git commit: YARN-3406. Display count of running containers in the RM's Web UI. Contributed by Ryu Kobayashi.

2015-04-24 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bd750f160 -> 3884948d6


YARN-3406. Display count of running containers in the RM's Web UI. Contributed 
by Ryu Kobayashi.

(cherry picked from commit 4a3dabd94fc39d7604b826065c23859d565f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3884948d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3884948d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3884948d

Branch: refs/heads/branch-2
Commit: 3884948d6c7045de9123e78d5ef00602d7d4410b
Parents: bd750f1
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Apr 25 07:17:11 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Apr 25 07:17:42 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../hadoop/yarn/server/webapp/WebPageUtils.java | 25 +---
 .../hadoop/yarn/server/webapp/dao/AppInfo.java  |  9 +++
 .../webapp/FairSchedulerAppsBlock.java  |  2 ++
 .../webapp/FairSchedulerPage.java   |  2 +-
 .../resourcemanager/webapp/RMAppsBlock.java |  6 -
 6 files changed, 37 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3884948d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 14c419b..1f486e4 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -120,6 +120,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3511. Add errors and warnings page to ATS. (Varun Vasudev via xgong)
 
+YARN-3406. Display count of running containers in the RM's Web UI.
+(Ryu Kobayashi via ozawa)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3884948d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index 5acabf5..6ca5011 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -24,10 +24,11 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
 public class WebPageUtils {
 
   public static String appsTableInit() {
-return appsTableInit(false);
+return appsTableInit(false, true);
   }
 
-  public static String appsTableInit(boolean isFairSchedulerPage) {
+  public static String appsTableInit(
+  boolean isFairSchedulerPage, boolean isResourceManager) {
 // id, user, name, queue, starttime, finishtime, state, status, progress, ui
 // FairSchedulerPage's table is a bit different
 return tableInit()
@@ -35,22 +36,30 @@ public class WebPageUtils {
   .append(", bDeferRender: true")
   .append(", bProcessing: true")
   .append("\n, aoColumnDefs: ")
-  .append(getAppsTableColumnDefs(isFairSchedulerPage))
+  .append(getAppsTableColumnDefs(isFairSchedulerPage, isResourceManager))
   // Sort by id upon page load
   .append(", aaSorting: [[0, 'desc']]}").toString();
   }
 
-  private static String getAppsTableColumnDefs(boolean isFairSchedulerPage) {
+  private static String getAppsTableColumnDefs(
+  boolean isFairSchedulerPage, boolean isResourceManager) {
 StringBuilder sb = new StringBuilder();
-return sb
-  .append("[\n")
+sb.append("[\n")
   .append("{'sType':'string', 'aTargets': [0]")
   .append(", 'mRender': parseHadoopID }")
   .append("\n, {'sType':'numeric', 'aTargets': " +
   (isFairSchedulerPage ? "[6, 7]" : "[5, 6]"))
   .append(", 'mRender': renderHadoopDate }")
-  .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets': [9]")
-  .append(", 'mRender': parseHadoopProgress }]").toString();
+  .append("\n, {'sType':'numeric', bSearchable:false, 'aTargets':");
+if (isFairSchedulerPage) {
+  sb.append("[11]");
+} else if (isResourceManager) {
+  sb.append("[10]");
+} else {
+  sb.append("[9]");
+}
+sb.append(", 'mRender': parseHadoopProgress }]");
+return sb.toString();
   }
 
   public static String attemptsTableInit() {


hadoop git commit: YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore invoked. Contributed by Brahma Reddy Battula (cherry picked from commit 5e093f0d400f82f67d9b2d24253c79e4a5aba

2015-04-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 73ba3ebe7 -> cf4154676


YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore 
invoked. Contributed by Brahma Reddy Battula
(cherry picked from commit 5e093f0d400f82f67d9b2d24253c79e4a5abacf9)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cf415467
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cf415467
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cf415467

Branch: refs/heads/branch-2
Commit: cf4154676b892a36fe977c115bac52f9dabcc128
Parents: 73ba3eb
Author: Jason Lowe jl...@apache.org
Authored: Fri Apr 24 22:02:53 2015 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Apr 24 22:04:03 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../yarn/server/nodemanager/NodeManager.java| 26 +++-
 2 files changed, 17 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf415467/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ea8c723..e4bf630 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -245,6 +245,9 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
+YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
+invoked (Brahma Reddy Battula via jlowe)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf415467/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index 90e903b..46d75af 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -177,18 +177,20 @@ public class NodeManager extends CompositeService
   }
 
   private void stopRecoveryStore() throws IOException {
-nmStore.stop();
-if (null != context) {
-  if (context.getDecommissioned() && nmStore.canRecover()) {
-LOG.info("Removing state store due to decommission");
-Configuration conf = getConfig();
-Path recoveryRoot =
-new Path(conf.get(YarnConfiguration.NM_RECOVERY_DIR));
-LOG.info("Removing state store at " + recoveryRoot
-+ " due to decommission");
-FileSystem recoveryFs = FileSystem.getLocal(conf);
-if (!recoveryFs.delete(recoveryRoot, true)) {
-  LOG.warn("Unable to delete " + recoveryRoot);
+if (null != nmStore) {
+  nmStore.stop();
+  if (null != context) {
+if (context.getDecommissioned() && nmStore.canRecover()) {
+  LOG.info("Removing state store due to decommission");
+  Configuration conf = getConfig();
+  Path recoveryRoot =
+  new Path(conf.get(YarnConfiguration.NM_RECOVERY_DIR));
+  LOG.info("Removing state store at " + recoveryRoot
+  + " due to decommission");
+  FileSystem recoveryFs = FileSystem.getLocal(conf);
+  if (!recoveryFs.delete(recoveryRoot, true)) {
+LOG.warn("Unable to delete " + recoveryRoot);
+  }
+  }
 }
   }
 }
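The hunk above guards `stopRecoveryStore()` against a partially initialized NodeManager: when `serviceInit` fails before the recovery store is created, `nmStore` is still null and the old code threw an NPE from the very first `nmStore.stop()` during shutdown. A minimal standalone sketch of the same defensive-shutdown pattern follows; `Store`, `Context`, and `shutdown` are illustrative stand-ins, not the actual YARN classes:

```java
// Sketch of the null-guarded shutdown pattern from the YARN-3537 hunk.
// Store and Context are hypothetical stand-ins, not YARN APIs.
public class ShutdownSketch {
    static class Store {
        boolean stopped = false;
        void stop() { stopped = true; }
        boolean canRecover() { return true; }
    }

    static class Context {
        private final boolean decommissioned;
        Context(boolean d) { decommissioned = d; }
        boolean getDecommissioned() { return decommissioned; }
    }

    // Returns true only when the store existed and was stopped;
    // a null store (init failed early) is tolerated instead of throwing NPE.
    static boolean shutdown(Store store, Context context) {
        if (null != store) {              // the guard YARN-3537 adds
            store.stop();
            if (null != context && context.getDecommissioned()
                && store.canRecover()) {
                // the real code deletes the recovery directory here
            }
            return true;
        }
        return false;                     // nothing to stop; no NPE
    }

    public static void main(String[] args) {
        assert !shutdown(null, null) : "null store must be a no-op";
        Store s = new Store();
        assert shutdown(s, new Context(true));
        assert s.stopped;
        System.out.println("ok");
    }
}
```

The point of the reordering is that every dereference of `nmStore` now sits inside the null check, so a shutdown triggered by a failed init is a clean no-op.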



[2/2] hadoop git commit: YARN-2498. Respect labels in preemption policy of capacity scheduler for inter-queue preemption. Contributed by Wangda Tan

2015-04-24 Thread jianhe
YARN-2498. Respect labels in preemption policy of capacity scheduler for 
inter-queue preemption. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d497f6ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d497f6ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d497f6ea

Branch: refs/heads/trunk
Commit: d497f6ea2be559aa31ed76f37ae949dbfabe2a51
Parents: dcc5455
Author: Jian He jia...@apache.org
Authored: Fri Apr 24 17:03:13 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Apr 24 17:03:13 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |3 +
 .../ProportionalCapacityPreemptionPolicy.java   |  585 +
 .../rmcontainer/RMContainerImpl.java|   28 +-
 .../scheduler/capacity/CapacityScheduler.java   |2 +-
 .../scheduler/capacity/LeafQueue.java   |   70 +-
 .../scheduler/common/AssignmentInformation.java |   31 +-
 ...estProportionalCapacityPreemptionPolicy.java |   94 +-
 ...pacityPreemptionPolicyForNodePartitions.java | 1211 ++
 .../scheduler/capacity/TestChildQueueOrder.java |2 +-
 .../scheduler/capacity/TestLeafQueue.java   |4 +-
 .../TestNodeLabelContainerAllocation.java   |   16 +
 .../scheduler/capacity/TestParentQueue.java |2 +-
 12 files changed, 1750 insertions(+), 298 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d497f6ea/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 44b87e5..a830771 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -102,6 +102,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3319. Implement a FairOrderingPolicy. (Craig Welch via wangda)
 
+YARN-2498. Respect labels in preemption policy of capacity scheduler for
+inter-queue preemption. (Wangda Tan via jianhe)
+
   IMPROVEMENTS
 
 YARN-1880. Cleanup TestApplicationClientProtocolOnHA

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d497f6ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
index 2ab4197..1f47b5f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity;
 
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
@@ -26,11 +27,10 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 import java.util.PriorityQueue;
 import java.util.Set;
+import java.util.TreeSet;
 
-import org.apache.commons.collections.map.HashedMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -40,7 +40,6 @@ import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
-import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import 
org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
 import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
@@ -49,7 +48,9 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ContainerPreemptE
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.PreemptableResourceScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CSQueue;
 import 

hadoop git commit: YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore invoked. Contributed by Brahma Reddy Battula

2015-04-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5ce3a77f3 -> 5e093f0d4


YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore 
invoked. Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5e093f0d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5e093f0d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5e093f0d

Branch: refs/heads/trunk
Commit: 5e093f0d400f82f67d9b2d24253c79e4a5abacf9
Parents: 5ce3a77
Author: Jason Lowe jl...@apache.org
Authored: Fri Apr 24 22:02:53 2015 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Apr 24 22:02:53 2015 +

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../yarn/server/nodemanager/NodeManager.java| 26 +++-
 2 files changed, 17 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e093f0d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 9754c33..001396f 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -290,6 +290,9 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
+YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
+invoked (Brahma Reddy Battula via jlowe)
+
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5e093f0d/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
index 4a28c6f..6718b53 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
@@ -178,18 +178,20 @@ public class NodeManager extends CompositeService
   }
 
   private void stopRecoveryStore() throws IOException {
-nmStore.stop();
-if (null != context) {
-  if (context.getDecommissioned() && nmStore.canRecover()) {
-LOG.info("Removing state store due to decommission");
-Configuration conf = getConfig();
-Path recoveryRoot =
-new Path(conf.get(YarnConfiguration.NM_RECOVERY_DIR));
-LOG.info("Removing state store at " + recoveryRoot
-+ " due to decommission");
-FileSystem recoveryFs = FileSystem.getLocal(conf);
-if (!recoveryFs.delete(recoveryRoot, true)) {
-  LOG.warn("Unable to delete " + recoveryRoot);
+if (null != nmStore) {
+  nmStore.stop();
+  if (null != context) {
+if (context.getDecommissioned() && nmStore.canRecover()) {
+  LOG.info("Removing state store due to decommission");
+  Configuration conf = getConfig();
+  Path recoveryRoot =
+  new Path(conf.get(YarnConfiguration.NM_RECOVERY_DIR));
+  LOG.info("Removing state store at " + recoveryRoot
+  + " due to decommission");
+  FileSystem recoveryFs = FileSystem.getLocal(conf);
+  if (!recoveryFs.delete(recoveryRoot, true)) {
+LOG.warn("Unable to delete " + recoveryRoot);
+  }
+  }
 }
   }
 }



hadoop git commit: Fix commit version for YARN-3537 (cherry picked from commit 78fe6e57c7697dae192d74e2f5b91040a3579dfd)

2015-04-24 Thread jlowe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 cf4154676 -> bd750f160


Fix commit version for YARN-3537
(cherry picked from commit 78fe6e57c7697dae192d74e2f5b91040a3579dfd)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bd750f16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bd750f16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bd750f16

Branch: refs/heads/branch-2
Commit: bd750f160b669c58036f5b2239d6ef8ee97db910
Parents: cf41546
Author: Jason Lowe jl...@apache.org
Authored: Fri Apr 24 22:07:53 2015 +
Committer: Jason Lowe jl...@apache.org
Committed: Fri Apr 24 22:09:01 2015 +

--
 hadoop-yarn-project/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bd750f16/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e4bf630..14c419b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -219,6 +219,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3444. Fix typo capabililty. (Gabor Liptak via aajisaka)
 
+YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
+invoked (Brahma Reddy Battula via jlowe)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -245,9 +248,6 @@ Release 2.7.1 - UNRELEASED
 YARN-3522. Fixed DistributedShell to instantiate TimeLineClient as the
 correct user. (Zhijie Shen via jianhe)
 
-YARN-3537. NPE when NodeManager.serviceInit fails and stopRecoveryStore
-invoked (Brahma Reddy Battula via jlowe)
-
 Release 2.7.0 - 2015-04-20
 
   INCOMPATIBLE CHANGES



[2/2] hadoop git commit: HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu Engineer)

2015-04-24 Thread arp
HDFS-8211. DataNode UUID is always null in the JMX counter. (Contributed by Anu 
Engineer)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/932cff61
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/932cff61
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/932cff61

Branch: refs/heads/branch-2
Commit: 932cff610a5d65618c6c3e1a8bf15a0d11cb7d33
Parents: 3884948
Author: Arpit Agarwal a...@apache.org
Authored: Fri Apr 24 16:47:48 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Fri Apr 24 16:47:56 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hadoop/hdfs/server/datanode/DataNode.java   |  4 +-
 .../hdfs/server/datanode/TestDataNodeUUID.java  | 65 
 3 files changed, 70 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/932cff61/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index aebcf2e..640c7c9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -239,6 +239,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte.
 (Zhe Zhang via wang)
 
+HDFS-8211. DataNode UUID is always null in the JMX counter. (Anu Engineer
+via Arpit Agarwal)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/932cff61/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
index ba02be2..8ea878b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
@@ -1233,7 +1233,7 @@ public class DataNode extends ReconfigurableBase
*
* @throws IOException
*/
-  private synchronized void checkDatanodeUuid() throws IOException {
+  synchronized void checkDatanodeUuid() throws IOException {
 if (storage.getDatanodeUuid() == null) {
   storage.setDatanodeUuid(generateUuid());
   storage.writeAll();
@@ -3166,7 +3166,7 @@ public class DataNode extends ReconfigurableBase
   }
 
   public String getDatanodeUuid() {
-return id == null ? null : id.getDatanodeUuid();
+return storage == null ? null : storage.getDatanodeUuid();
   }
 
   boolean shouldRun() {
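The one-line fix above switches the JMX accessor from the registration object (`id`, which is only populated after the DataNode registers with the NameNode) to `storage`, which carries the UUID as soon as local storage is initialized, while keeping the null-guarding ternary. A minimal sketch of the same idea; `UuidSketch`, `StorageInfo`, and `Registration` are hypothetical names, not the real HDFS types:

```java
// Sketch of the HDFS-8211 fix: read the UUID from the earliest-initialized
// source, with a null guard. StorageInfo/Registration are stand-ins.
public class UuidSketch {
    static class StorageInfo {
        String uuid;
        String getDatanodeUuid() { return uuid; }
    }
    static class Registration {
        String uuid;
        String getDatanodeUuid() { return uuid; }
    }

    StorageInfo storage;   // initialized early, during storage setup
    Registration id;       // initialized late, after NameNode registration

    // Before the fix: stays null until registration completes,
    // so the JMX counter always reported null.
    String uuidFromRegistration() {
        return id == null ? null : id.getDatanodeUuid();
    }

    // After the fix: available as soon as storage is initialized.
    String uuidFromStorage() {
        return storage == null ? null : storage.getDatanodeUuid();
    }
}
```

The ternary is kept in both variants so that metrics queried before any initialization still see null rather than an NPE.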

http://git-wip-us.apache.org/repos/asf/hadoop/blob/932cff61/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
new file mode 100644
index 000..34e53a3
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeUUID.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.junit.Test;
+
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
+
+public class TestDataNodeUUID {
+
+  /**
+   * This 

hadoop git commit: MAPREDUCE-6333. TestEvents, TestAMWebServicesTasks, TestAppController are broken due to MAPREDUCE-6297. (Siqi Li via gera)

2015-04-24 Thread gera
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2f82ae042 -> 78c6b4624


MAPREDUCE-6333. TestEvents,TestAMWebServicesTasks,TestAppController are broken 
due to MAPREDUCE-6297. (Siqi Li via gera)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/78c6b462
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/78c6b462
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/78c6b462

Branch: refs/heads/trunk
Commit: 78c6b462412bbadad4a1a13ed4c597927b0cf188
Parents: 2f82ae0
Author: Gera Shegalov g...@apache.org
Authored: Fri Apr 24 09:21:44 2015 -0700
Committer: Gera Shegalov g...@apache.org
Committed: Fri Apr 24 17:31:10 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  3 ++
 .../hadoop/mapreduce/jobhistory/TestEvents.java | 29 ++--
 .../v2/app/webapp/TestAMWebServicesTasks.java   | 27 --
 .../v2/app/webapp/TestAppController.java|  9 +++---
 4 files changed, 41 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/78c6b462/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 5b26910..397f94a 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -343,6 +343,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6330. Fix typo in Task Attempt API's URL in documentations.
 (Ryu Kobayashi via ozawa)
 
+MAPREDUCE-6333. TestEvents,TestAMWebServicesTasks,TestAppController are
+broken due to MAPREDUCE-6297. (Siqi Li via gera)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/78c6b462/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
index 00be4b8..bb9b56b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestEvents.java
@@ -39,6 +39,7 @@ import org.junit.Test;
 
 public class TestEvents {
 
+  private static final String taskId = "task_1_2_r_3";
   /**
* test a getters of TaskAttemptFinishedEvent and TaskAttemptFinished
* 
@@ -131,7 +132,7 @@ public class TestEvents {
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
@@ -141,42 +142,42 @@ public class TestEvents {
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_STARTED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptStarted) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_FINISHED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptFinished) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptUnsuccessfulCompletion) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_STARTED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptStarted) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_FINISHED));
-assertEquals("task_1_2_r03_4",
+assertEquals(taskId,
 ((TaskAttemptFinished) e.getDatum()).taskid.toString());
 
 e = reader.getNextEvent();
 assertTrue(e.getEventType().equals(EventType.REDUCE_ATTEMPT_KILLED));
-assertEquals("task_1_2_r03_4",

hadoop git commit: HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped layout. Contributed by Zhe Zhang.

2015-04-24 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 b2ba6836b -> 30e196354


HDFS-8033. Erasure coding: stateful (non-positional) read from files in striped 
layout. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30e19635
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30e19635
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30e19635

Branch: refs/heads/HDFS-7285
Commit: 30e196354330031bc9d2e10ba3e61117e0a3aee5
Parents: b2ba683
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 22:36:15 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Fri Apr 24 22:36:15 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|   3 +
 .../org/apache/hadoop/hdfs/DFSInputStream.java  |  55 ++--
 .../hadoop/hdfs/DFSStripedInputStream.java  | 311 ++-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  |  43 +++
 .../apache/hadoop/hdfs/TestReadStripedFile.java | 110 ++-
 .../server/datanode/SimulatedFSDataset.java |   3 +
 6 files changed, 468 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e19635/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index cf41a9b..e8db485 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -131,3 +131,6 @@
 
 HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
 block id conflicts (Jing Zhao via Zhe Zhang)
+
+HDFS-8033. Erasure coding: stateful (non-positional) read from files in 
+striped layout (Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e19635/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
index 705e0b7..7f267b4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
@@ -95,34 +95,34 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   public static boolean tcpReadsDisabledForTesting = false;
   private long hedgedReadOpsLoopNumForTesting = 0;
   protected final DFSClient dfsClient;
-  private AtomicBoolean closed = new AtomicBoolean(false);
-  private final String src;
-  private final boolean verifyChecksum;
+  protected AtomicBoolean closed = new AtomicBoolean(false);
+  protected final String src;
+  protected final boolean verifyChecksum;
 
   // state by stateful read only:
   // (protected by lock on this)
   /
   private DatanodeInfo currentNode = null;
-  private LocatedBlock currentLocatedBlock = null;
-  private long pos = 0;
-  private long blockEnd = -1;
+  protected LocatedBlock currentLocatedBlock = null;
+  protected long pos = 0;
+  protected long blockEnd = -1;
   private BlockReader blockReader = null;
   
 
   // state shared by stateful and positional read:
   // (protected by lock on infoLock)
   
-  private LocatedBlocks locatedBlocks = null;
+  protected LocatedBlocks locatedBlocks = null;
   private long lastBlockBeingWrittenLength = 0;
   private FileEncryptionInfo fileEncryptionInfo = null;
-  private CachingStrategy cachingStrategy;
+  protected CachingStrategy cachingStrategy;
   
 
-  private final ReadStatistics readStatistics = new ReadStatistics();
+  protected final ReadStatistics readStatistics = new ReadStatistics();
   // lock for state shared between read and pread
   // Note: Never acquire a lock on this with this lock held to avoid 
deadlocks
   //   (it's OK to acquire this lock when the lock on this is held)
-  private final Object infoLock = new Object();
+  protected final Object infoLock = new Object();
 
   /**
* Track the ByteBuffers that we have handed out to readers.
@@ -239,7 +239,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
* back to the namenode to get a new list of block locations, and is
* capped at maxBlockAcquireFailures
*/
-  private int failures = 0;
+  protected int failures = 0;
 
   /* XXX Use of CocurrentHashMap is temp fix. Need to fix 
* parallel accesses to DFSInputStream (through ptreads) properly */
@@ -476,7 +476,7 @@ implements ByteBufferReadable, CanSetDropBehind, 
CanSetReadahead,
   }
 
   /** Fetch 

hadoop git commit: HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause block id conflicts. Contributed by Jing Zhao.

2015-04-24 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 ebb467f33 -> b2ba6836b


HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may cause 
block id conflicts. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b2ba6836
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b2ba6836
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b2ba6836

Branch: refs/heads/HDFS-7285
Commit: b2ba6836bd49ed27a80ab3c7b5b21a810f9c4a30
Parents: ebb467f
Author: Zhe Zhang z...@apache.org
Authored: Fri Apr 24 09:30:38 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Fri Apr 24 09:31:51 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  3 ++
 .../SequentialBlockGroupIdGenerator.java| 39 +++---
 .../SequentialBlockIdGenerator.java |  2 +-
 .../hadoop/hdfs/TestDFSStripedInputStream.java  | 57 +++-
 .../server/namenode/TestAddStripedBlocks.java   | 21 
 5 files changed, 77 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2ba6836/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 9357e23..cf41a9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -128,3 +128,6 @@
 
 HDFS-8223. Should calculate checksum for parity blocks in 
DFSStripedOutputStream.
 (Yi Liu via jing9)
+
+HDFS-8228. Erasure Coding: SequentialBlockGroupIdGenerator#nextValue may 
cause 
+block id conflicts (Jing Zhao via Zhe Zhang)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b2ba6836/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
index e9e22ee..de8e379 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/SequentialBlockGroupIdGenerator.java
@@ -19,9 +19,11 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.util.SequentialNumber;
 
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_GROUP_INDEX_MASK;
+import static 
org.apache.hadoop.hdfs.protocol.HdfsConstants.MAX_BLOCKS_IN_GROUP;
+
 /**
  * Generate the next valid block group ID by incrementing the maximum block
  * group ID allocated so far, with the first 2^10 block group IDs reserved.
@@ -34,6 +36,9 @@ import org.apache.hadoop.util.SequentialNumber;
  * bits (n+2) to (64-m) represent the ID of its block group, while the last m
  * bits represent its index of the group. The value m is determined by the
  * maximum number of blocks in a group (MAX_BLOCKS_IN_GROUP).
+ *
+ * Note that the {@link #nextValue()} methods requires external lock to
+ * guarantee IDs have no conflicts.
  */
 @InterfaceAudience.Private
 public class SequentialBlockGroupIdGenerator extends SequentialNumber {
@@ -47,32 +52,30 @@ public class SequentialBlockGroupIdGenerator extends 
SequentialNumber {
 
   @Override // NumberGenerator
   public long nextValue() {
-// Skip to next legitimate block group ID based on the naming protocol
-while (super.getCurrentValue() % HdfsConstants.MAX_BLOCKS_IN_GROUP > 0) {
-  super.nextValue();
-}
+skipTo((getCurrentValue() & ~BLOCK_GROUP_INDEX_MASK) + 
MAX_BLOCKS_IN_GROUP);
 // Make sure there's no conflict with existing random block IDs
-while (hasValidBlockInRange(super.getCurrentValue())) {
-  super.skipTo(super.getCurrentValue() +
-  HdfsConstants.MAX_BLOCKS_IN_GROUP);
+final Block b = new Block(getCurrentValue());
+while (hasValidBlockInRange(b)) {
+  skipTo(getCurrentValue() + MAX_BLOCKS_IN_GROUP);
+  b.setBlockId(getCurrentValue());
 }
-if (super.getCurrentValue() >= 0) {
-  BlockManager.LOG.warn("All negative block group IDs are used, " +
-  "growing into positive IDs, " +
-  "which might conflict with non-erasure coded 
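The Javadoc in this hunk describes the ID layout: the low m bits of a block ID are the block's index within its group, so every legal group base is a multiple of MAX_BLOCKS_IN_GROUP, and the new `skipTo((cur & ~BLOCK_GROUP_INDEX_MASK) + MAX_BLOCKS_IN_GROUP)` jumps straight to the next group boundary instead of looping one ID at a time. A small self-contained illustration of that bit arithmetic, assuming the stock constants (16 blocks per group, mask 0xF, mirroring HdfsConstants):

```java
// Illustration of the group-boundary jump in SequentialBlockGroupIdGenerator.
// Constants mirror HdfsConstants (4 index bits => 16 blocks per group);
// GroupIdSketch itself is a hypothetical name for this sketch.
public class GroupIdSketch {
    static final long MAX_BLOCKS_IN_GROUP = 16;
    static final long BLOCK_GROUP_INDEX_MASK = 15;

    // Next legal group base strictly after cur: clear the index bits to land
    // on the current group's base, then advance one whole group.
    static long nextGroupBase(long cur) {
        return (cur & ~BLOCK_GROUP_INDEX_MASK) + MAX_BLOCKS_IN_GROUP;
    }

    public static void main(String[] args) {
        // From anywhere inside the group based at 32 (IDs 32..47), jump to 48.
        assert nextGroupBase(32) == 48;
        assert nextGroupBase(47) == 48;
        // Works for the negative IDs block groups actually use, too.
        assert nextGroupBase(-32) == -16;
        System.out.println("ok");
    }
}
```

A single masked jump replaces the old modulo loop, which advanced the counter one value at a time until it hit a group boundary.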

hadoop git commit: YARN-3511. Add errors and warnings page to ATS. Contributed by Varun Vasudev

2015-04-24 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 2ec356fcd -> c18446693


YARN-3511. Add errors and warnings page to ATS. Contributed by Varun Vasudev

(cherry picked from commit eee9facbbae52cb62dfca01b8bbe676b8e289863)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c1844669
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c1844669
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c1844669

Branch: refs/heads/branch-2
Commit: c184466939444fb584af410a32aceeda3d47ece6
Parents: 2ec356f
Author: Xuan xg...@apache.org
Authored: Fri Apr 24 09:41:59 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Fri Apr 24 09:43:35 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 +
 .../webapp/AHSController.java   |  4 ++
 .../webapp/AHSErrorsAndWarningsPage.java| 57 
 .../webapp/AHSWebApp.java   |  1 +
 .../webapp/NavBlock.java| 30 +--
 .../server/webapp/ErrorsAndWarningsBlock.java   | 23 +++-
 .../server/resourcemanager/webapp/NavBlock.java |  2 +-
 7 files changed, 114 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1844669/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1de4e2d..aca570e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -118,6 +118,8 @@ Release 2.8.0 - UNRELEASED
 YARN-3503. Expose disk utilization percentage and bad local and log dir 
 counts in NM metrics. (Varun Vasudev via jianhe)
 
+YARN-3511. Add errors and warnings page to ATS. (Varun Vasudev via xgong)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1844669/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
index 4e00bc8..4037f51 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
@@ -52,4 +52,8 @@ public class AHSController extends Controller {
   public void logs() {
 render(AHSLogsPage.class);
   }
+
+  public void errorsAndWarnings() {
+render(AHSErrorsAndWarningsPage.class);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c1844669/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
new file mode 100644
index 000..3798ee5
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required 

hadoop git commit: HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. Contributed by J. Andreina.

2015-04-24 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/trunk eee9facbb -> cf6c8a1b4


HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cf6c8a1b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cf6c8a1b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cf6c8a1b

Branch: refs/heads/trunk
Commit: cf6c8a1b4ee70dd45c2e42ac61999e61a05db035
Parents: eee9fac
Author: Jing Zhao ji...@apache.org
Authored: Fri Apr 24 10:23:32 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Fri Apr 24 10:23:32 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 8 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf6c8a1b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 56f8ec3..1cc31b2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -472,6 +472,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-8052. Move WebHdfsFileSystem into hadoop-hdfs-client. (wheat9)
 
+HDFS-8176. Record from/to snapshots in audit log for snapshot diff report.
+(J. Andreina via jing9)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf6c8a1b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 4477dc4..229c4d1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7406,8 +7406,12 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
 } finally {
   readUnlock();
 }
-
-logAuditEvent(diffs != null, "computeSnapshotDiff", null, null, null);
+String fromSnapshotRoot = (fromSnapshot == null || fromSnapshot.isEmpty()) ?
+path : Snapshot.getSnapshotPath(path, fromSnapshot);
+String toSnapshotRoot = (toSnapshot == null || toSnapshot.isEmpty()) ?
+path : Snapshot.getSnapshotPath(path, toSnapshot);
+logAuditEvent(diffs != null, "computeSnapshotDiff", fromSnapshotRoot,
+toSnapshotRoot, null);
 return diffs;
   }
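The patch above swaps the `null` path arguments of the audit call for the snapshot root paths built by `Snapshot.getSnapshotPath`. As a rough illustration of the resulting audit fields, here is a minimal standalone sketch of the same null/empty fallback logic; the class and helper names are ours, not Hadoop's, and the `<root>/.snapshot/<name>` layout mirrors the HDFS snapshot path convention:

```java
public class SnapshotAuditPathDemo {
    // Mirrors the HDFS convention: <snapshottable root>/.snapshot/<snapshotName>.
    static String getSnapshotPath(String snapshottableDir, String snapshotName) {
        String sep = snapshottableDir.endsWith("/") ? "" : "/";
        return snapshottableDir + sep + ".snapshot/" + snapshotName;
    }

    // Falls back to the plain path when no snapshot name was given,
    // matching the null/empty checks in the patch.
    static String auditPath(String path, String snapshot) {
        return (snapshot == null || snapshot.isEmpty())
            ? path : getSnapshotPath(path, snapshot);
    }

    public static void main(String[] args) {
        System.out.println(auditPath("/data", "s1"));  // /data/.snapshot/s1
        System.out.println(auditPath("/data", null));  // /data
    }
}
```

With this, a `computeSnapshotDiff` audit entry records which two snapshots were compared instead of two nulls.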
   



hadoop git commit: YARN-3511. Add errors and warnings page to ATS. Contributed by Varun Vasudev

2015-04-24 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/trunk 91b97c21c -> eee9facbb


YARN-3511. Add errors and warnings page to ATS. Contributed by Varun Vasudev


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eee9facb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eee9facb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eee9facb

Branch: refs/heads/trunk
Commit: eee9facbbae52cb62dfca01b8bbe676b8e289863
Parents: 91b97c2
Author: Xuan xg...@apache.org
Authored: Fri Apr 24 09:41:59 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Fri Apr 24 09:41:59 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 +
 .../webapp/AHSController.java   |  4 ++
 .../webapp/AHSErrorsAndWarningsPage.java| 57 
 .../webapp/AHSWebApp.java   |  1 +
 .../webapp/NavBlock.java| 30 +--
 .../server/webapp/ErrorsAndWarningsBlock.java   | 23 +++-
 .../server/resourcemanager/webapp/NavBlock.java |  2 +-
 7 files changed, 114 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eee9facb/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6281ee4..3311a2e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -166,6 +166,8 @@ Release 2.8.0 - UNRELEASED
 YARN-3503. Expose disk utilization percentage and bad local and log dir 
 counts in NM metrics. (Varun Vasudev via jianhe)
 
+YARN-3511. Add errors and warnings page to ATS. (Varun Vasudev via xgong)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eee9facb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
index 4e00bc8..4037f51 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSController.java
@@ -52,4 +52,8 @@ public class AHSController extends Controller {
   public void logs() {
 render(AHSLogsPage.class);
   }
+
+  public void errorsAndWarnings() {
+render(AHSErrorsAndWarningsPage.class);
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eee9facb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
new file mode 100644
index 000..3798ee5
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSErrorsAndWarningsPage.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under 

hadoop git commit: HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. Contributed by J. Andreina.

2015-04-24 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c18446693 -> 5870d504e


HDFS-8176. Record from/to snapshots in audit log for snapshot diff report. 
Contributed by J. Andreina.

(cherry picked from commit cf6c8a1b4ee70dd45c2e42ac61999e61a05db035)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5870d504
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5870d504
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5870d504

Branch: refs/heads/branch-2
Commit: 5870d504e1a30ec320c2533c8a6980b5b2f46947
Parents: c184466
Author: Jing Zhao ji...@apache.org
Authored: Fri Apr 24 10:23:32 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Fri Apr 24 10:24:12 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 8 ++--
 2 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5870d504/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index faf2320..75e261d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -151,6 +151,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-8052. Move WebHdfsFileSystem into hadoop-hdfs-client. (wheat9)
 
+HDFS-8176. Record from/to snapshots in audit log for snapshot diff report.
+(J. Andreina via jing9)
+
   OPTIMIZATIONS
 
 HDFS-8026. Trace FSOutputSummer#writeChecksumChunks rather than

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5870d504/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index f175301..b2b68c6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7400,8 +7400,12 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
 } finally {
   readUnlock();
 }
-
-logAuditEvent(diffs != null, "computeSnapshotDiff", null, null, null);
+String fromSnapshotRoot = (fromSnapshot == null || fromSnapshot.isEmpty()) ?
+path : Snapshot.getSnapshotPath(path, fromSnapshot);
+String toSnapshotRoot = (toSnapshot == null || toSnapshot.isEmpty()) ?
+path : Snapshot.getSnapshotPath(path, toSnapshot);
+logAuditEvent(diffs != null, "computeSnapshotDiff", fromSnapshotRoot,
+toSnapshotRoot, null);
 return diffs;
   }
   



hadoop git commit: HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. Contributed by Zhe Zhang.

2015-04-24 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk cf6c8a1b4 -> c7d9ad68e


HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c7d9ad68
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c7d9ad68
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c7d9ad68

Branch: refs/heads/trunk
Commit: c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11
Parents: cf6c8a1
Author: Andrew Wang w...@apache.org
Authored: Fri Apr 24 11:54:25 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Fri Apr 24 11:54:25 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../server/datanode/SimulatedFSDataset.java | 10 +--
 .../server/datanode/TestSimulatedFSDataset.java | 70 +---
 3 files changed, 54 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7d9ad68/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 1cc31b2..317211e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -557,6 +557,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds
 (J.Andreina and Xiaoyu Yao via vinayakumarb)
 
+HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte.
+(Zhe Zhang via wang)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7d9ad68/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index 344d1fe..060e055 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -80,6 +80,7 @@ import org.apache.hadoop.util.DiskChecker.DiskErrorException;
  * Note the synchronization is coarse grained - it is at each method. 
  */
 public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
+  public final static int BYTE_MASK = 0xff;
   static class Factory extends FsDatasetSpi.Factory<SimulatedFSDataset> {
 @Override
 public SimulatedFSDataset newInstance(DataNode datanode,
@@ -99,8 +100,8 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
   }
 
   public static byte simulatedByte(Block b, long offsetInBlk) {
-byte firstByte = (byte) (b.getBlockId() % Byte.MAX_VALUE);
-return (byte) ((firstByte + offsetInBlk) % Byte.MAX_VALUE);
+byte firstByte = (byte) (b.getBlockId() & BYTE_MASK);
+return (byte) ((firstByte + offsetInBlk) & BYTE_MASK);
   }
   
   public static final String CONFIG_PROPERTY_CAPACITY =
@@ -1028,12 +1029,13 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
 
 @Override
 public int read() throws IOException {
-  if (currentPos >= length)
+  if (currentPos >= length) {
 return -1;
+  }
   if (data !=null) {
 return data[currentPos++];
   } else {
-return simulatedByte(theBlock, currentPos++);
+return simulatedByte(theBlock, currentPos++) & BYTE_MASK;
   }
 }
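The change above replaces `% Byte.MAX_VALUE` with `& BYTE_MASK`: Java's `%` takes the sign of the dividend, so a negative block ID (as used for erasure-coded block groups) could produce a negative simulated byte, while masking with `0xff` always keeps the low 8 bits. A minimal standalone sketch of both variants, with class and method names ours rather than Hadoop's:

```java
public class ByteMaskDemo {
    static final int BYTE_MASK = 0xff;

    // Old approach: % can yield a negative remainder for negative block IDs,
    // because Java's % takes the sign of the dividend.
    static byte oldSimulatedByte(long blockId, long offset) {
        byte firstByte = (byte) (blockId % Byte.MAX_VALUE);
        return (byte) ((firstByte + offset) % Byte.MAX_VALUE);
    }

    // New approach: masking with 0xff keeps the low 8 bits regardless of sign.
    static byte newSimulatedByte(long blockId, long offset) {
        byte firstByte = (byte) (blockId & BYTE_MASK);
        return (byte) ((firstByte + offset) & BYTE_MASK);
    }

    public static void main(String[] args) {
        long negativeId = -1073741825L; // a negative ID, as erasure coding assigns
        // The old form can print a negative value here; the masked form,
        // widened with & 0xff as in read(), is always in [0, 255].
        System.out.println(oldSimulatedByte(negativeId, 0));
        System.out.println(newSimulatedByte(negativeId, 0) & BYTE_MASK);
    }
}
```

The same widening is why `read()` above now returns `simulatedByte(...) & BYTE_MASK`: `InputStream.read()` must return an int in [0, 255] or -1, and a bare byte would sign-extend.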
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c7d9ad68/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
index f76781d..8dc80d5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import 

hadoop git commit: HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. Contributed by Zhe Zhang.

2015-04-24 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 5870d504e -> 89a15d607


HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte. 
Contributed by Zhe Zhang.

(cherry picked from commit c7d9ad68e34c7f8b9efada6cfbf7d5474cbeff11)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/89a15d60
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/89a15d60
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/89a15d60

Branch: refs/heads/branch-2
Commit: 89a15d60745ce2e395613ff1d933fedbfd011e27
Parents: 5870d50
Author: Andrew Wang w...@apache.org
Authored: Fri Apr 24 11:54:25 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Fri Apr 24 11:54:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../server/datanode/SimulatedFSDataset.java | 10 +--
 .../server/datanode/TestSimulatedFSDataset.java | 70 +---
 3 files changed, 54 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/89a15d60/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 75e261d..aebcf2e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -236,6 +236,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds
 (J.Andreina and Xiaoyu Yao via vinayakumarb)
 
+HDFS-8191. Fix byte to integer casting in SimulatedFSDataset#simulatedByte.
+(Zhe Zhang via wang)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/89a15d60/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index 344d1fe..060e055 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -80,6 +80,7 @@ import org.apache.hadoop.util.DiskChecker.DiskErrorException;
  * Note the synchronization is coarse grained - it is at each method. 
  */
 public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
+  public final static int BYTE_MASK = 0xff;
   static class Factory extends FsDatasetSpi.Factory<SimulatedFSDataset> {
 @Override
 public SimulatedFSDataset newInstance(DataNode datanode,
@@ -99,8 +100,8 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
   }
 
   public static byte simulatedByte(Block b, long offsetInBlk) {
-byte firstByte = (byte) (b.getBlockId() % Byte.MAX_VALUE);
-return (byte) ((firstByte + offsetInBlk) % Byte.MAX_VALUE);
+byte firstByte = (byte) (b.getBlockId() & BYTE_MASK);
+return (byte) ((firstByte + offsetInBlk) & BYTE_MASK);
   }
   
   public static final String CONFIG_PROPERTY_CAPACITY =
@@ -1028,12 +1029,13 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
 
 @Override
 public int read() throws IOException {
-  if (currentPos >= length)
+  if (currentPos >= length) {
 return -1;
+  }
   if (data !=null) {
 return data[currentPos++];
   } else {
-return simulatedByte(theBlock, currentPos++);
+return simulatedByte(theBlock, currentPos++) & BYTE_MASK;
   }
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/89a15d60/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
index f76781d..8dc80d5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import 

hadoop git commit: YARN-3387. Previous AM's container completed status couldn't pass to current AM if AM and RM restarted during the same time. Contributed by Sandflee

2015-04-24 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk c7d9ad68e -> d03dcb963


YARN-3387. Previous AM's container completed status couldn't pass to current AM 
if AM and RM restarted during the same time. Contributed by Sandflee


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d03dcb96
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d03dcb96
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d03dcb96

Branch: refs/heads/trunk
Commit: d03dcb9635dbd79a45d229d1cab5fd28e5e49f49
Parents: c7d9ad6
Author: Jian He jia...@apache.org
Authored: Fri Apr 24 12:12:28 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Apr 24 12:13:29 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../server/resourcemanager/rmapp/RMAppImpl.java |  2 +-
 .../rmapp/attempt/RMAppAttemptImpl.java |  9 ++-
 .../TestWorkPreservingRMRestart.java| 60 
 4 files changed, 72 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d03dcb96/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 3311a2e..19e3e27 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -262,6 +262,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3516. killing ContainerLocalizer action doesn't take effect when
 private localizer receives FETCH_FAILURE status.(zhihai xu via xgong)
 
+YARN-3387. Previous AM's container completed status couldn't pass to current
+AM if AM and RM restarted during the same time. (sandflee via jianhe)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d03dcb96/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
index b4e4965..8abc478 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
@@ -1273,7 +1273,7 @@ public class RMAppImpl implements RMApp, Recoverable {
 // finished containers so that they can be acked to NM,
 // but when pulling finished container we will check this flag again.
 ((RMAppAttemptImpl) app.currentAttempt)
-  .transferStateFromPreviousAttempt(oldAttempt);
+  .transferStateFromAttempt(oldAttempt);
 return initialState;
   } else {
 if (numberOfFailure >= app.maxAppAttempts) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d03dcb96/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
index 913d06b..8abc65a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
@@ -845,7 +845,7 @@ public class RMAppAttemptImpl implements RMAppAttempt, Recoverable {
 attemptState.getMemorySeconds(),attemptState.getVcoreSeconds());
   }
 
-  public void transferStateFromPreviousAttempt(RMAppAttempt attempt) {
+  public void transferStateFromAttempt(RMAppAttempt attempt) {
 this.justFinishedContainers = attempt.getJustFinishedContainersReference();
 this.finishedContainersSentToAM =
 

hadoop git commit: YARN-3387. Previous AM's container completed status couldn't pass to current AM if AM and RM restarted during the same time. Contributed by Sandflee (cherry picked from commit d03dc

2015-04-24 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 89a15d607 -> 0583c27fb


YARN-3387. Previous AM's container completed status couldn't pass to current AM 
if AM and RM restarted during the same time. Contributed by Sandflee
(cherry picked from commit d03dcb9635dbd79a45d229d1cab5fd28e5e49f49)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0583c27f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0583c27f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0583c27f

Branch: refs/heads/branch-2
Commit: 0583c27fb17153f86e4ad829c7f7f33bb3bda376
Parents: 89a15d6
Author: Jian He jia...@apache.org
Authored: Fri Apr 24 12:12:28 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Apr 24 12:14:17 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../server/resourcemanager/rmapp/RMAppImpl.java |  2 +-
 .../rmapp/attempt/RMAppAttemptImpl.java |  9 ++-
 .../TestWorkPreservingRMRestart.java| 60 
 4 files changed, 72 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0583c27f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index aca570e..fec9451 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -214,6 +214,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3516. killing ContainerLocalizer action doesn't take effect when
 private localizer receives FETCH_FAILURE status.(zhihai xu via xgong)
 
+YARN-3387. Previous AM's container completed status couldn't pass to current
+AM if AM and RM restarted during the same time. (sandflee via jianhe)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0583c27f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
index b4e4965..8abc478 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java
@@ -1273,7 +1273,7 @@ public class RMAppImpl implements RMApp, Recoverable {
 // finished containers so that they can be acked to NM,
 // but when pulling finished container we will check this flag again.
 ((RMAppAttemptImpl) app.currentAttempt)
-  .transferStateFromPreviousAttempt(oldAttempt);
+  .transferStateFromAttempt(oldAttempt);
 return initialState;
   } else {
 if (numberOfFailure >= app.maxAppAttempts) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0583c27f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
index 913d06b..8abc65a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptImpl.java
@@ -845,7 +845,7 @@ public class RMAppAttemptImpl implements RMAppAttempt, Recoverable {
 attemptState.getMemorySeconds(),attemptState.getVcoreSeconds());
   }
 
-  public void transferStateFromPreviousAttempt(RMAppAttempt attempt) {
+  public void transferStateFromAttempt(RMAppAttempt attempt) {
 this.justFinishedContainers = 

hadoop git commit: HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds (Contributed by J.Andreina and Xiaoyu Yao)

2015-04-24 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/trunk 262c1bc33 -> c8d72907f


HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds 
(Contributed by J.Andreina and Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c8d72907
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c8d72907
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c8d72907

Branch: refs/heads/trunk
Commit: c8d72907ff5a4cb9ce1effca8ad9b69689d11d1d
Parents: 262c1bc
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Apr 24 12:51:04 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Apr 24 12:51:04 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 7 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java | 2 ++
 .../src/main/java/org/apache/hadoop/hdfs/DataStreamer.java| 2 ++
 .../apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java | 2 ++
 5 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8d72907/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 0e00025..b442bad 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -548,6 +548,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8217. During block recovery for truncate Log new Block Id in case of
 copy-on-truncate is true. (vinayakumarb)
 
+HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds
+(J.Andreina and Xiaoyu Yao via vinayakumarb)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c8d72907/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 63145b0..8fc9e77 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1425,6 +1425,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  ParentNotDirectoryException.class,
  NSQuotaExceededException.class, 
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1467,6 +1468,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  FileNotFoundException.class,
  SafeModeException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnsupportedOperationException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
@@ -1542,6 +1544,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  FileNotFoundException.class,
  SafeModeException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1598,6 +1601,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw re.unwrapRemoteException(AccessControlException.class,
  NSQuotaExceededException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1635,6 +1639,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 } 
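
The HDFS-8231 change above simply adds QuotaByStorageTypeExceededException to each unwrapRemoteException call, so the client rethrows the typed quota exception with its clean message instead of surfacing a RemoteException carrying the full server-side stack trace. A minimal sketch of that unwrap pattern, using hypothetical stand-in classes (MiniRemoteException, StorageTypeQuotaException) rather than Hadoop's actual IPC types:

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

// Hypothetical stand-in for Hadoop's RemoteException: it carries the
// server-side exception class name plus its message.
class MiniRemoteException extends IOException {
  private final String className;

  MiniRemoteException(String className, String message) {
    super(message);
    this.className = className;
  }

  // If the wrapped class matches one of the expected types, rebuild it
  // locally so callers catch a typed exception with a clean message;
  // otherwise return the remote wrapper unchanged (stack trace and all).
  IOException unwrapRemoteException(Class<?>... lookupTypes) {
    for (Class<?> lookup : lookupTypes) {
      if (lookup.getName().equals(className)) {
        try {
          Constructor<?> ctor = lookup.getConstructor(String.class);
          return (IOException) ctor.newInstance(getMessage());
        } catch (ReflectiveOperationException e) {
          return this;
        }
      }
    }
    return this;
  }
}

class QuotaExceededDemo {
  // Hypothetical analogue of QuotaByStorageTypeExceededException.
  static class StorageTypeQuotaException extends IOException {
    public StorageTypeQuotaException(String msg) { super(msg); }
  }

  public static void main(String[] args) {
    MiniRemoteException re = new MiniRemoteException(
        StorageTypeQuotaException.class.getName(),
        "Quota by storage type SSD exceeded");
    // Listing the class here is what the patch does for each RPC call site.
    IOException unwrapped =
        re.unwrapRemoteException(StorageTypeQuotaException.class);
    System.out.println(unwrapped.getClass().getSimpleName()
        + ": " + unwrapped.getMessage());
  }
}
```

If the class is not in the lookup list, the caller falls back to the wrapper, which is why each new server-side exception type has to be added explicitly, as this patch does.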

hadoop git commit: HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds (Contributed by J.Andreina and Xiaoyu Yao)

2015-04-24 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 68063cac3 -> 2ec356fcd


HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds 
(Contributed by J.Andreina and Xiaoyu Yao)

(cherry picked from commit c8d72907ff5a4cb9ce1effca8ad9b69689d11d1d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ec356fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ec356fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ec356fc

Branch: refs/heads/branch-2
Commit: 2ec356fcdb6d8e8f8167030d38c86e230dbbddcf
Parents: 68063ca
Author: Vinayakumar B vinayakum...@apache.org
Authored: Fri Apr 24 12:51:04 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Fri Apr 24 12:51:56 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 7 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java | 2 ++
 .../src/main/java/org/apache/hadoop/hdfs/DataStreamer.java| 2 ++
 .../apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java | 2 ++
 5 files changed, 16 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ec356fc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 913040f..faf2320 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -230,6 +230,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8217. During block recovery for truncate Log new Block Id in case of
 copy-on-truncate is true. (vinayakumarb)
 
+HDFS-8231. StackTrace displayed at client while QuotaByStorageType exceeds
+(J.Andreina and Xiaoyu Yao via vinayakumarb)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ec356fc/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index b241815..22ed86f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -1427,6 +1427,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  ParentNotDirectoryException.class,
  NSQuotaExceededException.class, 
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1469,6 +1470,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  FileNotFoundException.class,
  SafeModeException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnsupportedOperationException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
@@ -1544,6 +1546,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
  FileNotFoundException.class,
  SafeModeException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1600,6 +1603,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw re.unwrapRemoteException(AccessControlException.class,
  NSQuotaExceededException.class,
  DSQuotaExceededException.class,
+ QuotaByStorageTypeExceededException.class,
  UnresolvedPathException.class,
  SnapshotAccessControlException.class);
 } finally {
@@ -1637,6 +1641,7 @@ public class 

hadoop git commit: YARN-3444. Fix typo capabililty. Contributed by Gabor Liptak.

2015-04-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk a287d2fb7 -> 5ce3a77f3


YARN-3444. Fix typo capabililty. Contributed by Gabor Liptak.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ce3a77f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ce3a77f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ce3a77f

Branch: refs/heads/trunk
Commit: 5ce3a77f3c00aeabcd791c3373dd3c8c25160ce2
Parents: a287d2f
Author: Akira Ajisaka aajis...@apache.org
Authored: Sat Apr 25 06:08:16 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Sat Apr 25 06:08:16 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 2 ++
 .../yarn/applications/distributedshell/ApplicationMaster.java| 4 ++--
 .../apache/hadoop/yarn/applications/distributedshell/Client.java | 4 ++--
 .../src/site/markdown/WritingYarnApplications.md | 4 ++--
 4 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ce3a77f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 19e3e27..9754c33 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -265,6 +265,8 @@ Release 2.8.0 - UNRELEASED
 YARN-3387. Previous AM's container completed status couldn't pass to 
current
 AM if AM and RM restarted during the same time. (sandflee via jianhe)
 
+YARN-3444. Fix typo capabililty. (Gabor Liptak via aajisaka)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ce3a77f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
index f5b3d0a..b62c24c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
@@ -577,10 +577,10 @@ public class ApplicationMaster {
 // Dump out information about cluster capability as seen by the
 // resource manager
 int maxMem = response.getMaximumResourceCapability().getMemory();
-LOG.info("Max mem capabililty of resources in this cluster " + maxMem);
+LOG.info("Max mem capability of resources in this cluster " + maxMem);
 
 int maxVCores = response.getMaximumResourceCapability().getVirtualCores();
-LOG.info("Max vcores capabililty of resources in this cluster " + maxVCores);
+LOG.info("Max vcores capability of resources in this cluster " + maxVCores);
 
 // A resource ask cannot exceed the max.
 if (containerMemory > maxMem) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ce3a77f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index 0e9a4e4..5a90880 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -488,7 +488,7 @@ public class Client {
 // Memory ask has to be a multiple of min and less than max. 
 // Dump out information about cluster capability as seen by the resource 
manager
 int maxMem = 
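
The ApplicationMaster code being edited here logs the cluster's maximum capability and then caps the container ask so it can never exceed it. A small self-contained sketch of that clamp, with hypothetical maxima (the real AM reads them from the RM's RegisterApplicationMasterResponse):

```java
class CapabilityClamp {
  // Cap an ask at the cluster maximum, mirroring the check in
  // ApplicationMaster: "A resource ask cannot exceed the max."
  static int clamp(int requested, int clusterMax, String resource) {
    if (requested > clusterMax) {
      System.out.println(resource + " ask " + requested
          + " exceeds cluster max, capping at " + clusterMax);
      return clusterMax;
    }
    return requested;
  }

  public static void main(String[] args) {
    // Hypothetical cluster maxima; the AM obtains these via
    // response.getMaximumResourceCapability().
    int maxMem = 8192, maxVCores = 8;
    int containerMemory = clamp(10240, maxMem, "Memory");
    int containerVCores = clamp(4, maxVCores, "VCores");
    System.out.println("memory=" + containerMemory
        + " vcores=" + containerVCores);
  }
}
```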

hadoop git commit: YARN-3444. Fix typo capabililty. Contributed by Gabor Liptak.

2015-04-24 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 122262a1f -> 73ba3ebe7


YARN-3444. Fix typo capabililty. Contributed by Gabor Liptak.

(cherry picked from commit 5ce3a77f3c00aeabcd791c3373dd3c8c25160ce2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/73ba3ebe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/73ba3ebe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/73ba3ebe

Branch: refs/heads/branch-2
Commit: 73ba3ebe7c3999b8123f7e19e01bb6e4e1cf0c90
Parents: 122262a
Author: Akira Ajisaka aajis...@apache.org
Authored: Sat Apr 25 06:08:16 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Sat Apr 25 06:09:03 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 2 ++
 .../yarn/applications/distributedshell/ApplicationMaster.java| 4 ++--
 .../apache/hadoop/yarn/applications/distributedshell/Client.java | 4 ++--
 .../src/site/markdown/WritingYarnApplications.md | 4 ++--
 4 files changed, 8 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/73ba3ebe/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fec9451..ea8c723 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -217,6 +217,8 @@ Release 2.8.0 - UNRELEASED
 YARN-3387. Previous AM's container completed status couldn't pass to 
current
 AM if AM and RM restarted during the same time. (sandflee via jianhe)
 
+YARN-3444. Fix typo capabililty. (Gabor Liptak via aajisaka)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73ba3ebe/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
index f5b3d0a..b62c24c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
@@ -577,10 +577,10 @@ public class ApplicationMaster {
 // Dump out information about cluster capability as seen by the
 // resource manager
 int maxMem = response.getMaximumResourceCapability().getMemory();
-LOG.info("Max mem capabililty of resources in this cluster " + maxMem);
+LOG.info("Max mem capability of resources in this cluster " + maxMem);
 
 int maxVCores = response.getMaximumResourceCapability().getVirtualCores();
-LOG.info("Max vcores capabililty of resources in this cluster " + maxVCores);
+LOG.info("Max vcores capability of resources in this cluster " + maxVCores);
 
 // A resource ask cannot exceed the max.
 if (containerMemory > maxMem) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/73ba3ebe/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
index 0e9a4e4..5a90880 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java
@@ -488,7 +488,7 @@ public class Client {
 // Memory ask has to be a multiple of min and less than max. 
 // Dump out information about cluster 

hadoop git commit: HADOOP-11876. Refactor code to make it more readable, minor maybePrintStats bug (Zoran Dimitrijevic via raviprak)

2015-04-24 Thread raviprak
Repository: hadoop
Updated Branches:
  refs/heads/trunk 80935268f -> a287d2fb7


HADOOP-11876. Refactor code to make it more readable, minor maybePrintStats bug 
(Zoran Dimitrijevic via raviprak)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a287d2fb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a287d2fb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a287d2fb

Branch: refs/heads/trunk
Commit: a287d2fb77d9873b61c6ab24134993d784ae8475
Parents: 8093526
Author: Ravi Prakash ravip...@altiscale.com
Authored: Fri Apr 24 13:39:07 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Fri Apr 24 13:39:07 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../java/org/apache/hadoop/tools/SimpleCopyListing.java   | 10 +-
 2 files changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a287d2fb/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 80c8a54..826c77e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -561,6 +561,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs.
 (Larry McCay via stevel)
 
+HADOOP-11876. Refactor code to make it more readable, minor
+maybePrintStats bug (Zoran Dimitrijevic via raviprak)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a287d2fb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
index b9ba099..4ea1dc9 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
@@ -343,11 +343,12 @@ public class SimpleCopyListing extends CopyListing {
   }
 }
 result = new WorkReport<FileStatus[]>(
-fileSystem.listStatus(parent.getPath()), 0, true);
+fileSystem.listStatus(parent.getPath()), retry, true);
   } catch (FileNotFoundException fnf) {
 LOG.error("FileNotFoundException exception in listStatus: " +
   fnf.getMessage());
-result = new WorkReport<FileStatus[]>(new FileStatus[0], 0, true, fnf);
+result = new WorkReport<FileStatus[]>(new FileStatus[0], retry, true,
+  fnf);
   } catch (Exception e) {
 LOG.error("Exception in listStatus. Will send for retry.");
 FileStatus[] parentList = new FileStatus[1];
@@ -391,7 +392,6 @@ public class SimpleCopyListing extends CopyListing {
 
 for (FileStatus status : sourceDirs) {
   workers.put(new WorkRequest<FileStatus>(status, 0));
-  maybePrintStats();
 }
 
 while (workers.hasWork()) {
@@ -402,7 +402,7 @@ public class SimpleCopyListing extends CopyListing {
   if (LOG.isDebugEnabled()) {
 LOG.debug("Recording source-path: " + child.getPath() + " for copy.");
   }
-  if (retry == 0) {
+  if (workResult.getSuccess()) {
 CopyListingFileStatus childCopyListingStatus =
   DistCpUtils.toCopyListingFileStatus(sourceFS, child,
 preserveAcls && child.isDirectory(),
@@ -417,7 +417,6 @@ public class SimpleCopyListing extends CopyListing {
 LOG.debug("Traversing into source dir: " + child.getPath());
   }
   workers.put(new WorkRequest<FileStatus>(child, retry));
-  maybePrintStats();
 }
   } else {
 LOG.error("Giving up on " + child.getPath() +
@@ -472,5 +471,6 @@ public class SimpleCopyListing extends CopyListing {
   totalDirs++;
 }
 totalPaths++;
+maybePrintStats();
   }
 }
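
The HADOOP-11876 refactor above threads the retry count through the WorkReport, checks workResult.getSuccess() instead of a raw retry counter, and moves maybePrintStats() into the single per-path accounting method so it runs exactly once per recorded path. A minimal sketch of that request/report retry pattern, using hypothetical simplified classes rather than DistCp's actual ProducerConsumer types:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical, simplified analogues of DistCp's WorkRequest/WorkReport.
class WorkRequest<T> {
  final T item; final int retry;
  WorkRequest(T item, int retry) { this.item = item; this.retry = retry; }
}

class WorkReport<T> {
  final T item; final int retry; final boolean success;
  WorkReport(T item, int retry, boolean success) {
    this.item = item; this.retry = retry; this.success = success;
  }
  boolean getSuccess() { return success; }
}

class RetryQueueDemo {
  static final int MAX_RETRIES = 2;

  public static void main(String[] args) {
    Deque<WorkRequest<String>> queue = new ArrayDeque<>();
    queue.add(new WorkRequest<>("flaky-dir", 0));
    int recorded = 0;

    while (!queue.isEmpty()) {
      WorkRequest<String> req = queue.poll();
      // Simulate a listing that fails on the first attempt only.
      boolean ok = req.retry > 0;
      WorkReport<String> report = new WorkReport<>(req.item, req.retry, ok);

      if (report.getSuccess()) {
        recorded++;  // per-path accounting (where maybePrintStats now lives)
      } else if (report.retry < MAX_RETRIES) {
        // Re-enqueue, carrying forward the retry count from the report.
        queue.add(new WorkRequest<>(report.item, report.retry + 1));
      } else {
        System.out.println("Giving up on " + report.item);
      }
    }
    System.out.println("recorded=" + recorded);
  }
}
```

Keying the success branch off the report rather than a counter is what lets a retried path still be recorded once it finally succeeds.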



hadoop git commit: HADOOP-11876. Refactor code to make it more readable, minor maybePrintStats bug (Zoran Dimitrijevic via raviprak)

2015-04-24 Thread raviprak
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 524593ee8 -> 122262a1f


HADOOP-11876. Refactor code to make it more readable, minor maybePrintStats bug 
(Zoran Dimitrijevic via raviprak)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/122262a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/122262a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/122262a1

Branch: refs/heads/branch-2
Commit: 122262a1fb2225b487ed34a970c23e95cee3528c
Parents: 524593e
Author: Ravi Prakash ravip...@altiscale.com
Authored: Fri Apr 24 13:39:07 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Fri Apr 24 13:39:48 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../java/org/apache/hadoop/tools/SimpleCopyListing.java   | 10 +-
 2 files changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/122262a1/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8a55411..e018bc9 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -112,6 +112,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11864. JWTRedirectAuthenticationHandler breaks java8 javadocs.
 (Larry McCay via stevel)
 
+HADOOP-11876. Refactor code to make it more readable, minor
+maybePrintStats bug (Zoran Dimitrijevic via raviprak)
+
 Release 2.7.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/122262a1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
index b9ba099..4ea1dc9 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
@@ -343,11 +343,12 @@ public class SimpleCopyListing extends CopyListing {
   }
 }
 result = new WorkReport<FileStatus[]>(
-fileSystem.listStatus(parent.getPath()), 0, true);
+fileSystem.listStatus(parent.getPath()), retry, true);
   } catch (FileNotFoundException fnf) {
 LOG.error("FileNotFoundException exception in listStatus: " +
   fnf.getMessage());
-result = new WorkReport<FileStatus[]>(new FileStatus[0], 0, true, fnf);
+result = new WorkReport<FileStatus[]>(new FileStatus[0], retry, true,
+  fnf);
   } catch (Exception e) {
 LOG.error("Exception in listStatus. Will send for retry.");
 FileStatus[] parentList = new FileStatus[1];
@@ -391,7 +392,6 @@ public class SimpleCopyListing extends CopyListing {
 
 for (FileStatus status : sourceDirs) {
   workers.put(new WorkRequest<FileStatus>(status, 0));
-  maybePrintStats();
 }
 
 while (workers.hasWork()) {
@@ -402,7 +402,7 @@ public class SimpleCopyListing extends CopyListing {
   if (LOG.isDebugEnabled()) {
 LOG.debug("Recording source-path: " + child.getPath() + " for copy.");
   }
-  if (retry == 0) {
+  if (workResult.getSuccess()) {
 CopyListingFileStatus childCopyListingStatus =
   DistCpUtils.toCopyListingFileStatus(sourceFS, child,
 preserveAcls && child.isDirectory(),
@@ -417,7 +417,6 @@ public class SimpleCopyListing extends CopyListing {
 LOG.debug("Traversing into source dir: " + child.getPath());
   }
   workers.put(new WorkRequest<FileStatus>(child, retry));
-  maybePrintStats();
 }
   } else {
 LOG.error("Giving up on " + child.getPath() +
@@ -472,5 +471,6 @@ public class SimpleCopyListing extends CopyListing {
   totalDirs++;
 }
 totalPaths++;
+maybePrintStats();
   }
 }



[2/2] hadoop git commit: HADOOP-11843. Make setting up the build environment easier. Contributed by Niels Basjes.

2015-04-24 Thread cnauroth
HADOOP-11843. Make setting up the build environment easier. Contributed by 
Niels Basjes.

(cherry picked from commit 80935268f5fdd358070c6b68f89e8bd699785c54)

Conflicts:
hadoop-common-project/hadoop-common/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/524593ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/524593ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/524593ee

Branch: refs/heads/branch-2
Commit: 524593ee84e4b993a4f42ae573b09024b6b76c8f
Parents: 0583c27
Author: cnauroth cnaur...@apache.org
Authored: Fri Apr 24 13:05:18 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Fri Apr 24 13:08:47 2015 -0700

--
 BUILDING.txt|  39 +-
 dev-support/docker/Dockerfile   |  67 +++
 dev-support/docker/hadoop_env_checks.sh | 118 +++
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 start-build-env.sh  |  50 
 5 files changed, 276 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/524593ee/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index b30b30e..b61b11e 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -16,6 +16,43 @@ Requirements:
 * Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
 
 
--
+The easiest way to get an environment with all the appropriate tools is by 
means
+of the provided Docker config.
+This requires a recent version of docker ( 1.4.1 and higher are known to work 
).
+
+On Linux:
+Install Docker and run this command:
+
+$ ./start-build-env.sh
+
+On Mac:
+First make sure Homebrew has been installed ( http://brew.sh/ )
+$ brew install docker boot2docker
+$ boot2docker init -m 4096
+$ boot2docker start
+$ $(boot2docker shellinit)
+$ ./start-build-env.sh
+
+The prompt which is then presented is located at a mounted version of the 
source tree
+and all required tools for testing and building have been installed and 
configured.
+
+Note that from within this docker environment you ONLY have access to the 
Hadoop source
+tree from where you started. So if you need to run
+dev-support/test-patch.sh /path/to/my.patch
+then the patch must be placed inside the hadoop source tree.
+
+Known issues:
+- On Mac with Boot2Docker the performance on the mounted directory is 
currently extremely slow.
+  This is a known problem related to boot2docker on the Mac.
+  See:
+https://github.com/boot2docker/boot2docker/issues/593
+  This issue has been resolved as a duplicate, and they point to a new feature 
for utilizing NFS mounts
+  as the proposed solution:
+https://github.com/boot2docker/boot2docker/issues/64
+  An alternative solution to this problem is to install Linux natively inside
+  a virtual machine and run your IDE and Docker etc. inside that VM.
+
+--
 Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
 
 * Oracle JDK 1.7 (preferred)
@@ -29,7 +66,7 @@ Installing required packages for clean install of Ubuntu 
14.04 LTS Desktop:
 * Native libraries
   $ sudo apt-get -y install build-essential autoconf automake libtool cmake 
zlib1g-dev pkg-config libssl-dev
 * ProtocolBuffer 2.5.0 (required)
-  $ sudo apt-get -y install libprotobuf-dev protobuf-compiler
+  $ sudo apt-get -y install protobuf-compiler
 
 Optional packages:
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/524593ee/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
new file mode 100644
index 000..81296dc
--- /dev/null
+++ b/dev-support/docker/Dockerfile
@@ -0,0 +1,67 @@
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the 

[1/2] hadoop git commit: HADOOP-11843. Make setting up the build environment easier. Contributed by Niels Basjes.

2015-04-24 Thread cnauroth
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 0583c27fb -> 524593ee8
  refs/heads/trunk d03dcb963 -> 80935268f


HADOOP-11843. Make setting up the build environment easier. Contributed by 
Niels Basjes.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/80935268
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/80935268
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/80935268

Branch: refs/heads/trunk
Commit: 80935268f5fdd358070c6b68f89e8bd699785c54
Parents: d03dcb9
Author: cnauroth cnaur...@apache.org
Authored: Fri Apr 24 13:05:18 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Fri Apr 24 13:05:18 2015 -0700

--
 BUILDING.txt|  39 +-
 dev-support/docker/Dockerfile   |  67 +++
 dev-support/docker/hadoop_env_checks.sh | 118 +++
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 start-build-env.sh  |  50 
 5 files changed, 276 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/80935268/BUILDING.txt
--
diff --git a/BUILDING.txt b/BUILDING.txt
index 3ca9fae..de0e0e8 100644
--- a/BUILDING.txt
+++ b/BUILDING.txt
@@ -16,6 +16,43 @@ Requirements:
 * Internet connection for first build (to fetch all Maven and Hadoop 
dependencies)
 
 
--
+The easiest way to get an environment with all the appropriate tools is by 
means
+of the provided Docker config.
+This requires a recent version of docker ( 1.4.1 and higher are known to work 
).
+
+On Linux:
+Install Docker and run this command:
+
+$ ./start-build-env.sh
+
+On Mac:
+First make sure Homebrew has been installed ( http://brew.sh/ )
+$ brew install docker boot2docker
+$ boot2docker init -m 4096
+$ boot2docker start
+$ $(boot2docker shellinit)
+$ ./start-build-env.sh
+
+The prompt which is then presented is located at a mounted version of the 
source tree
+and all required tools for testing and building have been installed and 
configured.
+
+Note that from within this docker environment you ONLY have access to the 
Hadoop source
+tree from where you started. So if you need to run
+dev-support/test-patch.sh /path/to/my.patch
+then the patch must be placed inside the hadoop source tree.
+
+Known issues:
+- On Mac with Boot2Docker the performance on the mounted directory is 
currently extremely slow.
+  This is a known problem related to boot2docker on the Mac.
+  See:
+https://github.com/boot2docker/boot2docker/issues/593
+  This issue has been resolved as a duplicate, and they point to a new feature 
for utilizing NFS mounts
+  as the proposed solution:
+https://github.com/boot2docker/boot2docker/issues/64
+  An alternative solution to this problem is to install Linux natively inside
+  a virtual machine and run your IDE and Docker etc. inside that VM.
+
+--
 Installing required packages for clean install of Ubuntu 14.04 LTS Desktop:
 
 * Oracle JDK 1.7 (preferred)
@@ -29,7 +66,7 @@ Installing required packages for clean install of Ubuntu 
14.04 LTS Desktop:
 * Native libraries
   $ sudo apt-get -y install build-essential autoconf automake libtool cmake 
zlib1g-dev pkg-config libssl-dev
 * ProtocolBuffer 2.5.0 (required)
-  $ sudo apt-get -y install libprotobuf-dev protobuf-compiler
+  $ sudo apt-get -y install protobuf-compiler
 
 Optional packages:
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/80935268/dev-support/docker/Dockerfile
--
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
new file mode 100644
index 000..81296dc
--- /dev/null
+++ b/dev-support/docker/Dockerfile
@@ -0,0 +1,67 @@
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#