hadoop git commit: HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.

2015-03-30 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1ed9fb766 -> ae3e8c61f


HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ae3e8c61
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ae3e8c61
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ae3e8c61

Branch: refs/heads/trunk
Commit: ae3e8c61ff4c926ef3e71c782433ed9764d21478
Parents: 1ed9fb7
Author: Harsh J ha...@cloudera.com
Authored: Mon Mar 30 15:21:18 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Mar 30 15:21:18 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae3e8c61/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9b1cc3e..f437ad8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -323,6 +323,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC
+(Liang Xie via harsh)
+
 HDFS-7875. Improve log message when wrong value configured for
 dfs.datanode.failed.volumes.tolerated.
 (nijel via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae3e8c61/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
index 85f77f1..4e256a2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
@@ -167,6 +167,8 @@ public class DFSZKFailoverController extends 
ZKFailoverController {
 
   public static void main(String args[])
   throws Exception {
+StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
+args, LOG);
 if (DFSUtil.parseHelpArgument(args, 
 ZKFailoverController.USAGE, System.out, true)) {
   System.exit(0);
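
For reference, StringUtils.startupShutdownMessage is the same hadoop-common helper the NameNode and DataNode already use: it logs a STARTUP_MSG banner (host, args, version) at boot and registers a shutdown hook that logs SHUTDOWN_MSG. A minimal sketch of the pattern this patch applies, where MyDaemon is a hypothetical stand-in for the daemon class:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.util.StringUtils;

public class MyDaemon {
  private static final Log LOG = LogFactory.getLog(MyDaemon.class);

  public static void main(String[] args) throws Exception {
    // Logs the STARTUP_MSG banner now and registers a hook for SHUTDOWN_MSG.
    StringUtils.startupShutdownMessage(MyDaemon.class, args, LOG);
    // ... daemon work ...
  }
}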



hadoop git commit: HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.

2015-03-30 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 9f49b3e93 -> c58357939


HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.

(cherry picked from commit ae3e8c61ff4c926ef3e71c782433ed9764d21478)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5835793
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5835793
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5835793

Branch: refs/heads/branch-2
Commit: c58357939fecf797d9556f70d434edba81681f6f
Parents: 9f49b3e
Author: Harsh J ha...@cloudera.com
Authored: Mon Mar 30 15:21:18 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Mar 30 15:22:57 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5835793/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 151f71b..abc3d9a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -8,6 +8,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC
+(Liang Xie via harsh)
+
 HDFS-7875. Improve log message when wrong value configured for
 dfs.datanode.failed.volumes.tolerated.
 (nijel via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5835793/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
index 85f77f1..4e256a2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
@@ -167,6 +167,8 @@ public class DFSZKFailoverController extends 
ZKFailoverController {
 
   public static void main(String args[])
   throws Exception {
+StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
+args, LOG);
 if (DFSUtil.parseHelpArgument(args, 
 ZKFailoverController.USAGE, System.out, true)) {
   System.exit(0);



hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts. (cherry picked from commit 04ee18

2015-03-30 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 b849a519c -> a0ed29a05


HDFS-7742. Favoring decommissioning node for replication can cause a block to
stay underreplicated for long periods. Contributed by Nathan Roberts.
(cherry picked from commit 04ee18ed48ceef34598f954ff40940abc9fde1d2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a0ed29a0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a0ed29a0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a0ed29a0

Branch: refs/heads/branch-2.7
Commit: a0ed29a058855537e12de3a691b5f6500595f00e
Parents: b849a51
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:11:47 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Mon Mar 30 10:11:47 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 3 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0ed29a0/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 898907e..b96b24e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -459,6 +459,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7410. Support CreateFlags with append() to support hsync() for
 appending streams (Vinayakumar B via Colin P. McCabe)
 
+HDFS-7742. Favoring decommissioning node for replication can cause a block 
+to stay underreplicated for long periods (Nathan Roberts via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0ed29a0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 0ccd0bb..11965c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1640,7 +1640,8 @@ public class BlockManager {
   // If so, do not select the node as src node
  if ((nodesCorrupt != null) && nodesCorrupt.contains(node))
continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1655,13 +1656,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations
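
In effect, the patch relaxes the per-node limit check so a node in DECOMMISSION_IN_PROGRESS is not skipped at the soft limit (it remains bounded by replicationStreamsHardLimit elsewhere, as the new test name suggests), while dropping the old rule that always preferred a decommissioning node. A standalone sketch of the revised skip predicate, not the Hadoop class itself:

final class SourceChoiceSketch {
  // Skip a candidate source node only if this is not a highest-priority
  // block, the node is not decommissioning, and it is at the soft limit.
  static boolean skipForReplicationLimit(boolean highestPriority,
      boolean decommissionInProgress, int blocksToBeReplicated,
      int maxReplicationStreams) {
    return !highestPriority
        && !decommissionInProgress
        && blocksToBeReplicated >= maxReplicationStreams;
  }

  public static void main(String[] args) {
    // A decommissioning node stays a candidate even past the soft limit:
    System.out.println(skipForReplicationLimit(false, true, 5, 2));  // false -> kept
    // A normal node at the soft limit is skipped:
    System.out.println(skipForReplicationLimit(false, false, 5, 2)); // true -> skipped
  }
}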

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a0ed29a0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 707c780..91abb2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -535,6 +535,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42;

hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts. (cherry picked from commit 04ee18

2015-03-30 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c58357939 -> c4cedfc1d


HDFS-7742. Favoring decommissioning node for replication can cause a block to
stay underreplicated for long periods. Contributed by Nathan Roberts.
(cherry picked from commit 04ee18ed48ceef34598f954ff40940abc9fde1d2)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c4cedfc1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c4cedfc1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c4cedfc1

Branch: refs/heads/branch-2
Commit: c4cedfc1d601127430c70ca8ca4d4e2ee2d1003d
Parents: c583579
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:11:25 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Mon Mar 30 10:11:25 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 3 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4cedfc1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index abc3d9a..6cf5227 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -520,6 +520,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7410. Support CreateFlags with append() to support hsync() for
 appending streams (Vinayakumar B via Colin P. McCabe)
 
+HDFS-7742. Favoring decommissioning node for replication can cause a block 
+to stay underreplicated for long periods (Nathan Roberts via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4cedfc1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 0ccd0bb..11965c1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1640,7 +1640,8 @@ public class BlockManager {
   // If so, do not select the node as src node
  if ((nodesCorrupt != null) && nodesCorrupt.contains(node))
continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1655,13 +1656,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c4cedfc1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 707c780..91abb2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -535,6 +535,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42;

hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts.

2015-03-30 Thread kihwal
Repository: hadoop
Updated Branches:
  refs/heads/trunk ae3e8c61f -> 04ee18ed4


HDFS-7742. Favoring decommissioning node for replication can cause a block to
stay underreplicated for long periods. Contributed by Nathan Roberts.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/04ee18ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/04ee18ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/04ee18ed

Branch: refs/heads/trunk
Commit: 04ee18ed48ceef34598f954ff40940abc9fde1d2
Parents: ae3e8c6
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:10:11 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Mon Mar 30 10:10:11 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 3 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f437ad8..811ee75 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -829,6 +829,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7410. Support CreateFlags with append() to support hsync() for
 appending streams (Vinayakumar B via Colin P. McCabe)
 
+HDFS-7742. Favoring decommissioning node for replication can cause a block 
+to stay underreplicated for long periods (Nathan Roberts via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index ad40782..f6e15a3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1637,7 +1637,8 @@ public class BlockManager {
   // If so, do not select the node as src node
  if ((nodesCorrupt != null) && nodesCorrupt.contains(node))
continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1652,13 +1653,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 707c780..91abb2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -535,6 +535,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42; // arbitrary
+Block aBlock = new Block(blockId, 0, 0);
+

hadoop git commit: HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy Battula.

2015-03-30 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/trunk 04ee18ed4 -> e7ea2a8e8


HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e7ea2a8e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e7ea2a8e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e7ea2a8e

Branch: refs/heads/trunk
Commit: e7ea2a8e8f0a7b428ef10552885757b99b59e4dc
Parents: 04ee18e
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 31 00:27:50 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Mar 31 00:27:50 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e7ea2a8e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 811ee75..efba80e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -376,6 +376,9 @@ Release 2.8.0 - UNRELEASED
 greater or equal to 1 there is mismatch in the UI report
 (J.Andreina via vinayakumarb)
 
+HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
+aajisaka)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e7ea2a8e/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
index 87a9fcd..5a8e366 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
@@ -224,9 +224,9 @@ Space Reclamation
 
 ### File Deletes and Undeletes
 
-When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the `/trash` 
directory. The file can be restored quickly as long as it remains in `/trash`. 
A file remains in `/trash` for a configurable amount of time. After the expiry 
of its life in `/trash`, the NameNode deletes the file from the HDFS namespace. 
The deletion of a file causes the blocks associated with the file to be freed. 
Note that there could be an appreciable time delay between the time a file is 
deleted by a user and the time of the corresponding increase in free space in 
HDFS.
+When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the trash 
directory (`/user/<username>/.Trash`). The file can be restored quickly as long 
as it remains in trash. A file remains in trash for a configurable amount of 
time. After the expiry of its life in trash, the NameNode deletes the file from 
the HDFS namespace. The deletion of a file causes the blocks associated with 
the file to be freed. Note that there could be an appreciable time delay 
between the time a file is deleted by a user and the time of the corresponding 
increase in free space in HDFS.
 
-A user can Undelete a file after deleting it as long as it remains in the 
`/trash` directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the `/trash` directory and retrieve the file. The `/trash` 
directory contains only the latest copy of the file that was deleted. The 
`/trash` directory is just like any other directory with one special feature: 
HDFS applies specified policies to automatically delete files from this 
directory. Current default trash interval is set to 0 (Deletes file without 
storing in trash). This value is configurable parameter stored as 
`fs.trash.interval` stored in core-site.xml.
+A user can Undelete a file after deleting it as long as it remains in the 
trash directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the trash directory and retrieve the file. The trash 
directory contains only the latest copy of the file that was deleted. The trash 
directory is just like any other directory with one special feature: HDFS 
applies specified policies to automatically delete files from this directory. 
The current default trash interval is set to 0 (the file is deleted without 
being stored in trash). This value is a configurable parameter, 
`fs.trash.interval`, stored in core-site.xml.
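
To make the retention knob concrete, here is an illustrative client-side sketch (not part of the patch) that enables a 24-hour trash interval programmatically and moves a file into the current user's trash; `fs.trash.interval` and `Trash.moveToAppropriateTrash` are existing hadoop-common APIs, while the path is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Equivalent to setting fs.trash.interval in core-site.xml (minutes);
    // 0 would disable trash entirely.
    conf.setLong("fs.trash.interval", 1440);
    FileSystem fs = FileSystem.get(conf);
    // Moves the file under /user/<current user>/.Trash instead of deleting it.
    boolean moved = Trash.moveToAppropriateTrash(
        fs, new Path("/tmp/obsolete.txt"), conf);
    System.out.println("moved to trash: " + moved);
  }
}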
 
 ### Decrease Replication Factor
 



hadoop git commit: HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy Battula.

2015-03-30 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c4cedfc1d -> d4bb9b214


HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy
Battula.

(cherry picked from commit e7ea2a8e8f0a7b428ef10552885757b99b59e4dc)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d4bb9b21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d4bb9b21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d4bb9b21

Branch: refs/heads/branch-2
Commit: d4bb9b21465e0fffa4282a34f7865e4ac53987f0
Parents: c4cedfc
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 31 00:27:50 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Mar 31 00:28:27 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4bb9b21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 6cf5227..d4baaf3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -61,6 +61,9 @@ Release 2.8.0 - UNRELEASED
 greater or equal to 1 there is mismatch in the UI report
 (J.Andreina via vinayakumarb)
 
+HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
+aajisaka)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4bb9b21/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
index 87a9fcd..5a8e366 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
@@ -224,9 +224,9 @@ Space Reclamation
 
 ### File Deletes and Undeletes
 
-When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the `/trash` 
directory. The file can be restored quickly as long as it remains in `/trash`. 
A file remains in `/trash` for a configurable amount of time. After the expiry 
of its life in `/trash`, the NameNode deletes the file from the HDFS namespace. 
The deletion of a file causes the blocks associated with the file to be freed. 
Note that there could be an appreciable time delay between the time a file is 
deleted by a user and the time of the corresponding increase in free space in 
HDFS.
+When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the trash 
directory (`/user/<username>/.Trash`). The file can be restored quickly as long 
as it remains in trash. A file remains in trash for a configurable amount of 
time. After the expiry of its life in trash, the NameNode deletes the file from 
the HDFS namespace. The deletion of a file causes the blocks associated with 
the file to be freed. Note that there could be an appreciable time delay 
between the time a file is deleted by a user and the time of the corresponding 
increase in free space in HDFS.
 
-A user can Undelete a file after deleting it as long as it remains in the 
`/trash` directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the `/trash` directory and retrieve the file. The `/trash` 
directory contains only the latest copy of the file that was deleted. The 
`/trash` directory is just like any other directory with one special feature: 
HDFS applies specified policies to automatically delete files from this 
directory. Current default trash interval is set to 0 (Deletes file without 
storing in trash). This value is configurable parameter stored as 
`fs.trash.interval` stored in core-site.xml.
+A user can Undelete a file after deleting it as long as it remains in the 
trash directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the trash directory and retrieve the file. The trash 
directory contains only the latest copy of the file that was deleted. The trash 
directory is just like any other directory with one special feature: HDFS 
applies specified policies to automatically delete files from this directory. 
The current default trash interval is set to 0 (the file is deleted without 
being stored in trash). This value is a configurable parameter, 
`fs.trash.interval`, stored in core-site.xml.
 
 ### Decrease 

hadoop git commit: YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and removing inconsistencies in the default values. Contributed by Junping Du and Karthik Kambatla.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/trunk e7ea2a8e8 -> c358368f5


YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and 
removing inconsistencies in the default values. Contributed by Junping Du and 
Karthik Kambatla.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c358368f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c358368f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c358368f

Branch: refs/heads/trunk
Commit: c358368f511963ad8e35f030b9babee541e1bd01
Parents: e7ea2a8
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:09:40 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:09:40 2015 -0700

--
 .../java/org/apache/hadoop/mapred/Task.java | 26 --
 hadoop-yarn-project/CHANGES.txt |  4 +
 .../apache/hadoop/yarn/util/CpuTimeTracker.java |  3 +-
 .../yarn/util/ProcfsBasedProcessTree.java   | 80 +-
 .../util/ResourceCalculatorProcessTree.java | 66 ---
 .../yarn/util/WindowsBasedProcessTree.java  | 21 +++--
 .../yarn/util/TestProcfsBasedProcessTree.java   | 85 ++--
 .../util/TestResourceCalculatorProcessTree.java |  4 +-
 .../yarn/util/TestWindowsBasedProcessTree.java  | 28 +++
 .../monitor/ContainerMetrics.java   | 12 ++-
 .../monitor/ContainersMonitorImpl.java  | 12 +--
 11 files changed, 187 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c358368f/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index bf5ca22..80881bc 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -171,7 +171,7 @@ abstract public class Task implements Writable, 
Configurable {
 skipRanges.skipRangeIterator();
 
   private ResourceCalculatorProcessTree pTree;
-  private long initCpuCumulativeTime = 0;
+  private long initCpuCumulativeTime = 
ResourceCalculatorProcessTree.UNAVAILABLE;
 
   protected JobConf conf;
   protected MapOutputFile mapOutputFile;
@@ -866,13 +866,25 @@ abstract public class Task implements Writable, 
Configurable {
 }
 pTree.updateProcessTree();
 long cpuTime = pTree.getCumulativeCpuTime();
-long pMem = pTree.getCumulativeRssmem();
-long vMem = pTree.getCumulativeVmem();
+long pMem = pTree.getRssMemorySize();
+long vMem = pTree.getVirtualMemorySize();
 // Remove the CPU time consumed previously by JVM reuse
-cpuTime -= initCpuCumulativeTime;
-counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
-counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
-counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE &&
+initCpuCumulativeTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  cpuTime -= initCpuCumulativeTime;
+}
+
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
+}
+
+if (pMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
+}
+
+if (vMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c358368f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fb233e3..b38c9ac 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -847,6 +847,10 @@ Release 2.7.0 - UNRELEASED
 YARN-2213. Change proxy-user cookie log in AmIpFilter to DEBUG.
 (Varun Saxena via xgong)
 
+YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use 
and
+removing inconsistencies in the default values. (Junping Du and Karthik
+Kambatla via vinodkv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES
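
The counter guards in the Task.java hunk above follow from the API cleanup: after YARN-3304 the process-tree getters report `ResourceCalculatorProcessTree.UNAVAILABLE` rather than inconsistent defaults, so callers check the sentinel before publishing metrics. An illustrative probe, assuming a pid string (e.g. a Linux process id) is passed as args[0]:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree;

public class ProcessTreeProbe {
  public static void main(String[] args) {
    // Passing null for the implementation class picks the platform default
    // (procfs on Linux); the factory may return null if none applies.
    ResourceCalculatorProcessTree pTree = ResourceCalculatorProcessTree
        .getResourceCalculatorProcessTree(args[0], null, new Configuration());
    if (pTree == null) {
      System.err.println("No process-tree implementation for this platform");
      return;
    }
    pTree.updateProcessTree();
    long rss = pTree.getRssMemorySize(); // bytes, or UNAVAILABLE
    if (rss != ResourceCalculatorProcessTree.UNAVAILABLE) {
      System.out.println("RSS bytes: " + rss);
    }
  }
}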


[04/50] [abbrv] hadoop git commit: HADOOP-11639. Clean up Windows native code compilation warnings related to Windows Secure Container Executor. Contributed by Remus Rusanu.

2015-03-30 Thread zhz
HADOOP-11639. Clean up Windows native code compilation warnings related to 
Windows Secure Container Executor. Contributed by Remus Rusanu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3836ad6c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3836ad6c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3836ad6c

Branch: refs/heads/HDFS-7285
Commit: 3836ad6c0b3331cf60286d134157c13985908230
Parents: 05499b1
Author: cnauroth cnaur...@apache.org
Authored: Fri Mar 27 15:03:41 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Fri Mar 27 15:03:41 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../windows_secure_container_executor.c |  2 +-
 .../hadoop-common/src/main/winutils/client.c| 17 --
 .../hadoop-common/src/main/winutils/config.cpp  |  2 +-
 .../src/main/winutils/include/winutils.h| 24 +++---
 .../src/main/winutils/libwinutils.c | 18 +--
 .../hadoop-common/src/main/winutils/service.c   | 34 ++--
 .../src/main/winutils/systeminfo.c  |  3 ++
 .../hadoop-common/src/main/winutils/task.c  | 28 +---
 9 files changed, 76 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3836ad6c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index febbf6b..8643901 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1172,6 +1172,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11691. X86 build of libwinutils is broken.
 (Kiran Kumar M R via cnauroth)
 
+HADOOP-11639. Clean up Windows native code compilation warnings related to
+Windows Secure Container Executor. (Remus Rusanu via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3836ad6c/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
index 7e65065..b37359d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
@@ -409,7 +409,7 @@ 
Java_org_apache_hadoop_yarn_server_nodemanager_WindowsSecureContainerExecutor_00
 
 done:
  if (path) (*env)->ReleaseStringChars(env, jpath, path);
-  return hFile;
+  return (jlong) hFile;
 #endif
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3836ad6c/hadoop-common-project/hadoop-common/src/main/winutils/client.c
--
diff --git a/hadoop-common-project/hadoop-common/src/main/winutils/client.c 
b/hadoop-common-project/hadoop-common/src/main/winutils/client.c
index 047bfb5..e3a2c37 100644
--- a/hadoop-common-project/hadoop-common/src/main/winutils/client.c
+++ b/hadoop-common-project/hadoop-common/src/main/winutils/client.c
@@ -28,8 +28,6 @@ static ACCESS_MASK CLIENT_MASK = 1;
 VOID ReportClientError(LPWSTR lpszLocation, DWORD dwError) {
   LPWSTR  debugMsg = NULL;
   int len;
-  WCHAR   hexError[32];
-  HRESULT hr;
 
   if (IsDebuggerPresent()) {
 len = FormatMessageW(
@@ -49,7 +47,6 @@ DWORD PrepareRpcBindingHandle(
   DWORD   dwError = EXIT_FAILURE;
   RPC_STATUS  status;
   LPWSTR  lpszStringBinding= NULL;
-  ULONG   ulCode;
   RPC_SECURITY_QOS_V3 qos;
   SID_IDENTIFIER_AUTHORITY authNT = SECURITY_NT_AUTHORITY;
   BOOL rpcBindingInit = FALSE;
@@ -104,7 +101,7 @@ DWORD PrepareRpcBindingHandle(
   RPC_C_AUTHN_WINNT,  // AuthnSvc
   NULL,   // AuthnIdentity (self)
   RPC_C_AUTHZ_NONE,   // AuthzSvc
-  &qos);
+  (RPC_SECURITY_QOS*) &qos);
   if (RPC_S_OK != status) {
 ReportClientError(L"RpcBindingSetAuthInfoEx", status);
 dwError = status;
@@ -375,7 +372,7 @@ DWORD RpcCall_WinutilsCreateFile(
   RpcEndExcept;
 
   if (ERROR_SUCCESS == dwError) {
-*hFile = response->hFile;
+*hFile = (HANDLE) response->hFile;
   }
 
 done:

[30/50] [abbrv] hadoop git commit: HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports to print striped blocks. Contributed by Takuya Fukudome.

2015-03-30 Thread zhz
HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports to print 
striped blocks. Contributed by Takuya Fukudome.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/daa78e31
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/daa78e31
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/daa78e31

Branch: refs/heads/HDFS-7285
Commit: daa78e3134643051204f584a95d97ef72ad17eb2
Parents: 3dbde16
Author: Jing Zhao ji...@apache.org
Authored: Thu Mar 5 16:44:38 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:26 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/daa78e31/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index f522850..452c230 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -876,8 +876,8 @@ public class INodeFile extends INodeWithAdditionalFields
 out.print(", fileSize=" + computeFileSize(snapshotId));
 // only compare the first block
 out.print(", blocks=");
-out.print(blocks == null || blocks.length == 0? null: blocks[0]);
-// TODO print striped blocks
+BlockInfo[] blks = getBlocks();
+out.print(blks == null || blks.length == 0? null: blks[0]);
 out.println();
   }
 



[15/50] [abbrv] hadoop git commit: HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zhz
HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e7ea2a8e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e7ea2a8e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e7ea2a8e

Branch: refs/heads/HDFS-7285
Commit: e7ea2a8e8f0a7b428ef10552885757b99b59e4dc
Parents: 04ee18e
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 31 00:27:50 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Mar 31 00:27:50 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e7ea2a8e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 811ee75..efba80e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -376,6 +376,9 @@ Release 2.8.0 - UNRELEASED
 greater or equal to 1 there is mismatch in the UI report
 (J.Andreina via vinayakumarb)
 
+HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
+aajisaka)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e7ea2a8e/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
index 87a9fcd..5a8e366 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
@@ -224,9 +224,9 @@ Space Reclamation
 
 ### File Deletes and Undeletes
 
-When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the `/trash` 
directory. The file can be restored quickly as long as it remains in `/trash`. 
A file remains in `/trash` for a configurable amount of time. After the expiry 
of its life in `/trash`, the NameNode deletes the file from the HDFS namespace. 
The deletion of a file causes the blocks associated with the file to be freed. 
Note that there could be an appreciable time delay between the time a file is 
deleted by a user and the time of the corresponding increase in free space in 
HDFS.
+When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the trash 
directory (`/user/<username>/.Trash`). The file can be restored quickly as long 
as it remains in trash. A file remains in trash for a configurable amount of 
time. After the expiry of its life in trash, the NameNode deletes the file from 
the HDFS namespace. The deletion of a file causes the blocks associated with 
the file to be freed. Note that there could be an appreciable time delay 
between the time a file is deleted by a user and the time of the corresponding 
increase in free space in HDFS.
 
-A user can Undelete a file after deleting it as long as it remains in the 
`/trash` directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the `/trash` directory and retrieve the file. The `/trash` 
directory contains only the latest copy of the file that was deleted. The 
`/trash` directory is just like any other directory with one special feature: 
HDFS applies specified policies to automatically delete files from this 
directory. Current default trash interval is set to 0 (Deletes file without 
storing in trash). This value is configurable parameter stored as 
`fs.trash.interval` stored in core-site.xml.
+A user can Undelete a file after deleting it as long as it remains in the 
trash directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the trash directory and retrieve the file. The trash 
directory contains only the latest copy of the file that was deleted. The trash 
directory is just like any other directory with one special feature: HDFS 
applies specified policies to automatically delete files from this directory. 
The current default trash interval is set to 0 (the file is deleted without 
being stored in trash). This value is a configurable parameter, 
`fs.trash.interval`, stored in core-site.xml.
 
 ### Decrease Replication Factor
 



[03/50] [abbrv] hadoop git commit: MAPREDUCE-6294. Remove an extra parameter described in Javadoc of TokenCache. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zhz
MAPREDUCE-6294. Remove an extra parameter described in Javadoc of TokenCache. 
Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/05499b10
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/05499b10
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/05499b10

Branch: refs/heads/HDFS-7285
Commit: 05499b1093ea6ba6a39a1354d67b0a46a2982824
Parents: e074952
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Mar 28 00:08:35 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Mar 28 00:08:35 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../java/org/apache/hadoop/mapreduce/security/TokenCache.java | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/05499b10/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 9d6f1d4..ce16510 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -305,6 +305,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-6242. Progress report log is incredibly excessive in 
 application master. (Varun Saxena via devaraj)
 
+MAPREDUCE-6294. Remove an extra parameter described in Javadoc of
+TokenCache. (Brahma Reddy Battula via ozawa)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/05499b10/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
index 7b1f657..6c0de1b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/security/TokenCache.java
@@ -105,7 +105,6 @@ public class TokenCache {
* get delegation token for a specific FS
* @param fs
* @param credentials
-   * @param p
* @param conf
* @throws IOException
*/



hadoop git commit: HDFS-7261. storageMap is accessed without synchronization in DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. McCabe)

2015-03-30 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 5358b8316 -> 1feb9569f


HDFS-7261. storageMap is accessed without synchronization in 
DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. 
McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1feb9569
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1feb9569
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1feb9569

Branch: refs/heads/trunk
Commit: 1feb9569f366a29ecb43592d71ee21023162c18f
Parents: 5358b83
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Mar 30 10:46:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Mon Mar 30 10:46:21 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  4 +++
 .../blockmanagement/DatanodeDescriptor.java | 29 
 2 files changed, 21 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1feb9569/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index efba80e..79a81c6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -379,6 +379,10 @@ Release 2.8.0 - UNRELEASED
 HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
 aajisaka)
 
+HDFS-7261. storageMap is accessed without synchronization in
+DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin
+P. McCabe)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1feb9569/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index d0d7a72..4731ad4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -447,8 +447,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   LOG.info("Number of failed storage changes from "
   + this.volumeFailures + " to " + volFailures);
-  failedStorageInfos = new HashSet<DatanodeStorageInfo>(
-  storageMap.values());
+  synchronized (storageMap) {
+failedStorageInfos =
+new HashSet<DatanodeStorageInfo>(storageMap.values());
+  }
 }
 
 setCacheCapacity(cacheCapacity);
@@ -480,8 +482,11 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   updateFailedStorage(failedStorageInfos);
 }
-
-if (storageMap.size() != reports.length) {
+long storageMapSize;
+synchronized (storageMap) {
+  storageMapSize = storageMap.size();
+}
+if (storageMapSize != reports.length) {
   pruneStorageMap(reports);
 }
   }
@@ -491,14 +496,14 @@ public class DatanodeDescriptor extends DatanodeInfo {
* as long as they have associated block replicas.
*/
   private void pruneStorageMap(final StorageReport[] reports) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Number of storages reported in heartbeat=" + reports.length +
-"; Number of storages in storageMap=" + storageMap.size());
-}
+synchronized (storageMap) {
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Number of storages reported in heartbeat=" + reports.length
++ "; Number of storages in storageMap=" + storageMap.size());
+  }

-HashMap<String, DatanodeStorageInfo> excessStorages;
+  HashMap<String, DatanodeStorageInfo> excessStorages;

-synchronized (storageMap) {
   // Init excessStorages with all known storages.
   excessStorages = new HashMap<String, DatanodeStorageInfo>(storageMap);
 
@@ -515,8 +520,8 @@ public class DatanodeDescriptor extends DatanodeInfo {
   LOG.info("Removed storage " + storageInfo + " from DataNode" + this);
 } else if (LOG.isDebugEnabled()) {
   // This can occur until all block reports are received.
-  LOG.debug("Deferring removal of stale storage " + storageInfo +
- " with " + storageInfo.numBlocks() + " blocks");
+  LOG.debug("Deferring removal of stale storage " + storageInfo
+  + " with " + storageInfo.numBlocks() + " 
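
The fix is an instance of the copy-under-lock pattern: every access to storageMap takes the map's monitor, and iteration works on a snapshot taken under the lock. A generic sketch of the pattern, unrelated to the Hadoop class:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CopyUnderLock {
  private final Map<String, Integer> storageMap = new HashMap<>();

  void put(String id, int blocks) {
    synchronized (storageMap) {   // all reads/writes share this monitor
      storageMap.put(id, blocks);
    }
  }

  List<Integer> snapshotValues() {
    synchronized (storageMap) {
      return new ArrayList<>(storageMap.values()); // consistent copy
    }
  }

  public static void main(String[] args) {
    CopyUnderLock d = new CopyUnderLock();
    d.put("storage-1", 12);
    d.put("storage-2", 7);
    for (int v : d.snapshotValues()) { // no lock held while iterating
      System.out.println(v);
    }
  }
}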

[29/50] [abbrv] hadoop git commit: HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed by Jing Zhao.

2015-03-30 Thread zhz
HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed 
by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7527a599
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7527a599
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7527a599

Branch: refs/heads/HDFS-7285
Commit: 7527a599b839b4e675c4b9f0ca36cbee2ddd2381
Parents: c37d982
Author: Jing Zhao ji...@apache.org
Authored: Wed Feb 25 22:10:26 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:25 2015 -0700

--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  17 ++
 .../server/blockmanagement/BlockCollection.java |  13 +-
 .../hdfs/server/blockmanagement/BlockInfo.java  |  88 ++-
 .../BlockInfoContiguousUnderConstruction.java   |   6 +-
 .../blockmanagement/BlockInfoStriped.java   |  31 +++
 .../BlockInfoStripedUnderConstruction.java  | 240 ++
 .../server/blockmanagement/BlockManager.java| 147 +--
 .../CacheReplicationMonitor.java|  16 +-
 .../hdfs/server/namenode/FSDirConcatOp.java |   8 +-
 .../hdfs/server/namenode/FSDirectory.java   |   5 +-
 .../hadoop/hdfs/server/namenode/FSEditLog.java  |   8 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  16 +-
 .../hdfs/server/namenode/FSImageFormat.java |   7 +-
 .../server/namenode/FSImageFormatPBINode.java   |  46 +++-
 .../hdfs/server/namenode/FSNamesystem.java  | 130 ++
 .../namenode/FileUnderConstructionFeature.java  |  15 +-
 .../namenode/FileWithStripedBlocksFeature.java  | 112 
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 254 +--
 .../hdfs/server/namenode/LeaseManager.java  |   6 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |   4 +-
 .../hadoop/hdfs/server/namenode/Namesystem.java |   3 +-
 .../snapshot/FSImageFormatPBSnapshot.java   |   7 +-
 .../server/namenode/snapshot/FileDiffList.java  |   9 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto|   5 +
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |  10 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   3 +-
 .../blockmanagement/TestReplicationPolicy.java  |   4 +-
 .../hdfs/server/namenode/TestAddBlock.java  |  12 +-
 .../hdfs/server/namenode/TestAddBlockgroup.java |   3 +-
 .../namenode/TestBlockUnderConstruction.java|   6 +-
 .../hdfs/server/namenode/TestFSImage.java   |   4 +-
 .../hdfs/server/namenode/TestFileTruncate.java  |   4 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   |   4 +-
 .../snapshot/TestSnapshotBlocksMap.java |  24 +-
 .../namenode/snapshot/TestSnapshotDeletion.java |  16 +-
 35 files changed, 963 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7527a599/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 9446b70..f31acc5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -172,6 +172,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageReportProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypesProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageUuidsProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StripedBlockProto;
 import org.apache.hadoop.hdfs.protocol.proto.InotifyProtos;
 import 
org.apache.hadoop.hdfs.protocol.proto.JournalProtocolProtos.JournalInfoProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.XAttrProtos.GetXAttrsResponseProto;
@@ -184,6 +185,7 @@ import 
org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
@@ -430,6 +432,21 @@ public class PBHelper {
 return new Block(b.getBlockId(), b.getNumBytes(), b.getGenStamp());
   }
 
+  public static BlockInfoStriped 

[44/50] [abbrv] hadoop git commit: HDFS-7864. Erasure Coding: Update safemode calculation for striped blocks. Contributed by GAO Rui.

2015-03-30 Thread zhz
HDFS-7864. Erasure Coding: Update safemode calculation for striped blocks. 
Contributed by GAO Rui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e238608f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e238608f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e238608f

Branch: refs/heads/HDFS-7285
Commit: e238608fe1dc02907efc5243213cf16e762b5fee
Parents: 9a0f626
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 23 15:06:53 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:07 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java |  6 ++
 .../hdfs/server/blockmanagement/BlockManager.java  | 12 +++-
 .../hdfs/server/blockmanagement/BlocksMap.java |  2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 17 -
 .../hadoop/hdfs/server/namenode/SafeMode.java  |  5 +++--
 .../java/org/apache/hadoop/hdfs/TestSafeMode.java  | 15 +--
 6 files changed, 42 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e238608f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 1d69d74..187f8c9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -233,6 +233,12 @@ public class BlockIdManager {
 return id < 0;
   }
 
+  /**
+   * The last 4 bits of HdfsConstants.BLOCK_GROUP_INDEX_MASK(15) is 1111,
+   * so the last 4 bits of (~HdfsConstants.BLOCK_GROUP_INDEX_MASK) is 0000
+   * and the other 60 bits are 1. Group ID is the first 60 bits of any
+   * data/parity block id in the same striped block group.
+   */
   public static long convertToStripedID(long id) {
 return id & (~HdfsConstants.BLOCK_GROUP_INDEX_MASK);
   }
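
A worked illustration of the mask arithmetic documented above, as a
standalone sketch: BLOCK_GROUP_INDEX_MASK is 15 (binary 1111), so ANDing an
ID with its complement clears the low 4 bits that index a block within its
group. The demo class and sample IDs are illustrative, not the Hadoop
sources themselves.

  public class StripedIdDemo {
    static final long BLOCK_GROUP_INDEX_MASK = 15;

    // Same logic as BlockIdManager.convertToStripedID in the patch above.
    static long convertToStripedID(long id) {
      return id & (~BLOCK_GROUP_INDEX_MASK);
    }

    public static void main(String[] args) {
      long groupId = -1024;        // hypothetical striped group ID (negative)
      long block3 = groupId | 3;   // the block at index 3 within the group
      // Both IDs collapse to the same group once the low 4 bits are masked.
      System.out.println(convertToStripedID(block3) == groupId);  // true
    }
  }
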

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e238608f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 7dfe0a4..abe44f0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -684,8 +684,10 @@ public class BlockManager {
 // a forced completion when a file is getting closed by an
 // OP_CLOSE edit on the standby).
 namesystem.adjustSafeModeBlockTotals(0, 1);
+final int minStorage = curBlock.isStriped() ?
+((BlockInfoStriped) curBlock).getDataBlockNum() : minReplication;
 namesystem.incrementSafeBlockCount(
-Math.min(numNodes, minReplication));
+Math.min(numNodes, minStorage), curBlock);
 
 // replace block in the blocksMap
 return blocksMap.replaceBlock(completeBlock);
@@ -2208,7 +2210,7 @@ public class BlockManager {
 // refer HDFS-5283
 if (namesystem.isInSnapshot(storedBlock.getBlockCollection())) {
   int numOfReplicas = BlockInfo.getNumExpectedLocations(storedBlock);
-  namesystem.incrementSafeBlockCount(numOfReplicas);
+  namesystem.incrementSafeBlockCount(numOfReplicas, storedBlock);
 }
 //and fall through to next clause
   }  
@@ -2589,14 +2591,14 @@ public class BlockManager {
   // only complete blocks are counted towards that.
   // In the case that the block just became complete above, completeBlock()
   // handles the safe block count maintenance.
-  namesystem.incrementSafeBlockCount(numCurrentReplica);
+  namesystem.incrementSafeBlockCount(numCurrentReplica, storedBlock);
 }
   }
 
   /**
 * Modify (block-->datanode) map. Remove block from set of
* needed replications if this takes care of the problem.
-   * @return the block that is stored in blockMap.
+   * @return the block that is stored in blocksMap.
*/
   private Block addStoredBlock(final BlockInfo block,
final Block reportedBlock,
@@ -2665,7 +2667,7 @@ public class BlockManager {
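
The heart of this change is the minStorage computation above: a replicated
block counts as safe once minReplication replicas are reported, while a
striped group is safe only once enough storages to cover its data blocks
have reported. A minimal sketch of that rule (assumed semantics; the RS-6-3
numbers mirror the layout used elsewhere in this series):

  public class SafeBlockRule {
    // Mirrors: curBlock.isStriped() ? getDataBlockNum() : minReplication
    static int minStorages(boolean striped, int dataBlockNum, int minReplication) {
      return striped ? dataBlockNum : minReplication;
    }

    public static void main(String[] args) {
      // RS-6-3: 6 data + 3 parity. The group needs 6 reported storages.
      System.out.println(minStorages(true, 6, 1));   // 6
      // A plain replicated block still needs only minReplication.
      System.out.println(minStorages(false, 6, 1));  // 1
    }
  }
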

hadoop git commit: HDFS-7261. storageMap is accessed without synchronization in DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. McCabe)

2015-03-30 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 cc130a033 - 02ed22cd2


HDFS-7261. storageMap is accessed without synchronization in 
DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. 
McCabe)

(cherry picked from commit 1feb9569f366a29ecb43592d71ee21023162c18f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02ed22cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02ed22cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02ed22cd

Branch: refs/heads/branch-2
Commit: 02ed22cd2db7b5ff4d6e3d5be2a662973e5d3759
Parents: cc130a0
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Mar 30 10:46:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Mon Mar 30 10:53:57 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  4 +++
 .../blockmanagement/DatanodeDescriptor.java | 29 
 2 files changed, 21 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02ed22cd/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d4baaf3..b3cc6b7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -64,6 +64,10 @@ Release 2.8.0 - UNRELEASED
 HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
 aajisaka)
 
+HDFS-7261. storageMap is accessed without synchronization in
+DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin
+P. McCabe)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/02ed22cd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 96084a4..1ab2ab9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -449,8 +449,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   LOG.info("Number of failed storage changes from "
   + this.volumeFailures + " to " + volFailures);
-  failedStorageInfos = new HashSet<DatanodeStorageInfo>(
-  storageMap.values());
+  synchronized (storageMap) {
+failedStorageInfos =
+new HashSet<DatanodeStorageInfo>(storageMap.values());
+  }
 }
 
 setCacheCapacity(cacheCapacity);
@@ -482,8 +484,11 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   updateFailedStorage(failedStorageInfos);
 }
-
-if (storageMap.size() != reports.length) {
+long storageMapSize;
+synchronized (storageMap) {
+  storageMapSize = storageMap.size();
+}
+if (storageMapSize != reports.length) {
   pruneStorageMap(reports);
 }
   }
@@ -493,14 +498,14 @@ public class DatanodeDescriptor extends DatanodeInfo {
* as long as they have associated block replicas.
*/
   private void pruneStorageMap(final StorageReport[] reports) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Number of storages reported in heartbeat=" + reports.length +
-"; Number of storages in storageMap=" + storageMap.size());
-}
+synchronized (storageMap) {
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Number of storages reported in heartbeat=" + reports.length
++ "; Number of storages in storageMap=" + storageMap.size());
+  }
 
-HashMap<String, DatanodeStorageInfo> excessStorages;
+  HashMap<String, DatanodeStorageInfo> excessStorages;
 
-synchronized (storageMap) {
   // Init excessStorages with all known storages.
   excessStorages = new HashMap<String, DatanodeStorageInfo>(storageMap);
 
@@ -517,8 +522,8 @@ public class DatanodeDescriptor extends DatanodeInfo {
   LOG.info("Removed storage " + storageInfo + " from DataNode" + this);
 } else if (LOG.isDebugEnabled()) {
   // This can occur until all block reports are received.
-  LOG.debug("Deferring removal of stale storage " + storageInfo +
- " with " + storageInfo.numBlocks() + " blocks");
-  LOG.debug("Deferring removal of stale 

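The fix above applies one pattern throughout: every read of storageMap that
can race with a writer moves under synchronized (storageMap), and iteration
happens over a snapshot copied out while holding the lock. A self-contained
sketch of that pattern (an illustrative class, not the actual
DatanodeDescriptor):

  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  class StorageRegistry {
    private final Map<String, String> storageMap = new HashMap<>();

    void add(String id, String info) {
      synchronized (storageMap) {     // writers take the same lock
        storageMap.put(id, info);
      }
    }

    Set<String> snapshotValues() {
      synchronized (storageMap) {     // copy out a snapshot under the lock;
        return new HashSet<>(storageMap.values());
      }
      // callers may then iterate the snapshot without holding the lock
    }
  }
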
hadoop git commit: YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and removing inconsistencies in the default values. Contributed by Junping Du and Karthik Kambatla.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 d4bb9b214 - c5bc48946


YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and 
removing inconsistencies in the default values. Contributed by Junping Du and 
Karthik Kambatla.

(cherry picked from commit c358368f511963ad8e35f030b9babee541e1bd01)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5bc4894
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5bc4894
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5bc4894

Branch: refs/heads/branch-2
Commit: c5bc48946d4cdcd7b5ce7113a69bde36ec7955de
Parents: d4bb9b2
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:09:40 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:11:12 2015 -0700

--
 .../java/org/apache/hadoop/mapred/Task.java | 26 --
 hadoop-yarn-project/CHANGES.txt |  4 +
 .../apache/hadoop/yarn/util/CpuTimeTracker.java |  3 +-
 .../yarn/util/ProcfsBasedProcessTree.java   | 80 +-
 .../util/ResourceCalculatorProcessTree.java | 66 ---
 .../yarn/util/WindowsBasedProcessTree.java  | 21 +++--
 .../yarn/util/TestProcfsBasedProcessTree.java   | 85 ++--
 .../util/TestResourceCalculatorProcessTree.java |  4 +-
 .../yarn/util/TestWindowsBasedProcessTree.java  | 28 +++
 .../monitor/ContainerMetrics.java   | 12 ++-
 .../monitor/ContainersMonitorImpl.java  | 12 +--
 11 files changed, 187 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5bc4894/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 9fab545..b2a575b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -172,7 +172,7 @@ abstract public class Task implements Writable, 
Configurable {
 skipRanges.skipRangeIterator();
 
   private ResourceCalculatorProcessTree pTree;
-  private long initCpuCumulativeTime = 0;
+  private long initCpuCumulativeTime = 
ResourceCalculatorProcessTree.UNAVAILABLE;
 
   protected JobConf conf;
   protected MapOutputFile mapOutputFile;
@@ -851,13 +851,25 @@ abstract public class Task implements Writable, 
Configurable {
 }
 pTree.updateProcessTree();
 long cpuTime = pTree.getCumulativeCpuTime();
-long pMem = pTree.getCumulativeRssmem();
-long vMem = pTree.getCumulativeVmem();
+long pMem = pTree.getRssMemorySize();
+long vMem = pTree.getVirtualMemorySize();
 // Remove the CPU time consumed previously by JVM reuse
-cpuTime -= initCpuCumulativeTime;
-counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
-counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
-counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE &&
+initCpuCumulativeTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  cpuTime -= initCpuCumulativeTime;
+}
+
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
+}
+
+if (pMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
+}
+
+if (vMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5bc4894/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0d1bef1..c36649e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -802,6 +802,10 @@ Release 2.7.0 - UNRELEASED
 YARN-2213. Change proxy-user cookie log in AmIpFilter to DEBUG.
 (Varun Saxena via xgong)
 
+YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use 
and
+removing inconsistencies in the default values. (Junping Du and Karthik
+Kambatla via vinodkv)
+
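
The recurring guard above is the point of the API cleanup: the process-tree
getters now return a shared UNAVAILABLE sentinel instead of 0 when a value
cannot be read, and callers only publish counters that are actually
available. A hedged sketch of the convention (the sentinel value -1 is an
assumption here, standing in for ResourceCalculatorProcessTree.UNAVAILABLE):

  public class SentinelDemo {
    static final long UNAVAILABLE = -1;  // assumed stand-in value

    static void setCounter(String name, long value) {
      if (value != UNAVAILABLE) {        // skip, rather than report 0
        System.out.println(name + "=" + value);
      }
    }

    public static void main(String[] args) {
      setCounter("PHYSICAL_MEMORY_BYTES", 1L << 30);  // printed
      setCounter("CPU_MILLISECONDS", UNAVAILABLE);    // silently skipped
    }
  }
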
 

hadoop git commit: MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so that user's client can load the conf files directly. Contributed by Robert Kanter.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/trunk c358368f5 - 5358b8316


MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so 
that user's client can load the conf files directly. Contributed by Robert 
Kanter.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5358b831
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5358b831
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5358b831

Branch: refs/heads/trunk
Commit: 5358b8316a7108b32c9900fb0d01ca0fe961
Parents: c358368
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:27:19 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:27:19 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  4 ++
 .../v2/jobhistory/JobHistoryUtils.java  |  4 +-
 .../mapreduce/v2/hs/HistoryFileManager.java | 31 -
 .../mapreduce/v2/hs/TestHistoryFileManager.java | 73 
 4 files changed, 108 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5358b831/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b0367a7..69ff96b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -510,6 +510,10 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6285. ClientServiceDelegate should not retry upon
 AuthenticationException. (Jonathan Eagles via ozawa)
 
+MAPREDUCE-6288. Changed permissions on JobHistory server's done directory
+so that user's client can load the conf files directly. (Robert Kanter via
+vinodkv)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5358b831/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
index e279c03..8966e4e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
@@ -72,7 +72,7 @@ public class JobHistoryUtils {
* Permissions for the history done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_PERMISSION =
-FsPermission.createImmutable((short) 0770); 
+FsPermission.createImmutable((short) 0771);
 
   public static final FsPermission HISTORY_DONE_FILE_PERMISSION =
 FsPermission.createImmutable((short) 0770); // rwx------
@@ -81,7 +81,7 @@ public class JobHistoryUtils {
* Umask for the done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_UMASK = FsPermission
-  .createImmutable((short) (0770 ^ 0777));
+  .createImmutable((short) (0771 ^ 0777));
 
   
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5358b831/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
index 65f8a4f..5377075 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
@@ -571,8 +571,10 @@ public class HistoryFileManager extends AbstractService {
   new Path(doneDirPrefix));
   doneDirFc = FileContext.getFileContext(doneDirPrefixPath.toUri(), conf);
   doneDirFc.setUMask(JobHistoryUtils.HISTORY_DONE_DIR_UMASK);
-  mkdir(doneDirFc, doneDirPrefixPath, new FsPermission(
-  
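
The permission change is small but the octal arithmetic deserves a worked
example: the done dir goes from 0770 to 0771 (others gain execute, so a
user's client can traverse into the directory), and the umask is derived by
XOR with 0777. A quick standalone check:

  public class UmaskDemo {
    public static void main(String[] args) {
      int mode  = 0771;         // rwxrwx--x: others may traverse, not list
      int umask = 0771 ^ 0777;  // = 0006: strip read/write from others
      System.out.printf("mode=%o umask=%o%n", mode, umask);
      // Applying the umask to a full-permission request recovers the mode:
      System.out.printf("0777 & ~umask = %o%n", 0777 & ~umask);  // 771
    }
  }
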

hadoop git commit: YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and removing inconsistencies in the default values. Contributed by Junping Du and Karthik Kambatla.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 a0ed29a05 - 35af6f180


YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and 
removing inconsistencies in the default values. Contributed by Junping Du and 
Karthik Kambatla.

(cherry picked from commit c358368f511963ad8e35f030b9babee541e1bd01)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/35af6f18
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/35af6f18
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/35af6f18

Branch: refs/heads/branch-2.7
Commit: 35af6f18029476a156a1d7dd15506576a1c3892b
Parents: a0ed29a
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:09:40 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:11:49 2015 -0700

--
 .../java/org/apache/hadoop/mapred/Task.java | 26 --
 hadoop-yarn-project/CHANGES.txt |  4 +
 .../apache/hadoop/yarn/util/CpuTimeTracker.java |  3 +-
 .../yarn/util/ProcfsBasedProcessTree.java   | 80 +-
 .../util/ResourceCalculatorProcessTree.java | 66 ---
 .../yarn/util/WindowsBasedProcessTree.java  | 21 +++--
 .../yarn/util/TestProcfsBasedProcessTree.java   | 85 ++--
 .../util/TestResourceCalculatorProcessTree.java |  4 +-
 .../yarn/util/TestWindowsBasedProcessTree.java  | 28 +++
 .../monitor/ContainerMetrics.java   | 12 ++-
 .../monitor/ContainersMonitorImpl.java  | 12 +--
 11 files changed, 187 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/35af6f18/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 1ea1666..7bd9b31 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -170,7 +170,7 @@ abstract public class Task implements Writable, 
Configurable {
 skipRanges.skipRangeIterator();
 
   private ResourceCalculatorProcessTree pTree;
-  private long initCpuCumulativeTime = 0;
+  private long initCpuCumulativeTime = 
ResourceCalculatorProcessTree.UNAVAILABLE;
 
   protected JobConf conf;
   protected MapOutputFile mapOutputFile;
@@ -844,13 +844,25 @@ abstract public class Task implements Writable, 
Configurable {
 }
 pTree.updateProcessTree();
 long cpuTime = pTree.getCumulativeCpuTime();
-long pMem = pTree.getCumulativeRssmem();
-long vMem = pTree.getCumulativeVmem();
+long pMem = pTree.getRssMemorySize();
+long vMem = pTree.getVirtualMemorySize();
 // Remove the CPU time consumed previously by JVM reuse
-cpuTime -= initCpuCumulativeTime;
-counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
-counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
-counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE &&
+initCpuCumulativeTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  cpuTime -= initCpuCumulativeTime;
+}
+
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
+}
+
+if (pMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
+}
+
+if (vMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/35af6f18/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index aeac82b..c40da6d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -741,6 +741,10 @@ Release 2.7.0 - UNRELEASED
 YARN-2213. Change proxy-user cookie log in AmIpFilter to DEBUG.
 (Varun Saxena via xgong)
 
+YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use 
and
+removing inconsistencies in the default values. (Junping Du and Karthik
+Kambatla via vinodkv)
+
 

[01/50] [abbrv] hadoop git commit: HDFS-7990. IBR delete ack should not be delayed. Contributed by Daryn Sharp.

2015-03-30 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 27cb5701e - a6543ac97 (forced update)


HDFS-7990. IBR delete ack should not be delayed. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60882ab2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60882ab2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60882ab2

Branch: refs/heads/HDFS-7285
Commit: 60882ab26d49f05cbf0686944af6559f86b3417d
Parents: af618f2
Author: Kihwal Lee kih...@apache.org
Authored: Fri Mar 27 09:05:17 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Fri Mar 27 09:05:17 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  2 ++
 .../hdfs/server/datanode/BPServiceActor.java   | 17 +++--
 .../apache/hadoop/hdfs/server/datanode/DNConf.java |  2 --
 .../hdfs/server/datanode/SimulatedFSDataset.java   | 13 -
 .../datanode/TestIncrementalBlockReports.java  |  4 ++--
 5 files changed, 23 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/60882ab2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index dff8bd2..72ea4fb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -342,6 +342,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-7928. Scanning blocks from disk during rolling upgrade startup takes
 a lot of time if disks are busy (Rushabh S Shah via kihwal)
 
+HDFS-7990. IBR delete ack should not be delayed. (daryn via kihwal)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/60882ab2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 10cce45..3b4756c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -82,12 +82,11 @@ class BPServiceActor implements Runnable {
 
   final BPOfferService bpos;
   
-  // lastBlockReport, lastDeletedReport and lastHeartbeat may be assigned/read
+  // lastBlockReport and lastHeartbeat may be assigned/read
   // by testing threads (through BPServiceActor#triggerXXX), while also 
   // assigned/read by the actor thread. Thus they should be declared as 
volatile
   // to make sure the happens-before consistency.
   volatile long lastBlockReport = 0;
-  volatile long lastDeletedReport = 0;
 
   boolean resetBlockReportTime = true;
 
@@ -417,10 +416,10 @@ class BPServiceActor implements Runnable {
   @VisibleForTesting
   void triggerDeletionReportForTests() {
 synchronized (pendingIncrementalBRperStorage) {
-  lastDeletedReport = 0;
+  sendImmediateIBR = true;
   pendingIncrementalBRperStorage.notifyAll();
 
-  while (lastDeletedReport == 0) {
+  while (sendImmediateIBR) {
 try {
   pendingIncrementalBRperStorage.wait(100);
 } catch (InterruptedException e) {
@@ -465,7 +464,6 @@ class BPServiceActor implements Runnable {
 // or we will report an RBW replica after the BlockReport already reports
 // a FINALIZED one.
 reportReceivedDeletedBlocks();
-lastDeletedReport = startTime;
 
 long brCreateStartTime = monotonicNow();
 MapDatanodeStorage, BlockListAsLongs perVolumeBlockLists =
@@ -674,7 +672,6 @@ class BPServiceActor implements Runnable {
*/
   private void offerService() throws Exception {
 LOG.info("For namenode " + nnAddr + " using"
-+ " DELETEREPORT_INTERVAL of " + dnConf.deleteReportInterval + " msec "
 + " BLOCKREPORT_INTERVAL of " + dnConf.blockReportInterval + "msec"
 + " CACHEREPORT_INTERVAL of " + dnConf.cacheReportInterval + "msec"
 + " Initial delay: " + dnConf.initialBlockReportDelay + "msec"
@@ -690,7 +687,9 @@ class BPServiceActor implements Runnable {
 //
 // Every so often, send heartbeat or block-report
 //
-if (startTime - lastHeartbeat >= dnConf.heartBeatInterval) {
+boolean sendHeartbeat =
+startTime - lastHeartbeat >= dnConf.heartBeatInterval;
+if (sendHeartbeat) {
   //
   // All heartbeat messages include following info:
   // -- Datanode name
@@ -729,10 +728,8 @@ class 
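
With lastDeletedReport gone, the test hook above keys off the volatile
sendImmediateIBR flag instead: the test thread raises the flag, notifies
the actor, and waits until the flag is cleared once the report goes out. A
minimal sketch of that handshake (illustrative names, not BPServiceActor
itself):

  class IbrHandshake {
    private final Object pending = new Object();
    private volatile boolean sendImmediateIBR = false;

    void triggerForTests() throws InterruptedException {
      synchronized (pending) {
        sendImmediateIBR = true;
        pending.notifyAll();
        while (sendImmediateIBR) {
          pending.wait(100);   // wait() releases the monitor for the actor
        }
      }
    }

    void onReportSent() {      // called by the actor thread after sending
      synchronized (pending) {
        sendImmediateIBR = false;
        pending.notifyAll();
      }
    }
  }
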

[43/50] [abbrv] hadoop git commit: HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng

2015-03-30 Thread zhz
HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26b5a06c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26b5a06c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26b5a06c

Branch: refs/heads/HDFS-7285
Commit: 26b5a06c222a89b4fdc165a7e648a4d0f42384a2
Parents: eb86ab7
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 20 19:15:52 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:07 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  3 +
 .../hadoop/fs/CommonConfigurationKeys.java  | 15 
 .../erasurecode/coder/AbstractErasureCoder.java | 65 ++
 .../coder/AbstractErasureDecoder.java   |  6 +-
 .../coder/AbstractErasureEncoder.java   |  6 +-
 .../io/erasurecode/coder/RSErasureDecoder.java  | 83 ++
 .../io/erasurecode/coder/RSErasureEncoder.java  | 47 ++
 .../io/erasurecode/coder/XorErasureDecoder.java |  2 +-
 .../io/erasurecode/coder/XorErasureEncoder.java |  2 +-
 .../erasurecode/coder/TestRSErasureCoder.java   | 92 
 10 files changed, 315 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26b5a06c/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index f566f0e..b69e69a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -26,3 +26,6 @@
 
 HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai 
Zheng
 ( Kai Zheng )
+
+HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng
+( Kai Zheng )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26b5a06c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 7575496..70fea01 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -135,6 +135,21 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   false;
 
   /**
+   * Erasure Coding configuration family
+   */
+
+  /** Supported erasure codec classes */
+  public static final String IO_ERASURECODE_CODECS_KEY = 
"io.erasurecode.codecs";
+
+  /** Use XOR raw coder when possible for the RS codec */
+  public static final String IO_ERASURECODE_CODEC_RS_USEXOR_KEY =
+  "io.erasurecode.codec.rs.usexor";
+
+  /** Raw coder factory for the RS codec */
+  public static final String IO_ERASURECODE_CODEC_RS_RAWCODER_KEY =
+  "io.erasurecode.codec.rs.rawcoder";
+
+  /**
* Service Authorization
*/
   public static final String 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26b5a06c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index 8d3bc34..0e4de89 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -17,7 +17,12 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoder;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderFactory;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;
 
 /**
  * A common class of basic facilities to be shared by encoder and decoder
@@ -31,6 +36,66 @@ public abstract class AbstractErasureCoder
   private int numParityUnits;
   private int chunkSize;
 
+  /**
+   * Create raw decoder using the factory specified by 
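
The truncated method above creates raw coders through a factory class named
in the configuration. A hedged sketch of what such a reflection-based
lookup can look like (the key string comes from the CommonConfigurationKeys
hunk above; the helper class itself is an illustration, not the actual
AbstractErasureCoder code):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.util.ReflectionUtils;

  public class RawCoderFactoryLookup {
    public static Object createFactory(Configuration conf) {
      // IO_ERASURECODE_CODEC_RS_RAWCODER_KEY from the patch above
      Class<?> factoryClass =
          conf.getClass("io.erasurecode.codec.rs.rawcoder", null);
      if (factoryClass == null) {
        throw new IllegalArgumentException("No raw coder factory configured");
      }
      return ReflectionUtils.newInstance(factoryClass, conf);
    }
  }
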

[22/50] [abbrv] hadoop git commit: HADOOP-11534. Minor improvements for raw erasure coders ( Contributed by Kai Zheng )

2015-03-30 Thread zhz
HADOOP-11534. Minor improvements for raw erasure coders ( Contributed by Kai 
Zheng )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7302ab12
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7302ab12
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7302ab12

Branch: refs/heads/HDFS-7285
Commit: 7302ab1281058bc927b8148a88669fc38df77f75
Parents: 2fc3e35
Author: Vinayakumar B vinayakuma...@intel.com
Authored: Mon Feb 2 14:39:53 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:24 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt   |  5 -
 .../org/apache/hadoop/io/erasurecode/ECChunk.java| 15 +--
 .../rawcoder/AbstractRawErasureCoder.java| 12 ++--
 3 files changed, 23 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7302ab12/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 8ce5a89..2124800 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -1,4 +1,7 @@
   BREAKDOWN OF HADOOP-11264 SUBTASKS AND RELATED JIRAS (Common part of 
HDFS-7285)
 
 HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding
-(Kai Zheng via umamahesh)
\ No newline at end of file
+(Kai Zheng via umamahesh)
+
+HADOOP-11534. Minor improvements for raw erasure coders
+( Kai Zheng via vinayakumarb )
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7302ab12/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index f84eb11..01e8f35 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -66,15 +66,26 @@ public class ECChunk {
   }
 
   /**
-   * Convert an array of this chunks to an array of byte array
+   * Convert an array of this chunks to an array of byte array.
+   * Note the chunk buffers are not affected.
* @param chunks
* @return an array of byte array
*/
   public static byte[][] toArray(ECChunk[] chunks) {
 byte[][] bytesArr = new byte[chunks.length][];
 
+ByteBuffer buffer;
 for (int i = 0; i < chunks.length; i++) {
-  bytesArr[i] = chunks[i].getBuffer().array();
+  buffer = chunks[i].getBuffer();
+  if (buffer.hasArray()) {
+bytesArr[i] = buffer.array();
+  } else {
+bytesArr[i] = new byte[buffer.remaining()];
+// Avoid affecting the original one
+buffer.mark();
+buffer.get(bytesArr[i]);
+buffer.reset();
+  }
 }
 
 return bytesArr;
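
The mark()/get()/reset() idiom above is what keeps the source chunk intact:
get() advances the buffer's position while copying, and reset() rewinds to
the marked position so the caller sees an untouched buffer. A quick
demonstration with a direct buffer (which has no backing array, the case
the patch adds):

  import java.nio.ByteBuffer;

  public class MarkResetDemo {
    public static void main(String[] args) {
      ByteBuffer buffer = ByteBuffer.allocateDirect(8);  // hasArray() == false
      buffer.put(new byte[]{1, 2, 3, 4}).flip();

      byte[] copy = new byte[buffer.remaining()];
      buffer.mark();      // remember the current position
      buffer.get(copy);   // copying advances the position...
      buffer.reset();     // ...so rewind before handing the buffer back

      System.out.println(buffer.remaining());  // still 4
    }
  }
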

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7302ab12/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 474542b..74d2ab6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
@@ -24,26 +24,26 @@ package org.apache.hadoop.io.erasurecode.rawcoder;
  */
 public abstract class AbstractRawErasureCoder implements RawErasureCoder {
 
-  private int dataSize;
-  private int paritySize;
+  private int numDataUnits;
+  private int numParityUnits;
   private int chunkSize;
 
   @Override
   public void initialize(int numDataUnits, int numParityUnits,
  int chunkSize) {
-this.dataSize = numDataUnits;
-this.paritySize = numParityUnits;
+this.numDataUnits = numDataUnits;
+this.numParityUnits = numParityUnits;
 this.chunkSize = chunkSize;
   }
 
   @Override
   public int getNumDataUnits() {
-return dataSize;
+return numDataUnits;
   }
 
   @Override
   public int 

[13/50] [abbrv] hadoop git commit: HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.

2015-03-30 Thread zhz
HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ae3e8c61
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ae3e8c61
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ae3e8c61

Branch: refs/heads/HDFS-7285
Commit: ae3e8c61ff4c926ef3e71c782433ed9764d21478
Parents: 1ed9fb7
Author: Harsh J ha...@cloudera.com
Authored: Mon Mar 30 15:21:18 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Mar 30 15:21:18 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae3e8c61/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9b1cc3e..f437ad8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -323,6 +323,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC
+(Liang Xie via harsh)
+
 HDFS-7875. Improve log message when wrong value configured for
 dfs.datanode.failed.volumes.tolerated.
 (nijel via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ae3e8c61/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
index 85f77f1..4e256a2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
@@ -167,6 +167,8 @@ public class DFSZKFailoverController extends 
ZKFailoverController {
 
   public static void main(String args[])
   throws Exception {
+StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
+args, LOG);
 if (DFSUtil.parseHelpArgument(args, 
 ZKFailoverController.USAGE, System.out, true)) {
   System.exit(0);



[18/50] [abbrv] hadoop git commit: Fix Compilation Error in TestAddBlockgroup.java after the merge

2015-03-30 Thread zhz
Fix Compilation Error in TestAddBlockgroup.java after the merge


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ffc4171a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ffc4171a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ffc4171a

Branch: refs/heads/HDFS-7285
Commit: ffc4171a750e8b122b29ec32307124d1a4f8e11b
Parents: 7886ed1
Author: Jing Zhao ji...@apache.org
Authored: Sun Feb 8 16:01:03 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:23 2015 -0700

--
 .../apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ffc4171a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
index 95133ce..06dfade 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
@@ -26,7 +26,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -75,7 +75,7 @@ public class TestAddBlockgroup {
 final Path file1 = new Path(/file1);
 DFSTestUtil.createFile(fs, file1, BLOCKSIZE * 2, REPLICATION, 0L);
 INodeFile file1Node = fsdir.getINode4Write(file1.toString()).asFile();
-BlockInfo[] file1Blocks = file1Node.getBlocks();
+BlockInfoContiguous[] file1Blocks = file1Node.getBlocks();
 assertEquals(2, file1Blocks.length);
 assertEquals(GROUP_SIZE, file1Blocks[0].numNodes());
 assertEquals(HdfsConstants.MAX_BLOCKS_IN_GROUP,



[45/50] [abbrv] hadoop git commit: HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai Zheng

2015-03-30 Thread zhz
HADOOP-11707. Add factory to create raw erasure coder.  Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eb86ab78
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eb86ab78
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eb86ab78

Branch: refs/heads/HDFS-7285
Commit: eb86ab788a4ebce924d9bfc19fcb1a6398fb297b
Parents: 5eb2c92
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 20 15:07:00 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:07 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  3 +-
 .../rawcoder/JRSRawErasureCoderFactory.java | 34 ++
 .../rawcoder/RawErasureCoderFactory.java| 38 
 .../rawcoder/XorRawErasureCoderFactory.java | 34 ++
 4 files changed, 108 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb86ab78/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index e27ff5c..f566f0e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -24,4 +24,5 @@
 HADOOP-11706. Refine a little bit erasure coder API. Contributed by Kai 
Zheng
 ( Kai Zheng )
 
-
+HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai 
Zheng
+( Kai Zheng )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb86ab78/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
new file mode 100644
index 000..d6b40aa
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+/**
+ * A raw coder factory for raw Reed-Solomon coder in Java.
+ */
+public class JRSRawErasureCoderFactory implements RawErasureCoderFactory {
+
+  @Override
+  public RawErasureEncoder createEncoder() {
+return new JRSRawEncoder();
+  }
+
+  @Override
+  public RawErasureDecoder createDecoder() {
+return new JRSRawDecoder();
+  }
+}
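
A short usage sketch for the factory above: callers obtain a matched
encoder/decoder pair without naming the concrete coder classes, which is
what lets the configuration-driven lookup added in this series swap
implementations. (Illustrative snippet; each coder must still be
initialized with the initialize(numDataUnits, numParityUnits, chunkSize)
contract shown earlier in this thread.)

  import org.apache.hadoop.io.erasurecode.rawcoder.JRSRawErasureCoderFactory;
  import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderFactory;
  import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
  import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

  public class FactoryUsage {
    public static void main(String[] args) {
      RawErasureCoderFactory factory = new JRSRawErasureCoderFactory();
      RawErasureEncoder encoder = factory.createEncoder();  // a JRSRawEncoder
      RawErasureDecoder decoder = factory.createDecoder();  // a JRSRawDecoder
      // initialize(...) must be called on each coder before use.
    }
  }
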

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eb86ab78/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
new file mode 100644
index 000..95a1cfe
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in 

[33/50] [abbrv] hadoop git commit: HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai Zheng

2015-03-30 Thread zhz
HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3dbde16c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3dbde16c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3dbde16c

Branch: refs/heads/HDFS-7285
Commit: 3dbde16cb763bc642bd89e76e95d76fc39353c46
Parents: 67e4d1f
Author: drankye kai.zh...@intel.com
Authored: Thu Mar 5 22:51:52 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:26 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   4 +
 .../apache/hadoop/io/erasurecode/ECSchema.java  | 203 +++
 .../hadoop/io/erasurecode/TestECSchema.java |  54 +
 3 files changed, 261 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3dbde16c/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 7bbacf7..ee42c84 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -12,3 +12,7 @@
 HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng
 ( Kai Zheng )
 
+HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai 
Zheng
+( Kai Zheng )
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3dbde16c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
new file mode 100644
index 000..8dc3f45
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
@@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+import java.util.Collections;
+import java.util.Map;
+
+/**
+ * Erasure coding schema to housekeeper relevant information.
+ */
+public class ECSchema {
+  public static final String NUM_DATA_UNITS_KEY = "k";
+  public static final String NUM_PARITY_UNITS_KEY = "m";
+  public static final String CODEC_NAME_KEY = "codec";
+  public static final String CHUNK_SIZE_KEY = "chunkSize";
+  public static final int DEFAULT_CHUNK_SIZE = 64 * 1024; // 64K
+
+  private String schemaName;
+  private String codecName;
+  private Map<String, String> options;
+  private int numDataUnits;
+  private int numParityUnits;
+  private int chunkSize;
+
+  /**
+   * Constructor with schema name and provided options. Note the options may
+   * contain additional information for the erasure codec to interpret further.
+   * @param schemaName schema name
+   * @param options schema options
+   */
+  public ECSchema(String schemaName, Map<String, String> options) {
+assert (schemaName != null && ! schemaName.isEmpty());
+
+this.schemaName = schemaName;
+
+if (options == null || options.isEmpty()) {
+  throw new IllegalArgumentException("No schema options are provided");
+}
+
+String codecName = options.get(CODEC_NAME_KEY);
+if (codecName == null || codecName.isEmpty()) {
+  throw new IllegalArgumentException("No codec option is provided");
+}
+
+int dataUnits = 0, parityUnits = 0;
+try {
+  if (options.containsKey(NUM_DATA_UNITS_KEY)) {
+dataUnits = Integer.parseInt(options.get(NUM_DATA_UNITS_KEY));
+  }
+} catch (NumberFormatException e) {
+  throw new IllegalArgumentException("Option value " +
+  options.get(CHUNK_SIZE_KEY) + " for " + CHUNK_SIZE_KEY +
+  " is found. It should be an integer");
+}
+
+try {
+  if 
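
A hedged usage sketch for the constructor above, assembling the options map
with the option keys from this patch ("k", "m", "codec"); the schema name
and values are illustrative:

  import java.util.HashMap;
  import java.util.Map;

  import org.apache.hadoop.io.erasurecode.ECSchema;

  public class ECSchemaUsage {
    public static void main(String[] args) {
      Map<String, String> options = new HashMap<>();
      options.put("k", "6");        // NUM_DATA_UNITS_KEY
      options.put("m", "3");        // NUM_PARITY_UNITS_KEY
      options.put("codec", "RS");   // CODEC_NAME_KEY
      ECSchema schema = new ECSchema("RS-6-3", options);  // parses, validates
    }
  }
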

[26/50] [abbrv] hadoop git commit: HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by Jing Zhao.

2015-03-30 Thread zhz
HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by 
Jing Zhao.

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/850a9ef8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/850a9ef8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/850a9ef8

Branch: refs/heads/HDFS-7285
Commit: 850a9ef88a283be241baff50309acb14fa03b4cf
Parents: 0614729
Author: Jing Zhao ji...@apache.org
Authored: Tue Feb 10 17:54:10 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:25 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsConstants.java |   1 +
 .../server/blockmanagement/BlockCollection.java |  13 +-
 .../server/blockmanagement/BlockIdManager.java  |   7 +-
 .../hdfs/server/blockmanagement/BlockInfo.java  | 339 +
 .../blockmanagement/BlockInfoContiguous.java| 363 +++
 .../BlockInfoContiguousUnderConstruction.java   | 137 +--
 .../blockmanagement/BlockInfoStriped.java   | 179 +
 .../server/blockmanagement/BlockManager.java| 188 +-
 .../hdfs/server/blockmanagement/BlocksMap.java  |  46 +--
 .../CacheReplicationMonitor.java|  10 +-
 .../blockmanagement/DatanodeDescriptor.java |  22 +-
 .../blockmanagement/DatanodeStorageInfo.java|  38 +-
 .../ReplicaUnderConstruction.java   | 119 ++
 .../hdfs/server/namenode/FSDirectory.java   |   4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  24 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |   3 +-
 .../snapshot/FSImageFormatPBSnapshot.java   |   4 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   4 +-
 .../server/blockmanagement/TestBlockInfo.java   |   6 +-
 .../blockmanagement/TestBlockInfoStriped.java   | 219 +++
 .../blockmanagement/TestBlockManager.java   |   4 +-
 .../blockmanagement/TestReplicationPolicy.java  |   2 +-
 22 files changed, 1125 insertions(+), 607 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/850a9ef8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index de60b6e..245b630 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -184,5 +184,6 @@ public class HdfsConstants {
 
   public static final byte NUM_DATA_BLOCKS = 3;
   public static final byte NUM_PARITY_BLOCKS = 2;
+  public static final long BLOCK_GROUP_INDEX_MASK = 15;
   public static final byte MAX_BLOCKS_IN_GROUP = 16;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/850a9ef8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
index e9baf85..b14efb4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
@@ -39,12 +39,12 @@ public interface BlockCollection {
   public ContentSummary computeContentSummary(BlockStoragePolicySuite bsps);
 
   /**
-   * @return the number of blocks
+   * @return the number of blocks or block groups
*/ 
   public int numBlocks();
 
   /**
-   * Get the blocks.
+   * Get the blocks or block groups.
*/
   public BlockInfoContiguous[] getBlocks();
 
@@ -55,8 +55,8 @@ public interface BlockCollection {
   public long getPreferredBlockSize();
 
   /**
-   * Get block replication for the collection 
-   * @return block replication value
+   * Get block replication for the collection.
+   * @return block replication value. Return 0 if the file is erasure coded.
*/
   public short getBlockReplication();
 
@@ -71,7 +71,7 @@ public interface BlockCollection {
   public String getName();
 
   /**
-   * Set the block at the given index.
+   * Set the block/block-group at the given index.
*/
   public void setBlock(int index, BlockInfoContiguous blk);
 
@@ -79,7 

[49/50] [abbrv] hadoop git commit: HADOOP-11664. Loading predefined EC schemas from configuration. Contributed by Kai Zheng.

2015-03-30 Thread zhz
HADOOP-11664. Loading predefined EC schemas from configuration. Contributed by 
Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d50bbd71
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d50bbd71
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d50bbd71

Branch: refs/heads/HDFS-7285
Commit: d50bbd71a08b56bccb4b47d11f131c0deb34bf2f
Parents: 8d49fc3
Author: Zhe Zhang z...@apache.org
Authored: Fri Mar 27 14:52:50 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:09 2015 -0700

--
 .../src/main/conf/ecschema-def.xml  |  40 +
 .../hadoop/fs/CommonConfigurationKeys.java  |   5 +
 .../hadoop/io/erasurecode/SchemaLoader.java | 147 +++
 .../hadoop/io/erasurecode/TestSchemaLoader.java |  80 ++
 4 files changed, 272 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d50bbd71/hadoop-common-project/hadoop-common/src/main/conf/ecschema-def.xml
--
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/ecschema-def.xml 
b/hadoop-common-project/hadoop-common/src/main/conf/ecschema-def.xml
new file mode 100644
index 000..e619485
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/main/conf/ecschema-def.xml
@@ -0,0 +1,40 @@
+<?xml version="1.0"?>
+
+<!--
+ 
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+
+<!--
+Please define your EC schemas here. Note, once these schemas are loaded
+and referenced by EC storage policies, any change to them will be ignored.
+You can modify and remove those not used yet, or add new ones.
+-->
+
+<schemas>
+  <schema name="RS-6-3">
+    <k>6</k>
+    <m>3</m>
+    <codec>RS</codec>
+  </schema>
+  <schema name="RS-10-4">
+    <k>10</k>
+    <m>4</m>
+    <codec>RS</codec>
+  </schema>
+</schemas>
\ No newline at end of file
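
In these schemas, k is the number of data units and m the number of parity
units per block group: RS-6-3 stripes data over 6 blocks plus 3 Reed-Solomon
parity blocks, so it tolerates the loss of any 3 blocks at a storage overhead
of (6+3)/6 = 1.5x, while RS-10-4 tolerates any 4 lost blocks at
(10+4)/10 = 1.4x; plain 3-way replication, by comparison, costs 3x and
tolerates 2 lost copies.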

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d50bbd71/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 70fea01..af32674 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -141,6 +141,11 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   /** Supported erasure codec classes */
   public static final String IO_ERASURECODE_CODECS_KEY = "io.erasurecode.codecs";
 
+  public static final String IO_ERASURECODE_SCHEMA_FILE_KEY =
+      "io.erasurecode.schema.file";
+  public static final String IO_ERASURECODE_SCHEMA_FILE_DEFAULT =
+      "ecschema-def.xml";
+
   /** Use XOR raw coder when possible for the RS codec */
   public static final String IO_ERASURECODE_CODEC_RS_USEXOR_KEY =
       "io.erasurecode.codec.rs.usexor";
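
A hypothetical override of the schema file, shown only to illustrate how the
new keys are meant to be consumed (the key constant is from the patch; the
file name below is made up):

    Configuration conf = new Configuration();
    // Point the schema loader at a custom definition file instead of the
    // default "ecschema-def.xml".
    conf.set(CommonConfigurationKeys.IO_ERASURECODE_SCHEMA_FILE_KEY,
        "my-ec-schemas.xml");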

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d50bbd71/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/SchemaLoader.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/SchemaLoader.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/SchemaLoader.java
new file mode 100644
index 000..c51ed37
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/SchemaLoader.java
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ 

[32/50] [abbrv] hadoop git commit: HDFS-7853. Erasure coding: extend LocatedBlocks to support reading from striped files. Contributed by Jing Zhao.

2015-03-30 Thread zhz
HDFS-7853. Erasure coding: extend LocatedBlocks to support reading from striped 
files. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec4f2243
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec4f2243
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec4f2243

Branch: refs/heads/HDFS-7285
Commit: ec4f2243fd2c50ca00db3a609d11feaf177846a9
Parents: 0c6ed98
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 9 14:59:58 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:26 2015 -0700

--
 .../hadoop/hdfs/protocol/LocatedBlock.java  |   5 +-
 .../hdfs/protocol/LocatedStripedBlock.java  |  68 +
 ...tNamenodeProtocolServerSideTranslatorPB.java |  14 +-
 .../ClientNamenodeProtocolTranslatorPB.java |  13 +-
 .../DatanodeProtocolClientSideTranslatorPB.java |   2 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   2 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  80 +++
 .../blockmanagement/BlockInfoStriped.java   |   5 +
 .../BlockInfoStripedUnderConstruction.java  |  99 +++--
 .../server/blockmanagement/BlockManager.java|  51 ---
 .../blockmanagement/DatanodeDescriptor.java |   4 +-
 .../blockmanagement/DatanodeStorageInfo.java|   3 +-
 .../server/namenode/FSImageFormatPBINode.java   |  21 +--
 .../hdfs/server/namenode/FSNamesystem.java  |  34 +++--
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |   1 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |  12 ++
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  16 +--
 .../datanode/TestIncrementalBrVariations.java   |  14 +-
 .../server/namenode/TestAddStripedBlocks.java   | 141 +++
 .../hdfs/server/namenode/TestFSImage.java   |   5 +-
 20 files changed, 444 insertions(+), 146 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec4f2243/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
index e729869..a38e8f2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
-import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
 import org.apache.hadoop.security.token.Token;
 
 import com.google.common.collect.Lists;
@@ -51,14 +50,14 @@ public class LocatedBlock {
   // else false. If block has few corrupt replicas, they are filtered and 
   // their locations are not part of this object
   private boolean corrupt;
-  private Token<BlockTokenIdentifier> blockToken = new Token<BlockTokenIdentifier>();
+  private Token<BlockTokenIdentifier> blockToken = new Token<>();
   /**
* List of cached datanode locations
*/
   private DatanodeInfo[] cachedLocs;
 
   // Used when there are no locations
-  private static final DatanodeInfoWithStorage[] EMPTY_LOCS =
+  static final DatanodeInfoWithStorage[] EMPTY_LOCS =
   new DatanodeInfoWithStorage[0];
 
   public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec4f2243/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
new file mode 100644
index 000..97e3a69
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required 

[38/50] [abbrv] hadoop git commit: HADOOP-11706 Refine a little bit erasure coder API

2015-03-30 Thread zhz
HADOOP-11706 Refine a little bit erasure coder API


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/671db987
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/671db987
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/671db987

Branch: refs/heads/HDFS-7285
Commit: 671db987ee966e015c47955f96d019602c934c0b
Parents: ace0181
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:21:37 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:28 2015 -0700

--
 .../io/erasurecode/coder/ErasureCoder.java  |  4 +++-
 .../erasurecode/rawcoder/RawErasureCoder.java   |  4 +++-
 .../hadoop/io/erasurecode/TestCoderBase.java| 17 +---
 .../erasurecode/coder/TestErasureCoderBase.java | 21 +++-
 .../erasurecode/rawcoder/TestJRSRawCoder.java   | 12 +--
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  2 ++
 6 files changed, 31 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/671db987/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
index 68875c0..c5922f3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * An erasure coder to perform encoding or decoding given a group. Generally it
  * involves calculating necessary internal steps according to codec logic. For
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.coder;
  * of multiple coding steps.
  *
  */
-public interface ErasureCoder {
+public interface ErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/671db987/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
index 91a9abf..9af5b6c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.rawcoder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * RawErasureCoder is a common interface for {@link RawErasureEncoder} and
  * {@link RawErasureDecoder} as both encoder and decoder share some properties.
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.rawcoder;
  * low level constructs, since it only takes care of the math calculation with
  * a group of byte buffers.
  */
-public interface RawErasureCoder {
+public interface RawErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.
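
Extending org.apache.hadoop.conf.Configurable obliges every coder to expose
the interface's setConf(Configuration)/getConf() pair, so codec
implementations can receive Hadoop configuration (for example, keys such as
io.erasurecode.codec.rs.usexor defined in CommonConfigurationKeys). A minimal
sketch of what an implementing class now has to add (field name illustrative):

    private Configuration conf;

    @Override
    public void setConf(Configuration conf) {
      this.conf = conf;  // retain the injected configuration
    }

    @Override
    public Configuration getConf() {
      return conf;
    }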

http://git-wip-us.apache.org/repos/asf/hadoop/blob/671db987/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 194413a..22fd98d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
@@ -17,11 +17,12 @@
  */
 package org.apache.hadoop.io.erasurecode;
 
+import org.apache.hadoop.conf.Configuration;
+
 import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.Random;
 
-import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertTrue;
 
 /**
@@ -31,6 +32,7 @@ import static org.junit.Assert.assertTrue;
 public abstract class TestCoderBase {
   protected static Random RAND = new Random();
 
+  private 

[40/50] [abbrv] hadoop git commit: HDFS-7912. Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.

2015-03-30 Thread zhz
HDFS-7912. Erasure Coding: track BlockInfo instead of Block in 
UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ace0181d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ace0181d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ace0181d

Branch: refs/heads/HDFS-7285
Commit: ace0181d97d8d51faa0a5d020d0f7445c231f060
Parents: d2f2604
Author: Jing Zhao ji...@apache.org
Authored: Tue Mar 17 10:18:50 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:28 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 47 -
 .../PendingReplicationBlocks.java   | 51 +--
 .../blockmanagement/UnderReplicatedBlocks.java  | 49 +-
 .../hdfs/server/namenode/FSDirAttrOp.java   | 10 ++--
 .../hdfs/server/namenode/FSNamesystem.java  | 21 
 .../hadoop/hdfs/server/namenode/INode.java  | 12 ++---
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  4 +-
 .../hdfs/server/namenode/NamenodeFsck.java  | 10 ++--
 .../hadoop/hdfs/server/namenode/SafeMode.java   |  3 +-
 .../blockmanagement/BlockManagerTestUtil.java   |  5 +-
 .../blockmanagement/TestBlockManager.java   |  8 +--
 .../server/blockmanagement/TestNodeCount.java   |  3 +-
 .../TestOverReplicatedBlocks.java   |  5 +-
 .../blockmanagement/TestPendingReplication.java | 19 ---
 .../TestRBWBlockInvalidation.java   |  4 +-
 .../blockmanagement/TestReplicationPolicy.java  | 53 +++-
 .../TestUnderReplicatedBlockQueues.java | 16 +++---
 .../datanode/TestReadOnlySharedStorage.java |  9 ++--
 .../namenode/TestProcessCorruptBlocks.java  |  5 +-
 19 files changed, 180 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ace0181d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 6481738..1e8ce1f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1336,7 +1336,7 @@ public class BlockManager {
* @return number of blocks scheduled for replication during this iteration.
*/
   int computeReplicationWork(int blocksToProcess) {
-List<List<Block>> blocksToReplicate = null;
+List<List<BlockInfo>> blocksToReplicate = null;
 namesystem.writeLock();
 try {
   // Choose the blocks to be replicated
@@ -1354,7 +1354,7 @@ public class BlockManager {
* @return the number of blocks scheduled for replication
*/
   @VisibleForTesting
-  int computeReplicationWorkForBlocks(List<List<Block>> blocksToReplicate) {
+  int computeReplicationWorkForBlocks(List<List<BlockInfo>> blocksToReplicate) {
 int requiredReplication, numEffectiveReplicas;
 List<DatanodeDescriptor> containingNodes;
 DatanodeDescriptor srcNode;
@@ -1368,7 +1368,7 @@ public class BlockManager {
 try {
   synchronized (neededReplications) {
 for (int priority = 0; priority < blocksToReplicate.size(); priority++) {
-  for (Block block : blocksToReplicate.get(priority)) {
+  for (BlockInfo block : blocksToReplicate.get(priority)) {
 // block should belong to a file
 bc = blocksMap.getBlockCollection(block);
 // abandoned block or block reopened for append
@@ -1452,7 +1452,7 @@ public class BlockManager {
 }
 
 synchronized (neededReplications) {
-  Block block = rw.block;
+  BlockInfo block = rw.block;
   int priority = rw.priority;
   // Recheck since global lock was released
   // block should belong to a file
@@ -1710,7 +1710,7 @@ public class BlockManager {
* and put them back into the neededReplication queue
*/
   private void processPendingReplications() {
-Block[] timedOutItems = pendingReplications.getTimedOutBlocks();
+BlockInfo[] timedOutItems = pendingReplications.getTimedOutBlocks();
 if (timedOutItems != null) {
   namesystem.writeLock();
   try {
@@ -2883,13 +2883,13 @@ public class BlockManager {
   
   /** Set replication for the blocks. */
   public void setReplication(final short oldRepl, final short newRepl,
-  final String src, final Block... blocks) {
+  final String src, 

[14/50] [abbrv] hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts.

2015-03-30 Thread zhz
HDFS-7742. Favoring decommissioning node for replication can cause a block to 
stay
underreplicated for long periods. Contributed by Nathan Roberts.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/04ee18ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/04ee18ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/04ee18ed

Branch: refs/heads/HDFS-7285
Commit: 04ee18ed48ceef34598f954ff40940abc9fde1d2
Parents: ae3e8c6
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:10:11 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Mon Mar 30 10:10:11 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 3 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f437ad8..811ee75 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -829,6 +829,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7410. Support CreateFlags with append() to support hsync() for
 appending streams (Vinayakumar B via Colin P. McCabe)
 
+HDFS-7742. Favoring decommissioning node for replication can cause a block 
+to stay underreplicated for long periods (Nathan Roberts via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index ad40782..f6e15a3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1637,7 +1637,8 @@ public class BlockManager {
   // If so, do not select the node as src node
   if ((nodesCorrupt != null) && nodesCorrupt.contains(node))
 continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
   node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1652,13 +1653,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations

http://git-wip-us.apache.org/repos/asf/hadoop/blob/04ee18ed/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 707c780..91abb2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -535,6 +535,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42; // arbitrary
+Block aBlock = new Block(blockId, 0, 0);
+List<DatanodeDescriptor> origNodes = getNodes(0, 1);
+// Add the block to the first 

[27/50] [abbrv] hadoop git commit: HDFS-7837. Erasure Coding: allocate and persist striped blocks in NameNode. Contributed by Jing Zhao.

2015-03-30 Thread zhz
HDFS-7837. Erasure Coding: allocate and persist striped blocks in NameNode. 
Contributed by Jing Zhao.

 Conflicts:
 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/67e4d1f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/67e4d1f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/67e4d1f6

Branch: refs/heads/HDFS-7285
Commit: 67e4d1f6a79e40201204b06f74a5c7fea7fadaf4
Parents: 7527a59
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 2 13:44:33 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:25 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java  |  31 +++-
 .../hdfs/server/blockmanagement/BlockInfo.java  |   4 +-
 .../blockmanagement/BlockInfoContiguous.java|   5 +
 .../blockmanagement/BlockInfoStriped.java   |   8 +-
 .../server/blockmanagement/BlockManager.java|  44 --
 .../hdfs/server/blockmanagement/BlocksMap.java  |  20 ++-
 .../blockmanagement/DecommissionManager.java|   9 +-
 .../hdfs/server/namenode/FSDirectory.java   |  27 +++-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  69 ++---
 .../hdfs/server/namenode/FSImageFormat.java |  12 +-
 .../server/namenode/FSImageFormatPBINode.java   |   5 +-
 .../server/namenode/FSImageFormatProtobuf.java  |   9 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  39 ++---
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  25 +++-
 .../server/namenode/NameNodeLayoutVersion.java  |   3 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto|   1 +
 .../hdfs/server/namenode/TestAddBlockgroup.java |  85 ---
 .../server/namenode/TestAddStripedBlocks.java   | 146 +++
 18 files changed, 354 insertions(+), 188 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/67e4d1f6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 3ae54ce..1d69d74 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -103,21 +103,38 @@ public class BlockIdManager {
   }
 
   /**
-   * Sets the maximum allocated block ID for this filesystem. This is
+   * Sets the maximum allocated contiguous block ID for this filesystem. This 
is
* the basis for allocating new block IDs.
*/
-  public void setLastAllocatedBlockId(long blockId) {
+  public void setLastAllocatedContiguousBlockId(long blockId) {
 blockIdGenerator.skipTo(blockId);
   }
 
   /**
-   * Gets the maximum sequentially allocated block ID for this filesystem
+   * Gets the maximum sequentially allocated contiguous block ID for this
+   * filesystem
*/
-  public long getLastAllocatedBlockId() {
+  public long getLastAllocatedContiguousBlockId() {
 return blockIdGenerator.getCurrentValue();
   }
 
   /**
+   * Sets the maximum allocated striped block ID for this filesystem. This is
+   * the basis for allocating new block IDs.
+   */
+  public void setLastAllocatedStripedBlockId(long blockId) {
+blockGroupIdGenerator.skipTo(blockId);
+  }
+
+  /**
+   * Gets the maximum sequentially allocated striped block ID for this
+   * filesystem
+   */
+  public long getLastAllocatedStripedBlockId() {
+return blockGroupIdGenerator.getCurrentValue();
+  }
+
+  /**
* Sets the current generation stamp for legacy blocks
*/
   public void setGenerationStampV1(long stamp) {
@@ -188,11 +205,11 @@ public class BlockIdManager {
   /**
* Increments, logs and then returns the block ID
*/
-  public long nextBlockId() {
+  public long nextContiguousBlockId() {
 return blockIdGenerator.nextValue();
   }
 
-  public long nextBlockGroupId() {
+  public long nextStripedBlockId() {
 return blockGroupIdGenerator.nextValue();
   }
 
@@ -216,7 +233,7 @@ public class BlockIdManager {
 return id < 0;
   }
 
-  public static long convertToGroupID(long id) {
+  public static long convertToStripedID(long id) {
 return id & (~HdfsConstants.BLOCK_GROUP_INDEX_MASK);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67e4d1f6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--

[07/50] [abbrv] hadoop git commit: MAPREDUCE-6291. Correct mapred queue usage command. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zhz
MAPREDUCE-6291. Correct mapred queue usage command. Contributed by Brahma Reddy 
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/27d49e67
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/27d49e67
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/27d49e67

Branch: refs/heads/HDFS-7285
Commit: 27d49e6714ad7fc6038bc001e70ff5be3755f1ef
Parents: 89fb0f5
Author: Harsh J ha...@cloudera.com
Authored: Sat Mar 28 11:57:21 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sat Mar 28 11:58:17 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/mapred/JobQueueClient.java| 2 +-
 .../src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java   | 2 +-
 .../src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java  | 2 +-
 .../src/main/java/org/apache/hadoop/tools/HadoopArchives.java | 2 +-
 5 files changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/27d49e67/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ce16510..b0367a7 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -256,6 +256,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6291. Correct mapred queue usage command.
+(Brahma Reddy Battula via harsh)
+
 MAPREDUCE-579. Streaming slowmatch documentation. (harsh)
 
 MAPREDUCE-6287. Deprecated methods in org.apache.hadoop.examples.Sort

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27d49e67/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
index 097e338..81f6140 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
@@ -224,7 +224,7 @@ class JobQueueClient extends Configured implements Tool {
   }
 
   private void displayUsage(String cmd) {
-String prefix = "Usage: JobQueueClient ";
+String prefix = "Usage: queue ";
 if ("-queueinfo".equals(cmd)) {
   System.err.println(prefix + "[" + cmd + " <job-queue-name> [-showJobs]]");
 } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27d49e67/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
index 8f4259e..4f5b6a1 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
@@ -363,7 +363,7 @@ public class Submitter extends Configured implements Tool {
 void printUsage() {
   // The CLI package should do this for us, but I can't figure out how
   // to make it print something reasonable.
-  System.out.println("bin/hadoop pipes");
+  System.out.println("Usage: pipes ");
   System.out.println("  [-input <path>] // Input directory");
   System.out.println("  [-output <path>] // Output directory");
   System.out.println("  [-jar <jar file> // jar filename");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/27d49e67/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
 

[37/50] [abbrv] hadoop git commit: HDFS-7826. Erasure Coding: Update INodeFile quota computation for striped blocks. Contributed by Kai Sasaki.

2015-03-30 Thread zhz
HDFS-7826. Erasure Coding: Update INodeFile quota computation for striped 
blocks. Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2f26041
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2f26041
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2f26041

Branch: refs/heads/HDFS-7285
Commit: d2f2604118c5aa9cabf20f6bb7d05a6cd432b04c
Parents: c59219b
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 16 16:37:08 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:27 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsConstants.java |  3 +
 .../blockmanagement/BlockInfoStriped.java   | 12 ++-
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 89 +---
 3 files changed, 90 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2f26041/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 245b630..07b72e6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -186,4 +186,7 @@ public class HdfsConstants {
   public static final byte NUM_PARITY_BLOCKS = 2;
   public static final long BLOCK_GROUP_INDEX_MASK = 15;
   public static final byte MAX_BLOCKS_IN_GROUP = 16;
+
+  // The chunk size for striped blocks, used by erasure coding
+  public static final int BLOCK_STRIPED_CHUNK_SIZE = 64 * 1024;
 }
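
With this constant, striped files are laid out in 64 KiB cells: one full
stripe carries dataBlockNum x 64 KiB of user data plus parityBlockNum x 64 KiB
of parity, which is the geometry the quota computation below builds on.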

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2f26041/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 84c3be6..cef8318 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 
 /**
@@ -34,6 +35,7 @@ import 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
  * array to record the block index for each triplet.
  */
 public class BlockInfoStriped extends BlockInfo {
+  private final int   chunkSize = HdfsConstants.BLOCK_STRIPED_CHUNK_SIZE;
   private final short dataBlockNum;
   private final short parityBlockNum;
   /**
@@ -56,7 +58,7 @@ public class BlockInfoStriped extends BlockInfo {
 this.setBlockCollection(b.getBlockCollection());
   }
 
-  short getTotalBlockNum() {
+  public short getTotalBlockNum() {
 return (short) (dataBlockNum + parityBlockNum);
   }
 
@@ -178,6 +180,14 @@ public class BlockInfoStriped extends BlockInfo {
 }
   }
 
+  public long spaceConsumed() {
+// For striped blocks, the total usage should be the sum of the data
+// blocks and parity blocks, because `getNumBytes` only counts the
+// actual data size.
+return ((getNumBytes() - 1) / (dataBlockNum * chunkSize) + 1)
+* chunkSize * parityBlockNum + getNumBytes();
+  }
+
   @Override
   public final boolean isStriped() {
 return true;
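
A worked instance of spaceConsumed() above, assuming an RS-6-3 layout
(dataBlockNum = 6, parityBlockNum = 3, chunkSize = 64 KiB = 65536) and a
1 MiB file: the file spans (1048576 - 1) / (6 * 65536) + 1 = 3 stripes, each
stripe adds 3 parity cells, so the result is
3 * 65536 * 3 + 1048576 = 1638400 bytes, about 1.56x the logical size; small
files pay proportionally more than the asymptotic 1.5x because parity is
charged per started stripe.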

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2f26041/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 452c230..b1c57b0 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -42,6 +42,7 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import 

[50/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts in the branch when merging (this commit is for HDFS-7742)

2015-03-30 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts in the branch when merging (this 
commit is for HDFS-7742)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6543ac9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6543ac9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6543ac9

Branch: refs/heads/HDFS-7285
Commit: a6543ac978ac6e0e8dbf7ce32c41fb80525639ab
Parents: d50bbd7
Author: Zhe Zhang z...@apache.org
Authored: Mon Mar 30 10:23:09 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:23:09 2015 -0700

--
 .../hdfs/server/blockmanagement/TestBlockManager.java   | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6543ac9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index cbea3d8..43f4607 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -552,11 +552,11 @@ public class TestBlockManager {
 assertNotNull("Chooses decommissioning source node for a normal replication"
 +  " if all available source nodes have reached their replication"
 +  " limits below the hard limit.",
-bm.chooseSourceDatanode(
-aBlock,
+bm.chooseSourceDatanodes(
+bm.getStoredBlock(aBlock),
 cntNodes,
 liveNodes,
-new NumberReplicas(),
+new NumberReplicas(), new LinkedList<Short>(), 1,
 UnderReplicatedBlocks.QUEUE_UNDER_REPLICATED));
 
 
@@ -566,11 +566,11 @@ public class TestBlockManager {
 
 assertNull("Does not choose a source decommissioning node for a normal"
 +  " replication when all available nodes exceed the hard limit.",
-bm.chooseSourceDatanode(
-aBlock,
+bm.chooseSourceDatanodes(
+bm.getStoredBlock(aBlock),
 cntNodes,
 liveNodes,
-new NumberReplicas(),
+new NumberReplicas(), new LinkedList<Short>(), 1,
 UnderReplicatedBlocks.QUEUE_UNDER_REPLICATED));
   }
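
Note the API shift this merge forces on the test: the singular
chooseSourceDatanode(Block, ...) becomes chooseSourceDatanodes(BlockInfo, ...),
which additionally takes a list (an empty LinkedList<Short> here) and a count,
presumably so striped block groups can report live block indices and yield
more than one source node per group.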
 



[11/50] [abbrv] hadoop git commit: HDFS-6408. Remove redundant definitions in log4j.properties. Contributed by Abhiraj Butala.

2015-03-30 Thread zhz
HDFS-6408. Remove redundant definitions in log4j.properties. Contributed by 
Abhiraj Butala.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/232eca94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/232eca94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/232eca94

Branch: refs/heads/HDFS-7285
Commit: 232eca944a721c62f37e9012546a7fa814da6e01
Parents: 257c77f
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Mar 30 11:25:35 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Mar 30 11:25:35 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 .../src/contrib/bkjournal/src/test/resources/log4j.properties   | 5 -
 2 files changed, 3 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/232eca94/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e026f85..f4991da 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -350,6 +350,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties.
 (Abhiraj Butala via aajisaka)
 
+HDFS-6408. Remove redundant definitions in log4j.properties.
+(Abhiraj Butala via aajisaka)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/232eca94/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
index f66c84b..93c22f7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
@@ -53,8 +53,3 @@ 
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{
 
 # Max log file size of 10MB
 log4j.appender.ROLLINGFILE.MaxFileSize=10MB
-
-log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
-log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n
-
-



[06/50] [abbrv] hadoop git commit: HDFS-7700. Document quota support for storage types. (Contributed by Xiaoyu Yao)

2015-03-30 Thread zhz
HDFS-7700. Document quota support for storage types. (Contributed by Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/89fb0f57
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/89fb0f57
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/89fb0f57

Branch: refs/heads/HDFS-7285
Commit: 89fb0f57ef5d2e736ad2d50215369e55f3039e40
Parents: e97f8e4
Author: Arpit Agarwal a...@apache.org
Authored: Fri Mar 27 19:49:26 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Fri Mar 27 19:49:26 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../src/site/markdown/HDFSCommands.md   |  8 ++--
 .../src/site/markdown/HdfsQuotaAdminGuide.md| 41 ++--
 3 files changed, 45 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/89fb0f57/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index af1dd60..f7cc2bc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1311,6 +1311,9 @@ Release 2.7.0 - UNRELEASED
   HDFS-7824. GetContentSummary API and its namenode implementation for
   Storage Type Quota/Usage. (Xiaoyu Yao via Arpit Agarwal)
 
+  HDFS-7700. Document quota support for storage types. (Xiaoyu Yao via
+  Arpit Agarwal)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/89fb0f57/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 191b5bc..bdb051b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -307,8 +307,8 @@ Usage:
   [-refreshNodes]
   [-setQuota <quota> <dirname>...<dirname>]
   [-clrQuota <dirname>...<dirname>]
-  [-setSpaceQuota <quota> <dirname>...<dirname>]
-  [-clrSpaceQuota <dirname>...<dirname>]
+  [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
+  [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
   [-setStoragePolicy <path> <policyName>]
   [-getStoragePolicy <path>]
   [-finalizeUpgrade]
@@ -342,8 +342,8 @@ Usage:
 | `-refreshNodes` | Re-read the hosts and exclude files to update the set of 
Datanodes that are allowed to connect to the Namenode and those that should be 
decommissioned or recommissioned. |
 | `-setQuota` \<quota\> \<dirname\>...\<dirname\> | See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
 | `-clrQuota` \<dirname\>...\<dirname\> | See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
-| `-setSpaceQuota` \<quota\> \<dirname\>...\<dirname\> | See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
-| `-clrSpaceQuota` \<dirname\>...\<dirname\> | See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
+| `-setSpaceQuota` \<quota\> `[-storageType <storagetype>]` 
\<dirname\>...\<dirname\> | See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
+| `-clrSpaceQuota` `[-storageType <storagetype>]` \<dirname\>...\<dirname\> | 
See [HDFS Quotas 
Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the 
detail. |
 | `-setStoragePolicy` \<path\> \<policyName\> | Set a storage policy to a file 
or a directory. |
 | `-getStoragePolicy` \<path\> | Get the storage policy of a file or a 
directory. |
 | `-finalizeUpgrade` | Finalize upgrade of HDFS. Datanodes delete their 
previous version working directories, followed by Namenode doing the same. This 
completes the upgrade process. |
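
Concretely, the new `-storageType` option admits per-type quotas; for example
(illustrative paths), `hdfs dfsadmin -setSpaceQuota 10g -storageType SSD
/ssd-data` caps SSD usage under /ssd-data, and `hdfs dfsadmin -clrSpaceQuota
-storageType SSD /ssd-data` removes that cap again.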

http://git-wip-us.apache.org/repos/asf/hadoop/blob/89fb0f57/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
index a1bcd78..7c15bb1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
@@ -19,6 +19,7 @@ HDFS Quotas Guide
 * 

[10/50] [abbrv] hadoop git commit: HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties. Contributed by Abhiraj Butala.

2015-03-30 Thread zhz
HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties. Contributed 
by Abhiraj Butala.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/257c77f8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/257c77f8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/257c77f8

Branch: refs/heads/HDFS-7285
Commit: 257c77f895e8e4c3d8748909ebbd3ba7e7f880fc
Parents: 3d9132d
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Mar 30 10:52:15 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Mon Mar 30 10:52:15 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/contrib/bkjournal/src/test/resources/log4j.properties | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/257c77f8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 496db06..e026f85 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -347,6 +347,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating
 an encryption zone. (awang via asuresh)
 
+HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties.
+(Abhiraj Butala via aajisaka)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/257c77f8/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
index 8a6b217..f66c84b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
@@ -53,8 +53,6 @@ 
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{
 
 # Max log file size of 10MB
 log4j.appender.ROLLINGFILE.MaxFileSize=10MB
-# uncomment the next line to limit number of backup files
-#log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n



[02/50] [abbrv] hadoop git commit: HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zhz
HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy 
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e074952b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e074952b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e074952b

Branch: refs/heads/HDFS-7285
Commit: e074952bd6bedf58d993bbea690bad08c9a0e6aa
Parents: 60882ab
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Fri Mar 27 23:15:51 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Fri Mar 27 23:15:51 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/tools/DistCp.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e074952b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a7d4adc..febbf6b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -481,6 +481,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11724. DistCp throws NPE when the target directory is root.
 (Lei Eddy Xu via Yongjun Zhang) 
 
+HADOOP-11760. Fix typo of javadoc in DistCp. (Brahma Reddy Battula via
+ozawa).
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e074952b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
index ada4b25..6921a1e 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
@@ -401,7 +401,7 @@ public class DistCp extends Configured implements Tool {
* job staging directory
*
* @return Returns the working folder information
-   * @throws Exception - EXception if any
+   * @throws Exception - Exception if any
*/
   private Path createMetaFolderPath() throws Exception {
 Configuration configuration = getConf();



[21/50] [abbrv] hadoop git commit: HADOOP-11541. Raw XOR coder

2015-03-30 Thread zhz
HADOOP-11541. Raw XOR coder


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/808cb1d3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/808cb1d3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/808cb1d3

Branch: refs/heads/HDFS-7285
Commit: 808cb1d38bcb143501fdb711e8de9e959f71e856
Parents: 7302ab1
Author: Kai Zheng dran...@apache.org
Authored: Sun Feb 8 01:40:27 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:24 2015 -0700

--
 .../io/erasurecode/rawcoder/XorRawDecoder.java  |  81 ++
 .../io/erasurecode/rawcoder/XorRawEncoder.java  |  61 +
 .../hadoop/io/erasurecode/TestCoderBase.java| 262 +++
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  96 +++
 .../erasurecode/rawcoder/TestXorRawCoder.java   |  52 
 5 files changed, 552 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/808cb1d3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
new file mode 100644
index 000..98307a7
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A raw decoder in XOR code scheme in pure Java, adapted from HDFS-RAID.
+ */
+public class XorRawDecoder extends AbstractRawErasureDecoder {
+
+  @Override
+  protected void doDecode(ByteBuffer[] inputs, int[] erasedIndexes,
+  ByteBuffer[] outputs) {
+assert(erasedIndexes.length == outputs.length);
+assert(erasedIndexes.length <= 1);
+
+int bufSize = inputs[0].remaining();
+int erasedIdx = erasedIndexes[0];
+
+// Set the output to zeros.
+for (int j = 0; j < bufSize; j++) {
+  outputs[0].put(j, (byte) 0);
+}
+
+// Process the inputs.
+for (int i = 0; i < inputs.length; i++) {
+  // Skip the erased location.
+  if (i == erasedIdx) {
+continue;
+  }
+
+  for (int j = 0; j < bufSize; j++) {
+outputs[0].put(j, (byte) (outputs[0].get(j) ^ inputs[i].get(j)));
+  }
+}
+  }
+
+  @Override
+  protected void doDecode(byte[][] inputs, int[] erasedIndexes,
+  byte[][] outputs) {
+assert(erasedIndexes.length == outputs.length);
+assert(erasedIndexes.length <= 1);
+
+int bufSize = inputs[0].length;
+int erasedIdx = erasedIndexes[0];
+
+// Set the output to zeros.
+for (int j = 0; j < bufSize; j++) {
+  outputs[0][j] = 0;
+}
+
+// Process the inputs.
+for (int i = 0; i < inputs.length; i++) {
+  // Skip the erased location.
+  if (i == erasedIdx) {
+continue;
+  }
+
+  for (int j = 0; j < bufSize; j++) {
+outputs[0][j] ^= inputs[i][j];
+  }
+}
+  }
+
+}
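
The paired asserts above encode the fundamental limit of XOR coding: a single
parity unit can repair at most one erasure. A self-contained illustration of
the identity the decode loops rely on (plain Java, no Hadoop types):

    public class XorDemo {
      public static void main(String[] args) {
        byte d0 = 0x3c, d1 = 0x5a, d2 = (byte) 0xf0;
        byte parity = (byte) (d0 ^ d1 ^ d2);         // encode: XOR of all data
        byte recovered = (byte) (d0 ^ d2 ^ parity);  // decode: XOR of survivors
        System.out.println(recovered == d1);         // prints true
      }
    }

Per byte position, doDecode does exactly this: zero the output, then fold
every surviving input (data and parity alike) into it with XOR, skipping only
the erased index.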

http://git-wip-us.apache.org/repos/asf/hadoop/blob/808cb1d3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
new file mode 100644
index 000..99b20b9
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the 

[34/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903 and HDFS-7435. Contributed by Zhe Zhang.

2015-03-30 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903 and 
HDFS-7435. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c59219ba
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c59219ba
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c59219ba

Branch: refs/heads/HDFS-7285
Commit: c59219ba605e7083d037f6420c2680237a55e60a
Parents: 80fe23f
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 16 14:27:21 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:27 2015 -0700

--
 .../hadoop/hdfs/server/blockmanagement/DecommissionManager.java | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java| 2 +-
 .../hadoop/hdfs/server/namenode/snapshot/FileDiffList.java  | 3 ++-
 .../src/test/java/org/apache/hadoop/hdfs/TestDecommission.java  | 5 ++---
 .../hadoop/hdfs/server/namenode/TestAddStripedBlocks.java   | 4 ++--
 5 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c59219ba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
index 0faf3ad..df31d6e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
@@ -536,7 +536,7 @@ public class DecommissionManager {
  */
 private void processBlocksForDecomInternal(
 final DatanodeDescriptor datanode,
-    final Iterator<BlockInfoContiguous> it,
+    final Iterator<? extends BlockInfo> it,
     final List<BlockInfoContiguous> insufficientlyReplicated,
 boolean pruneSufficientlyReplicated) {
   boolean firstReplicationLog = true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c59219ba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 7bb504ef..bf1011a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2001,7 +2001,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 }
 
 // Check if the file is already being truncated with the same length
-final BlockInfoContiguous last = file.getLastBlock();
+final BlockInfo last = file.getLastBlock();
    if (last != null && last.getBlockUCState() == BlockUCState.UNDER_RECOVERY) {
   final Block truncateBlock
   = ((BlockInfoContiguousUnderConstruction)last).getTruncateBlock();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c59219ba/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
index a1263c5..d0248eb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
@@ -21,6 +21,7 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
@@ -132,7 +133,7 @@ public class FileDiffList extends
   break;
 }
 // Check if last block is part of truncate recovery
-BlockInfoContiguous lastBlock = file.getLastBlock();
+BlockInfo lastBlock = file.getLastBlock();
 Block 

[28/50] [abbrv] hadoop git commit: HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed by Jing Zhao.

2015-03-30 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7527a599/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
new file mode 100644
index 000..47445be
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
@@ -0,0 +1,112 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
+
+/**
+ * Feature for file with striped blocks
+ */
+class FileWithStripedBlocksFeature implements INode.Feature {
+  private BlockInfoStriped[] blocks;
+
+  FileWithStripedBlocksFeature() {
+blocks = new BlockInfoStriped[0];
+  }
+
+  FileWithStripedBlocksFeature(BlockInfoStriped[] blocks) {
+Preconditions.checkArgument(blocks != null);
+this.blocks = blocks;
+  }
+
+  BlockInfoStriped[] getBlocks() {
+return this.blocks;
+  }
+
+  void setBlock(int index, BlockInfoStriped blk) {
+blocks[index] = blk;
+  }
+
+  BlockInfoStriped getLastBlock() {
+return blocks == null || blocks.length == 0 ?
+null : blocks[blocks.length - 1];
+  }
+
+  int numBlocks() {
+return blocks == null ? 0 : blocks.length;
+  }
+
+  void updateBlockCollection(INodeFile file) {
+if (blocks != null) {
+  for (BlockInfoStriped blk : blocks) {
+blk.setBlockCollection(file);
+  }
+}
+  }
+
+  private void setBlocks(BlockInfoStriped[] blocks) {
+this.blocks = blocks;
+  }
+
+  void addBlock(BlockInfoStriped newBlock) {
+if (this.blocks == null) {
+  this.setBlocks(new BlockInfoStriped[]{newBlock});
+} else {
+  int size = this.blocks.length;
+  BlockInfoStriped[] newlist = new BlockInfoStriped[size + 1];
+  System.arraycopy(this.blocks, 0, newlist, 0, size);
+  newlist[size] = newBlock;
+  this.setBlocks(newlist);
+}
+  }
+
+  boolean removeLastBlock(Block oldblock) {
+if (blocks == null || blocks.length == 0) {
+  return false;
+}
+int newSize = blocks.length - 1;
+if (!blocks[newSize].equals(oldblock)) {
+  return false;
+}
+
+//copy to a new list
+BlockInfoStriped[] newlist = new BlockInfoStriped[newSize];
+System.arraycopy(blocks, 0, newlist, 0, newSize);
+setBlocks(newlist);
+return true;
+  }
+
+  void truncateStripedBlocks(int n) {
+final BlockInfoStriped[] newBlocks;
+if (n == 0) {
+  newBlocks = new BlockInfoStriped[0];
+} else {
+  newBlocks = new BlockInfoStriped[n];
+  System.arraycopy(getBlocks(), 0, newBlocks, 0, n);
+}
+// set new blocks
+setBlocks(newBlocks);
+  }
+
+  void clear() {
+this.blocks = null;
+  }
+}
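Note the block-array handling above: addBlock(), removeLastBlock() and truncateStripedBlocks() never grow or shrink the array in place; each allocates a fresh array and copies. A tiny standalone illustration of that copy-on-write pattern (String stands in for BlockInfoStriped; the motivation suggested at the end is an inference, not something the patch states):

    import java.util.Arrays;

    public class CopyOnWriteArraySketch {
      public static void main(String[] args) {
        String[] blocks = {"blk_a", "blk_b"};
        // Grow by one, as addBlock() does.
        String[] grown = Arrays.copyOf(blocks, blocks.length + 1);
        grown[blocks.length] = "blk_c";
        // Shrink to a prefix, as truncateStripedBlocks(n) does.
        String[] truncated = Arrays.copyOf(grown, 1);
        System.out.println(Arrays.toString(blocks));    // [blk_a, blk_b] - untouched
        System.out.println(Arrays.toString(grown));     // [blk_a, blk_b, blk_c]
        System.out.println(Arrays.toString(truncated)); // [blk_a]
      }
    }

A reader holding a reference to the old array never observes a half-updated list, which presumably matters under the NameNode's concurrent read paths.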

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7527a599/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 1858e0a..640fc57 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState.UNDER_CONSTRUCTION;
 import static 

[17/50] [abbrv] hadoop git commit: HDFS-7652. Process block reports for erasure coded blocks. Contributed by Zhe Zhang

2015-03-30 Thread zhz
HDFS-7652. Process block reports for erasure coded blocks. Contributed by Zhe 
Zhang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7886ed17
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7886ed17
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7886ed17

Branch: refs/heads/HDFS-7285
Commit: 7886ed1705a6c082475578e33bb9a715ab888b22
Parents: 42e26e2
Author: Zhe Zhang z...@apache.org
Authored: Mon Feb 9 10:27:14 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:23 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java|  8 
 .../hdfs/server/blockmanagement/BlockManager.java | 18 +-
 2 files changed, 21 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7886ed17/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index c8b9d20..e7f8a05 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -211,4 +211,12 @@ public class BlockIdManager {
   .LAST_RESERVED_BLOCK_ID);
 generationStampV1Limit = GenerationStamp.GRANDFATHER_GENERATION_STAMP;
   }
+
+  public static boolean isStripedBlockID(long id) {
+    return id < 0;
+  }
+
+  public static long convertToGroupID(long id) {
+    return id & (~(HdfsConstants.MAX_BLOCKS_IN_GROUP - 1));
+  }
 }
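The two helpers encode the striped-block ID convention: striped IDs are allocated from the negative range, and the low bits index a block within its group, so masking them off yields the group ID. A sketch of the same arithmetic, assuming MAX_BLOCKS_IN_GROUP = 16 as added by HDFS-7339 elsewhere in this digest (the sample IDs are illustrative):

    public class BlockIdSketch {
      static final int MAX_BLOCKS_IN_GROUP = 16; // per HdfsConstants in HDFS-7339

      static boolean isStripedBlockID(long id) {
        return id < 0;
      }

      static long convertToGroupID(long id) {
        return id & ~(MAX_BLOCKS_IN_GROUP - 1); // clear the low 4 index bits
      }

      public static void main(String[] args) {
        long groupId = -160;        // a group ID: negative, multiple of 16
        long member = groupId + 5;  // the 6th internal block of that group
        System.out.println(isStripedBlockID(member));            // true
        System.out.println(convertToGroupID(member) == groupId); // true
      }
    }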

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7886ed17/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index f6e15a3..3102a08 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1925,7 +1925,7 @@ public class BlockManager {
   break;
 }
 
-BlockInfoContiguous bi = blocksMap.getStoredBlock(b);
+BlockInfoContiguous bi = getStoredBlock(b);
 if (bi == null) {
   if (LOG.isDebugEnabled()) {
  LOG.debug("BLOCK* rescanPostponedMisreplicatedBlocks: " +
@@ -2068,7 +2068,7 @@ public class BlockManager {
 continue;
   }
   
-  BlockInfoContiguous storedBlock = blocksMap.getStoredBlock(iblk);
+  BlockInfoContiguous storedBlock = getStoredBlock(iblk);
   // If block does not belong to any file, we are done.
   if (storedBlock == null) continue;
   
@@ -2208,7 +2208,7 @@ public class BlockManager {
 }
 
 // find block by blockId
-BlockInfoContiguous storedBlock = blocksMap.getStoredBlock(block);
+BlockInfoContiguous storedBlock = getStoredBlock(block);
 if(storedBlock == null) {
   // If blocksMap does not contain reported block id,
   // the replica should be removed from the data-node.
@@ -2499,7 +2499,7 @@ public class BlockManager {
 DatanodeDescriptor node = storageInfo.getDatanodeDescriptor();
 if (block instanceof BlockInfoContiguousUnderConstruction) {
   //refresh our copy in case the block got completed in another thread
-  storedBlock = blocksMap.getStoredBlock(block);
+  storedBlock = getStoredBlock(block);
 } else {
   storedBlock = block;
 }
@@ -3362,7 +3362,15 @@ public class BlockManager {
   }
 
   public BlockInfoContiguous getStoredBlock(Block block) {
-return blocksMap.getStoredBlock(block);
+    BlockInfoContiguous info = null;
+    if (BlockIdManager.isStripedBlockID(block.getBlockId())) {
+      info = blocksMap.getStoredBlock(
+          new Block(BlockIdManager.convertToGroupID(block.getBlockId())));
+    }
+    if (info == null) {
+      info = blocksMap.getStoredBlock(block);
+    }
+    return info;
   }
 
   /** updates a block in under replication queue */



[08/50] [abbrv] hadoop git commit: YARN-3288. Document and fix indentation in the DockerContainerExecutor code

2015-03-30 Thread zhz
YARN-3288. Document and fix indentation in the DockerContainerExecutor code


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e0ccea33
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e0ccea33
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e0ccea33

Branch: refs/heads/HDFS-7285
Commit: e0ccea33c9e12f6930b2867e14b1b37569fed659
Parents: 27d49e6
Author: Ravi Prakash ravip...@altiscale.com
Authored: Sat Mar 28 08:00:41 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Sat Mar 28 08:00:41 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../server/nodemanager/ContainerExecutor.java   |  18 +-
 .../nodemanager/DockerContainerExecutor.java| 229 +++
 .../launcher/ContainerLaunch.java   |   8 +-
 .../TestDockerContainerExecutor.java|  98 
 .../TestDockerContainerExecutorWithMocks.java   | 110 +
 6 files changed, 277 insertions(+), 188 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0ccea33/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d6ded77..fb233e3 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -81,6 +81,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3397. yarn rmadmin should skip -failover. (J.Andreina via kasha)
 
+YARN-3288. Document and fix indentation in the DockerContainerExecutor code
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0ccea33/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 377fd1d..1c670a1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -210,8 +210,22 @@ public abstract class ContainerExecutor implements 
Configurable {
 }
   }
 
-  public void writeLaunchEnv(OutputStream out, Map<String, String> environment, Map<Path, List<String>> resources, List<String> command) throws IOException{
-    ContainerLaunch.ShellScriptBuilder sb = ContainerLaunch.ShellScriptBuilder.create();
+  /**
+   * This method writes out the launch environment of a container. This can be
+   * overridden by extending ContainerExecutors to provide different behaviors
+   * @param out the output stream to which the environment is written (usually
+   * a script file which will be executed by the Launcher)
+   * @param environment The environment variables and their values
+   * @param resources The resources which have been localized for this 
container
+   * Symlinks will be created to these localized resources
+   * @param command The command that will be run.
+   * @throws IOException if any errors happened writing to the OutputStream,
+   * while creating symlinks
+   */
+  public void writeLaunchEnv(OutputStream out, Map<String, String> environment,
+      Map<Path, List<String>> resources, List<String> command) throws IOException{
+    ContainerLaunch.ShellScriptBuilder sb =
+        ContainerLaunch.ShellScriptBuilder.create();
     if (environment != null) {
       for (Map.Entry<String, String> env : environment.entrySet()) {
 sb.env(env.getKey().toString(), env.getValue().toString());
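For context on what that loop ends up producing: ShellScriptBuilder's Unix flavor ultimately writes one shell export per environment entry into the launch script (my paraphrase — the builder itself is not shown in this excerpt). A rough standalone approximation of that output, not the actual builder API:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.PrintStream;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LaunchEnvSketch {
      // Writes one shell export per variable, roughly what the builder emits.
      static void writeEnv(OutputStream out, Map<String, String> env)
          throws IOException {
        PrintStream ps = new PrintStream(out, true, "UTF-8");
        for (Map.Entry<String, String> e : env.entrySet()) {
          ps.println("export " + e.getKey() + "=\"" + e.getValue() + "\"");
        }
      }

      public static void main(String[] args) throws IOException {
        Map<String, String> env = new LinkedHashMap<>();
        env.put("JAVA_HOME", "/usr/lib/jvm/default"); // illustrative value
        writeEnv(System.out, env); // export JAVA_HOME="/usr/lib/jvm/default"
      }
    }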

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e0ccea33/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
index c854173..71eaa04 100644
--- 

[23/50] [abbrv] hadoop git commit: HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding (Kai Zheng via umamahesh)

2015-03-30 Thread zhz
HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding (Kai 
Zheng via umamahesh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2fc3e353
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2fc3e353
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2fc3e353

Branch: refs/heads/HDFS-7285
Commit: 2fc3e353249eff6bc3d0e24c7d9d4c4121b7051c
Parents: ffc4171
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Thu Jan 29 14:15:13 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:24 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  4 +
 .../apache/hadoop/io/erasurecode/ECChunk.java   | 82 +
 .../rawcoder/AbstractRawErasureCoder.java   | 63 +
 .../rawcoder/AbstractRawErasureDecoder.java | 93 
 .../rawcoder/AbstractRawErasureEncoder.java | 93 
 .../erasurecode/rawcoder/RawErasureCoder.java   | 78 
 .../erasurecode/rawcoder/RawErasureDecoder.java | 55 
 .../erasurecode/rawcoder/RawErasureEncoder.java | 54 
 8 files changed, 522 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2fc3e353/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
new file mode 100644
index 000..8ce5a89
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -0,0 +1,4 @@
+  BREAKDOWN OF HADOOP-11264 SUBTASKS AND RELATED JIRAS (Common part of 
HDFS-7285)
+
+HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding
+(Kai Zheng via umamahesh)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2fc3e353/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
new file mode 100644
index 000..f84eb11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A wrapper for ByteBuffer or bytes array for an erasure code chunk.
+ */
+public class ECChunk {
+
+  private ByteBuffer chunkBuffer;
+
+  /**
+   * Wrapping a ByteBuffer
+   * @param buffer
+   */
+  public ECChunk(ByteBuffer buffer) {
+this.chunkBuffer = buffer;
+  }
+
+  /**
+   * Wrapping a bytes array
+   * @param buffer
+   */
+  public ECChunk(byte[] buffer) {
+this.chunkBuffer = ByteBuffer.wrap(buffer);
+  }
+
+  /**
+   * Convert to ByteBuffer
+   * @return ByteBuffer
+   */
+  public ByteBuffer getBuffer() {
+return chunkBuffer;
+  }
+
+  /**
+   * Convert an array of this chunks to an array of ByteBuffers
+   * @param chunks
+   * @return an array of ByteBuffers
+   */
+  public static ByteBuffer[] toBuffers(ECChunk[] chunks) {
+ByteBuffer[] buffers = new ByteBuffer[chunks.length];
+
+    for (int i = 0; i < chunks.length; i++) {
+  buffers[i] = chunks[i].getBuffer();
+}
+
+return buffers;
+  }
+
+  /**
+   * Convert an array of this chunks to an array of byte array
+   * @param chunks
+   * @return an array of byte array
+   */
+  public static byte[][] toArray(ECChunk[] chunks) {
+byte[][] bytesArr = new byte[chunks.length][];
+
+    for (int i = 0; i < chunks.length; i++) {
+  bytesArr[i] = chunks[i].getBuffer().array();
+}
+
+return bytesArr;
+  }
+}
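A short usage sketch for the two converters above (assumes hadoop-common on the classpath; otherwise self-contained):

    import java.nio.ByteBuffer;
    import org.apache.hadoop.io.erasurecode.ECChunk;

    public class ECChunkSketch {
      public static void main(String[] args) {
        // Wrap a byte array and a heap ByteBuffer as chunks.
        ECChunk[] chunks = {
            new ECChunk(new byte[]{1, 2, 3}),
            new ECChunk(ByteBuffer.allocate(3))
        };
        ByteBuffer[] buffers = ECChunk.toBuffers(chunks); // buffer views, no copy
        byte[][] arrays = ECChunk.toArray(chunks);        // backing arrays
        System.out.println(buffers.length + ", " + arrays.length); // 2, 2
      }
    }

One caveat visible in the code above: toArray() calls getBuffer().array(), so it only works for array-backed (heap) buffers — a direct ByteBuffer would throw at that point.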


[12/50] [abbrv] hadoop git commit: HDFS-7890. Improve information on Top users for metrics in RollingWindowsManager and lower log level (Contributed by J.Andreina)

2015-03-30 Thread zhz
HDFS-7890. Improve information on Top users for metrics in 
RollingWindowsManager and lower log level (Contributed by J.Andreina)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1ed9fb76
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1ed9fb76
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1ed9fb76

Branch: refs/heads/HDFS-7285
Commit: 1ed9fb76645ecd195afe0067497dca10a3fb997d
Parents: 232eca9
Author: Vinayakumar B vinayakum...@apache.org
Authored: Mon Mar 30 10:02:48 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Mon Mar 30 10:02:48 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hdfs/server/namenode/top/window/RollingWindowManager.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ed9fb76/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f4991da..9b1cc3e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -353,6 +353,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-6408. Remove redundant definitions in log4j.properties.
 (Abhiraj Butala via aajisaka)
 
+HDFS-7890. Improve information on Top users for metrics in
+RollingWindowsManager and lower log level (J.Andreina via vinayakumarb)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1ed9fb76/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
index 00e7087..4759cc8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
@@ -245,7 +245,7 @@ public class RollingWindowManager {
   metricName, userName, windowSum);
   topN.offer(new NameValuePair(userName, windowSum));
 }
-    LOG.info("topN size for command {} is: {}", metricName, topN.size());
+    LOG.debug("topN users size for command {} is: {}", metricName, topN.size());
 return topN;
   }
 



[24/50] [abbrv] hadoop git commit: HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng

2015-03-30 Thread zhz
HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c37d9823
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c37d9823
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c37d9823

Branch: refs/heads/HDFS-7285
Commit: c37d9823d41b3b6a840ac057b39216ba32b94fa3
Parents: 850a9ef
Author: drankye dran...@gmail.com
Authored: Thu Feb 12 21:12:44 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:25 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   4 +
 .../io/erasurecode/rawcoder/JRSRawDecoder.java  |  69 +++
 .../io/erasurecode/rawcoder/JRSRawEncoder.java  |  78 +++
 .../erasurecode/rawcoder/RawErasureCoder.java   |   2 +-
 .../erasurecode/rawcoder/util/GaloisField.java  | 497 +++
 .../io/erasurecode/rawcoder/util/RSUtil.java|  22 +
 .../hadoop/io/erasurecode/TestCoderBase.java|  28 +-
 .../erasurecode/rawcoder/TestJRSRawCoder.java   |  93 
 .../erasurecode/rawcoder/TestRawCoderBase.java  |   5 +-
 .../erasurecode/rawcoder/TestXorRawCoder.java   |   1 -
 10 files changed, 786 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c37d9823/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 9728f97..7bbacf7 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -8,3 +8,7 @@
 
 HADOOP-11541. Raw XOR coder
 ( Kai Zheng )
+
+HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng
+( Kai Zheng )
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c37d9823/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
new file mode 100644
index 000..dbb689e
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
@@ -0,0 +1,69 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import org.apache.hadoop.io.erasurecode.rawcoder.util.RSUtil;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A raw erasure decoder in RS code scheme in pure Java in case native one
+ * isn't available in some environment. Please always use native 
implementations
+ * when possible.
+ */
+public class JRSRawDecoder extends AbstractRawErasureDecoder {
+  // To describe and calculate the needed Vandermonde matrix
+  private int[] errSignature;
+  private int[] primitivePower;
+
+  @Override
+  public void initialize(int numDataUnits, int numParityUnits, int chunkSize) {
+super.initialize(numDataUnits, numParityUnits, chunkSize);
+    assert (getNumDataUnits() + getNumParityUnits() < RSUtil.GF.getFieldSize());
+
+this.errSignature = new int[getNumParityUnits()];
+this.primitivePower = RSUtil.getPrimitivePower(getNumDataUnits(),
+getNumParityUnits());
+  }
+
+  @Override
+  protected void doDecode(ByteBuffer[] inputs, int[] erasedIndexes,
+  ByteBuffer[] outputs) {
+    for (int i = 0; i < erasedIndexes.length; i++) {
+  errSignature[i] = primitivePower[erasedIndexes[i]];
+  RSUtil.GF.substitute(inputs, outputs[i], primitivePower[i]);
+}
+
+int dataLen = inputs[0].remaining();
+RSUtil.GF.solveVandermondeSystem(errSignature, outputs,
+erasedIndexes.length, dataLen);
+  }
+
+  @Override
+  protected void 
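The decode path above leans entirely on GaloisField arithmetic (substitute(), solveVandermondeSystem()). For intuition, multiplication in GF(2^8) is carry-less polynomial multiplication reduced by a primitive polynomial; a minimal sketch assuming the common 0x11d polynomial (the field the HDFS-RAID GaloisField constructs is configurable, so treat the polynomial as an assumption):

    public class GF256Sketch {
      // Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d).
      static int mul(int a, int b) {
        int p = 0;
        while (b != 0) {
          if ((b & 1) != 0) p ^= a;         // add (XOR) the current multiple
          b >>= 1;
          a <<= 1;
          if ((a & 0x100) != 0) a ^= 0x11d; // reduce on overflow past x^7
        }
        return p & 0xff;
      }

      public static void main(String[] args) {
        System.out.println(mul(2, 3));    // 6
        System.out.println(mul(0x80, 2)); // 29 (0x1d): reduction kicked in
      }
    }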

[16/50] [abbrv] hadoop git commit: HDFS-7347. Configurable erasure coding policy for individual files and directories ( Contributed by Zhe Zhang )

2015-03-30 Thread zhz
HDFS-7347. Configurable erasure coding policy for individual files and 
directories ( Contributed by Zhe Zhang )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d63b947
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d63b947
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d63b947

Branch: refs/heads/HDFS-7285
Commit: 1d63b947e93001a6e9d4f51f09cfdc2050c44dea
Parents: e7ea2a8
Author: Vinayakumar B vinayakum...@apache.org
Authored: Thu Nov 6 10:03:26 2014 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:22 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  4 ++
 .../hadoop/hdfs/protocol/HdfsConstants.java |  2 +
 .../BlockStoragePolicySuite.java|  5 ++
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 12 +++-
 .../TestBlockInitialEncoding.java   | 75 
 5 files changed, 95 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d63b947/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
new file mode 100644
index 000..2ef8527
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -0,0 +1,4 @@
+  BREAKDOWN OF HDFS-7285 SUBTASKS AND RELATED JIRAS
+
+HDFS-7347. Configurable erasure coding policy for individual files and
+directories ( Zhe Zhang via vinayakumarb )
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d63b947/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 7cf8a47..54c650b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -171,6 +171,7 @@ public class HdfsConstants {
   public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";
   public static final String HOT_STORAGE_POLICY_NAME = "HOT";
   public static final String WARM_STORAGE_POLICY_NAME = "WARM";
+  public static final String EC_STORAGE_POLICY_NAME = "EC";
   public static final String COLD_STORAGE_POLICY_NAME = "COLD";
 
   public static final byte MEMORY_STORAGE_POLICY_ID = 15;
@@ -178,5 +179,6 @@ public class HdfsConstants {
   public static final byte ONESSD_STORAGE_POLICY_ID = 10;
   public static final byte HOT_STORAGE_POLICY_ID = 7;
   public static final byte WARM_STORAGE_POLICY_ID = 5;
+  public static final byte EC_STORAGE_POLICY_ID = 4;
   public static final byte COLD_STORAGE_POLICY_ID = 2;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d63b947/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
index 020cb5f..3d121cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
@@ -78,6 +78,11 @@ public class BlockStoragePolicySuite {
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE},
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE},
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE});
+final byte ecId = HdfsConstants.EC_STORAGE_POLICY_ID;
+policies[ecId] = new BlockStoragePolicy(ecId,
+HdfsConstants.EC_STORAGE_POLICY_NAME,
+new StorageType[]{StorageType.DISK}, StorageType.EMPTY_ARRAY,
+new StorageType[]{StorageType.ARCHIVE});
 final byte coldId = HdfsConstants.COLD_STORAGE_POLICY_ID;
 policies[coldId] = new BlockStoragePolicy(coldId,
 HdfsConstants.COLD_STORAGE_POLICY_NAME,
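For readers decoding the three StorageType arrays in the EC policy just added, here is my reading of the constructor arguments, restated with the roles spelled out — treat the parameter meanings as an assumption, not something this diff states:

    // Hypothetical restatement of the EC policy above:
    BlockStoragePolicy ec = new BlockStoragePolicy(
        HdfsConstants.EC_STORAGE_POLICY_ID,        // 4
        HdfsConstants.EC_STORAGE_POLICY_NAME,      // "EC"
        new StorageType[]{StorageType.DISK},       // preferred placement
        StorageType.EMPTY_ARRAY,                   // creation-time fallbacks: none
        new StorageType[]{StorageType.ARCHIVE});   // replication-time fallbacks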

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d63b947/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
--
diff --git 

[09/50] [abbrv] hadoop git commit: HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed by Gautam Gopalakrishnan.

2015-03-30 Thread zhz
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3d9132d4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3d9132d4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3d9132d4

Branch: refs/heads/HDFS-7285
Commit: 3d9132d434c39e9b6e142e5cf9fd7a8afa4190a6
Parents: e0ccea3
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 29 00:45:01 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 29 00:45:01 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../namenode/metrics/TestNameNodeMetrics.java   | 84 
 3 files changed, 88 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d9132d4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f7cc2bc..496db06 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -351,6 +351,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs.
+(Gautam Gopalakrishnan via harsh)
+
 HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown()
 (Rakesh R via vinayakumarb)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d9132d4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d0999b8..0e0f484 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4784,7 +4784,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   @Metric({"TransactionsSinceLastCheckpoint",
       "Number of transactions since last checkpoint"})
   public long getTransactionsSinceLastCheckpoint() {
-return getEditLog().getLastWrittenTxId() -
+return getFSImage().getLastAppliedOrWrittenTxId() -
 getFSImage().getStorage().getMostRecentCheckpointTxId();
   }
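Why the metric could go negative on a standby NameNode: the standby tails and applies edits but does not write them, so the edit log's last written txid can sit far below the last checkpoint txid. Reading getLastAppliedOrWrittenTxId() as effectively max(lastApplied, lastWritten) — my paraphrase, not shown in this diff — the fix plays out like this (numbers illustrative):

    public class SbnMetricSketch {
      public static void main(String[] args) {
        long lastCheckpointTxId = 1000; // most recent checkpoint
        long lastWrittenTxId = 0;       // a standby writes no edits
        long lastAppliedTxId = 1042;    // edits the standby has applied

        // Old formula: negative on a standby.
        System.out.println(lastWrittenTxId - lastCheckpointTxId);      // -1000
        // Fixed formula: meaningful on both active and standby.
        long lastAppliedOrWritten = Math.max(lastAppliedTxId, lastWrittenTxId);
        System.out.println(lastAppliedOrWritten - lastCheckpointTxId); // 42
      }
    }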
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3d9132d4/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
index 011db3c..64ea1e4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
@@ -22,12 +22,16 @@ import static 
org.apache.hadoop.test.MetricsAsserts.assertCounter;
 import static org.apache.hadoop.test.MetricsAsserts.assertGauge;
 import static org.apache.hadoop.test.MetricsAsserts.assertQuantileGauges;
 import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
+import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.io.DataInputStream;
 import java.io.IOException;
 import java.util.Random;
+import com.google.common.collect.ImmutableList;
+import com.google.common.io.Files;
 
+import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.conf.Configuration;
@@ -39,6 +43,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
@@ -47,7 +52,9 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import 

[19/50] [abbrv] hadoop git commit: HDFS-7339. Allocating and persisting block groups in NameNode. Contributed by Zhe Zhang

2015-03-30 Thread zhz
HDFS-7339. Allocating and persisting block groups in NameNode. Contributed by 
Zhe Zhang

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42e26e26
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42e26e26
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42e26e26

Branch: refs/heads/HDFS-7285
Commit: 42e26e2624b4f733df631c6f8878e8b160b49ed6
Parents: 1d63b94
Author: Zhe Zhang z...@apache.org
Authored: Fri Jan 30 16:16:26 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:23 2015 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  2 +
 .../hadoop/hdfs/protocol/HdfsConstants.java |  4 +
 .../server/blockmanagement/BlockIdManager.java  |  8 +-
 .../SequentialBlockGroupIdGenerator.java| 82 +++
 .../SequentialBlockIdGenerator.java |  6 +-
 .../hdfs/server/namenode/FSDirectory.java   |  8 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 34 +---
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 11 +++
 .../hdfs/server/namenode/TestAddBlockgroup.java | 84 
 9 files changed, 223 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42e26e26/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 610932a..eff457c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -221,6 +221,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final int DFS_NAMENODE_REPLICATION_INTERVAL_DEFAULT = 3;
   public static final String  DFS_NAMENODE_REPLICATION_MIN_KEY = "dfs.namenode.replication.min";
   public static final int DFS_NAMENODE_REPLICATION_MIN_DEFAULT = 1;
+  public static final String  DFS_NAMENODE_STRIPE_MIN_KEY = "dfs.namenode.stripe.min";
+  public static final int DFS_NAMENODE_STRIPE_MIN_DEFAULT = 1;
   public static final String  DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY = "dfs.namenode.replication.pending.timeout-sec";
   public static final int DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT = -1;
   public static final String  DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY = "dfs.namenode.replication.max-streams";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42e26e26/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 54c650b..de60b6e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -181,4 +181,8 @@ public class HdfsConstants {
   public static final byte WARM_STORAGE_POLICY_ID = 5;
   public static final byte EC_STORAGE_POLICY_ID = 4;
   public static final byte COLD_STORAGE_POLICY_ID = 2;
+
+  public static final byte NUM_DATA_BLOCKS = 3;
+  public static final byte NUM_PARITY_BLOCKS = 2;
+  public static final byte MAX_BLOCKS_IN_GROUP = 16;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/42e26e26/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 1c69203..c8b9d20 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -53,10 +53,12 @@ public 

[05/50] [abbrv] hadoop git commit: HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating an encryption zone. (awang via asuresh)

2015-03-30 Thread zhz
HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating an 
encryption zone. (awang via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e97f8e44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e97f8e44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e97f8e44

Branch: refs/heads/HDFS-7285
Commit: e97f8e44af9dffc42c030278425cffe0c9da723b
Parents: 3836ad6
Author: Arun Suresh asur...@apache.org
Authored: Fri Mar 27 19:23:45 2015 -0700
Committer: Arun Suresh asur...@apache.org
Committed: Fri Mar 27 19:23:45 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e97f8e44/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 72ea4fb..af1dd60 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -344,6 +344,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7990. IBR delete ack should not be delayed. (daryn via kihwal)
 
+HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating
+an encryption zone. (awang via asuresh)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e97f8e44/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 1226a26..d0999b8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7957,7 +7957,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
      throw new IOException("Key " + keyName + " doesn't exist.");
   }
   // If the provider supports pool for EDEKs, this will fill in the pool
-  generateEncryptedDataEncryptionKey(keyName);
+  provider.warmUpEncryptedKeys(keyName);
   createEncryptionZoneInt(src, metadata.getCipher(),
   keyName, logRetryCache);
 } catch (AccessControlException e) {



[20/50] [abbrv] hadoop git commit: Added the missed entry for commit of HADOOP-11541

2015-03-30 Thread zhz
Added the missed entry for commit of HADOOP-11541


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0614729e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0614729e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0614729e

Branch: refs/heads/HDFS-7285
Commit: 0614729ebfd1fc7d6373874e6a9e810e75643f58
Parents: 808cb1d
Author: drankye dran...@gmail.com
Authored: Mon Feb 9 22:04:08 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:24 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0614729e/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 2124800..9728f97 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -4,4 +4,7 @@
 (Kai Zheng via umamahesh)
 
 HADOOP-11534. Minor improvements for raw erasure coders
-( Kai Zheng via vinayakumarb )
\ No newline at end of file
+( Kai Zheng via vinayakumarb )
+
+HADOOP-11541. Raw XOR coder
+( Kai Zheng )



[25/50] [abbrv] hadoop git commit: HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by Jing Zhao.

2015-03-30 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/850a9ef8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
index be16a87..fa7f263 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.google.common.annotations.VisibleForTesting;
 
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
@@ -80,10 +81,10 @@ public class DatanodeStorageInfo {
   /**
* Iterates over the list of blocks belonging to the data-node.
*/
-  class BlockIterator implements Iterator<BlockInfoContiguous> {
-    private BlockInfoContiguous current;
+  class BlockIterator implements Iterator<BlockInfo> {
+    private BlockInfo current;
 
-BlockIterator(BlockInfoContiguous head) {
+BlockIterator(BlockInfo head) {
   this.current = head;
 }
 
@@ -91,8 +92,8 @@ public class DatanodeStorageInfo {
   return current != null;
 }
 
-public BlockInfoContiguous next() {
-  BlockInfoContiguous res = current;
+public BlockInfo next() {
+  BlockInfo res = current;
   current = 
current.getNext(current.findStorageInfo(DatanodeStorageInfo.this));
   return res;
 }
@@ -112,7 +113,7 @@ public class DatanodeStorageInfo {
   private volatile long remaining;
   private long blockPoolUsed;
 
-  private volatile BlockInfoContiguous blockList = null;
+  private volatile BlockInfo blockList = null;
   private int numBlocks = 0;
 
   // The ID of the last full block report which updated this storage.
@@ -226,7 +227,7 @@ public class DatanodeStorageInfo {
 return blockPoolUsed;
   }
 
-  public AddBlockResult addBlock(BlockInfoContiguous b) {
+  public AddBlockResult addBlock(BlockInfo b, Block reportedBlock) {
 // First check whether the block belongs to a different storage
 // on the same DN.
 AddBlockResult result = AddBlockResult.ADDED;
@@ -245,13 +246,21 @@ public class DatanodeStorageInfo {
 }
 
 // add to the head of the data-node list
-b.addStorage(this);
+b.addStorage(this, reportedBlock);
+insertToList(b);
+return result;
+  }
+
+  AddBlockResult addBlock(BlockInfoContiguous b) {
+return addBlock(b, b);
+  }
+
+  public void insertToList(BlockInfo b) {
 blockList = b.listInsert(blockList, this);
 numBlocks++;
-return result;
   }
 
-  public boolean removeBlock(BlockInfoContiguous b) {
+  public boolean removeBlock(BlockInfo b) {
 blockList = b.listRemove(blockList, this);
 if (b.removeStorage(this)) {
   numBlocks--;
@@ -265,16 +274,15 @@ public class DatanodeStorageInfo {
 return numBlocks;
   }
   
-  Iterator<BlockInfoContiguous> getBlockIterator() {
+  Iterator<BlockInfo> getBlockIterator() {
 return new BlockIterator(blockList);
-
   }
 
   /**
* Move block to the head of the list of blocks belonging to the data-node.
* @return the index of the head of the blockList
*/
-  int moveBlockToHead(BlockInfoContiguous b, int curIndex, int headIndex) {
+  int moveBlockToHead(BlockInfo b, int curIndex, int headIndex) {
 blockList = b.moveBlockToHead(blockList, this, curIndex, headIndex);
 return curIndex;
   }
@@ -284,7 +292,7 @@ public class DatanodeStorageInfo {
* @return the head of the blockList
*/
   @VisibleForTesting
-  BlockInfoContiguous getBlockListHeadForTesting(){
+  BlockInfo getBlockListHeadForTesting(){
 return blockList;
   }
 
@@ -371,6 +379,6 @@ public class DatanodeStorageInfo {
   }
 
   static enum AddBlockResult {
-ADDED, REPLACED, ALREADY_EXIST;
+ADDED, REPLACED, ALREADY_EXIST
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/850a9ef8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
new file mode 100644
index 000..f4600cb7
--- /dev/null
+++ 

[39/50] [abbrv] hadoop git commit: Updated CHANGES-HDFS-EC-7285.txt accordingly

2015-03-30 Thread zhz
Updated CHANGES-HDFS-EC-7285.txt accordingly


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b46621c9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b46621c9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b46621c9

Branch: refs/heads/HDFS-7285
Commit: b46621c95245d65a8790a99120df2d97e8bf0a84
Parents: 671db98
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:24:24 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:28 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b46621c9/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index a97dc34..e27ff5c 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -19,6 +19,9 @@
 ( Kai Zheng via vinayakumarb )
 
 HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng
-( Kai Zheng )
+( Kai Zheng )
+
+HADOOP-11706. Refine a little bit erasure coder API. Contributed by Kai 
Zheng
+( Kai Zheng )
 
 



[41/50] [abbrv] hadoop git commit: HDFS-7369. Erasure coding: distribute recovery work for striped blocks to DataNode. Contributed by Zhe Zhang.

2015-03-30 Thread zhz
HDFS-7369. Erasure coding: distribute recovery work for striped blocks to 
DataNode. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5eb2c926
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5eb2c926
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5eb2c926

Branch: refs/heads/HDFS-7285
Commit: 5eb2c92685602ecb57b521cf75d4821c0d632bd4
Parents: b46621c
Author: Zhe Zhang z...@apache.org
Authored: Wed Mar 18 15:52:36 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:12:45 2015 -0700

--
 .../server/blockmanagement/BlockCollection.java |   5 +
 .../server/blockmanagement/BlockManager.java| 294 +--
 .../blockmanagement/DatanodeDescriptor.java |  72 -
 .../server/blockmanagement/DatanodeManager.java |  20 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   9 +-
 .../server/protocol/BlockECRecoveryCommand.java |  63 
 .../hdfs/server/protocol/DatanodeProtocol.java  |   1 +
 .../blockmanagement/BlockManagerTestUtil.java   |   2 +-
 .../blockmanagement/TestBlockManager.java   |  22 +-
 .../TestRecoverStripedBlocks.java   | 107 +++
 10 files changed, 486 insertions(+), 109 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5eb2c926/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
index 440a081..50dd17b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
@@ -86,4 +86,9 @@ public interface BlockCollection {
* @return whether the block collection is under construction.
*/
   public boolean isUnderConstruction();
+
+  /**
+   * @return whether the block collection is in striping format
+   */
+  public boolean isStriped();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5eb2c926/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 1e8ce1f..091c85b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
@@ -531,9 +532,9 @@ public class BlockManager {
 
 NumberReplicas numReplicas = new NumberReplicas();
 // source node returned is not used
-chooseSourceDatanode(block, containingNodes,
+chooseSourceDatanodes(getStoredBlock(block), containingNodes,
 containingLiveReplicasNodes, numReplicas,
-UnderReplicatedBlocks.LEVEL);
+new LinkedList<Short>(), 1, UnderReplicatedBlocks.LEVEL);
 
 // containingLiveReplicasNodes can include READ_ONLY_SHARED replicas which are
 // not included in the numReplicas.liveReplicas() count
@@ -1327,15 +1328,15 @@ public class BlockManager {
   }
 
   /**
-   * Scan blocks in {@link #neededReplications} and assign replication
-   * work to data-nodes they belong to.
+   * Scan blocks in {@link #neededReplications} and assign recovery
+   * (replication or erasure coding) work to data-nodes they belong to.
*
* The number of blocks processed equals either twice the number of live
* data-nodes or the number of under-replicated blocks, whichever is less.
*
* @return number of blocks scheduled for replication during this iteration.
*/
-  int computeReplicationWork(int blocksToProcess) {
+  int computeBlockRecoveryWork(int blocksToProcess) {
 List<List<BlockInfo>> blocksToReplicate = null;
 namesystem.writeLock();
 try {
@@ -1345,30 +1346,32 @@ public 
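The rest of this hunk is truncated in the digest. As a rough, hypothetical outline of the scan the renamed method performs (helper names invented; the real BlockManager code differs):

    // Illustrative outline of computeBlockRecoveryWork(), not the committed code.
    int computeBlockRecoveryWork(int blocksToProcess) {
      int scheduled = 0;
      // one list per priority level, limited to blocksToProcess entries overall
      List<List<BlockInfo>> batches =
          neededReplications.chooseUnderReplicatedBlocks(blocksToProcess);
      for (List<BlockInfo> batch : batches) {
        for (BlockInfo block : batch) {
          // striped groups get erasure-coding reconstruction work,
          // contiguous blocks get plain replication work
          scheduled += scheduleRecoveryWork(block); // invented helper
        }
      }
      return scheduled;
    }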

[35/50] [abbrv] hadoop git commit: HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng

2015-03-30 Thread zhz
HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af9dc8e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af9dc8e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af9dc8e5

Branch: refs/heads/HDFS-7285
Commit: af9dc8e50f34c226ce03a7b9517f9af5352921b3
Parents: ec4f224
Author: drankye kai.zh...@intel.com
Authored: Thu Mar 12 23:35:22 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:27 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  4 +++
 .../erasurecode/coder/AbstractErasureCoder.java |  5 ++-
 .../rawcoder/AbstractRawErasureCoder.java   |  5 ++-
 .../hadoop/io/erasurecode/TestCoderBase.java|  6 
 .../erasurecode/coder/TestErasureCoderBase.java | 36 +---
 .../erasurecode/rawcoder/TestRawCoderBase.java  | 13 +--
 6 files changed, 60 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/af9dc8e5/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index c17a1bd..a97dc34 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -18,3 +18,7 @@
 HADOOP-11646. Erasure Coder API for encoding and decoding of block group
 ( Kai Zheng via vinayakumarb )
 
+HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng
+( Kai Zheng )
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/af9dc8e5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index f2cc041..8d3bc34 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -17,12 +17,15 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configured;
+
 /**
  * A common class of basic facilities to be shared by encoder and decoder
  *
  * It implements the {@link ErasureCoder} interface.
  */
-public abstract class AbstractErasureCoder implements ErasureCoder {
+public abstract class AbstractErasureCoder
+extends Configured implements ErasureCoder {
 
   private int numDataUnits;
   private int numParityUnits;
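Extending org.apache.hadoop.conf.Configured gives both abstract coders the standard setConf/getConf pair. A minimal usage sketch (MyRawEncoder and the key name are invented for illustration):

    Configuration conf = new Configuration();
    conf.set("io.erasurecode.example.key", "value"); // invented key
    AbstractRawErasureCoder coder = new MyRawEncoder();
    coder.setConf(conf);                      // inherited from Configured
    Configuration sameConf = coder.getConf(); // the coder can now read settings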

http://git-wip-us.apache.org/repos/asf/hadoop/blob/af9dc8e5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 74d2ab6..e6f3d92 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
@@ -17,12 +17,15 @@
  */
 package org.apache.hadoop.io.erasurecode.rawcoder;
 
+import org.apache.hadoop.conf.Configured;
+
 /**
  * A common class of basic facilities to be shared by encoder and decoder
  *
  * It implements the {@link RawErasureCoder} interface.
  */
-public abstract class AbstractRawErasureCoder implements RawErasureCoder {
+public abstract class AbstractRawErasureCoder
+extends Configured implements RawErasureCoder {
 
   private int numDataUnits;
   private int numParityUnits;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/af9dc8e5/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 3c4288c..194413a 100644
--- 

[46/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, HDFS-7435, HDFS-7930, HDFS-7960 (this commit is for HDFS-7960)

2015-03-30 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, 
HDFS-7435, HDFS-7930, HDFS-7960 (this commit is for HDFS-7960)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d0dad770
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d0dad770
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d0dad770

Branch: refs/heads/HDFS-7285
Commit: d0dad770197d4ff9352ce285daa121a22e7babaa
Parents: 37ecc11
Author: Zhe Zhang z...@apache.org
Authored: Tue Mar 24 11:39:36 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:08 2015 -0700

--
 .../apache/hadoop/hdfs/server/blockmanagement/BlockManager.java | 4 ++--
 .../blockmanagement/TestNameNodePrunesMissingStorages.java  | 5 -
 .../hadoop/hdfs/server/namenode/TestAddStripedBlocks.java   | 2 +-
 3 files changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0dad770/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index abe44f0..545e66b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1975,10 +1975,10 @@ public class BlockManager {
 "longer exists on the DataNode.",
   Long.toHexString(context.getReportId()), zombie.getStorageID());
 assert(namesystem.hasWriteLock());
-Iterator<BlockInfoContiguous> iter = zombie.getBlockIterator();
+Iterator<BlockInfo> iter = zombie.getBlockIterator();
 int prevBlocks = zombie.numBlocks();
 while (iter.hasNext()) {
-  BlockInfoContiguous block = iter.next();
+  BlockInfo block = iter.next();
   // We assume that a block can be on only one storage in a DataNode.
   // That's why we pass in the DatanodeDescriptor rather than the
   // DatanodeStorageInfo.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0dad770/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
index 4b97d01..e9329cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestNameNodePrunesMissingStorages.java
@@ -171,9 +171,12 @@ public class TestNameNodePrunesMissingStorages {
   String datanodeUuid;
   // Find the first storage which this block is in.
   try {
+BlockInfo storedBlock =
+cluster.getNamesystem().getBlockManager().
+getStoredBlock(block.getLocalBlock());
 Iterator<DatanodeStorageInfo> storageInfoIter =
 cluster.getNamesystem().getBlockManager().
-getStorages(block.getLocalBlock()).iterator();
+blocksMap.getStorages(storedBlock).iterator();
 assertTrue(storageInfoIter.hasNext());
 DatanodeStorageInfo info = storageInfoIter.next();
 storageIdToRemove = info.getStorageID();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d0dad770/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
index 05aec4b..7d7c81e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddStripedBlocks.java
@@ -269,7 +269,7 @@ public class TestAddStripedBlocks {
   StorageBlockReport[] reports = {new StorageBlockReport(storage,
   bll)};
   

[48/50] [abbrv] hadoop git commit: HDFS-7827. Erasure Coding: support striped blocks in non-protobuf fsimage. Contributed by Hui Zheng.

2015-03-30 Thread zhz
HDFS-7827. Erasure Coding: support striped blocks in non-protobuf fsimage. 
Contributed by Hui Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/37ecc116
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/37ecc116
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/37ecc116

Branch: refs/heads/HDFS-7285
Commit: 37ecc116e38c25e8a94ebd719f27235dee63ff9a
Parents: e238608
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 23 15:10:10 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:08 2015 -0700

--
 .../blockmanagement/BlockInfoStriped.java   |  11 +-
 .../hdfs/server/namenode/FSImageFormat.java |  62 ++--
 .../server/namenode/FSImageSerialization.java   |  78 +++---
 .../blockmanagement/TestBlockInfoStriped.java   |  34 +
 .../hdfs/server/namenode/TestFSImage.java   | 148 ++-
 5 files changed, 300 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/37ecc116/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index cef8318..30b5ee7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+import java.io.DataOutput;
+import java.io.IOException;
 
 /**
  * Subclass of {@link BlockInfo}, presenting a block group in erasure coding.
@@ -206,6 +208,13 @@ public class BlockInfoStriped extends BlockInfo {
 return num;
   }
 
+  @Override
+  public void write(DataOutput out) throws IOException {
+out.writeShort(dataBlockNum);
+out.writeShort(parityBlockNum);
+super.write(out);
+  }
+
   /**
* Convert a complete block to an under construction block.
* @return BlockInfoUnderConstruction -  an under construction block.
@@ -215,7 +224,7 @@ public class BlockInfoStriped extends BlockInfo {
 final BlockInfoStripedUnderConstruction ucBlock;
 if(isComplete()) {
   ucBlock = new BlockInfoStripedUnderConstruction(this, getDataBlockNum(),
-  getParityBlockNum(),  s, targets);
+  getParityBlockNum(), s, targets);
   ucBlock.setBlockCollection(getBlockCollection());
 } else {
   // the block is already under construction
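The new write() emits the stripe geometry ahead of the base Block fields, so a legacy-fsimage loader must consume the two shorts first. A hypothetical reader mirroring that order (constructor shape assumed from the fields above; not the actual FSImageFormat code):

    static BlockInfoStriped readStriped(DataInput in) throws IOException {
      short dataBlockNum = in.readShort();   // written first
      short parityBlockNum = in.readShort(); // written second
      Block base = new Block();
      base.readFields(in);                   // id, numBytes, generation stamp
      return new BlockInfoStriped(base, dataBlockNum, parityBlockNum);
    }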

http://git-wip-us.apache.org/repos/asf/hadoop/blob/37ecc116/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
index 2e6e741..ad96863 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
@@ -47,13 +47,16 @@ import org.apache.hadoop.fs.PathIsNotDirectoryException;
 import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LayoutFlags;
 import org.apache.hadoop.hdfs.protocol.LayoutVersion;
 import org.apache.hadoop.hdfs.protocol.LayoutVersion.Feature;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.common.InconsistentFSStateException;
@@ 

[42/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, HDFS-7435 and HDFS-7930 (this commit is for HDFS-7930 only)

2015-03-30 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, 
HDFS-7435 and HDFS-7930 (this commit is for HDFS-7930 only)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9a0f626f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9a0f626f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9a0f626f

Branch: refs/heads/HDFS-7285
Commit: 9a0f626f74345ffa4622db558edea6244b663109
Parents: 26b5a06
Author: Zhe Zhang z...@apache.org
Authored: Mon Mar 23 11:25:40 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:07 2015 -0700

--
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java  | 7 ---
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java  | 7 ---
 .../org/apache/hadoop/hdfs/server/namenode/INodeFile.java | 2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a0f626f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 091c85b..7dfe0a4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2114,17 +2114,18 @@ public class BlockManager {
* Mark block replicas as corrupt except those on the storages in 
* newStorages list.
*/
-  public void markBlockReplicasAsCorrupt(BlockInfoContiguous block, 
+  public void markBlockReplicasAsCorrupt(Block oldBlock,
+  BlockInfo block,
   long oldGenerationStamp, long oldNumBytes, 
   DatanodeStorageInfo[] newStorages) throws IOException {
 assert namesystem.hasWriteLock();
 BlockToMarkCorrupt b = null;
 if (block.getGenerationStamp() != oldGenerationStamp) {
-  b = new BlockToMarkCorrupt(block, oldGenerationStamp,
+  b = new BlockToMarkCorrupt(oldBlock, block, oldGenerationStamp,
  "genstamp does not match " + oldGenerationStamp
  + " : " + block.getGenerationStamp(), Reason.GENSTAMP_MISMATCH);
 } else if (block.getNumBytes() != oldNumBytes) {
-  b = new BlockToMarkCorrupt(block,
+  b = new BlockToMarkCorrupt(oldBlock, block,
  "length does not match " + oldNumBytes
  + " : " + block.getNumBytes(), Reason.SIZE_MISMATCH);
 } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a0f626f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 4e6b5e3..4519cee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2795,7 +2795,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   /** Compute quota change for converting a complete block to a UC block */
   private QuotaCounts computeQuotaDeltaForUCBlock(INodeFile file) {
 final QuotaCounts delta = new QuotaCounts.Builder().build();
-final BlockInfoContiguous lastBlock = file.getLastBlock();
+final BlockInfo lastBlock = file.getLastBlock();
 if (lastBlock != null) {
   final long diff = file.getPreferredBlockSize() - lastBlock.getNumBytes();
   final short repl = file.getBlockReplication();
@@ -4390,8 +4390,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 } else {
   iFile.convertLastBlockToUC(storedBlock, trimmedStorageInfos);
   if (closeFile) {
-blockManager.markBlockReplicasAsCorrupt(storedBlock,
-oldGenerationStamp, oldNumBytes, trimmedStorageInfos);
+blockManager.markBlockReplicasAsCorrupt(oldBlock.getLocalBlock(),
+storedBlock, oldGenerationStamp, oldNumBytes,
+trimmedStorageInfos);
   }
 }
   }
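The extra parameter matters for striping: the block a DataNode reports carries an internal-block ID (the group ID plus a cell index), which no longer equals the ID of the stored BlockInfo for the group, so the DataNode-visible oldBlock has to travel alongside the stored block when marking replicas corrupt.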

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9a0f626f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 

[47/50] [abbrv] hadoop git commit: HDFS-7716. Add a test for BlockGroup support in FSImage. Contributed by Takuya Fukudome

2015-03-30 Thread zhz
HDFS-7716. Add a test for BlockGroup support in FSImage.  Contributed by Takuya 
Fukudome


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8d49fc33
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8d49fc33
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8d49fc33

Branch: refs/heads/HDFS-7285
Commit: 8d49fc33900f21e499432edda500fad046e30023
Parents: d0dad77
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Wed Mar 25 19:01:03 2015 +0900
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:13:08 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  6 ++-
 .../hdfs/server/namenode/TestFSImage.java   | 53 
 2 files changed, 58 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d49fc33/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
index 2ef8527..21e4c03 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -1,4 +1,8 @@
   BREAKDOWN OF HDFS-7285 SUBTASKS AND RELATED JIRAS
 
 HDFS-7347. Configurable erasure coding policy for individual files and
-directories ( Zhe Zhang via vinayakumarb )
\ No newline at end of file
+directories ( Zhe Zhang via vinayakumarb )
+
+HDFS-7716. Add a test for BlockGroup support in FSImage.
+(Takuya Fukudome via szetszwo)
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8d49fc33/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
index 71dc978..440f5cd 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdfs.server.namenode;
 
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 
 import java.io.File;
@@ -31,7 +32,12 @@ import java.io.ByteArrayInputStream;
 import java.io.IOException;
 import java.util.EnumSet;
 
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.junit.Assert;
 
@@ -46,6 +52,7 @@ import org.apache.hadoop.fs.FileUtil;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSOutputStream;
+import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
@@ -378,4 +385,50 @@ public class TestFSImage {
   FileUtil.fullyDelete(dfsDir);
 }
   }
+
+  /**
+   * Ensure that FSImage supports BlockGroup.
+   */
+  @Test
+  public void testSupportBlockGroup() throws IOException {
+final short GROUP_SIZE = HdfsConstants.NUM_DATA_BLOCKS +
+HdfsConstants.NUM_PARITY_BLOCKS;
+final int BLOCK_SIZE = 8 * 1024 * 1024;
+Configuration conf = new HdfsConfiguration();
+conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
+MiniDFSCluster cluster = null;
+try {
+  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(GROUP_SIZE)
+  .build();
+  cluster.waitActive();
+  DistributedFileSystem fs = cluster.getFileSystem();
+  fs.setStoragePolicy(new Path("/"), HdfsConstants.EC_STORAGE_POLICY_NAME);
+  Path file = new Path("/striped");
+  FSDataOutputStream out = fs.create(file);
+  byte[] bytes = DFSTestUtil.generateSequentialBytes(0, BLOCK_SIZE);
+  out.write(bytes);
+  out.close();
+
+  fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+  fs.saveNamespace();
+  fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
+
+  cluster.restartNameNodes();
+  fs = cluster.getFileSystem();
+  

[31/50] [abbrv] hadoop git commit: HADOOP-11646. Erasure Coder API for encoding and decoding of block group ( Contributed by Kai Zheng )

2015-03-30 Thread zhz
HADOOP-11646. Erasure Coder API for encoding and decoding of block group ( 
Contributed by Kai Zheng )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c6ed987
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c6ed987
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c6ed987

Branch: refs/heads/HDFS-7285
Commit: 0c6ed987fbb59fe210fae7dd2144ba0c16ace517
Parents: daa78e3
Author: Vinayakumar B vinayakum...@apache.org
Authored: Mon Mar 9 12:32:26 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:26 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   2 +
 .../apache/hadoop/io/erasurecode/ECBlock.java   |  80 ++
 .../hadoop/io/erasurecode/ECBlockGroup.java |  82 ++
 .../erasurecode/coder/AbstractErasureCoder.java |  63 +
 .../coder/AbstractErasureCodingStep.java|  59 
 .../coder/AbstractErasureDecoder.java   | 152 +++
 .../coder/AbstractErasureEncoder.java   |  50 
 .../io/erasurecode/coder/ErasureCoder.java  |  77 ++
 .../io/erasurecode/coder/ErasureCodingStep.java |  55 
 .../io/erasurecode/coder/ErasureDecoder.java|  41 +++
 .../erasurecode/coder/ErasureDecodingStep.java  |  52 
 .../io/erasurecode/coder/ErasureEncoder.java|  39 +++
 .../erasurecode/coder/ErasureEncodingStep.java  |  49 
 .../io/erasurecode/coder/XorErasureDecoder.java |  78 ++
 .../io/erasurecode/coder/XorErasureEncoder.java |  45 
 .../erasurecode/rawcoder/RawErasureCoder.java   |   2 +-
 .../erasurecode/coder/TestErasureCoderBase.java | 266 +++
 .../io/erasurecode/coder/TestXorCoder.java  |  50 
 18 files changed, 1241 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c6ed987/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index ee42c84..c17a1bd 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -15,4 +15,6 @@
 HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai Zheng
 ( Kai Zheng )
 
+HADOOP-11646. Erasure Coder API for encoding and decoding of block group
+( Kai Zheng via vinayakumarb )
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c6ed987/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
new file mode 100644
index 000..956954a
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
@@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+/**
+ * A wrapper of block level data source/output that {@link ECChunk}s can be
+ * extracted from. For HDFS, it can be an HDFS block (250MB). Note it only cares
+ * about erasure coding specific logic thus avoids coupling with any HDFS block
+ * details. We can have something like HdfsBlock extend it.
+ */
+public class ECBlock {
+
+  private boolean isParity;
+  private boolean isErased;
+
+  /**
+   * A default constructor. isParity and isErased are false by default.
+   */
+  public ECBlock() {
+this(false, false);
+  }
+
+  /**
+   * A constructor specifying isParity and isErased.
+   * @param isParity
+   * @param isErased
+   */
+  public ECBlock(boolean isParity, boolean isErased) {
+this.isParity = isParity;
+this.isErased = isErased;
+  }
+
+  /**
+   * Set true if it's for a parity block.
+   * @param isParity
+   */
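A trivial usage sketch of the two constructors (illustration only):

    ECBlock dataBlock = new ECBlock();              // isParity=false, isErased=false
    ECBlock erasedParity = new ECBlock(true, true); // a parity block that was lost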

[36/50] [abbrv] hadoop git commit: Fixed a compiling issue introduced by HADOOP-11705.

2015-03-30 Thread zhz
Fixed a compiling issue introduced by HADOOP-11705.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/80fe23f6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/80fe23f6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/80fe23f6

Branch: refs/heads/HDFS-7285
Commit: 80fe23f61c4cc7e5332752185d558f4a685e
Parents: af9dc8e
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 13 00:13:06 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 30 10:11:27 2015 -0700

--
 .../apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/80fe23f6/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
index 36e061a..d911db9 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
@@ -162,7 +162,7 @@ public abstract class TestErasureCoderBase extends 
TestCoderBase {
 }
 
 encoder.initialize(numDataUnits, numParityUnits, chunkSize);
-encoder.setConf(conf);
+((AbstractErasureCoder)encoder).setConf(conf);
 return encoder;
   }
 
@@ -179,7 +179,7 @@ public abstract class TestErasureCoderBase extends 
TestCoderBase {
 }
 
 decoder.initialize(numDataUnits, numParityUnits, chunkSize);
-decoder.setConf(conf);
+((AbstractErasureCoder)decoder).setConf(conf);
 return decoder;
   }
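The casts are needed because setConf() is defined on the Configured base class, not on the coder interfaces. An equivalent, slightly more defensive pattern (a sketch, not the committed fix; Configurable is org.apache.hadoop.conf.Configurable):

    // Only configure coders that actually implement Configurable.
    if (encoder instanceof Configurable) {
      ((Configurable) encoder).setConf(conf);
    }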
 



hadoop git commit: HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security.authentication. Contributed by Li Lu.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 02ed22cd2 - a84fdd565


HADOOP-11761. Fix findbugs warnings in 
org.apache.hadoop.security.authentication. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a84fdd56
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a84fdd56
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a84fdd56

Branch: refs/heads/branch-2
Commit: a84fdd5650350f561c0deeb085e1783c59075a97
Parents: 02ed22c
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:08:54 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:09:08 2015 -0700

--
 .../hadoop-auth/dev-support/findbugsExcludeFile.xml   | 10 ++
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a84fdd56/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml 
b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
index 1ecf37a..ddda63c 100644
--- a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
@@ -34,5 +34,15 @@
     <Method name="getCurrentSecret" />
     <Bug pattern="EI_EXPOSE_REP" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getAllSecrets" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getCurrentSecret" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
 
 </FindBugsFilter>
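For reference, EI_EXPOSE_REP is the FindBugs pattern for getters that return a reference to an internal mutable object (here the provider's secret byte arrays); within one Match element the Class, Method, and Bug clauses are all ANDed, so each exclusion suppresses exactly one method's warning.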

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a84fdd56/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 31c5556..80b8459 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -759,6 +759,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11639. Clean up Windows native code compilation warnings related to
 Windows Secure Container Executor. (Remus Rusanu via cnauroth)
 
+HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security
+.authentication. (Li Lu via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security.authentication. Contributed by Li Lu.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 1feb9569f - 82fa3adfd


HADOOP-11761. Fix findbugs warnings in 
org.apache.hadoop.security.authentication. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82fa3adf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82fa3adf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82fa3adf

Branch: refs/heads/trunk
Commit: 82fa3adfd889700c05464345bee91f861d78ba4f
Parents: 1feb956
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:08:54 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:08:54 2015 -0700

--
 .../hadoop-auth/dev-support/findbugsExcludeFile.xml   | 10 ++
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/82fa3adf/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml 
b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
index 1ecf37a..ddda63c 100644
--- a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
@@ -34,5 +34,15 @@
     <Method name="getCurrentSecret" />
     <Bug pattern="EI_EXPOSE_REP" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getAllSecrets" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getCurrentSecret" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
 
 </FindBugsFilter>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/82fa3adf/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8643901..8b59972 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1175,6 +1175,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11639. Clean up Windows native code compilation warnings related to
 Windows Secure Container Executor. (Remus Rusanu via cnauroth)
 
+HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security
+.authentication. (Li Lu via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security.authentication. Contributed by Li Lu.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 fa30500d7 - 3ec1ad900


HADOOP-11761. Fix findbugs warnings in 
org.apache.hadoop.security.authentication. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3ec1ad90
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3ec1ad90
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3ec1ad90

Branch: refs/heads/branch-2.7
Commit: 3ec1ad9001697bde2c1b2e6f551cbd7ad4551bf6
Parents: fa30500
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:08:54 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:09:25 2015 -0700

--
 .../hadoop-auth/dev-support/findbugsExcludeFile.xml   | 10 ++
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ec1ad90/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml 
b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
index 1ecf37a..ddda63c 100644
--- a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
@@ -34,5 +34,15 @@
     <Method name="getCurrentSecret" />
     <Bug pattern="EI_EXPOSE_REP" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getAllSecrets" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
+  <Match>
+    <Class name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" />
+    <Method name="getCurrentSecret" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
 
 </FindBugsFilter>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ec1ad90/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8dec5d5..d85725c 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -707,6 +707,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11639. Clean up Windows native code compilation warnings related to
 Windows Secure Container Executor. (Remus Rusanu via cnauroth)
 
+HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security
+.authentication. (Li Lu via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



hadoop git commit: MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so that user's client can load the conf files directly. Contributed by Robert Kanter.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 35af6f180 - fa30500d7


MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so 
that user's client can load the conf files directly. Contributed by Robert 
Kanter.

(cherry picked from commit 5358b8316a7108b32c9900fb0d01ca0fe961)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa30500d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa30500d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa30500d

Branch: refs/heads/branch-2.7
Commit: fa30500d7277fcb1ac194bc8a95c9f9fa8a3d164
Parents: 35af6f1
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:27:19 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:29:05 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  4 ++
 .../v2/jobhistory/JobHistoryUtils.java  |  4 +-
 .../mapreduce/v2/hs/HistoryFileManager.java | 31 -
 .../mapreduce/v2/hs/TestHistoryFileManager.java | 73 
 4 files changed, 108 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa30500d/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index da29b4e..a9b739a 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -196,6 +196,10 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6285. ClientServiceDelegate should not retry upon
 AuthenticationException. (Jonathan Eagles via ozawa)
 
+MAPREDUCE-6288. Changed permissions on JobHistory server's done directory
+so that user's client can load the conf files directly. (Robert Kanter via
+vinodkv)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa30500d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
index e279c03..8966e4e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
@@ -72,7 +72,7 @@ public class JobHistoryUtils {
* Permissions for the history done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_PERMISSION =
-FsPermission.createImmutable((short) 0770); 
+FsPermission.createImmutable((short) 0771);
 
   public static final FsPermission HISTORY_DONE_FILE_PERMISSION =
 FsPermission.createImmutable((short) 0770); // rwx------
@@ -81,7 +81,7 @@ public class JobHistoryUtils {
* Umask for the done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_UMASK = FsPermission
-  .createImmutable((short) (0770 ^ 0777));
+  .createImmutable((short) (0771 ^ 0777));
 
   
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa30500d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
index 6b9f146..77b3867 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
@@ -571,8 +571,10 @@ public class HistoryFileManager extends AbstractService {
   new Path(doneDirPrefix));
   doneDirFc = FileContext.getFileContext(doneDirPrefixPath.toUri(), conf);
   doneDirFc.setUMask(JobHistoryUtils.HISTORY_DONE_DIR_UMASK);
-  mkdir(doneDirFc, 

hadoop git commit: MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so that user's client can load the conf files directly. Contributed by Robert Kanter.

2015-03-30 Thread vinodkv
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c5bc48946 - cc130a033


MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so 
that user's client can load the conf files directly. Contributed by Robert 
Kanter.

(cherry picked from commit 5358b8316a7108b32c9900fb0d01ca0fe961)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cc130a03
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cc130a03
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cc130a03

Branch: refs/heads/branch-2
Commit: cc130a033ad01a75eaeffa586244fc5aee153f3d
Parents: c5bc489
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:27:19 2015 -0700
Committer: Vinod Kumar Vavilapalli vino...@apache.org
Committed: Mon Mar 30 10:28:28 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  4 ++
 .../v2/jobhistory/JobHistoryUtils.java  |  4 +-
 .../mapreduce/v2/hs/HistoryFileManager.java | 31 -
 .../mapreduce/v2/hs/TestHistoryFileManager.java | 73 
 4 files changed, 108 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc130a03/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 3efe73a..6b4c9c3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -262,6 +262,10 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6285. ClientServiceDelegate should not retry upon
 AuthenticationException. (Jonathan Eagles via ozawa)
 
+MAPREDUCE-6288. Changed permissions on JobHistory server's done directory
+so that user's client can load the conf files directly. (Robert Kanter via
+vinodkv)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc130a03/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
index e279c03..8966e4e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
@@ -72,7 +72,7 @@ public class JobHistoryUtils {
* Permissions for the history done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_PERMISSION =
-FsPermission.createImmutable((short) 0770); 
+FsPermission.createImmutable((short) 0771);
 
   public static final FsPermission HISTORY_DONE_FILE_PERMISSION =
 FsPermission.createImmutable((short) 0770); // rwx------
@@ -81,7 +81,7 @@ public class JobHistoryUtils {
* Umask for the done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_UMASK = FsPermission
-  .createImmutable((short) (0770 ^ 0777));
+  .createImmutable((short) (0771 ^ 0777));
 
   
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cc130a03/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
index 6b9f146..77b3867 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
@@ -571,8 +571,10 @@ public class HistoryFileManager extends AbstractService {
   new Path(doneDirPrefix));
   doneDirFc = FileContext.getFileContext(doneDirPrefixPath.toUri(), conf);
   doneDirFc.setUMask(JobHistoryUtils.HISTORY_DONE_DIR_UMASK);
-  mkdir(doneDirFc, 
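The one-bit change is easiest to read in octal. A small sketch of the arithmetic behind the umask constants (values copied from the patch):

    // 0770 ^ 0777 = 0007: the old umask stripped everything from "other".
    // 0771 ^ 0777 = 0006: the new umask leaves the execute bit for "other",
    // so non-owning users can traverse (but not list) the done directory tree,
    // which is what lets a user's client open its job's conf files directly.
    int oldUmask = 0770 ^ 0777; // 07
    int newUmask = 0771 ^ 0777; // 06
    System.out.printf("old umask=%o, new umask=%o%n", oldUmask, newUmask);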

hadoop git commit: HDFS-7907. Erasure Coding: track invalid, corrupt, and under-recovery striped blocks in NameNode. Contributed by Jing Zhao.

2015-03-30 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 a6543ac97 - a1075153e


HDFS-7907. Erasure Coding: track invalid, corrupt, and under-recovery striped 
blocks in NameNode. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a1075153
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a1075153
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a1075153

Branch: refs/heads/HDFS-7285
Commit: a1075153e4367fc8dc134f25dd38ab1d750289ba
Parents: a6543ac
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 30 11:25:09 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon Mar 30 11:25:09 2015 -0700

--
 .../blockmanagement/BlockInfoStriped.java   |  25 ++-
 .../server/blockmanagement/BlockManager.java| 203 ++-
 .../blockmanagement/DecommissionManager.java|  86 
 .../hdfs/server/namenode/FSNamesystem.java  |   8 +-
 .../server/blockmanagement/TestNodeCount.java   |   2 +-
 .../TestOverReplicatedBlocks.java   |   4 +-
 6 files changed, 172 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1075153/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 30b5ee7..4a85efb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -18,11 +18,13 @@
 package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+
 import java.io.DataOutput;
 import java.io.IOException;
 
+import static org.apache.hadoop.hdfs.protocol.HdfsConstants.BLOCK_STRIPED_CHUNK_SIZE;
+
 /**
  * Subclass of {@link BlockInfo}, presenting a block group in erasure coding.
  *
@@ -37,7 +39,6 @@ import java.io.IOException;
  * array to record the block index for each triplet.
  */
 public class BlockInfoStriped extends BlockInfo {
-  private final int   chunkSize = HdfsConstants.BLOCK_STRIPED_CHUNK_SIZE;
   private final short dataBlockNum;
   private final short parityBlockNum;
   /**
@@ -132,6 +133,22 @@ public class BlockInfoStriped extends BlockInfo {
 return i == -1 ? -1 : indices[i];
   }
 
+  /**
+   * Identify the block stored in the given datanode storage. Note that
+   * the returned block has the same block ID as the one seen/reported by
+   * the DataNode.
+   */
+  Block getBlockOnStorage(DatanodeStorageInfo storage) {
+int index = getStorageBlockIndex(storage);
+    if (index < 0) {
+  return null;
+} else {
+  Block block = new Block(this);
+  block.setBlockId(this.getBlockId() + index);
+  return block;
+}
+  }
+
   @Override
   boolean removeStorage(DatanodeStorageInfo storage) {
 int dnIndex = findStorageInfoFromEnd(storage);
@@ -186,8 +203,8 @@ public class BlockInfoStriped extends BlockInfo {
 // For striped blocks, the total usage of this block group should be
 // the total of the data blocks and the parity blocks, because
 // `getNumBytes` covers only the actual data size.
-return ((getNumBytes() - 1) / (dataBlockNum * chunkSize) + 1)
-* chunkSize * parityBlockNum + getNumBytes();
+return ((getNumBytes() - 1) / (dataBlockNum * BLOCK_STRIPED_CHUNK_SIZE) + 1)
+* BLOCK_STRIPED_CHUNK_SIZE * parityBlockNum + getNumBytes();
   }
 
   @Override
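Two arithmetic details above are worth spelling out. The reported block ID of an internal block is the group's block ID plus the storage's block index, which is how getBlockOnStorage() reconstructs the DataNode-visible block. And spaceConsumed() rounds the data length up to whole stripes before charging parity; a worked example with illustrative parameters (6 data + 3 parity, 64 KiB cell, not the real constants):

    // Worked example of the spaceConsumed() formula; all parameters illustrative.
    long numBytes = 512_000;                 // bytes of actual data in the group
    int dataBlockNum = 6, parityBlockNum = 3;
    long cell = 64 * 1024;                   // stands in for BLOCK_STRIPED_CHUNK_SIZE
    long stripes = (numBytes - 1) / (dataBlockNum * cell) + 1; // ceil = 2 stripes
    long paritySpace = stripes * cell * parityBlockNum;        // 393,216 bytes
    long total = numBytes + paritySpace;                       // 905,216 bytes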

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a1075153/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 545e66b..7e8a88c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -177,7 +177,11 @@ public class BlockManager {
   /** Store blocks -> datanodedescriptor(s) map of corrupt 

[2/2] hadoop git commit: YARN-2495. Allow admin specify labels from each NM (Distributed configuration for node label). (Naganarasimha G R via wangda)

2015-03-30 Thread wangda
YARN-2495. Allow admin specify labels from each NM (Distributed configuration 
for node label). (Naganarasimha G R via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2a945d24
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2a945d24
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2a945d24

Branch: refs/heads/trunk
Commit: 2a945d24f7de1a7ae6e7bd6636188ce3b55c7f52
Parents: b804571
Author: Wangda Tan wan...@apache.org
Authored: Mon Mar 30 12:04:51 2015 -0700
Committer: Wangda Tan wan...@apache.org
Committed: Mon Mar 30 12:05:21 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  12 +
 .../src/main/proto/yarn_protos.proto|   4 +
 .../yarn/client/TestResourceTrackerOnHA.java|   2 +-
 .../protocolrecords/NodeHeartbeatRequest.java   |   8 +-
 .../protocolrecords/NodeHeartbeatResponse.java  |   3 +
 .../RegisterNodeManagerRequest.java |  12 +
 .../RegisterNodeManagerResponse.java|   3 +
 .../impl/pb/NodeHeartbeatRequestPBImpl.java |  37 ++
 .../impl/pb/NodeHeartbeatResponsePBImpl.java|  13 +
 .../pb/RegisterNodeManagerRequestPBImpl.java|  48 ++-
 .../pb/RegisterNodeManagerResponsePBImpl.java   |  13 +
 .../yarn_server_common_service_protos.proto |   4 +
 .../hadoop/yarn/TestYarnServerApiClasses.java   |  94 
 .../yarn/server/nodemanager/NodeManager.java|  34 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  | 114 -
 .../nodelabels/NodeLabelsProvider.java  |  43 ++
 .../nodemanager/TestNodeStatusUpdater.java  |   2 +-
 .../TestNodeStatusUpdaterForLabels.java | 281 
 .../resourcemanager/ResourceTrackerService.java |  80 +++-
 .../TestResourceTrackerService.java | 430 ++-
 21 files changed, 1199 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a945d24/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b38c9ac..f72d06d 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -83,6 +83,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3288. Document and fix indentation in the DockerContainerExecutor code
 
+YARN-2495. Allow admin specify labels from each NM (Distributed 
+configuration for node label). (Naganarasimha G R via wangda)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a945d24/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index a527af4..13e9a10 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1719,6 +1719,18 @@ public class YarnConfiguration extends Configuration {
  public static final String NODE_LABELS_ENABLED = NODE_LABELS_PREFIX
  + "enabled";
  public static final boolean DEFAULT_NODE_LABELS_ENABLED = false;
+  
+  public static final String NODELABEL_CONFIGURATION_TYPE =
+  NODE_LABELS_PREFIX + "configuration-type";
+  
+  public static final String CENTALIZED_NODELABEL_CONFIGURATION_TYPE =
+  "centralized";
+  
+  public static final String DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE =
+  "distributed";
+  
+  public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
+  CENTALIZED_NODELABEL_CONFIGURATION_TYPE;
 
   public YarnConfiguration() {
 super();
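
[Editor's aside: the new keys compose with the existing NODE_LABELS_PREFIX, so a
component can branch on the admin's choice as in the hedged sketch below. The
helper class is hypothetical and not part of YARN-2495 itself.]

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical helper, for illustration only; it merely reads the keys
// added in the hunk above.
public final class NodeLabelConfigTypeCheck {
  // True when the admin opted into NM-reported (distributed) labels.
  public static boolean isDistributed(Configuration conf) {
    String type = conf.get(
        YarnConfiguration.NODELABEL_CONFIGURATION_TYPE,
        YarnConfiguration.DEFAULT_NODELABEL_CONFIGURATION_TYPE);
    return YarnConfiguration.DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE
        .equals(type);
  }
}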

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a945d24/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
index 194be82..b396f4d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
@@ -239,6 +239,10 @@ message NodeIdToLabelsProto {
   repeated string nodeLabels = 2;
 }
 
+message StringArrayProto {
+  

[1/2] hadoop git commit: YARN-2495. Allow admin specify labels from each NM (Distributed configuration for node label). (Naganarasimha G R via wangda)

2015-03-30 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/trunk b80457158 -> 2a945d24f


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2a945d24/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
index a904dc0..18d7df4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
@@ -27,8 +27,10 @@ import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.IOUtils;
@@ -49,11 +51,16 @@ import org.apache.hadoop.yarn.event.Dispatcher;
 import org.apache.hadoop.yarn.event.DrainDispatcher;
 import org.apache.hadoop.yarn.event.Event;
 import org.apache.hadoop.yarn.event.EventHandler;
+import org.apache.hadoop.yarn.nodelabels.NodeLabelTestBase;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
+import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatRequest;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
 import 
org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequest;
 import 
org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerResponse;
 import org.apache.hadoop.yarn.server.api.records.NodeAction;
+import org.apache.hadoop.yarn.server.api.records.NodeStatus;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NullRMNodeLabelsManager;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
@@ -66,7 +73,7 @@ import org.junit.After;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestResourceTrackerService {
+public class TestResourceTrackerService extends NodeLabelTestBase {
 
  private final static File TEMP_DIR = new File(System.getProperty(
  "test.build.data", "/tmp"), "decommision");
@@ -305,8 +312,425 @@ public class TestResourceTrackerService {
 req.setHttpPort(1234);
 req.setNMVersion(YarnVersionInfo.getVersion());
 // trying to register a invalid node.
-RegisterNodeManagerResponse response = 
resourceTrackerService.registerNodeManager(req);
-Assert.assertEquals(NodeAction.NORMAL,response.getNodeAction());
+RegisterNodeManagerResponse response =
+resourceTrackerService.registerNodeManager(req);
+Assert.assertEquals(NodeAction.NORMAL, response.getNodeAction());
+  }
+
+  @Test
+  public void testNodeRegistrationWithLabels() throws Exception {
+writeToHostsFile("host2");
+Configuration conf = new Configuration();
+conf.set(YarnConfiguration.RM_NODES_INCLUDE_FILE_PATH,
+hostFile.getAbsolutePath());
+conf.set(YarnConfiguration.NODELABEL_CONFIGURATION_TYPE,
+YarnConfiguration.DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE);
+
+final RMNodeLabelsManager nodeLabelsMgr = new NullRMNodeLabelsManager();
+
+rm = new MockRM(conf) {
+  @Override
+  protected RMNodeLabelsManager createNodeLabelManager() {
+return nodeLabelsMgr;
+  }
+};
+rm.start();
+
+try {
+  nodeLabelsMgr.addToCluserNodeLabels(toSet("A", "B", "C"));
+} catch (IOException e) {
+  Assert.fail("Caught Exception while intializing");
+  e.printStackTrace();
+}
+
+ResourceTrackerService resourceTrackerService =
+rm.getResourceTrackerService();
+RegisterNodeManagerRequest registerReq =
+Records.newRecord(RegisterNodeManagerRequest.class);
+NodeId nodeId = NodeId.newInstance("host2", 1234);
+Resource capability = BuilderUtils.newResource(1024, 1);
+registerReq.setResource(capability);
+registerReq.setNodeId(nodeId);
+registerReq.setHttpPort(1234);
+registerReq.setNMVersion(YarnVersionInfo.getVersion());
+registerReq.setNodeLabels(toSet("A"));
+RegisterNodeManagerResponse response =
+

hadoop git commit: HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. Contributed by Anu Engineer and Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 530c2ef91 -> 86c0c6b04


HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. 
Contributed by Anu Engineer and Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/86c0c6b0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/86c0c6b0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/86c0c6b0

Branch: refs/heads/branch-2.7
Commit: 86c0c6b0446a29e2551a2b207ddfc25051a0cd47
Parents: 530c2ef
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:59:21 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 12:16:25 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |  2 +-
 .../hdfs/protocol/datatransfer/PipelineAck.java | 31 ---
 .../hdfs/server/datanode/BlockReceiver.java |  2 +-
 .../src/main/proto/datatransfer.proto   |  3 +-
 .../hadoop/hdfs/TestDataTransferProtocol.java   | 32 
 6 files changed, 59 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c0c6b0/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b96b24e..90efc99 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -941,6 +941,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7963. Fix expected tracing spans in TestTracing along with HDFS-7054.
 (Masatake Iwasaki via kihwal)
 
+HDFS-7748. Separate ECN flags from the Status in the 
DataTransferPipelineAck.
+(Anu Engineer and Haohui Mai via wheat9)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c0c6b0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index a1b220a..f105530 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -747,7 +747,7 @@ public class DFSOutputStream extends FSOutputSummer
 // processes response status from datanodes.
 for (int i = ack.getNumOfReplies()-1; i >= 0 && 
dfsClient.clientRunning; i--) {
   final Status reply = PipelineAck.getStatusFromHeader(ack
-.getReply(i));
+.getHeaderFlag(i));
   // Restart will not be treated differently unless it is
   // the local node or the only one in the pipeline.
   if (PipelineAck.isRestartOOBStatus(reply) &&

http://git-wip-us.apache.org/repos/asf/hadoop/blob/86c0c6b0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
index 35e5bb8..9bd4115 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
@@ -130,13 +130,16 @@ public class PipelineAck {
*/
   public PipelineAck(long seqno, int[] replies,
  long downstreamAckTimeNanos) {
-ArrayList<Integer> replyList = Lists.newArrayList();
+ArrayList<Status> statusList = Lists.newArrayList();
+ArrayList<Integer> flagList = Lists.newArrayList();
 for (int r : replies) {
-  replyList.add(r);
+  statusList.add(StatusFormat.getStatus(r));
+  flagList.add(r);
 }
 proto = PipelineAckProto.newBuilder()
   .setSeqno(seqno)
-  .addAllReply(replyList)
+  .addAllReply(statusList)
+  .addAllFlag(flagList)
   .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
   .build();
   }
@@ -158,11 +161,18 @@ public class PipelineAck {
   }
   
   /**
-   * get the ith reply
-   * @return the the ith reply
+   * get the header flag of ith reply
*/
-  public int getReply(int i) {
-return proto.getReply(i);
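
[Editor's aside: the mail truncates before PipelineAck.StatusFormat, which
defines the actual header-flag layout. The sketch below only illustrates the
general idea of packing the per-datanode Status and the ECN bits into one int;
the field widths here are invented, not taken from the real StatusFormat.]

// Illustrative only; assumed layout: Status ordinal in the low 5 bits,
// ECN value in the bits above it.
final class HeaderFlagSketch {
  private static final int STATUS_BITS = 5;                 // assumption
  private static final int STATUS_MASK = (1 << STATUS_BITS) - 1;

  static int combine(int ecnValue, int statusOrdinal) {
    return (ecnValue << STATUS_BITS) | (statusOrdinal & STATUS_MASK);
  }

  static int statusOf(int headerFlag) {                     // cf. getStatusFromHeader
    return headerFlag & STATUS_MASK;
  }

  static int ecnOf(int headerFlag) {
    return headerFlag >>> STATUS_BITS;
  }
}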

[12/20] hadoop git commit: HDFS-7742. Favoring decommissioning node for replication can cause a block to stay underreplicated for long periods. Contributed by Nathan Roberts.

2015-03-30 Thread zjshen
HDFS-7742. Favoring decommissioning node for replication can cause a block to 
stay
underreplicated for long periods. Contributed by Nathan Roberts.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/040fd169
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/040fd169
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/040fd169

Branch: refs/heads/YARN-2928
Commit: 040fd169007acb6c310f317e63a50306b8b4cb49
Parents: 1bfe248
Author: Kihwal Lee kih...@apache.org
Authored: Mon Mar 30 10:10:11 2015 -0500
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/blockmanagement/BlockManager.java| 10 ++---
 .../blockmanagement/TestBlockManager.java   | 42 
 3 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/040fd169/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f437ad8..811ee75 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -829,6 +829,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7410. Support CreateFlags with append() to support hsync() for
 appending streams (Vinayakumar B via Colin P. McCabe)
 
+HDFS-7742. Favoring decommissioning node for replication can cause a block 
+to stay underreplicated for long periods (Nathan Roberts via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/040fd169/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index ad40782..f6e15a3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1637,7 +1637,8 @@ public class BlockManager {
   // If so, do not select the node as src node
   if ((nodesCorrupt != null) && nodesCorrupt.contains(node))
 continue;
-  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY
+  if(priority != UnderReplicatedBlocks.QUEUE_HIGHEST_PRIORITY &&
+   !node.isDecommissionInProgress() &&
   node.getNumberOfBlocksToBeReplicated() >= maxReplicationStreams)
   {
 continue; // already reached replication limit
@@ -1652,13 +1653,12 @@ public class BlockManager {
   // never use already decommissioned nodes
   if(node.isDecommissioned())
 continue;
-  // we prefer nodes that are in DECOMMISSION_INPROGRESS state
-  if(node.isDecommissionInProgress() || srcNode == null) {
+
+  // We got this far, current node is a reasonable choice
+  if (srcNode == null) {
 srcNode = node;
 continue;
   }
-  if(srcNode.isDecommissionInProgress())
-continue;
   // switch to a different node randomly
   // this to prevent from deterministically selecting the same node even
   // if the node failed to replicate the block on previous iterations
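
[Editor's aside: restated as a hedged sketch, the net effect of the hunk above
is the rule below. Boolean parameters stand in for the DatanodeDescriptor
queries, so the sketch has no HDFS dependencies; it is a paraphrase, not the
full chooseSourceDatanodes.]

final class SourceChoiceSketch {
  static boolean eligibleAsSource(boolean decommissioned,
      boolean decommissionInProgress, int blocksToBeReplicated,
      int maxReplicationStreams, boolean highestPriorityQueue) {
    if (decommissioned) {
      return false;                  // never read from decommissioned nodes
    }
    // A decommissioning node may exceed the soft stream limit (the hard
    // limit is enforced elsewhere), but it no longer pre-empts others.
    if (!highestPriorityQueue && !decommissionInProgress
        && blocksToBeReplicated >= maxReplicationStreams) {
      return false;
    }
    return true;                     // survivors compete on equal footing
  }
}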

http://git-wip-us.apache.org/repos/asf/hadoop/blob/040fd169/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
index 707c780..91abb2a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
@@ -535,6 +535,48 @@ public class TestBlockManager {
   }
 
   @Test
+  public void testFavorDecomUntilHardLimit() throws Exception {
+bm.maxReplicationStreams = 0;
+bm.replicationStreamsHardLimit = 1;
+
+long blockId = 42; // arbitrary
+Block aBlock = new Block(blockId, 0, 0);
+List<DatanodeDescriptor> origNodes = getNodes(0, 1);
+// Add the block to the 

[18/20] hadoop git commit: HDFS-7261. storageMap is accessed without synchronization in DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. McCabe)

2015-03-30 Thread zjshen
HDFS-7261. storageMap is accessed without synchronization in 
DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin P. 
McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/afb05c84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/afb05c84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/afb05c84

Branch: refs/heads/YARN-2928
Commit: afb05c84e625d85fd12287968ee6124470016ad7
Parents: 5c42a67
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Mar 30 10:46:21 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:49 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  4 +++
 .../blockmanagement/DatanodeDescriptor.java | 29 
 2 files changed, 21 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/afb05c84/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index efba80e..79a81c6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -379,6 +379,10 @@ Release 2.8.0 - UNRELEASED
 HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
 aajisaka)
 
+HDFS-7261. storageMap is accessed without synchronization in
+DatanodeDescriptor#updateHeartbeatState() (Brahma Reddy Battula via Colin
+P. McCabe)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/afb05c84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index d0d7a72..4731ad4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -447,8 +447,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   LOG.info("Number of failed storage changes from "
   + this.volumeFailures + " to " + volFailures);
-  failedStorageInfos = new HashSet<DatanodeStorageInfo>(
-  storageMap.values());
+  synchronized (storageMap) {
+failedStorageInfos =
+new HashSet<DatanodeStorageInfo>(storageMap.values());
+  }
 }
 
 setCacheCapacity(cacheCapacity);
@@ -480,8 +482,11 @@ public class DatanodeDescriptor extends DatanodeInfo {
 if (checkFailedStorages) {
   updateFailedStorage(failedStorageInfos);
 }
-
-if (storageMap.size() != reports.length) {
+long storageMapSize;
+synchronized (storageMap) {
+  storageMapSize = storageMap.size();
+}
+if (storageMapSize != reports.length) {
   pruneStorageMap(reports);
 }
   }
@@ -491,14 +496,14 @@ public class DatanodeDescriptor extends DatanodeInfo {
* as long as they have associated block replicas.
*/
   private void pruneStorageMap(final StorageReport[] reports) {
-if (LOG.isDebugEnabled()) {
-  LOG.debug("Number of storages reported in heartbeat=" + reports.length +
-"; Number of storages in storageMap=" + storageMap.size());
-}
+synchronized (storageMap) {
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Number of storages reported in heartbeat=" + reports.length
++ "; Number of storages in storageMap=" + storageMap.size());
+  }
 
-HashMap<String, DatanodeStorageInfo> excessStorages;
+  HashMap<String, DatanodeStorageInfo> excessStorages;
 
-synchronized (storageMap) {
   // Init excessStorages with all known storages.
   excessStorages = new HashMap<String, DatanodeStorageInfo>(storageMap);
 
@@ -515,8 +520,8 @@ public class DatanodeDescriptor extends DatanodeInfo {
   LOG.info("Removed storage " + storageInfo + " from DataNode" + this);
 } else if (LOG.isDebugEnabled()) {
   // This can occur until all block reports are received.
-  LOG.debug("Deferring removal of stale storage " + storageInfo +
-" with " + storageInfo.numBlocks() + " blocks");
+  LOG.debug("Deferring removal of stale storage " + storageInfo
+  + " with " + storageInfo.numBlocks() + " blocks");
 }
   }
 }
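
[Editor's aside: the fix applies a standard idiom worth naming — every read of
a collection guarded by its own monitor either happens inside synchronized(...)
or works on a copy taken inside it. A minimal generic sketch, assuming nothing
Hadoop-specific:]

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the snapshot-under-lock idiom used above.
class GuardedMap<K, V> {
  private final Map<K, V> map = new HashMap<>();

  Map<K, V> snapshot() {
    synchronized (map) {          // same role as synchronized (storageMap)
      return new HashMap<>(map);  // copy while holding the monitor
    }
  }

  void put(K key, V value) {
    synchronized (map) {
      map.put(key, value);
    }
  }
}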



[16/20] hadoop git commit: HADOOP-11754. RM fails to start in non-secure mode due to authentication filter failure. Contributed by Haohui Mai.

2015-03-30 Thread zjshen
HADOOP-11754. RM fails to start in non-secure mode due to authentication filter 
failure. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/471b1d93
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/471b1d93
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/471b1d93

Branch: refs/heads/YARN-2928
Commit: 471b1d9362b2fdcc3514720176210ab363ea8bfa
Parents: 6e598f8
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:44:22 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:49 2015 -0700

--
 .../server/AuthenticationFilter.java| 108 +--
 .../server/TestAuthenticationFilter.java|  20 ++--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../org/apache/hadoop/http/HttpServer2.java |  53 -
 .../AuthenticationFilterInitializer.java|  18 ++--
 5 files changed, 128 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/471b1d93/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
index 5c22fce..684e91c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
@@ -183,8 +184,6 @@ public class AuthenticationFilter implements Filter {
   private Signer signer;
   private SignerSecretProvider secretProvider;
   private AuthenticationHandler authHandler;
-  private boolean randomSecret;
-  private boolean customSecretProvider;
   private long validity;
   private String cookieDomain;
   private String cookiePath;
@@ -226,7 +225,6 @@ public class AuthenticationFilter implements Filter {
 
 initializeAuthHandler(authHandlerClassName, filterConfig);
 
-
 cookieDomain = config.getProperty(COOKIE_DOMAIN, null);
 cookiePath = config.getProperty(COOKIE_PATH, null);
   }
@@ -237,11 +235,8 @@ public class AuthenticationFilter implements Filter {
   Class<?> klass = 
Thread.currentThread().getContextClassLoader().loadClass(authHandlerClassName);
   authHandler = (AuthenticationHandler) klass.newInstance();
   authHandler.init(config);
-} catch (ClassNotFoundException ex) {
-  throw new ServletException(ex);
-} catch (InstantiationException ex) {
-  throw new ServletException(ex);
-} catch (IllegalAccessException ex) {
+} catch (ClassNotFoundException | InstantiationException |
+IllegalAccessException ex) {
   throw new ServletException(ex);
 }
   }
@@ -251,62 +246,59 @@ public class AuthenticationFilter implements Filter {
 secretProvider = (SignerSecretProvider) filterConfig.getServletContext().
 getAttribute(SIGNER_SECRET_PROVIDER_ATTRIBUTE);
 if (secretProvider == null) {
-  Class<? extends SignerSecretProvider> providerClass
-  = getProviderClass(config);
-  try {
-secretProvider = providerClass.newInstance();
-  } catch (InstantiationException ex) {
-throw new ServletException(ex);
-  } catch (IllegalAccessException ex) {
-throw new ServletException(ex);
-  }
+  // As tomcat cannot specify the provider object in the configuration.
+  // It'll go into this path
   try {
-secretProvider.init(config, filterConfig.getServletContext(), 
validity);
+secretProvider = constructSecretProvider(
+filterConfig.getServletContext(),
+config, false);
   } catch (Exception ex) {
 throw new ServletException(ex);
   }
-} else {
-  customSecretProvider = true;
 }
 signer = new Signer(secretProvider);
   }
 
-  @SuppressWarnings("unchecked")
-  private Class<? extends SignerSecretProvider> getProviderClass(Properties 
config)
-  throws ServletException {
-String providerClassName;
-String signerSecretProviderName
-= config.getProperty(SIGNER_SECRET_PROVIDER, null);
-// fallback to old behavior
-if (signerSecretProviderName 
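
[Editor's aside: condensed, the control flow this patch leaves behind looks
roughly like the fragment below. It is a paraphrase, not compilable in
isolation; constructSecretProvider and SIGNER_SECRET_PROVIDER_ATTRIBUTE are the
names visible in the diff, everything else is simplified.]

SignerSecretProvider resolveSecretProvider(FilterConfig filterConfig,
    Properties config) throws ServletException {
  ServletContext ctx = filterConfig.getServletContext();
  SignerSecretProvider provider = (SignerSecretProvider)
      ctx.getAttribute(SIGNER_SECRET_PROVIDER_ATTRIBUTE);
  if (provider == null) {
    // Containers such as Tomcat cannot hand over the provider object
    // directly, so it is constructed from the filter configuration.
    try {
      provider = constructSecretProvider(ctx, config, false);
    } catch (Exception ex) {
      throw new ServletException(ex);
    }
  }
  return provider;
}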

[08/20] hadoop git commit: YARN-3288. Document and fix indentation in the DockerContainerExecutor code

2015-03-30 Thread zjshen
YARN-3288. Document and fix indentation in the DockerContainerExecutor code


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/74e941da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/74e941da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/74e941da

Branch: refs/heads/YARN-2928
Commit: 74e941daeb7c8d2d60e4949364a0fcdf2983fe04
Parents: fa7cc99
Author: Ravi Prakash ravip...@altiscale.com
Authored: Sat Mar 28 08:00:41 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../server/nodemanager/ContainerExecutor.java   |  18 +-
 .../nodemanager/DockerContainerExecutor.java| 229 +++
 .../launcher/ContainerLaunch.java   |   8 +-
 .../TestDockerContainerExecutor.java|  98 
 .../TestDockerContainerExecutorWithMocks.java   | 110 +
 6 files changed, 277 insertions(+), 188 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/74e941da/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cd39b1a..0d07032 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -131,6 +131,8 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3397. yarn rmadmin should skip -failover. (J.Andreina via kasha)
 
+YARN-3288. Document and fix indentation in the DockerContainerExecutor code
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/74e941da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
index 377fd1d..1c670a1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
@@ -210,8 +210,22 @@ public abstract class ContainerExecutor implements 
Configurable {
 }
   }
 
-  public void writeLaunchEnv(OutputStream out, Map<String, String> 
environment, Map<Path, List<String>> resources, List<String> command) throws 
IOException{
-ContainerLaunch.ShellScriptBuilder sb = 
ContainerLaunch.ShellScriptBuilder.create();
+  /**
+   * This method writes out the launch environment of a container. This can be
+   * overridden by extending ContainerExecutors to provide different behaviors
+   * @param out the output stream to which the environment is written (usually
+   * a script file which will be executed by the Launcher)
+   * @param environment The environment variables and their values
+   * @param resources The resources which have been localized for this 
container
+   * Symlinks will be created to these localized resources
+   * @param command The command that will be run.
+   * @throws IOException if any errors happened writing to the OutputStream,
+   * while creating symlinks
+   */
+  public void writeLaunchEnv(OutputStream out, Map<String, String> environment,
+Map<Path, List<String>> resources, List<String> command) throws 
IOException{
+ContainerLaunch.ShellScriptBuilder sb =
+  ContainerLaunch.ShellScriptBuilder.create();
 if (environment != null) {
   for (Map.Entry<String,String> env : environment.entrySet()) {
 sb.env(env.getKey().toString(), env.getValue().toString());
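
[Editor's aside: a hypothetical caller, to make the parameter shapes in the new
javadoc concrete. File names and values are invented, and `executor` is assumed
to be an instance of some ContainerExecutor subclass.]

Map<String, String> env = new HashMap<String, String>();
env.put("JAVA_HOME", "/usr/lib/jvm/default");               // invented value
Map<Path, List<String>> resources = new HashMap<Path, List<String>>();
resources.put(new Path("/local/usercache/app/job.jar"),     // localized file
    Collections.singletonList("job.jar"));                  // symlink to create
List<String> command = Arrays.asList("bash", "run.sh");     // invented command
OutputStream out = new FileOutputStream("launch_container.sh");
try {
  executor.writeLaunchEnv(out, env, resources, command);    // writes the script
} finally {
  out.close();
}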

http://git-wip-us.apache.org/repos/asf/hadoop/blob/74e941da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DockerContainerExecutor.java
index c854173..71eaa04 100644
--- 

[14/20] hadoop git commit: HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.

2015-03-30 Thread zjshen
HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC. Contributed by Liang Xie.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1bfe248d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1bfe248d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1bfe248d

Branch: refs/heads/YARN-2928
Commit: 1bfe248dae743b7b045c9b5363f4ebe6757b7db7
Parents: dc44141
Author: Harsh J ha...@cloudera.com
Authored: Mon Mar 30 15:21:18 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bfe248d/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9b1cc3e..f437ad8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -323,6 +323,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HDFS-4396. Add START_MSG/SHUTDOWN_MSG for ZKFC
+(Liang Xie via harsh)
+
 HDFS-7875. Improve log message when wrong value configured for
 dfs.datanode.failed.volumes.tolerated.
 (nijel via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bfe248d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
index 85f77f1..4e256a2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSZKFailoverController.java
@@ -167,6 +167,8 @@ public class DFSZKFailoverController extends 
ZKFailoverController {
 
   public static void main(String args[])
   throws Exception {
+StringUtils.startupShutdownMessage(DFSZKFailoverController.class,
+args, LOG);
 if (DFSUtil.parseHelpArgument(args, 
 ZKFailoverController.USAGE, System.out, true)) {
   System.exit(0);



[01/20] hadoop git commit: HADOOP-11639. Clean up Windows native code compilation warnings related to Windows Secure Container Executor. Contributed by Remus Rusanu.

2015-03-30 Thread zjshen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 ee3526587 -> 471b1d936


HADOOP-11639. Clean up Windows native code compilation warnings related to 
Windows Secure Container Executor. Contributed by Remus Rusanu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a3d37787
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a3d37787
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a3d37787

Branch: refs/heads/YARN-2928
Commit: a3d37787f3dd42b30e1666047a1d240761544691
Parents: 597feeb
Author: cnauroth cnaur...@apache.org
Authored: Fri Mar 27 15:03:41 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:46 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../windows_secure_container_executor.c |  2 +-
 .../hadoop-common/src/main/winutils/client.c| 17 --
 .../hadoop-common/src/main/winutils/config.cpp  |  2 +-
 .../src/main/winutils/include/winutils.h| 24 +++---
 .../src/main/winutils/libwinutils.c | 18 +--
 .../hadoop-common/src/main/winutils/service.c   | 34 ++--
 .../src/main/winutils/systeminfo.c  |  3 ++
 .../hadoop-common/src/main/winutils/task.c  | 28 +---
 9 files changed, 76 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3d37787/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index febbf6b..8643901 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1172,6 +1172,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11691. X86 build of libwinutils is broken.
 (Kiran Kumar M R via cnauroth)
 
+HADOOP-11639. Clean up Windows native code compilation warnings related to
+Windows Secure Container Executor. (Remus Rusanu via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3d37787/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
index 7e65065..b37359d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
@@ -409,7 +409,7 @@ 
Java_org_apache_hadoop_yarn_server_nodemanager_WindowsSecureContainerExecutor_00
 
 done:
   if (path) (*env)->ReleaseStringChars(env, jpath, path);
-  return hFile;
+  return (jlong) hFile;
 #endif
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a3d37787/hadoop-common-project/hadoop-common/src/main/winutils/client.c
--
diff --git a/hadoop-common-project/hadoop-common/src/main/winutils/client.c 
b/hadoop-common-project/hadoop-common/src/main/winutils/client.c
index 047bfb5..e3a2c37 100644
--- a/hadoop-common-project/hadoop-common/src/main/winutils/client.c
+++ b/hadoop-common-project/hadoop-common/src/main/winutils/client.c
@@ -28,8 +28,6 @@ static ACCESS_MASK CLIENT_MASK = 1;
 VOID ReportClientError(LPWSTR lpszLocation, DWORD dwError) {
   LPWSTR  debugMsg = NULL;
   int len;
-  WCHAR   hexError[32];
-  HRESULT hr;
 
   if (IsDebuggerPresent()) {
 len = FormatMessageW(
@@ -49,7 +47,6 @@ DWORD PrepareRpcBindingHandle(
   DWORD   dwError = EXIT_FAILURE;
   RPC_STATUS  status;
   LPWSTR  lpszStringBinding= NULL;
-  ULONG   ulCode;
   RPC_SECURITY_QOS_V3 qos;
   SID_IDENTIFIER_AUTHORITY authNT = SECURITY_NT_AUTHORITY;
   BOOL rpcBindingInit = FALSE;
@@ -104,7 +101,7 @@ DWORD PrepareRpcBindingHandle(
   RPC_C_AUTHN_WINNT,  // AuthnSvc
   NULL,   // AuthnIdentity (self)
   RPC_C_AUTHZ_NONE,   // AuthzSvc
-  qos);
+  (RPC_SECURITY_QOS*) &qos);
   if (RPC_S_OK != status) {
 ReportClientError(L"RpcBindingSetAuthInfoEx", status);
 dwError = status;
@@ -375,7 +372,7 @@ DWORD RpcCall_WinutilsCreateFile(
   RpcEndExcept;
 
   if (ERROR_SUCCESS == dwError) 

[17/20] hadoop git commit: YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and removing inconsistencies in the default values. Contributed by Junping Du and Karthik Kambatla.

2015-03-30 Thread zjshen
YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use and 
removing inconsistencies in the default values. Contributed by Junping Du and 
Karthik Kambatla.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e4f1b88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e4f1b88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e4f1b88

Branch: refs/heads/YARN-2928
Commit: 4e4f1b88dd29de11b9535194cc05f9db2a5570b1
Parents: 6baa8fd
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:09:40 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:49 2015 -0700

--
 .../java/org/apache/hadoop/mapred/Task.java | 26 --
 hadoop-yarn-project/CHANGES.txt |  4 +
 .../apache/hadoop/yarn/util/CpuTimeTracker.java |  3 +-
 .../yarn/util/ProcfsBasedProcessTree.java   | 80 +-
 .../util/ResourceCalculatorProcessTree.java | 66 ---
 .../yarn/util/WindowsBasedProcessTree.java  | 21 +++--
 .../yarn/util/TestProcfsBasedProcessTree.java   | 85 ++--
 .../util/TestResourceCalculatorProcessTree.java |  4 +-
 .../yarn/util/TestWindowsBasedProcessTree.java  | 28 +++
 .../monitor/ContainerMetrics.java   | 12 ++-
 .../monitor/ContainersMonitorImpl.java  | 12 +--
 11 files changed, 187 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e4f1b88/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index bf5ca22..80881bc 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -171,7 +171,7 @@ abstract public class Task implements Writable, 
Configurable {
 skipRanges.skipRangeIterator();
 
   private ResourceCalculatorProcessTree pTree;
-  private long initCpuCumulativeTime = 0;
+  private long initCpuCumulativeTime = 
ResourceCalculatorProcessTree.UNAVAILABLE;
 
   protected JobConf conf;
   protected MapOutputFile mapOutputFile;
@@ -866,13 +866,25 @@ abstract public class Task implements Writable, 
Configurable {
 }
 pTree.updateProcessTree();
 long cpuTime = pTree.getCumulativeCpuTime();
-long pMem = pTree.getCumulativeRssmem();
-long vMem = pTree.getCumulativeVmem();
+long pMem = pTree.getRssMemorySize();
+long vMem = pTree.getVirtualMemorySize();
 // Remove the CPU time consumed previously by JVM reuse
-cpuTime -= initCpuCumulativeTime;
-counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
-counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
-counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE &&
+initCpuCumulativeTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  cpuTime -= initCpuCumulativeTime;
+}
+
+if (cpuTime != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.CPU_MILLISECONDS).setValue(cpuTime);
+}
+
+if (pMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.PHYSICAL_MEMORY_BYTES).setValue(pMem);
+}
+
+if (vMem != ResourceCalculatorProcessTree.UNAVAILABLE) {
+  counters.findCounter(TaskCounter.VIRTUAL_MEMORY_BYTES).setValue(vMem);
+}
   }
 
   /**
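
[Editor's aside: the pattern this change standardizes is a sentinel check —
UNAVAILABLE marks metrics the platform cannot supply, and callers must test for
it before doing arithmetic or reporting. A self-contained sketch; the -1 value
mirrors what ResourceCalculatorProcessTree.UNAVAILABLE is assumed to be, it is
not quoted from the patch.]

final class MetricSketch {
  static final long UNAVAILABLE = -1L;   // assumed sentinel value

  static long subtractBaseline(long current, long baseline) {
    if (current == UNAVAILABLE || baseline == UNAVAILABLE) {
      return UNAVAILABLE;   // never mix the sentinel with real measurements
    }
    return current - baseline;
  }

  public static void main(String[] args) {
    System.out.println(subtractBaseline(500L, UNAVAILABLE)); // prints -1
  }
}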

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e4f1b88/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0d07032..3c16f24 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -897,6 +897,10 @@ Release 2.7.0 - UNRELEASED
 YARN-2213. Change proxy-user cookie log in AmIpFilter to DEBUG.
 (Varun Saxena via xgong)
 
+YARN-3304. Cleaning up ResourceCalculatorProcessTree APIs for public use 
and
+removing inconsistencies in the default values. (Junping Du and Karthik
+Kambatla via vinodkv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES


[11/20] hadoop git commit: HDFS-7890. Improve information on Top users for metrics in RollingWindowsManager and lower log level (Contributed by J.Andreina)

2015-03-30 Thread zjshen
HDFS-7890. Improve information on Top users for metrics in 
RollingWindowsManager and lower log level (Contributed by J.Andreina)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dc441418
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dc441418
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dc441418

Branch: refs/heads/YARN-2928
Commit: dc441418edd626bf67e8019cb6c8ee7bd5a29a62
Parents: 53e3d8c
Author: Vinayakumar B vinayakum...@apache.org
Authored: Mon Mar 30 10:02:48 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../hdfs/server/namenode/top/window/RollingWindowManager.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dc441418/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f4991da..9b1cc3e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -353,6 +353,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-6408. Remove redundant definitions in log4j.properties.
 (Abhiraj Butala via aajisaka)
 
+HDFS-7890. Improve information on Top users for metrics in
+RollingWindowsManager and lower log level (J.Andreina via vinayakumarb)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dc441418/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
index 00e7087..4759cc8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/top/window/RollingWindowManager.java
@@ -245,7 +245,7 @@ public class RollingWindowManager {
   metricName, userName, windowSum);
   topN.offer(new NameValuePair(userName, windowSum));
 }
-LOG.info("topN size for command {} is: {}", metricName, topN.size());
+LOG.debug("topN users size for command {} is: {}", metricName, 
topN.size());
 return topN;
   }
 



[19/20] hadoop git commit: HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security.authentication. Contributed by Li Lu.

2015-03-30 Thread zjshen
HADOOP-11761. Fix findbugs warnings in 
org.apache.hadoop.security.authentication. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6e598f8b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6e598f8b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6e598f8b

Branch: refs/heads/YARN-2928
Commit: 6e598f8b670e477a69d7a28cb47e1b73d7e8f5f0
Parents: afb05c8
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:08:54 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:49 2015 -0700

--
 .../hadoop-auth/dev-support/findbugsExcludeFile.xml   | 10 ++
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 2 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e598f8b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
--
diff --git 
a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml 
b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
index 1ecf37a..ddda63c 100644
--- a/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-common-project/hadoop-auth/dev-support/findbugsExcludeFile.xml
@@ -34,5 +34,15 @@
    <Method name="getCurrentSecret" />
    <Bug pattern="EI_EXPOSE_REP" />
  </Match>
+  <Match>
+    <Class 
name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" 
/>
+    <Method name="getAllSecrets" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
+  <Match>
+    <Class 
name="org.apache.hadoop.security.authentication.util.FileSignerSecretProvider" 
/>
+    <Method name="getCurrentSecret" />
+    <Bug pattern="EI_EXPOSE_REP" />
+  </Match>
 
 </FindBugsFilter>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6e598f8b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 8643901..8b59972 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1175,6 +1175,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11639. Clean up Windows native code compilation warnings related to
 Windows Secure Container Executor. (Remus Rusanu via cnauroth)
 
+HADOOP-11761. Fix findbugs warnings in org.apache.hadoop.security
+.authentication. (Li Lu via wheat9)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[15/20] hadoop git commit: HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy 

2015-03-30 Thread zjshen
HDFS-8002. Website refers to /trash directory. Contributed by Brahma Reddy 
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6baa8fd2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6baa8fd2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6baa8fd2

Branch: refs/heads/YARN-2928
Commit: 6baa8fd21cbd070a0652983f252fbb30ae90c2b5
Parents: 040fd16
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 31 00:27:50 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:48 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6baa8fd2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 811ee75..efba80e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -376,6 +376,9 @@ Release 2.8.0 - UNRELEASED
 greater or equal to 1 there is mismatch in the UI report
 (J.Andreina via vinayakumarb)
 
+HDFS-8002. Website refers to /trash directory. (Brahma Reddy Battula via
+aajisaka)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6baa8fd2/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
index 87a9fcd..5a8e366 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsDesign.md
@@ -224,9 +224,9 @@ Space Reclamation
 
 ### File Deletes and Undeletes
 
-When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the `/trash` 
directory. The file can be restored quickly as long as it remains in `/trash`. 
A file remains in `/trash` for a configurable amount of time. After the expiry 
of its life in `/trash`, the NameNode deletes the file from the HDFS namespace. 
The deletion of a file causes the blocks associated with the file to be freed. 
Note that there could be an appreciable time delay between the time a file is 
deleted by a user and the time of the corresponding increase in free space in 
HDFS.
+When a file is deleted by a user or an application, it is not immediately 
removed from HDFS. Instead, HDFS first renames it to a file in the trash 
directory(`/user/<username>/.Trash`). The file can be restored quickly as long 
as it remains in trash. A file remains in trash for a configurable amount of 
time. After the expiry of its life in trash, the NameNode deletes the file from 
the HDFS namespace. The deletion of a file causes the blocks associated with 
the file to be freed. Note that there could be an appreciable time delay 
between the time a file is deleted by a user and the time of the corresponding 
increase in free space in HDFS.
 
-A user can Undelete a file after deleting it as long as it remains in the 
`/trash` directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the `/trash` directory and retrieve the file. The `/trash` 
directory contains only the latest copy of the file that was deleted. The 
`/trash` directory is just like any other directory with one special feature: 
HDFS applies specified policies to automatically delete files from this 
directory. Current default trash interval is set to 0 (Deletes file without 
storing in trash). This value is configurable parameter stored as 
`fs.trash.interval` stored in core-site.xml.
+A user can Undelete a file after deleting it as long as it remains in the 
trash directory. If a user wants to undelete a file that he/she has deleted, 
he/she can navigate the trash directory and retrieve the file. The trash 
directory contains only the latest copy of the file that was deleted. The trash 
directory is just like any other directory with one special feature: HDFS 
applies specified policies to automatically delete files from this directory. 
Current default trash interval is set to 0 (Deletes file without storing in 
trash). This value is configurable parameter stored as `fs.trash.interval` 
stored in core-site.xml.
 
 ### Decrease Replication Factor
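
[Editor's aside: trash retention is governed by fs.trash.interval (minutes; the
default 0 means immediate deletion, as the doc text above says). A small hedged
example of setting it programmatically, equivalent to editing core-site.xml:]

import org.apache.hadoop.conf.Configuration;

public class TrashIntervalExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Keep deleted files in trash for one day (1440 minutes);
    // 0, the default, deletes immediately without using trash.
    conf.setLong("fs.trash.interval", 1440);
    System.out.println(conf.get("fs.trash.interval"));
  }
}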
 



hadoop git commit: HDFS-8005. Erasure Coding: simplify striped block recovery work computation and add tests. Contributed by Jing Zhao.

2015-03-30 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 a1075153e -> 5ef6204c0


HDFS-8005. Erasure Coding: simplify striped block recovery work computation and 
add tests. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ef6204c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ef6204c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ef6204c

Branch: refs/heads/HDFS-7285
Commit: 5ef6204c01f96be6d6c93cf797330dc6eaaeac65
Parents: a107515
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 30 13:35:36 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon Mar 30 13:35:36 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 138 +---
 .../blockmanagement/DatanodeDescriptor.java |  14 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   1 +
 .../blockmanagement/TestBlockManager.java   |  33 +--
 .../TestRecoverStripedBlocks.java   | 107 --
 .../server/namenode/TestAddStripedBlocks.java   |   2 +-
 .../namenode/TestRecoverStripedBlocks.java  | 210 +++
 7 files changed, 292 insertions(+), 213 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ef6204c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 7e8a88c..063b396 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -538,7 +538,7 @@ public class BlockManager {
 // source node returned is not used
 chooseSourceDatanodes(getStoredBlock(block), containingNodes,
 containingLiveReplicasNodes, numReplicas,
-new LinkedList<Short>(), 1, UnderReplicatedBlocks.LEVEL);
+new LinkedList<Short>(), UnderReplicatedBlocks.LEVEL);
 
 // containingLiveReplicasNodes can include READ_ONLY_SHARED replicas which 
are 
 // not included in the numReplicas.liveReplicas() count
@@ -1376,7 +1376,7 @@ public class BlockManager {
   int computeRecoveryWorkForBlocks(List<List<BlockInfo>> blocksToRecover) {
 int requiredReplication, numEffectiveReplicas;
 List<DatanodeDescriptor> containingNodes;
-BlockCollection bc = null;
+BlockCollection bc;
 int additionalReplRequired;
 
 int scheduledWork = 0;
@@ -1404,13 +1404,10 @@ public class BlockManager {
 containingNodes = new ArrayList<>();
 List<DatanodeStorageInfo> liveReplicaNodes = new ArrayList<>();
 NumberReplicas numReplicas = new NumberReplicas();
-List<Short> missingBlockIndices = new LinkedList<>();
-DatanodeDescriptor[] srcNodes;
-int numSourceNodes = bc.isStriped() ?
-HdfsConstants.NUM_DATA_BLOCKS : 1;
-srcNodes = chooseSourceDatanodes(
-block, containingNodes, liveReplicaNodes, numReplicas,
-missingBlockIndices, numSourceNodes, priority);
+List<Short> liveBlockIndices = new ArrayList<>();
+final DatanodeDescriptor[] srcNodes = chooseSourceDatanodes(block,
+containingNodes, liveReplicaNodes, numReplicas,
+liveBlockIndices, priority);
 if(srcNodes == null || srcNodes.length == 0) {
   // block can not be replicated from any node
   LOG.debug("Block " + block + " cannot be recovered " +
@@ -1442,15 +1439,14 @@ public class BlockManager {
 } else {
   additionalReplRequired = 1; // Needed on a new rack
 }
-if (bc.isStriped()) {
+if (block.isStriped()) {
+  short[] indices = new short[liveBlockIndices.size()];
+  for (int i = 0 ; i < liveBlockIndices.size(); i++) {
+indices[i] = liveBlockIndices.get(i);
+  }
   ErasureCodingWork ecw = new ErasureCodingWork(block, bc, 
srcNodes,
   containingNodes, liveReplicaNodes, additionalReplRequired,
-  priority);
-  short[] missingBlockArray = new 
short[missingBlockIndices.size()];
-  for (int i = 0 ; i < missingBlockIndices.size(); i++) {
-missingBlockArray[i] = missingBlockIndices.get(i);
-  }
-  ecw.setMissingBlockIndices(missingBlockArray);
+  priority, indices);
   recovWork.add(ecw);
  

hadoop git commit: Addendum for HDFS-7748.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2a945d24f -> 0967b1d99


Addendum for HDFS-7748.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0967b1d9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0967b1d9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0967b1d9

Branch: refs/heads/trunk
Commit: 0967b1d99d7001cd1d09ebd29b9360f1079410e8
Parents: 2a945d2
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 12:23:45 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 13:11:09 2015 -0700

--
 .../org/apache/hadoop/hdfs/TestDataTransferProtocol.java| 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0967b1d9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
index fcfaa0c..bf011f7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
@@ -33,7 +33,6 @@ import java.net.Socket;
 import java.nio.ByteBuffer;
 import java.util.Random;
 
-import com.sun.xml.internal.messaging.saaj.util.ByteOutputStream;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -534,17 +533,17 @@ public class TestDataTransferProtocol {
Status.CHECKSUM_OK))
 .build();
 
-ByteOutputStream oldAckBytes = new ByteOutputStream();
+ByteArrayOutputStream oldAckBytes = new ByteArrayOutputStream();
 proto.writeDelimitedTo(oldAckBytes);
 PipelineAck oldAck = new PipelineAck();
-oldAck.readFields(new ByteArrayInputStream(oldAckBytes.getBytes()));
+oldAck.readFields(new ByteArrayInputStream(oldAckBytes.toByteArray()));
 assertEquals(PipelineAck.combineHeader(PipelineAck.ECN.DISABLED, Status
 .CHECKSUM_OK), oldAck.getHeaderFlag(0));
 
 PipelineAck newAck = new PipelineAck();
-ByteOutputStream newAckBytes = new ByteOutputStream();
+ByteArrayOutputStream newAckBytes = new ByteArrayOutputStream();
 newProto.writeDelimitedTo(newAckBytes);
-newAck.readFields(new ByteArrayInputStream(newAckBytes.getBytes()));
+newAck.readFields(new ByteArrayInputStream(newAckBytes.toByteArray()));
 assertEquals(PipelineAck.combineHeader(PipelineAck.ECN.SUPPORTED, Status
 .CHECKSUM_OK), newAck.getHeaderFlag(0));
   }
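
The point of this addendum: com.sun.xml.internal.messaging.saaj.util.ByteOutputStream is a JDK-internal class that may be absent on non-Oracle JVMs, and its getBytes() exposes the raw backing array rather than a trimmed copy. java.io.ByteArrayOutputStream with toByteArray() is the portable equivalent. A sketch of the same in-memory round trip using only java.io types (the protobuf writeDelimitedTo/readFields calls from the test are replaced here by plain DataOutputStream writes):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class RoundTripDemo {
      public static void main(String[] args) throws IOException {
        // Serialize into an in-memory buffer using only java.io types.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        new DataOutputStream(out).writeLong(42L);

        // toByteArray() returns a correctly sized copy of the written
        // bytes; the internal ByteOutputStream#getBytes() returned the
        // backing array, whose tail could hold unwritten garbage.
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(out.toByteArray()));
        System.out.println(in.readLong()); // 42
      }
    }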



hadoop git commit: Addendum for HDFS-7748.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 cba4ed167 - 02fcf622f


Addendum for HDFS-7748.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/02fcf622
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/02fcf622
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/02fcf622

Branch: refs/heads/branch-2
Commit: 02fcf622fb020b2a6152f1c36650f7789ee02e13
Parents: cba4ed1
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 12:23:45 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 13:11:24 2015 -0700

--
 .../org/apache/hadoop/hdfs/TestDataTransferProtocol.java| 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/02fcf622/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
index 16889d5..c5d889c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataTransferProtocol.java
@@ -33,7 +33,6 @@ import java.net.Socket;
 import java.nio.ByteBuffer;
 import java.util.Random;
 
-import com.sun.xml.internal.messaging.saaj.util.ByteOutputStream;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -540,17 +539,17 @@ public class TestDataTransferProtocol {
Status.CHECKSUM_OK))
 .build();
 
-ByteOutputStream oldAckBytes = new ByteOutputStream();
+ByteArrayOutputStream oldAckBytes = new ByteArrayOutputStream();
 proto.writeDelimitedTo(oldAckBytes);
 PipelineAck oldAck = new PipelineAck();
-oldAck.readFields(new ByteArrayInputStream(oldAckBytes.getBytes()));
+oldAck.readFields(new ByteArrayInputStream(oldAckBytes.toByteArray()));
 assertEquals(PipelineAck.combineHeader(PipelineAck.ECN.DISABLED, Status
 .CHECKSUM_OK), oldAck.getHeaderFlag(0));
 
 PipelineAck newAck = new PipelineAck();
-ByteOutputStream newAckBytes = new ByteOutputStream();
+ByteArrayOutputStream newAckBytes = new ByteArrayOutputStream();
 newProto.writeDelimitedTo(newAckBytes);
-newAck.readFields(new ByteArrayInputStream(newAckBytes.getBytes()));
+newAck.readFields(new ByteArrayInputStream(newAckBytes.toByteArray()));
 assertEquals(PipelineAck.combineHeader(PipelineAck.ECN.SUPPORTED, Status
 .CHECKSUM_OK), newAck.getHeaderFlag(0));
   }



hadoop git commit: HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. Contributed by Anu Engineer and Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 90e07d55a - b80457158


HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. 
Contributed by Anu Engineer and Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b8045715
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b8045715
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b8045715

Branch: refs/heads/trunk
Commit: b80457158daf0dc712fbe5695625cc17d70d4bb4
Parents: 90e07d5
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:59:21 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:59:21 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DataStreamer.java|  2 +-
 .../hdfs/protocol/datatransfer/PipelineAck.java | 31 +---
 .../hdfs/server/datanode/BlockReceiver.java |  2 +-
 .../src/main/proto/datatransfer.proto   |  3 +-
 .../hadoop/hdfs/TestDataTransferProtocol.java   | 31 
 6 files changed, 58 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8045715/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 79a81c6..2f42db2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1313,6 +1313,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7963. Fix expected tracing spans in TestTracing along with HDFS-7054.
 (Masatake Iwasaki via kihwal)
 
+HDFS-7748. Separate ECN flags from the Status in the 
DataTransferPipelineAck.
+(Anu Engineer and Haohui Mai via wheat9)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8045715/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 6047825..9c437ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -817,7 +817,7 @@ class DataStreamer extends Daemon {
   // processes response status from datanodes.
   for (int i = ack.getNumOfReplies()-1; i >= 0 && dfsClient.clientRunning; i--) {
 final Status reply = PipelineAck.getStatusFromHeader(ack
-.getReply(i));
+.getHeaderFlag(i));
 // Restart will not be treated differently unless it is
 // the local node or the only one in the pipeline.
 if (PipelineAck.isRestartOOBStatus(reply) &&

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b8045715/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
index 35e5bb8..9bd4115 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
@@ -130,13 +130,16 @@ public class PipelineAck {
*/
   public PipelineAck(long seqno, int[] replies,
  long downstreamAckTimeNanos) {
-ArrayList<Integer> replyList = Lists.newArrayList();
+ArrayList<Status> statusList = Lists.newArrayList();
+ArrayList<Integer> flagList = Lists.newArrayList();
 for (int r : replies) {
-  replyList.add(r);
+  statusList.add(StatusFormat.getStatus(r));
+  flagList.add(r);
 }
 proto = PipelineAckProto.newBuilder()
   .setSeqno(seqno)
-  .addAllReply(replyList)
+  .addAllReply(statusList)
+  .addAllFlag(flagList)
   .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
   .build();
   }
@@ -158,11 +161,18 @@ public class PipelineAck {
   }
   
   /**
-   * get the ith reply
-   * @return the the ith reply
+   * get the header flag of ith reply
*/
-  public int getReply(int i) {
-return proto.getReply(i);
+  public int getHeaderFlag(int i) {
+if 
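
The diff is truncated here by the digest, but the idea of the change is visible: each ack reply used to carry a bare Status; it now carries a combined header flag that packs the ECN bits alongside the status, so the status remains extractable from the same int. An illustrative sketch of such bit-packing (the field widths and enum values below are assumptions for the example, not the actual PipelineAck.StatusFormat layout):

    public class HeaderFlagDemo {
      // Illustrative layout only: low 16 bits carry the Status value,
      // the bits above them carry the ECN value.
      static final int STATUS_BITS = 16;

      static int combineHeader(int ecn, int status) {
        return (ecn << STATUS_BITS) | status;
      }

      static int getStatus(int headerFlag) {
        return headerFlag & ((1 << STATUS_BITS) - 1);
      }

      static int getEcn(int headerFlag) {
        return headerFlag >>> STATUS_BITS;
      }

      public static void main(String[] args) {
        int flag = combineHeader(/* ECN.SUPPORTED */ 1, /* CHECKSUM_OK */ 6);
        System.out.println(getEcn(flag) + " " + getStatus(flag)); // 1 6
      }
    }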

hadoop git commit: HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. Contributed by Anu Engineer and Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 24d879026 - dd5b2dac5


HDFS-7748. Separate ECN flags from the Status in the DataTransferPipelineAck. 
Contributed by Anu Engineer and Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd5b2dac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd5b2dac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd5b2dac

Branch: refs/heads/branch-2
Commit: dd5b2dac5a81952f579906ddd1c95a2e915b513e
Parents: 24d8790
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:59:21 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:59:32 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/DataStreamer.java|  2 +-
 .../hdfs/protocol/datatransfer/PipelineAck.java | 31 +---
 .../hdfs/server/datanode/BlockReceiver.java |  2 +-
 .../src/main/proto/datatransfer.proto   |  3 +-
 .../hadoop/hdfs/TestDataTransferProtocol.java   | 31 
 6 files changed, 58 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd5b2dac/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b3cc6b7..667aa05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1009,6 +1009,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7963. Fix expected tracing spans in TestTracing along with HDFS-7054.
 (Masatake Iwasaki via kihwal)
 
+HDFS-7748. Separate ECN flags from the Status in the 
DataTransferPipelineAck.
+(Anu Engineer and Haohui Mai via wheat9)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd5b2dac/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
index 6047825..9c437ba 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
@@ -817,7 +817,7 @@ class DataStreamer extends Daemon {
   // processes response status from datanodes.
   for (int i = ack.getNumOfReplies()-1; i >= 0 && dfsClient.clientRunning; i--) {
 final Status reply = PipelineAck.getStatusFromHeader(ack
-.getReply(i));
+.getHeaderFlag(i));
 // Restart will not be treated differently unless it is
 // the local node or the only one in the pipeline.
 if (PipelineAck.isRestartOOBStatus(reply) &&

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd5b2dac/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
index 35e5bb8..9bd4115 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
@@ -130,13 +130,16 @@ public class PipelineAck {
*/
   public PipelineAck(long seqno, int[] replies,
  long downstreamAckTimeNanos) {
-ArrayList<Integer> replyList = Lists.newArrayList();
+ArrayList<Status> statusList = Lists.newArrayList();
+ArrayList<Integer> flagList = Lists.newArrayList();
 for (int r : replies) {
-  replyList.add(r);
+  statusList.add(StatusFormat.getStatus(r));
+  flagList.add(r);
 }
 proto = PipelineAckProto.newBuilder()
   .setSeqno(seqno)
-  .addAllReply(replyList)
+  .addAllReply(statusList)
+  .addAllFlag(flagList)
   .setDownstreamAckTimeNanos(downstreamAckTimeNanos)
   .build();
   }
@@ -158,11 +161,18 @@ public class PipelineAck {
   }
   
   /**
-   * get the ith reply
-   * @return the the ith reply
+   * get the header flag of ith reply
*/
-  public int getReply(int i) {
-return proto.getReply(i);
+  public int getHeaderFlag(int i) {
+

[2/2] hadoop git commit: YARN-2495. Allow admin specify labels from each NM (Distributed configuration for node label). (Naganarasimha G R via wangda)

2015-03-30 Thread wangda
YARN-2495. Allow admin specify labels from each NM (Distributed configuration 
for node label). (Naganarasimha G R via wangda)

(cherry picked from commit 2a945d24f7de1a7ae6e7bd6636188ce3b55c7f52)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cba4ed16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cba4ed16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cba4ed16

Branch: refs/heads/branch-2
Commit: cba4ed1678b70745f5f03be9a8129fdf26bccc72
Parents: dd5b2da
Author: Wangda Tan wan...@apache.org
Authored: Mon Mar 30 12:04:51 2015 -0700
Committer: Wangda Tan wan...@apache.org
Committed: Mon Mar 30 12:05:54 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../hadoop/yarn/conf/YarnConfiguration.java |  12 +
 .../src/main/proto/yarn_protos.proto|   4 +
 .../yarn/client/TestResourceTrackerOnHA.java|   2 +-
 .../protocolrecords/NodeHeartbeatRequest.java   |   8 +-
 .../protocolrecords/NodeHeartbeatResponse.java  |   3 +
 .../RegisterNodeManagerRequest.java |  12 +
 .../RegisterNodeManagerResponse.java|   3 +
 .../impl/pb/NodeHeartbeatRequestPBImpl.java |  37 ++
 .../impl/pb/NodeHeartbeatResponsePBImpl.java|  13 +
 .../pb/RegisterNodeManagerRequestPBImpl.java|  48 ++-
 .../pb/RegisterNodeManagerResponsePBImpl.java   |  13 +
 .../yarn_server_common_service_protos.proto |   4 +
 .../hadoop/yarn/TestYarnServerApiClasses.java   |  94 
 .../yarn/server/nodemanager/NodeManager.java|  34 +-
 .../nodemanager/NodeStatusUpdaterImpl.java  | 114 -
 .../nodelabels/NodeLabelsProvider.java  |  43 ++
 .../nodemanager/TestNodeStatusUpdater.java  |   2 +-
 .../TestNodeStatusUpdaterForLabels.java | 281 
 .../resourcemanager/ResourceTrackerService.java |  80 +++-
 .../TestResourceTrackerService.java | 430 ++-
 21 files changed, 1199 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cba4ed16/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c36649e..d2850ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -35,6 +35,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3288. Document and fix indentation in the DockerContainerExecutor code
 
+YARN-2495. Allow admin specify labels from each NM (Distributed 
+configuration for node label). (Naganarasimha G R via wangda)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cba4ed16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index be5471d..a25cfe9 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1719,6 +1719,18 @@ public class YarnConfiguration extends Configuration {
   public static final String NODE_LABELS_ENABLED = NODE_LABELS_PREFIX
   + "enabled";
   public static final boolean DEFAULT_NODE_LABELS_ENABLED = false;
+  
+  public static final String NODELABEL_CONFIGURATION_TYPE =
+  NODE_LABELS_PREFIX + "configuration-type";
+  
+  public static final String CENTALIZED_NODELABEL_CONFIGURATION_TYPE =
+  "centralized";
+  
+  public static final String DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE =
+  "distributed";
+  
+  public static final String DEFAULT_NODELABEL_CONFIGURATION_TYPE =
+  CENTALIZED_NODELABEL_CONFIGURATION_TYPE;
 
   public YarnConfiguration() {
 super();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cba4ed16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
index 194be82..b396f4d 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
@@ -239,6 +239,10 @@ message NodeIdToLabelsProto {
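
The yarn_protos.proto hunk is cut off above. On the Java side, opting a cluster into NM-reported labels is a one-line configuration switch using the constants this patch introduces; a minimal sketch (assuming the constants added in the YarnConfiguration hunk above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class DistributedLabelsConfig {
      public static void main(String[] args) {
        // Switch from the default centralized, admin-managed label store
        // to NM-provided (distributed) node labels.
        Configuration conf = new YarnConfiguration();
        conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, true);
        conf.set(YarnConfiguration.NODELABEL_CONFIGURATION_TYPE,
            YarnConfiguration.DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE);
        System.out.println(
            conf.get(YarnConfiguration.NODELABEL_CONFIGURATION_TYPE));
      }
    }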
   

hadoop git commit: HADOOP-11754. RM fails to start in non-secure mode due to authentication filter failure. Contributed by Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/trunk 82fa3adfd - 90e07d55a


HADOOP-11754. RM fails to start in non-secure mode due to authentication filter 
failure. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90e07d55
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90e07d55
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90e07d55

Branch: refs/heads/trunk
Commit: 90e07d55ace7221081a58a90e54b360ad68fa1ef
Parents: 82fa3ad
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:44:22 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:44:22 2015 -0700

--
 .../server/AuthenticationFilter.java| 108 +--
 .../server/TestAuthenticationFilter.java|  20 ++--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../org/apache/hadoop/http/HttpServer2.java |  53 -
 .../AuthenticationFilterInitializer.java|  18 ++--
 5 files changed, 128 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90e07d55/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
index 5c22fce..684e91c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
@@ -183,8 +184,6 @@ public class AuthenticationFilter implements Filter {
   private Signer signer;
   private SignerSecretProvider secretProvider;
   private AuthenticationHandler authHandler;
-  private boolean randomSecret;
-  private boolean customSecretProvider;
   private long validity;
   private String cookieDomain;
   private String cookiePath;
@@ -226,7 +225,6 @@ public class AuthenticationFilter implements Filter {
 
 initializeAuthHandler(authHandlerClassName, filterConfig);
 
-
 cookieDomain = config.getProperty(COOKIE_DOMAIN, null);
 cookiePath = config.getProperty(COOKIE_PATH, null);
   }
@@ -237,11 +235,8 @@ public class AuthenticationFilter implements Filter {
   Class<?> klass =
Thread.currentThread().getContextClassLoader().loadClass(authHandlerClassName);
   authHandler = (AuthenticationHandler) klass.newInstance();
   authHandler.init(config);
-} catch (ClassNotFoundException ex) {
-  throw new ServletException(ex);
-} catch (InstantiationException ex) {
-  throw new ServletException(ex);
-} catch (IllegalAccessException ex) {
+} catch (ClassNotFoundException | InstantiationException |
+IllegalAccessException ex) {
   throw new ServletException(ex);
 }
   }
@@ -251,62 +246,59 @@ public class AuthenticationFilter implements Filter {
 secretProvider = (SignerSecretProvider) filterConfig.getServletContext().
 getAttribute(SIGNER_SECRET_PROVIDER_ATTRIBUTE);
 if (secretProvider == null) {
-  Class<? extends SignerSecretProvider> providerClass
-  = getProviderClass(config);
-  try {
-secretProvider = providerClass.newInstance();
-  } catch (InstantiationException ex) {
-throw new ServletException(ex);
-  } catch (IllegalAccessException ex) {
-throw new ServletException(ex);
-  }
+  // As tomcat cannot specify the provider object in the configuration.
+  // It'll go into this path
   try {
-secretProvider.init(config, filterConfig.getServletContext(), 
validity);
+secretProvider = constructSecretProvider(
+filterConfig.getServletContext(),
+config, false);
   } catch (Exception ex) {
 throw new ServletException(ex);
   }
-} else {
-  customSecretProvider = true;
 }
 signer = new Signer(secretProvider);
   }
 
-  @SuppressWarnings("unchecked")
-  private Class<? extends SignerSecretProvider> getProviderClass(Properties config)
-  throws ServletException {
-String providerClassName;
-String signerSecretProviderName
-= config.getProperty(SIGNER_SECRET_PROVIDER, 
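
The removed getProviderClass method is truncated above, but the shape of the refactoring is visible: three identical catch blocks collapse into one Java 7 multi-catch, and secret-provider construction funnels through a single helper. A minimal sketch of that multi-catch idiom for reflective instantiation (class name is illustrative; assumes the servlet API on the classpath):

    import javax.servlet.ServletException;

    public class ReflectiveInit {
      // One handler for all reflective failure modes instead of three
      // identical catch blocks, mirroring the hunk above.
      static Object newInstance(String className) throws ServletException {
        try {
          Class<?> klass = Thread.currentThread()
              .getContextClassLoader().loadClass(className);
          return klass.newInstance();
        } catch (ClassNotFoundException | InstantiationException |
            IllegalAccessException ex) {
          throw new ServletException(ex);
        }
      }
    }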

hadoop git commit: HADOOP-11754. RM fails to start in non-secure mode due to authentication filter failure. Contributed by Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 a84fdd565 - 24d879026


HADOOP-11754. RM fails to start in non-secure mode due to authentication filter 
failure. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/24d87902
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/24d87902
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/24d87902

Branch: refs/heads/branch-2
Commit: 24d879026d3316fe4015aab627bc13ca7dc08fa5
Parents: a84fdd5
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:44:22 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:44:30 2015 -0700

--
 .../server/AuthenticationFilter.java| 108 +--
 .../server/TestAuthenticationFilter.java|  20 ++--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../org/apache/hadoop/http/HttpServer2.java |  53 -
 .../AuthenticationFilterInitializer.java|  18 ++--
 5 files changed, 128 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/24d87902/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
index 5c22fce..684e91c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
@@ -183,8 +184,6 @@ public class AuthenticationFilter implements Filter {
   private Signer signer;
   private SignerSecretProvider secretProvider;
   private AuthenticationHandler authHandler;
-  private boolean randomSecret;
-  private boolean customSecretProvider;
   private long validity;
   private String cookieDomain;
   private String cookiePath;
@@ -226,7 +225,6 @@ public class AuthenticationFilter implements Filter {
 
 initializeAuthHandler(authHandlerClassName, filterConfig);
 
-
 cookieDomain = config.getProperty(COOKIE_DOMAIN, null);
 cookiePath = config.getProperty(COOKIE_PATH, null);
   }
@@ -237,11 +235,8 @@ public class AuthenticationFilter implements Filter {
   Class<?> klass =
Thread.currentThread().getContextClassLoader().loadClass(authHandlerClassName);
   authHandler = (AuthenticationHandler) klass.newInstance();
   authHandler.init(config);
-} catch (ClassNotFoundException ex) {
-  throw new ServletException(ex);
-} catch (InstantiationException ex) {
-  throw new ServletException(ex);
-} catch (IllegalAccessException ex) {
+} catch (ClassNotFoundException | InstantiationException |
+IllegalAccessException ex) {
   throw new ServletException(ex);
 }
   }
@@ -251,62 +246,59 @@ public class AuthenticationFilter implements Filter {
 secretProvider = (SignerSecretProvider) filterConfig.getServletContext().
 getAttribute(SIGNER_SECRET_PROVIDER_ATTRIBUTE);
 if (secretProvider == null) {
-  Class<? extends SignerSecretProvider> providerClass
-  = getProviderClass(config);
-  try {
-secretProvider = providerClass.newInstance();
-  } catch (InstantiationException ex) {
-throw new ServletException(ex);
-  } catch (IllegalAccessException ex) {
-throw new ServletException(ex);
-  }
+  // As tomcat cannot specify the provider object in the configuration.
+  // It'll go into this path
   try {
-secretProvider.init(config, filterConfig.getServletContext(), 
validity);
+secretProvider = constructSecretProvider(
+filterConfig.getServletContext(),
+config, false);
   } catch (Exception ex) {
 throw new ServletException(ex);
   }
-} else {
-  customSecretProvider = true;
 }
 signer = new Signer(secretProvider);
   }
 
-  @SuppressWarnings("unchecked")
-  private Class<? extends SignerSecretProvider> getProviderClass(Properties config)
-  throws ServletException {
-String providerClassName;
-String signerSecretProviderName
-= 

hadoop git commit: HADOOP-11754. RM fails to start in non-secure mode due to authentication filter failure. Contributed by Haohui Mai.

2015-03-30 Thread wheat9
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 3ec1ad900 - 530c2ef91


HADOOP-11754. RM fails to start in non-secure mode due to authentication filter 
failure. Contributed by Haohui Mai.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/530c2ef9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/530c2ef9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/530c2ef9

Branch: refs/heads/branch-2.7
Commit: 530c2ef91a37458bb71316398d368a327b94a37d
Parents: 3ec1ad9
Author: Haohui Mai whe...@apache.org
Authored: Mon Mar 30 11:44:22 2015 -0700
Committer: Haohui Mai whe...@apache.org
Committed: Mon Mar 30 11:45:11 2015 -0700

--
 .../server/AuthenticationFilter.java| 108 +--
 .../server/TestAuthenticationFilter.java|  20 ++--
 hadoop-common-project/hadoop-common/CHANGES.txt |   3 +
 .../org/apache/hadoop/http/HttpServer2.java |  53 -
 .../AuthenticationFilterInitializer.java|  18 ++--
 5 files changed, 128 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/530c2ef9/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
--
diff --git 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
index 5c22fce..684e91c 100644
--- 
a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
+++ 
b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
@@ -25,6 +25,7 @@ import org.slf4j.LoggerFactory;
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
 import javax.servlet.FilterConfig;
+import javax.servlet.ServletContext;
 import javax.servlet.ServletException;
 import javax.servlet.ServletRequest;
 import javax.servlet.ServletResponse;
@@ -183,8 +184,6 @@ public class AuthenticationFilter implements Filter {
   private Signer signer;
   private SignerSecretProvider secretProvider;
   private AuthenticationHandler authHandler;
-  private boolean randomSecret;
-  private boolean customSecretProvider;
   private long validity;
   private String cookieDomain;
   private String cookiePath;
@@ -226,7 +225,6 @@ public class AuthenticationFilter implements Filter {
 
 initializeAuthHandler(authHandlerClassName, filterConfig);
 
-
 cookieDomain = config.getProperty(COOKIE_DOMAIN, null);
 cookiePath = config.getProperty(COOKIE_PATH, null);
   }
@@ -237,11 +235,8 @@ public class AuthenticationFilter implements Filter {
   Class<?> klass =
Thread.currentThread().getContextClassLoader().loadClass(authHandlerClassName);
   authHandler = (AuthenticationHandler) klass.newInstance();
   authHandler.init(config);
-} catch (ClassNotFoundException ex) {
-  throw new ServletException(ex);
-} catch (InstantiationException ex) {
-  throw new ServletException(ex);
-} catch (IllegalAccessException ex) {
+} catch (ClassNotFoundException | InstantiationException |
+IllegalAccessException ex) {
   throw new ServletException(ex);
 }
   }
@@ -251,62 +246,59 @@ public class AuthenticationFilter implements Filter {
 secretProvider = (SignerSecretProvider) filterConfig.getServletContext().
 getAttribute(SIGNER_SECRET_PROVIDER_ATTRIBUTE);
 if (secretProvider == null) {
-  Class<? extends SignerSecretProvider> providerClass
-  = getProviderClass(config);
-  try {
-secretProvider = providerClass.newInstance();
-  } catch (InstantiationException ex) {
-throw new ServletException(ex);
-  } catch (IllegalAccessException ex) {
-throw new ServletException(ex);
-  }
+  // As tomcat cannot specify the provider object in the configuration.
+  // It'll go into this path
   try {
-secretProvider.init(config, filterConfig.getServletContext(), 
validity);
+secretProvider = constructSecretProvider(
+filterConfig.getServletContext(),
+config, false);
   } catch (Exception ex) {
 throw new ServletException(ex);
   }
-} else {
-  customSecretProvider = true;
 }
 signer = new Signer(secretProvider);
   }
 
-  @SuppressWarnings("unchecked")
-  private Class<? extends SignerSecretProvider> getProviderClass(Properties config)
-  throws ServletException {
-String providerClassName;
-String signerSecretProviderName
-= 

[1/2] hadoop git commit: YARN-2495. Allow admin specify labels from each NM (Distributed configuration for node label). (Naganarasimha G R via wangda)

2015-03-30 Thread wangda
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 dd5b2dac5 - cba4ed167


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cba4ed16/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
index a904dc0..18d7df4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
@@ -27,8 +27,10 @@ import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
+import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.io.IOUtils;
@@ -49,11 +51,16 @@ import org.apache.hadoop.yarn.event.Dispatcher;
 import org.apache.hadoop.yarn.event.DrainDispatcher;
 import org.apache.hadoop.yarn.event.Event;
 import org.apache.hadoop.yarn.event.EventHandler;
+import org.apache.hadoop.yarn.nodelabels.NodeLabelTestBase;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
+import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatRequest;
 import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
 import 
org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerRequest;
 import 
org.apache.hadoop.yarn.server.api.protocolrecords.RegisterNodeManagerResponse;
 import org.apache.hadoop.yarn.server.api.records.NodeAction;
+import org.apache.hadoop.yarn.server.api.records.NodeStatus;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NullRMNodeLabelsManager;
+import 
org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl;
 import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
@@ -66,7 +73,7 @@ import org.junit.After;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestResourceTrackerService {
+public class TestResourceTrackerService extends NodeLabelTestBase {
 
   private final static File TEMP_DIR = new File(System.getProperty(
   "test.build.data", "/tmp"), "decommision");
@@ -305,8 +312,425 @@ public class TestResourceTrackerService {
 req.setHttpPort(1234);
 req.setNMVersion(YarnVersionInfo.getVersion());
 // trying to register a invalid node.
-RegisterNodeManagerResponse response = 
resourceTrackerService.registerNodeManager(req);
-Assert.assertEquals(NodeAction.NORMAL,response.getNodeAction());
+RegisterNodeManagerResponse response =
+resourceTrackerService.registerNodeManager(req);
+Assert.assertEquals(NodeAction.NORMAL, response.getNodeAction());
+  }
+
+  @Test
+  public void testNodeRegistrationWithLabels() throws Exception {
+writeToHostsFile("host2");
+Configuration conf = new Configuration();
+conf.set(YarnConfiguration.RM_NODES_INCLUDE_FILE_PATH,
+hostFile.getAbsolutePath());
+conf.set(YarnConfiguration.NODELABEL_CONFIGURATION_TYPE,
+YarnConfiguration.DISTRIBUTED_NODELABEL_CONFIGURATION_TYPE);
+
+final RMNodeLabelsManager nodeLabelsMgr = new NullRMNodeLabelsManager();
+
+rm = new MockRM(conf) {
+  @Override
+  protected RMNodeLabelsManager createNodeLabelManager() {
+return nodeLabelsMgr;
+  }
+};
+rm.start();
+
+try {
+  nodeLabelsMgr.addToCluserNodeLabels(toSet("A", "B", "C"));
+} catch (IOException e) {
+  Assert.fail("Caught Exception while intializing");
+  e.printStackTrace();
+}
+
+ResourceTrackerService resourceTrackerService =
+rm.getResourceTrackerService();
+RegisterNodeManagerRequest registerReq =
+Records.newRecord(RegisterNodeManagerRequest.class);
+NodeId nodeId = NodeId.newInstance("host2", 1234);
+Resource capability = BuilderUtils.newResource(1024, 1);
+registerReq.setResource(capability);
+registerReq.setNodeId(nodeId);
+registerReq.setHttpPort(1234);
+registerReq.setNMVersion(YarnVersionInfo.getVersion());
+registerReq.setNodeLabels(toSet("A"));
+RegisterNodeManagerResponse response =
+

[20/20] hadoop git commit: MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so that user's client can load the conf files directly. Contributed by Robert Kanter.

2015-03-30 Thread zjshen
MAPREDUCE-6288. Changed permissions on JobHistory server's done directory so 
that user's client can load the conf files directly. Contributed by Robert 
Kanter.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5c42a674
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5c42a674
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5c42a674

Branch: refs/heads/YARN-2928
Commit: 5c42a674f8a497159a9cf76b834625f3e2d98122
Parents: 4e4f1b8
Author: Vinod Kumar Vavilapalli vino...@apache.org
Authored: Mon Mar 30 10:27:19 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:49 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt|  4 ++
 .../v2/jobhistory/JobHistoryUtils.java  |  4 +-
 .../mapreduce/v2/hs/HistoryFileManager.java | 31 -
 .../mapreduce/v2/hs/TestHistoryFileManager.java | 73 
 4 files changed, 108 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c42a674/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b0367a7..69ff96b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -510,6 +510,10 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6285. ClientServiceDelegate should not retry upon
 AuthenticationException. (Jonathan Eagles via ozawa)
 
+MAPREDUCE-6288. Changed permissions on JobHistory server's done directory
+so that user's client can load the conf files directly. (Robert Kanter via
+vinodkv)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c42a674/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
index e279c03..8966e4e 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java
@@ -72,7 +72,7 @@ public class JobHistoryUtils {
* Permissions for the history done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_PERMISSION =
-FsPermission.createImmutable((short) 0770); 
+FsPermission.createImmutable((short) 0771);
 
   public static final FsPermission HISTORY_DONE_FILE_PERMISSION =
 FsPermission.createImmutable((short) 0770); // rwx------
@@ -81,7 +81,7 @@ public class JobHistoryUtils {
* Umask for the done dir and derivatives.
*/
   public static final FsPermission HISTORY_DONE_DIR_UMASK = FsPermission
-  .createImmutable((short) (0770 ^ 0777));
+  .createImmutable((short) (0771 ^ 0777));
 
   
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5c42a674/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
index 65f8a4f..5377075 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/HistoryFileManager.java
@@ -571,8 +571,10 @@ public class HistoryFileManager extends AbstractService {
   new Path(doneDirPrefix));
   doneDirFc = FileContext.getFileContext(doneDirPrefixPath.toUri(), conf);
   doneDirFc.setUMask(JobHistoryUtils.HISTORY_DONE_DIR_UMASK);
-  mkdir(doneDirFc, doneDirPrefixPath, new FsPermission(
-  JobHistoryUtils.HISTORY_DONE_DIR_PERMISSION));
+  FsPermission doneDirPerm = new FsPermission(
+  
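
The mkdir call is truncated above, but the permission arithmetic behind the fix is worth spelling out: moving the done dir from 0770 to 0771 grants "other" only the execute (traverse) bit, so a user's client can walk into the done-directory tree to read its own conf files without being able to list other users' history. A small worked example of the octal math from the JobHistoryUtils hunk:

    public class DoneDirPerms {
      public static void main(String[] args) {
        // 0771 = rwxrwx--x: owner and group keep full access, "other"
        // may traverse but not list. The matching umask clears only the
        // read and write bits for "other".
        short perm = 0771;
        short umask = (short) (0771 ^ 0777); // = 0006
        System.out.printf("perm=%o umask=%o%n", perm, umask); // perm=771 umask=6
      }
    }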

[04/20] hadoop git commit: HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zjshen
HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy 
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a639fdd4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a639fdd4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a639fdd4

Branch: refs/heads/YARN-2928
Commit: a639fdd43b6ade9637e18c30e3e2dfb5d940ceb2
Parents: f402f6d
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Fri Mar 27 23:15:51 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:46 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/tools/DistCp.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a639fdd4/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a7d4adc..febbf6b 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -481,6 +481,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11724. DistCp throws NPE when the target directory is root.
 (Lei (Eddy) Xu via Yongjun Zhang) 
 
+HADOOP-11760. Fix typo of javadoc in DistCp. (Brahma Reddy Battula via
+ozawa).
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a639fdd4/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
--
diff --git 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
index ada4b25..6921a1e 100644
--- 
a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
+++ 
b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
@@ -401,7 +401,7 @@ public class DistCp extends Configured implements Tool {
* job staging directory
*
* @return Returns the working folder information
-   * @throws Exception - EXception if any
+   * @throws Exception - Exception if any
*/
   private Path createMetaFolderPath() throws Exception {
 Configuration configuration = getConf();



[09/20] hadoop git commit: HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties. Contributed by Abhiraj Butala.

2015-03-30 Thread zjshen
HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties. Contributed 
by Abhiraj Butala.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e700a4b9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e700a4b9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e700a4b9

Branch: refs/heads/YARN-2928
Commit: e700a4b9d0d008643241496200efd3746609350c
Parents: 7d4d615
Author: Akira Ajisaka aajis...@apache.org
Authored: Mon Mar 30 10:52:15 2015 +0900
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/contrib/bkjournal/src/test/resources/log4j.properties | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e700a4b9/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 496db06..e026f85 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -347,6 +347,9 @@ Release 2.8.0 - UNRELEASED
 HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating
 an encryption zone. (awang via asuresh)
 
+HDFS-6263. Remove DRFA.MaxBackupIndex config from log4j.properties.
+(Abhiraj Butala via aajisaka)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e700a4b9/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
index 8a6b217..f66c84b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/test/resources/log4j.properties
@@ -53,8 +53,6 @@ 
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p - [%t:%C{
 
 # Max log file size of 10MB
 log4j.appender.ROLLINGFILE.MaxFileSize=10MB
-# uncomment the next line to limit number of backup files
-#log4j.appender.ROLLINGFILE.MaxBackupIndex=10
 
 log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
 log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p 
[%t:%C{1}@%L] - %m%n



[05/20] hadoop git commit: HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed by Gautam Gopalakrishnan.

2015-03-30 Thread zjshen
HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs. Contributed 
by Gautam Gopalakrishnan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d4d6150
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d4d6150
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d4d6150

Branch: refs/heads/YARN-2928
Commit: 7d4d6150f8c81a242f7676e27d65db9f31136007
Parents: 74e941d
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 29 00:45:01 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../namenode/metrics/TestNameNodeMetrics.java   | 84 
 3 files changed, 88 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d4d6150/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f7cc2bc..496db06 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -351,6 +351,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HDFS-7501. TransactionsSinceLastCheckpoint can be negative on SBNs.
+(Gautam Gopalakrishnan via harsh)
+
 HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown()
 (Rakesh R via vinayakumarb)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d4d6150/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d0999b8..0e0f484 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4784,7 +4784,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   @Metric({"TransactionsSinceLastCheckpoint",
   "Number of transactions since last checkpoint"})
   public long getTransactionsSinceLastCheckpoint() {
-return getEditLog().getLastWrittenTxId() -
+return getFSImage().getLastAppliedOrWrittenTxId() -
 getFSImage().getStorage().getMostRecentCheckpointTxId();
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d4d6150/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
index 011db3c..64ea1e4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
@@ -22,12 +22,16 @@ import static 
org.apache.hadoop.test.MetricsAsserts.assertCounter;
 import static org.apache.hadoop.test.MetricsAsserts.assertGauge;
 import static org.apache.hadoop.test.MetricsAsserts.assertQuantileGauges;
 import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
+import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 import java.io.DataInputStream;
 import java.io.IOException;
 import java.util.Random;
+import com.google.common.collect.ImmutableList;
+import com.google.common.io.Files;
 
+import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.commons.logging.impl.Log4JLogger;
 import org.apache.hadoop.conf.Configuration;
@@ -39,6 +43,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.MiniDFSNNTopology;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
 import org.apache.hadoop.hdfs.protocol.LocatedBlock;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
@@ -47,7 +52,9 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.datanode.DataNode;
 import 
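
The import list is truncated above. The reasoning behind the one-line fix: a standby NameNode only applies tailed edits and never writes them, so getLastWrittenTxId() can sit behind the last checkpoint and drive the metric negative; using the last applied-or-written transaction id keeps it accurate on both active and standby nodes. A sketch with hypothetical transaction ids (assuming getLastAppliedOrWrittenTxId is effectively the max of the two counters):

    public class SbnMetricDemo {
      public static void main(String[] args) {
        // Hypothetical state on a standby NameNode: tailed edits are
        // applied (lastAppliedTxId advances) but never written locally.
        long lastWrittenTxId = 1000;
        long lastAppliedTxId = 5000;
        long mostRecentCheckpointTxId = 4000;

        long before = lastWrittenTxId - mostRecentCheckpointTxId;  // -3000
        long after = Math.max(lastAppliedTxId, lastWrittenTxId)
            - mostRecentCheckpointTxId;                            // 1000
        System.out.println(before + " -> " + after);
      }
    }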

[03/20] hadoop git commit: HDFS-7990. IBR delete ack should not be delayed. Contributed by Daryn Sharp.

2015-03-30 Thread zjshen
HDFS-7990. IBR delete ack should not be delayed. Contributed by Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f402f6d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f402f6d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f402f6d5

Branch: refs/heads/YARN-2928
Commit: f402f6d592569601efee5682316aad0a403447b3
Parents: ee35265
Author: Kihwal Lee kih...@apache.org
Authored: Fri Mar 27 09:05:17 2015 -0500
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:46 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt|  2 ++
 .../hdfs/server/datanode/BPServiceActor.java   | 17 +++--
 .../apache/hadoop/hdfs/server/datanode/DNConf.java |  2 --
 .../hdfs/server/datanode/SimulatedFSDataset.java   | 13 -
 .../datanode/TestIncrementalBlockReports.java  |  4 ++--
 5 files changed, 23 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f402f6d5/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index dff8bd2..72ea4fb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -342,6 +342,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-7928. Scanning blocks from disk during rolling upgrade startup takes
 a lot of time if disks are busy (Rushabh S Shah via kihwal)
 
+HDFS-7990. IBR delete ack should not be delayed. (daryn via kihwal)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f402f6d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
index 10cce45..3b4756c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
@@ -82,12 +82,11 @@ class BPServiceActor implements Runnable {
 
   final BPOfferService bpos;
   
-  // lastBlockReport, lastDeletedReport and lastHeartbeat may be assigned/read
+  // lastBlockReport and lastHeartbeat may be assigned/read
   // by testing threads (through BPServiceActor#triggerXXX), while also 
   // assigned/read by the actor thread. Thus they should be declared as 
volatile
   // to make sure the happens-before consistency.
   volatile long lastBlockReport = 0;
-  volatile long lastDeletedReport = 0;
 
   boolean resetBlockReportTime = true;
 
@@ -417,10 +416,10 @@ class BPServiceActor implements Runnable {
   @VisibleForTesting
   void triggerDeletionReportForTests() {
 synchronized (pendingIncrementalBRperStorage) {
-  lastDeletedReport = 0;
+  sendImmediateIBR = true;
   pendingIncrementalBRperStorage.notifyAll();
 
-  while (lastDeletedReport == 0) {
+  while (sendImmediateIBR) {
 try {
   pendingIncrementalBRperStorage.wait(100);
 } catch (InterruptedException e) {
@@ -465,7 +464,6 @@ class BPServiceActor implements Runnable {
 // or we will report an RBW replica after the BlockReport already reports
 // a FINALIZED one.
 reportReceivedDeletedBlocks();
-lastDeletedReport = startTime;
 
 long brCreateStartTime = monotonicNow();
 MapDatanodeStorage, BlockListAsLongs perVolumeBlockLists =
@@ -674,7 +672,6 @@ class BPServiceActor implements Runnable {
*/
   private void offerService() throws Exception {
 LOG.info("For namenode " + nnAddr + " using"
-+ " DELETEREPORT_INTERVAL of " + dnConf.deleteReportInterval + " msec "
 + " BLOCKREPORT_INTERVAL of " + dnConf.blockReportInterval + "msec"
 + " CACHEREPORT_INTERVAL of " + dnConf.cacheReportInterval + "msec"
 + " Initial delay: " + dnConf.initialBlockReportDelay + "msec"
@@ -690,7 +687,9 @@ class BPServiceActor implements Runnable {
 //
 // Every so often, send heartbeat or block-report
 //
-if (startTime - lastHeartbeat >= dnConf.heartBeatInterval) {
+boolean sendHeartbeat =
+startTime - lastHeartbeat >= dnConf.heartBeatInterval;
+if (sendHeartbeat) {
   //
   // All heartbeat messages include following info:
   // -- Datanode name
@@ -729,10 +728,8 @@ class BPServiceActor implements Runnable {
 }
   }
 }
-if (sendImmediateIBR ||
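
The offerService hunk is truncated above, but the shape of the change is clear: instead of gating deletion acks on a separate DELETEREPORT_INTERVAL timer, the actor sends the incremental block report as soon as the sendImmediateIBR flag is raised, and the test hook simply waits for that flag to clear. A minimal sketch of that volatile-flag plus wait/notify pattern (a standalone illustration, not the actual BPServiceActor):

    public class ImmediateReportTrigger {
      private final Object lock = new Object();
      private volatile boolean sendImmediateIBR = false;

      // Test-side trigger: raise the flag, wake the actor, then wait
      // until the actor has sent the report and cleared the flag.
      void trigger() throws InterruptedException {
        synchronized (lock) {
          sendImmediateIBR = true;
          lock.notifyAll();
          while (sendImmediateIBR) {
            lock.wait(100);
          }
        }
      }

      // Actor-side loop body: report immediately when the flag is up,
      // with no fixed deletion-report interval in between.
      void actOnce() {
        if (sendImmediateIBR) {
          // ... send incremental block report to the NameNode ...
          synchronized (lock) {
            sendImmediateIBR = false;
            lock.notifyAll();
          }
        }
      }
    }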

[10/20] hadoop git commit: HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating an encryption zone. (awang via asuresh)

2015-03-30 Thread zjshen
HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating an 
encryption zone. (awang via asuresh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8f63bd79
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8f63bd79
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8f63bd79

Branch: refs/heads/YARN-2928
Commit: 8f63bd795da85602a1e21c8951fd978cc7e76e77
Parents: a3d3778
Author: Arun Suresh asur...@apache.org
Authored: Fri Mar 27 19:23:45 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f63bd79/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 72ea4fb..af1dd60 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -344,6 +344,9 @@ Release 2.8.0 - UNRELEASED
 
 HDFS-7990. IBR delete ack should not be delayed. (daryn via kihwal)
 
+HDFS-8004. Use KeyProviderCryptoExtension#warmUpEncryptedKeys when creating
+an encryption zone. (awang via asuresh)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8f63bd79/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 1226a26..d0999b8 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -7957,7 +7957,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
throw new IOException("Key " + keyName + " doesn't exist.");
   }
   // If the provider supports pool for EDEKs, this will fill in the pool
-  generateEncryptedDataEncryptionKey(keyName);
+  provider.warmUpEncryptedKeys(keyName);
   createEncryptionZoneInt(src, metadata.getCipher(),
   keyName, logRetryCache);
 } catch (AccessControlException e) {
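
The one-line swap above is easier to see outside the NameNode:
warmUpEncryptedKeys asks the provider to pre-fill its EDEK cache for the zone
key directly, where the old call produced a throwaway EDEK and filled the pool
only as a side effect. A minimal sketch, assuming an already-configured
KeyProvider (the wiring below is illustrative, not the FSNamesystem code path):

    import java.io.IOException;
    import org.apache.hadoop.crypto.key.KeyProvider;
    import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;

    class WarmUpSketch {
      // keyName is the encryption-zone key; 'underlying' stands in for
      // however the provider is obtained in a real deployment.
      static void warmZoneKey(KeyProvider underlying, String keyName)
          throws IOException {
        KeyProviderCryptoExtension provider =
            KeyProviderCryptoExtension.createKeyProviderCryptoExtension(underlying);
        // Pre-fill the provider's cache of encrypted data encryption keys
        // (EDEKs) for keyName, so the first files written into the new zone
        // don't each block on a KMS round trip.
        provider.warmUpEncryptedKeys(keyName);
      }
    }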



[06/20] hadoop git commit: MAPREDUCE-6291. Correct mapred queue usage command. Contributed by Brahma Reddy Battula.

2015-03-30 Thread zjshen
MAPREDUCE-6291. Correct mapred queue usage command. Contributed by Brahma Reddy 
Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa7cc99c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa7cc99c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa7cc99c

Branch: refs/heads/YARN-2928
Commit: fa7cc99cd158168b8c7ff32428c3e2409315d7cb
Parents: 7fa9e0e
Author: Harsh J ha...@cloudera.com
Authored: Sat Mar 28 11:57:21 2015 +0530
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/mapred/JobQueueClient.java| 2 +-
 .../src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java   | 2 +-
 .../src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java  | 2 +-
 .../src/main/java/org/apache/hadoop/tools/HadoopArchives.java | 2 +-
 5 files changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa7cc99c/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ce16510..b0367a7 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -256,6 +256,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6291. Correct mapred queue usage command.
(Brahma Reddy Battula via harsh)
+
 MAPREDUCE-579. Streaming slowmatch documentation. (harsh)
 
 MAPREDUCE-6287. Deprecated methods in org.apache.hadoop.examples.Sort

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa7cc99c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
index 097e338..81f6140 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobQueueClient.java
@@ -224,7 +224,7 @@ class JobQueueClient extends Configured implements Tool {
   }
 
   private void displayUsage(String cmd) {
-String prefix = "Usage: JobQueueClient ";
+String prefix = "Usage: queue ";
 if ("-queueinfo".equals(cmd)) {
   System.err.println(prefix + "[" + cmd + " <job-queue-name> [-showJobs]]");
 } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa7cc99c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
index 8f4259e..4f5b6a1 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/pipes/Submitter.java
@@ -363,7 +363,7 @@ public class Submitter extends Configured implements Tool {
 void printUsage() {
   // The CLI package should do this for us, but I can't figure out how
   // to make it print something reasonable.
-  System.out.println("bin/hadoop pipes");
+  System.out.println("Usage: pipes ");
   System.out.println("  [-input <path>] // Input directory");
   System.out.println("  [-output <path>] // Output directory");
   System.out.println("  [-jar <jar file> // jar filename");
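
The same convention drives these hunks: help text should name the subcommand a
user actually types ("mapred queue", "mapred pipes"), not the implementing
class or a stale bin/hadoop path. A hypothetical Tool subclass showing the
pattern (class and strings below are illustrative, not from the patch):

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;

    public class QueueUsageExample extends Configured implements Tool {
      private void displayUsage() {
        // "Usage: queue ..." matches the invocation "mapred queue ...";
        // "Usage: JobQueueClient ..." named a class users never type.
        System.err.println("Usage: queue [-queueinfo <job-queue-name> [-showJobs]]");
      }

      @Override
      public int run(String[] args) {
        displayUsage();   // real tools would parse args before falling back
        return -1;
      }
    }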

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa7cc99c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
 

[07/20] hadoop git commit: HDFS-7700. Document quota support for storage types. (Contributed by Xiaoyu Yao)

2015-03-30 Thread zjshen
HDFS-7700. Document quota support for storage types. (Contributed by Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7fa9e0e6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7fa9e0e6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7fa9e0e6

Branch: refs/heads/YARN-2928
Commit: 7fa9e0e610669eea0f65ed513dcb6832aa0993ba
Parents: 8f63bd7
Author: Arpit Agarwal a...@apache.org
Authored: Fri Mar 27 19:49:26 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 30 12:10:47 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../src/site/markdown/HDFSCommands.md   |  8 ++--
 .../src/site/markdown/HdfsQuotaAdminGuide.md| 41 ++--
 3 files changed, 45 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7fa9e0e6/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index af1dd60..f7cc2bc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1311,6 +1311,9 @@ Release 2.7.0 - UNRELEASED
   HDFS-7824. GetContentSummary API and its namenode implementation for
   Storage Type Quota/Usage. (Xiaoyu Yao via Arpit Agarwal)
 
+  HDFS-7700. Document quota support for storage types. (Xiaoyu Yao via
+  Arpit Agarwal)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7fa9e0e6/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 191b5bc..bdb051b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -307,8 +307,8 @@ Usage:
   [-refreshNodes]
  [-setQuota <quota> <dirname>...<dirname>]
  [-clrQuota <dirname>...<dirname>]
-  [-setSpaceQuota <quota> <dirname>...<dirname>]
-  [-clrSpaceQuota <dirname>...<dirname>]
+  [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
+  [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
  [-setStoragePolicy <path> <policyName>]
  [-getStoragePolicy <path>]
   [-finalizeUpgrade]
@@ -342,8 +342,8 @@ Usage:
 | `-refreshNodes` | Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned. |
 | `-setQuota` \<quota\> \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
 | `-clrQuota` \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
-| `-setSpaceQuota` \<quota\> \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
-| `-clrSpaceQuota` \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
+| `-setSpaceQuota` \<quota\> `[-storageType <storagetype>]` \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
+| `-clrSpaceQuota` `[-storageType <storagetype>]` \<dirname\>...\<dirname\> | See [HDFS Quotas Guide](../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands) for the detail. |
 | `-setStoragePolicy` \<path\> \<policyName\> | Set a storage policy to a file or a directory. |
 | `-getStoragePolicy` \<path\> | Get the storage policy of a file or a directory. |
 | `-finalizeUpgrade` | Finalize upgrade of HDFS. Datanodes delete their previous version working directories, followed by Namenode doing the same. This completes the upgrade process. |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7fa9e0e6/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md 
b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
index a1bcd78..7c15bb1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsQuotaAdminGuide.md
@@ -19,6 +19,7 @@ HDFS Quotas Guide
 * 
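
For completeness, the feature these docs describe is also exposed
programmatically. A hedged sketch of what "hdfs dfsadmin -setSpaceQuota 10g
-storageType SSD /quota/dir" amounts to, assuming a 2.7+ client (the path and
the 10 GB figure are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.StorageType;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    class StorageTypeQuotaSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path dir = new Path("/quota/dir");  // illustrative directory
        DistributedFileSystem dfs =
            (DistributedFileSystem) dir.getFileSystem(conf);
        // Cap SSD usage under /quota/dir at 10 GB; other storage types and
        // the overall space quota are unaffected.
        dfs.setQuotaByStorageType(dir, StorageType.SSD,
            10L * 1024 * 1024 * 1024);
      }
    }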