hadoop git commit: YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. Contributed by Naganarasimha G R.

2015-03-23 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk 0b9f12c84 -> 82eda771e


YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. 
Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82eda771
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82eda771
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82eda771

Branch: refs/heads/trunk
Commit: 82eda771e05cf2b31788ee1582551e65f1c0f9aa
Parents: 0b9f12c
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 24 00:25:30 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 24 00:25:30 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../logaggregation/TestLogAggregationService.java| 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/82eda771/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f8c1a76..e04624e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -816,6 +816,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3369. Missing NullPointer check in AppSchedulingInfo causes RM to die.
 (Brahma Reddy Battula via wangda)
 
+YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
+YARN-2777. (Naganarasimha G R via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/82eda771/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index 9cbf153..b1de9cb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -804,7 +804,9 @@ public class TestLogAggregationService extends 
BaseContainerManagerTest {
     Map<String, String> thisContainerMap = logMap.remove(containerStr);
     Assert.assertEquals(numOfContainerLogs, thisContainerMap.size());
     for (String fileType : logFiles) {
-      String expectedValue = containerStr + " Hello " + fileType + "!";
+      String expectedValue =
+          containerStr + " Hello " + fileType + "!End of LogType:"
+          + fileType;
       LOG.info("Expected log-content : " + new String(expectedValue));
       String foundValue = thisContainerMap.remove(fileType);
       Assert.assertNotNull(cId + " " + fileType

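For context, a minimal standalone sketch of the string the updated assertion builds, assuming (per YARN-2777) that reading back an aggregated log now yields each file's content followed by its "End of LogType:" footer; the container id and file type below are made-up values, not the ones the real test derives:

    public class ExpectedLogContentSketch {
      public static void main(String[] args) {
        // Hypothetical values; the real test derives these from the running app.
        String containerStr = "container_1_0001_01_000001";
        String fileType = "stdout";
        // The "End of LogType:<file>" footer is now part of the content the
        // test reads back, so the expected value must include it.
        String expectedValue =
            containerStr + " Hello " + fileType + "!End of LogType:"
            + fileType;
        System.out.println(expectedValue);
      }
    }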


hadoop git commit: YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. Contributed by Naganarasimha G R.

2015-03-23 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 4fee66294 -> cbacf2075


YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. 
Contributed by Naganarasimha G R.

(cherry picked from commit 82eda771e05cf2b31788ee1582551e65f1c0f9aa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbacf207
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbacf207
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbacf207

Branch: refs/heads/branch-2
Commit: cbacf20755ffda0545f0eb01851d53b63d1487ea
Parents: 4fee662
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 24 00:25:30 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 24 00:25:52 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../logaggregation/TestLogAggregationService.java| 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbacf207/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 6ef18d5..1b3ed2c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -771,6 +771,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3369. Missing NullPointer check in AppSchedulingInfo causes RM to die.
 (Brahma Reddy Battula via wangda)
 
+YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
+YARN-2777. (Naganarasimha G R via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbacf207/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index 9cbf153..b1de9cb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -804,7 +804,9 @@ public class TestLogAggregationService extends 
BaseContainerManagerTest {
     Map<String, String> thisContainerMap = logMap.remove(containerStr);
     Assert.assertEquals(numOfContainerLogs, thisContainerMap.size());
     for (String fileType : logFiles) {
-      String expectedValue = containerStr + " Hello " + fileType + "!";
+      String expectedValue =
+          containerStr + " Hello " + fileType + "!End of LogType:"
+          + fileType;
       LOG.info("Expected log-content : " + new String(expectedValue));
       String foundValue = thisContainerMap.remove(fileType);
       Assert.assertNotNull(cId + " " + fileType



hadoop git commit: YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. Contributed by Naganarasimha G R.

2015-03-23 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 929b04ce3 -> d2e19160d


YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. 
Contributed by Naganarasimha G R.

(cherry picked from commit 82eda771e05cf2b31788ee1582551e65f1c0f9aa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2e19160
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2e19160
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2e19160

Branch: refs/heads/branch-2.7
Commit: d2e19160dc36f38fce4c1bc9c09f8419fab93b4e
Parents: 929b04c
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 24 00:25:30 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 24 00:26:08 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../logaggregation/TestLogAggregationService.java| 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2e19160/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 30a05a1..ef816fc 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -729,6 +729,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3369. Missing NullPointer check in AppSchedulingInfo causes RM to die.
 (Brahma Reddy Battula via wangda)
 
+YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
+YARN-2777. (Naganarasimha G R via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2e19160/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index df51a0d..938ce9c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -804,7 +804,9 @@ public class TestLogAggregationService extends 
BaseContainerManagerTest {
     Map<String, String> thisContainerMap = logMap.remove(containerStr);
     Assert.assertEquals(numOfContainerLogs, thisContainerMap.size());
     for (String fileType : logFiles) {
-      String expectedValue = containerStr + " Hello " + fileType + "!";
+      String expectedValue =
+          containerStr + " Hello " + fileType + "!End of LogType:"
+          + fileType;
       LOG.info("Expected log-content : " + new String(expectedValue));
       String foundValue = thisContainerMap.remove(fileType);
       Assert.assertNotNull(cId + " " + fileType



hadoop git commit: HDFS-7960. The full block report should prune zombie storages even if they're not empty. Contributed by Colin McCabe and Eddy Xu.

2015-03-23 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/trunk d7e3c3364 -> 50ee8f4e6


HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50ee8f4e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50ee8f4e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50ee8f4e

Branch: refs/heads/trunk
Commit: 50ee8f4e67a66aa77c5359182f61f3e951844db6
Parents: d7e3c33
Author: Andrew Wang w...@apache.org
Authored: Mon Mar 23 22:00:34 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Mon Mar 23 22:00:34 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../DatanodeProtocolClientSideTranslatorPB.java |   5 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  15 +++
 .../server/blockmanagement/BlockManager.java|  53 +++-
 .../blockmanagement/DatanodeDescriptor.java |  51 ++-
 .../blockmanagement/DatanodeStorageInfo.java|  13 +-
 .../hdfs/server/datanode/BPServiceActor.java|  34 +++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  11 +-
 .../server/protocol/BlockReportContext.java |  52 +++
 .../hdfs/server/protocol/DatanodeProtocol.java  |  10 +-
 .../src/main/proto/DatanodeProtocol.proto   |  14 ++
 .../hdfs/protocol/TestBlockListAsLongs.java |   7 +-
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../TestNameNodePrunesMissingStorages.java  | 135 ++-
 .../server/datanode/BlockReportTestBase.java|   4 +-
 .../server/datanode/TestBPOfferService.java |  10 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   4 +-
 .../datanode/TestDataNodeVolumeFailure.java |   3 +-
 .../TestDatanodeProtocolRetryPolicy.java|   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   7 +-
 .../TestNNHandlesBlockReportPerStorage.java |   7 +-
 .../TestNNHandlesCombinedBlockReport.java   |   4 +-
 .../server/datanode/TestTriggerBlockReport.java |   7 +-
 .../server/namenode/NNThroughputBenchmark.java  |   9 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   4 +-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   4 +-
 27 files changed, 433 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/50ee8f4e/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d2891e3..3dd5fb3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1241,6 +1241,9 @@ Release 2.7.0 - UNRELEASED
 provided by the client is larger than the one stored in the datanode.
 (Brahma Reddy Battula via szetszwo)
 
+HDFS-7960. The full block report should prune zombie storages even if
+they're not empty. (cmccabe and Eddy Xu via wang)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/50ee8f4e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
index c4003f1..825e835 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
@@ -47,6 +47,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReportBadBlo
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageBlockReportProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageReceivedDeletedBlocksProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.VersionRequestProto;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
@@ -169,7 +170,8 @@ public class DatanodeProtocolClientSideTranslatorPB 
implements
 
   @Override
   public DatanodeCommand blockReport(DatanodeRegistration registration,
-  

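The BlockReportContext introduced here (see BlockReportContext.java in the diffstat) tells the NameNode which RPC of a possibly multi-RPC full block report it is processing, so it can wait for the whole report before pruning storages the DataNode no longer has, zombie or not. A minimal sketch of that idea with illustrative names, not the real class:

    public class BlockReportContextSketch {
      private final int totalRpcs; // how many RPCs make up this full report
      private final int curRpc;    // zero-based index of this RPC in the report
      private final long reportId; // identifier shared by all RPCs of one report

      public BlockReportContextSketch(int totalRpcs, int curRpc, long reportId) {
        this.totalRpcs = totalRpcs;
        this.curRpc = curRpc;
        this.reportId = reportId;
      }

      // Only after the last RPC of a report can unreported storages be
      // treated as zombies and pruned safely.
      public boolean isLastRpc() {
        return curRpc == totalRpcs - 1;
      }

      public long getReportId() {
        return reportId;
      }
    }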
hadoop git commit: HDFS-7960. The full block report should prune zombie storages even if they're not empty. Contributed by Colin McCabe and Eddy Xu.

2015-03-23 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 87079cde7 -> af0af28af


HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu.

(cherry picked from commit 50ee8f4e67a66aa77c5359182f61f3e951844db6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/af0af28a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/af0af28a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/af0af28a

Branch: refs/heads/branch-2.7
Commit: af0af28afc52bc6bc6cf73e5c63f938aee07cad7
Parents: 87079cd
Author: Andrew Wang w...@apache.org
Authored: Mon Mar 23 22:00:34 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Mon Mar 23 22:01:37 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../DatanodeProtocolClientSideTranslatorPB.java |   5 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  15 +++
 .../server/blockmanagement/BlockManager.java|  53 +++-
 .../blockmanagement/DatanodeDescriptor.java |  51 ++-
 .../blockmanagement/DatanodeStorageInfo.java|  13 +-
 .../hdfs/server/datanode/BPServiceActor.java|  34 +++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  11 +-
 .../server/protocol/BlockReportContext.java |  52 +++
 .../hdfs/server/protocol/DatanodeProtocol.java  |  10 +-
 .../src/main/proto/DatanodeProtocol.proto   |  14 ++
 .../hdfs/protocol/TestBlockListAsLongs.java |   7 +-
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../TestNameNodePrunesMissingStorages.java  | 135 ++-
 .../server/datanode/BlockReportTestBase.java|   4 +-
 .../server/datanode/TestBPOfferService.java |  10 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   4 +-
 .../datanode/TestDataNodeVolumeFailure.java |   3 +-
 .../TestDatanodeProtocolRetryPolicy.java|   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   7 +-
 .../TestNNHandlesBlockReportPerStorage.java |   7 +-
 .../TestNNHandlesCombinedBlockReport.java   |   4 +-
 .../server/datanode/TestTriggerBlockReport.java |   7 +-
 .../server/namenode/NNThroughputBenchmark.java  |   9 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   4 +-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   4 +-
 27 files changed, 433 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/af0af28a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fe43d05..7e62d63 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -916,6 +916,9 @@ Release 2.7.0 - UNRELEASED
 provided by the client is larger than the one stored in the datanode.
 (Brahma Reddy Battula via szetszwo)
 
+HDFS-7960. The full block report should prune zombie storages even if
+they're not empty. (cmccabe and Eddy Xu via wang)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/af0af28a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
index c4003f1..825e835 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
@@ -47,6 +47,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReportBadBlo
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageBlockReportProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageReceivedDeletedBlocksProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.VersionRequestProto;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
@@ -169,7 +170,8 @@ public class DatanodeProtocolClientSideTranslatorPB 
implements
 
   @Override
   

hadoop git commit: HDFS-7960. The full block report should prune zombie storages even if they're not empty. Contributed by Colin McCabe and Eddy Xu.

2015-03-23 Thread wang
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 fe693b72d -> 2f46ee50b


HDFS-7960. The full block report should prune zombie storages even if they're 
not empty. Contributed by Colin McCabe and Eddy Xu.

(cherry picked from commit 50ee8f4e67a66aa77c5359182f61f3e951844db6)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2f46ee50
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2f46ee50
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2f46ee50

Branch: refs/heads/branch-2
Commit: 2f46ee50bd4efc82ba3d30bd36f7637ea9d9714e
Parents: fe693b7
Author: Andrew Wang w...@apache.org
Authored: Mon Mar 23 22:00:34 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Mon Mar 23 22:00:44 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../DatanodeProtocolClientSideTranslatorPB.java |   5 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   4 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  15 +++
 .../server/blockmanagement/BlockManager.java|  53 +++-
 .../blockmanagement/DatanodeDescriptor.java |  51 ++-
 .../blockmanagement/DatanodeStorageInfo.java|  13 +-
 .../hdfs/server/datanode/BPServiceActor.java|  34 +++--
 .../hdfs/server/namenode/NameNodeRpcServer.java |  11 +-
 .../server/protocol/BlockReportContext.java |  52 +++
 .../hdfs/server/protocol/DatanodeProtocol.java  |  10 +-
 .../src/main/proto/DatanodeProtocol.proto   |  14 ++
 .../hdfs/protocol/TestBlockListAsLongs.java |   7 +-
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../TestNameNodePrunesMissingStorages.java  | 135 ++-
 .../server/datanode/BlockReportTestBase.java|   4 +-
 .../server/datanode/TestBPOfferService.java |  10 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   4 +-
 .../datanode/TestDataNodeVolumeFailure.java |   3 +-
 .../TestDatanodeProtocolRetryPolicy.java|   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   7 +-
 .../TestNNHandlesBlockReportPerStorage.java |   7 +-
 .../TestNNHandlesCombinedBlockReport.java   |   4 +-
 .../server/datanode/TestTriggerBlockReport.java |   7 +-
 .../server/namenode/NNThroughputBenchmark.java  |   9 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   4 +-
 .../hdfs/server/namenode/ha/TestDNFencing.java  |   4 +-
 27 files changed, 433 insertions(+), 49 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f46ee50/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e7d314d..15729ef 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -941,6 +941,9 @@ Release 2.7.0 - UNRELEASED
 provided by the client is larger than the one stored in the datanode.
 (Brahma Reddy Battula via szetszwo)
 
+HDFS-7960. The full block report should prune zombie storages even if
+they're not empty. (cmccabe and Eddy Xu via wang)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2f46ee50/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
index c4003f1..825e835 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/DatanodeProtocolClientSideTranslatorPB.java
@@ -47,6 +47,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.ReportBadBlo
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageBlockReportProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.StorageReceivedDeletedBlocksProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.VersionRequestProto;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
@@ -169,7 +170,8 @@ public class DatanodeProtocolClientSideTranslatorPB 
implements
 
   @Override
   

hadoop git commit: HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp provided by the client is larger than the one stored in the datanode. Contributed by Brahma Reddy Battula

2015-03-23 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/trunk 9fae455e2 -> d7e3c3364


HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d7e3c336
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d7e3c336
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d7e3c336

Branch: refs/heads/trunk
Commit: d7e3c3364eb904f55a878bc14c331952f9dadab2
Parents: 9fae455
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue Mar 24 13:49:17 2015 +0900
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue Mar 24 13:49:17 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 4 
 .../org/apache/hadoop/hdfs/server/datanode/BlockSender.java   | 7 +++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7e3c336/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b88b7e3..d2891e3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1237,6 +1237,10 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts 
(brandonli)
 
+HDFS-7884. Fix NullPointerException in BlockSender when the generation 
stamp
+provided by the client is larger than the one stored in the datanode.
+(Brahma Reddy Battula via szetszwo)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d7e3c336/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index f4cde11..e76b93a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -246,6 +246,13 @@ class BlockSender implements java.io.Closeable {
   if (replica.getGenerationStamp() < block.getGenerationStamp()) {
     throw new IOException("Replica gen stamp < block genstamp, block="
         + block + ", replica=" + replica);
+  } else if (replica.getGenerationStamp() > block.getGenerationStamp()) {
+    if (DataNode.LOG.isDebugEnabled()) {
+      DataNode.LOG.debug("Bumping up the client provided"
+          + " block's genstamp to latest " + replica.getGenerationStamp()
+          + " for block " + block);
+    }
+    block.setGenerationStamp(replica.getGenerationStamp());
   }
   if (replicaVisibleLength < 0) {
     throw new IOException("Replica is not readable, block="

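A standalone sketch of the reconciliation rule the hunk above implements, using bare long stamps (hypothetical values) in place of the real Replica and Block objects: a replica that is behind the client remains a hard error, while a replica that is ahead now silently updates the client-provided stamp instead of tripping over the stale value later:

    import java.io.IOException;

    public class GenStampReconcileSketch {
      public static void main(String[] args) throws IOException {
        long replicaGenStamp = 1005L; // stamp the datanode holds
        long blockGenStamp = 1001L;   // stamp the client sent

        if (replicaGenStamp < blockGenStamp) {
          throw new IOException("Replica gen stamp < block genstamp");
        } else if (replicaGenStamp > blockGenStamp) {
          // Datanode is ahead: adopt its newer stamp, as the fix does.
          blockGenStamp = replicaGenStamp;
        }
        System.out.println("effective genstamp = " + blockGenStamp);
      }
    }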


hadoop git commit: HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp provided by the client is larger than the one stored in the datanode. Contributed by Brahma Reddy Battula

2015-03-23 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 cbdcdfad6 -> fe693b72d


HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe693b72
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe693b72
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe693b72

Branch: refs/heads/branch-2
Commit: fe693b72dec703ecbf4ab3919d61d06ea8735a9e
Parents: cbdcdfa
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue Mar 24 13:49:17 2015 +0900
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue Mar 24 13:51:31 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 4 
 .../org/apache/hadoop/hdfs/server/datanode/BlockSender.java   | 7 +++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe693b72/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index febec02..e7d314d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -937,6 +937,10 @@ Release 2.7.0 - UNRELEASED
 HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2.
 (Brahma Reddy Battula via aajisaka)
 
+HDFS-7884. Fix NullPointerException in BlockSender when the generation 
stamp
+provided by the client is larger than the one stored in the datanode.
+(Brahma Reddy Battula via szetszwo)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe693b72/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index f4cde11..e76b93a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -246,6 +246,13 @@ class BlockSender implements java.io.Closeable {
   if (replica.getGenerationStamp() < block.getGenerationStamp()) {
     throw new IOException("Replica gen stamp < block genstamp, block="
         + block + ", replica=" + replica);
+  } else if (replica.getGenerationStamp() > block.getGenerationStamp()) {
+    if (DataNode.LOG.isDebugEnabled()) {
+      DataNode.LOG.debug("Bumping up the client provided"
+          + " block's genstamp to latest " + replica.getGenerationStamp()
+          + " for block " + block);
+    }
+    block.setGenerationStamp(replica.getGenerationStamp());
   }
   if (replicaVisibleLength < 0) {
     throw new IOException("Replica is not readable, block="



hadoop git commit: HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp provided by the client is larger than the one stored in the datanode. Contributed by Brahma Reddy Battula

2015-03-23 Thread szetszwo
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 4dfd84ec0 -> 87079cde7


HDFS-7884. Fix NullPointerException in BlockSender when the generation stamp 
provided by the client is larger than the one stored in the datanode.  
Contributed by Brahma Reddy Battula


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87079cde
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87079cde
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87079cde

Branch: refs/heads/branch-2.7
Commit: 87079cde7d27dfb207dd79a5ba95c7daf17f8d08
Parents: 4dfd84e
Author: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Authored: Tue Mar 24 13:49:17 2015 +0900
Committer: Tsz-Wo Nicholas Sze szets...@hortonworks.com
Committed: Tue Mar 24 13:52:18 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 4 
 .../org/apache/hadoop/hdfs/server/datanode/BlockSender.java   | 7 +++
 2 files changed, 11 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/87079cde/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 08a52dd..fe43d05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -912,6 +912,10 @@ Release 2.7.0 - UNRELEASED
 HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2.
 (Brahma Reddy Battula via aajisaka)
 
+HDFS-7884. Fix NullPointerException in BlockSender when the generation 
stamp
+provided by the client is larger than the one stored in the datanode.
+(Brahma Reddy Battula via szetszwo)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/87079cde/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
index f4cde11..e76b93a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
@@ -246,6 +246,13 @@ class BlockSender implements java.io.Closeable {
   if (replica.getGenerationStamp() < block.getGenerationStamp()) {
     throw new IOException("Replica gen stamp < block genstamp, block="
         + block + ", replica=" + replica);
+  } else if (replica.getGenerationStamp() > block.getGenerationStamp()) {
+    if (DataNode.LOG.isDebugEnabled()) {
+      DataNode.LOG.debug("Bumping up the client provided"
+          + " block's genstamp to latest " + replica.getGenerationStamp()
+          + " for block " + block);
+    }
+    block.setGenerationStamp(replica.getGenerationStamp());
   }
   if (replicaVisibleLength < 0) {
     throw new IOException("Replica is not readable, block="



hadoop git commit: YARN-2868. FairScheduler: Metric for latency to allocate first container for an application. (Ray Chiang via kasha)

2015-03-23 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 75591e413 -> 4e0c48703


YARN-2868. FairScheduler: Metric for latency to allocate first container for an 
application. (Ray Chiang via kasha)

(cherry picked from commit 972f1f1ab94a26ec446a272ad030fe13f03ed442)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e0c4870
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e0c4870
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e0c4870

Branch: refs/heads/branch-2
Commit: 4e0c48703e59178490d817c65c6e7928915921be
Parents: 75591e4
Author: Karthik Kambatla ka...@apache.org
Authored: Mon Mar 23 14:07:05 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Mon Mar 23 14:10:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt|  3 +++
 .../resourcemanager/scheduler/QueueMetrics.java|  8 +++-
 .../scheduler/SchedulerApplicationAttempt.java | 17 +
 .../scheduler/fair/FairScheduler.java  | 11 ++-
 4 files changed, 37 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e0c4870/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 7eb7390..0a09e0a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -25,6 +25,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3350. YARN RackResolver spams logs with messages at info level. 
 (Wilfred Spiegelenburg via junping_du)
 
+YARN-2868. FairScheduler: Metric for latency to allocate first container 
+for an application. (Ray Chiang via kasha)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e0c4870/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
index 507b798..58b1ed1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MutableCounterInt;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
+import org.apache.hadoop.metrics2.lib.MutableRate;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
@@ -74,6 +75,7 @@ public class QueueMetrics implements MetricsSource {
   @Metric("# of reserved containers") MutableGaugeInt reservedContainers;
   @Metric("# of active users") MutableGaugeInt activeUsers;
   @Metric("# of active applications") MutableGaugeInt activeApplications;
+  @Metric("App Attempt First Container Allocation Delay") MutableRate appAttemptFirstContainerAllocationDelay;
   private final MutableGaugeInt[] runningTime;
   private TimeBucketMetrics<ApplicationId> runBuckets;
 
@@ -462,7 +464,11 @@ public class QueueMetrics implements MetricsSource {
   parent.deactivateApp(user);
 }
   }
-  
+
+  public void addAppAttemptFirstContainerAllocationDelay(long latency) {
+appAttemptFirstContainerAllocationDelay.add(latency);
+  }
+
   public int getAppsSubmitted() {
 return appsSubmitted.value();
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e0c4870/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
 

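The new metric is a MutableRate, which aggregates samples into a count and an average. A toy stand-in (not the real metrics2 class) showing what each call to addAppAttemptFirstContainerAllocationDelay(latency) accumulates:

    public class RateMetricSketch {
      private long numSamples;
      private double totalMillis;

      // Mirrors the spirit of MutableRate.add(long): record one sample.
      public synchronized void add(long latencyMillis) {
        numSamples++;
        totalMillis += latencyMillis;
      }

      public synchronized double mean() {
        return numSamples == 0 ? 0.0 : totalMillis / numSamples;
      }

      public static void main(String[] args) {
        RateMetricSketch delay = new RateMetricSketch();
        delay.add(120); // hypothetical first-container latencies, in ms
        delay.add(340);
        System.out.println("avg first-allocation delay = " + delay.mean() + " ms");
      }
    }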
[16/50] [abbrv] hadoop git commit: MAPREDUCE-5190. Unnecessary condition test in RandomSampler. Contributed by Jingguo Yao.

2015-03-23 Thread zhz
MAPREDUCE-5190. Unnecessary condition test in RandomSampler. Contributed by 
Jingguo Yao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1d5c796d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1d5c796d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1d5c796d

Branch: refs/heads/HDFS-7285
Commit: 1d5c796d654c8959972d15cc6742731a99380bfc
Parents: b46c2bb
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 10:03:25 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 10:03:25 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt   | 3 +++
 .../apache/hadoop/mapreduce/lib/partition/InputSampler.java| 6 ++
 2 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d5c796d/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 2920811..e98aacd 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -256,6 +256,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-5190. Unnecessary condition test in RandomSampler.
+(Jingguo Yao via harsh)
+
 MAPREDUCE-6239. Consolidate TestJobConf classes in
 hadoop-mapreduce-client-jobclient and hadoop-mapreduce-client-core
 (Varun Saxena via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1d5c796d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
index 4668f49..cce9f37 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/InputSampler.java
@@ -230,10 +230,8 @@ public class InputSampler<K,V> extends Configured implements Tool {
   // to reflect the possibility of existing elements being
   // pushed out
   int ind = r.nextInt(numSamples);
-  if (ind != numSamples) {
-samples.set(ind, ReflectionUtils.copy(job.getConfiguration(),
- reader.getCurrentKey(), null));
-  }
+  samples.set(ind, ReflectionUtils.copy(job.getConfiguration(),
+   reader.getCurrentKey(), null));
   freq *= (numSamples - 1) / (double) numSamples;
 }
   }

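Why the removed guard was dead code: Random.nextInt(bound) returns a value in [0, bound), so ind can never equal numSamples and the branch was always taken. A quick standalone check:

    import java.util.Random;

    public class NextIntRangeCheck {
      public static void main(String[] args) {
        Random r = new Random();
        int numSamples = 10;
        for (int i = 0; i < 1_000_000; i++) {
          // nextInt(n) is documented to return 0 <= v < n.
          if (r.nextInt(numSamples) == numSamples) {
            throw new AssertionError("unreachable");
          }
        }
        System.out.println("nextInt(" + numSamples + ") stayed below " + numSamples);
      }
    }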


[13/50] [abbrv] hadoop git commit: MAPREDUCE-6213. NullPointerException caused by job history server addr not resolvable. Contributed by Peng Zhang.

2015-03-23 Thread zhz
MAPREDUCE-6213. NullPointerException caused by job history server addr not 
resolvable. Contributed by Peng Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e1e09052
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e1e09052
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e1e09052

Branch: refs/heads/HDFS-7285
Commit: e1e09052e861926112493d6041aae01ab594b547
Parents: 7a678db
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 02:44:36 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 02:44:36 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java | 7 ---
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1e09052/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 4f80411..76180a3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -286,6 +286,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6213. NullPointerException caused by job history server addr not
+resolvable. (Peng Zhang via harsh)
+
 MAPREDUCE-6281. Fix javadoc in Terasort. (Albert Chu via ozawa)
 
 Release 2.7.0 - UNRELEASED

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1e09052/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
index cac0119..d367060 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRWebAppUtil.java
@@ -137,8 +137,9 @@ public class MRWebAppUtil {
   hsAddress, getDefaultJHSWebappPort(),
   getDefaultJHSWebappURLWithoutScheme());
 StringBuffer sb = new StringBuffer();
-    if (address.getAddress().isAnyLocalAddress() ||
-        address.getAddress().isLoopbackAddress()) {
+    if (address.getAddress() != null &&
+        (address.getAddress().isAnyLocalAddress() ||
+         address.getAddress().isLoopbackAddress())) {
   sb.append(InetAddress.getLocalHost().getCanonicalHostName());
 } else {
   sb.append(address.getHostName());
@@ -171,4 +172,4 @@ public class MRWebAppUtil {
   public static String getAMWebappScheme(Configuration conf) {
 return "http://";
   }
-}
\ No newline at end of file
+}

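The root cause in isolation: an InetSocketAddress whose host cannot be resolved carries a null InetAddress, so the unguarded isAnyLocalAddress()/isLoopbackAddress() calls dereferenced null. A small demo of the failure mode and the fix's null-check pattern (the host name is deliberately bogus, and the canonical-host string is a stand-in):

    import java.net.InetSocketAddress;

    public class UnresolvedAddressDemo {
      public static void main(String[] args) {
        InetSocketAddress addr =
            InetSocketAddress.createUnresolved("no-such-host.invalid", 19888);
        System.out.println("getAddress() = " + addr.getAddress()); // prints null

        String host;
        if (addr.getAddress() != null &&
            (addr.getAddress().isAnyLocalAddress() ||
             addr.getAddress().isLoopbackAddress())) {
          host = "canonical-local-host"; // stand-in for getCanonicalHostName()
        } else {
          host = addr.getHostName(); // safe fallback used by the fix
        }
        System.out.println("host = " + host);
      }
    }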


[34/50] [abbrv] hadoop git commit: HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed 
by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f05e27ee
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f05e27ee
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f05e27ee

Branch: refs/heads/HDFS-7285
Commit: f05e27eeba40b069ec71416969a65cfda1cb3261
Parents: 90f073f
Author: Jing Zhao ji...@apache.org
Authored: Wed Feb 25 22:10:26 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:12:17 2015 -0700

--
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  17 ++
 .../server/blockmanagement/BlockCollection.java |  13 +-
 .../hdfs/server/blockmanagement/BlockInfo.java  |  88 ++-
 .../BlockInfoContiguousUnderConstruction.java   |   6 +-
 .../blockmanagement/BlockInfoStriped.java   |  31 +++
 .../BlockInfoStripedUnderConstruction.java  | 240 ++
 .../server/blockmanagement/BlockManager.java| 147 +--
 .../CacheReplicationMonitor.java|  16 +-
 .../hdfs/server/namenode/FSDirConcatOp.java |   8 +-
 .../hdfs/server/namenode/FSDirectory.java   |   5 +-
 .../hadoop/hdfs/server/namenode/FSEditLog.java  |   8 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  16 +-
 .../hdfs/server/namenode/FSImageFormat.java |   7 +-
 .../server/namenode/FSImageFormatPBINode.java   |  46 +++-
 .../hdfs/server/namenode/FSNamesystem.java  | 130 ++
 .../namenode/FileUnderConstructionFeature.java  |  15 +-
 .../namenode/FileWithStripedBlocksFeature.java  | 112 
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 254 +--
 .../hdfs/server/namenode/LeaseManager.java  |   6 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |   4 +-
 .../hadoop/hdfs/server/namenode/Namesystem.java |   3 +-
 .../snapshot/FSImageFormatPBSnapshot.java   |   7 +-
 .../server/namenode/snapshot/FileDiffList.java  |   9 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto|   5 +
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |  10 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   3 +-
 .../blockmanagement/TestReplicationPolicy.java  |   4 +-
 .../hdfs/server/namenode/TestAddBlock.java  |  12 +-
 .../hdfs/server/namenode/TestAddBlockgroup.java |   3 +-
 .../namenode/TestBlockUnderConstruction.java|   6 +-
 .../hdfs/server/namenode/TestFSImage.java   |   4 +-
 .../hdfs/server/namenode/TestFileTruncate.java  |   4 +-
 .../hadoop/hdfs/server/namenode/TestFsck.java   |   4 +-
 .../snapshot/TestSnapshotBlocksMap.java |  24 +-
 .../namenode/snapshot/TestSnapshotDeletion.java |  16 +-
 35 files changed, 963 insertions(+), 320 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f05e27ee/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index fad1d2c..867023c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -170,6 +170,7 @@ import 
org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageReportProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypesProto;
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageUuidsProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StripedBlockProto;
 import org.apache.hadoop.hdfs.protocol.proto.InotifyProtos;
 import 
org.apache.hadoop.hdfs.protocol.proto.JournalProtocolProtos.JournalInfoProto;
 import 
org.apache.hadoop.hdfs.protocol.proto.XAttrProtos.GetXAttrsResponseProto;
@@ -182,6 +183,7 @@ import 
org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey;
 import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
 import 
org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
@@ -427,6 +429,21 @@ public class PBHelper {
 return new Block(b.getBlockId(), b.getNumBytes(), b.getGenStamp());
   }
 
+  public static BlockInfoStriped 

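The truncated PBHelper hunk follows that class's usual pattern: one static convert() per direction between a protobuf wire type and the matching internal server type, as in the Block conversion shown a few lines earlier. A generic sketch of the pattern with hypothetical stand-in types (not the real StripedBlockProto/BlockInfoStriped signatures):

    final class WireBlock { // hypothetical protobuf-like wire type
      final long id, numBytes, genStamp;
      WireBlock(long id, long numBytes, long genStamp) {
        this.id = id; this.numBytes = numBytes; this.genStamp = genStamp;
      }
    }

    final class ServerBlock { // hypothetical internal server type
      final long id, numBytes, genStamp;
      ServerBlock(long id, long numBytes, long genStamp) {
        this.id = id; this.numBytes = numBytes; this.genStamp = genStamp;
      }
    }

    final class PBHelperSketch {
      static ServerBlock convert(WireBlock b) {
        return new ServerBlock(b.id, b.numBytes, b.genStamp);
      }

      static WireBlock convert(ServerBlock b) {
        return new WireBlock(b.id, b.numBytes, b.genStamp);
      }

      public static void main(String[] args) {
        ServerBlock sb = convert(new WireBlock(1L, 1024L, 1001L));
        System.out.println("server block genstamp = " + sb.genStamp);
      }
    }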
[03/50] [abbrv] hadoop git commit: HDFS-6841. Use Time.monotonicNow() wherever applicable instead of Time.now(). Contributed by Vinayakumar B

2015-03-23 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/75ead273/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
index 9b62467..8b2d11e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/balancer/TestBalancer.java
@@ -262,7 +262,7 @@ public class TestBalancer {
   throws IOException, TimeoutException {
 long timeout = TIMEOUT;
 long failtime = (timeout <= 0L) ? Long.MAX_VALUE
-     : Time.now() + timeout;
+     : Time.monotonicNow() + timeout;
 
 while (true) {
   long[] status = client.getStats();
@@ -274,7 +274,7 @@ public class TestBalancer {
 && usedSpaceVariance < CAPACITY_ALLOWED_VARIANCE)
  break; //done

-  if (Time.now() > failtime) {
+  if (Time.monotonicNow() > failtime) {
  throw new TimeoutException("Cluster failed to reached expected values of "
  + "totalSpace (current: " + status[0]
  + ", expected: " + expectedTotalSpace
 + , expected:  + expectedTotalSpace 
@@ -369,7 +369,7 @@ public class TestBalancer {
   int expectedExcludedNodes) throws IOException, TimeoutException {
 long timeout = TIMEOUT;
 long failtime = (timeout <= 0L) ? Long.MAX_VALUE
-    : Time.now() + timeout;
+    : Time.monotonicNow() + timeout;
 if (!p.nodesToBeIncluded.isEmpty()) {
   totalCapacity = p.nodesToBeIncluded.size() * CAPACITY;
 }
@@ -399,7 +399,7 @@ public class TestBalancer {
 }
 if (Math.abs(avgUtilization - nodeUtilization) > BALANCE_ALLOWED_VARIANCE) {
   balanced = false;
-  if (Time.now() > failtime) {
+  if (Time.monotonicNow() > failtime) {
     throw new TimeoutException(
         "Rebalancing expected avg utilization to become "
         + avgUtilization + ", but on datanode " + datanode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75ead273/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
index f61176e..23e610f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerTestUtil.java
@@ -186,7 +186,7 @@ public class BlockManagerTestUtil {
   Assert.assertNotNull("Could not find DN with name: " + dnName, theDND);
   
   synchronized (hbm) {
-theDND.setLastUpdate(0);
+DFSTestUtil.setDatanodeDead(theDND);
 hbm.heartbeatCheck();
   }
 } finally {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75ead273/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
index 453f411..a7ba293 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.common.GenerationStamp;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+import org.apache.hadoop.util.Time;
 import org.junit.Test;
 
 /**
@@ -46,40 +47,34 @@ public class TestBlockInfoUnderConstruction {
 new DatanodeStorageInfo[] {s1, s2, s3});
 
 // Recovery attempt #1.
-long currentTime = System.currentTimeMillis();
-dd1.setLastUpdate(currentTime - 3 * 1000);
-dd2.setLastUpdate(currentTime - 1 * 1000);
-dd3.setLastUpdate(currentTime - 2 * 1000);
+DFSTestUtil.resetLastUpdatesWithOffset(dd1, -3 * 1000);
+DFSTestUtil.resetLastUpdatesWithOffset(dd2, -1 * 1000);
+

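Why these call sites move to Time.monotonicNow(): System.currentTimeMillis(), which backs Time.now(), can jump when the wall clock is adjusted, corrupting deadline arithmetic, while a nanoTime-based clock only moves forward. A standalone sketch of the deadline pattern used above; monotonicNow() here is a local stand-in for Hadoop's helper:

    public class MonotonicDeadlineSketch {
      // Stand-in for o.a.h.util.Time.monotonicNow(): monotonic milliseconds.
      static long monotonicNow() {
        return System.nanoTime() / 1_000_000;
      }

      public static void main(String[] args) throws InterruptedException {
        long timeout = 200; // ms
        long failtime = (timeout <= 0L) ? Long.MAX_VALUE
            : monotonicNow() + timeout;
        while (monotonicNow() <= failtime) {
          Thread.sleep(50); // stand-in for polling cluster state
        }
        System.out.println("deadline reached, unaffected by wall-clock changes");
      }
    }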
[09/50] [abbrv] hadoop git commit: YARN-3350. YARN RackResolver spams logs with messages at info level. Contributed by Wilfred Spiegelenburg

2015-03-23 Thread zhz
YARN-3350. YARN RackResolver spams logs with messages at info level. 
Contributed by Wilfred Spiegelenburg


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f1e2f99
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f1e2f99
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f1e2f99

Branch: refs/heads/HDFS-7285
Commit: 7f1e2f996995e1883d9336f720c27621cf1b73b6
Parents: fe5c23b
Author: Junping Du junping...@apache.org
Authored: Fri Mar 20 18:21:33 2015 -0700
Committer: Junping Du junping...@apache.org
Committed: Fri Mar 20 18:21:33 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt   |  3 +++
 .../java/org/apache/hadoop/yarn/util/RackResolver.java| 10 +++---
 2 files changed, 10 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1e2f99/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 046b7b1..177d587 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -68,6 +68,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3356. Capacity Scheduler FiCaSchedulerApp should use ResourceUsage to
 track used-resources-by-label. (Wangda Tan via jianhe)
 
+YARN-3350. YARN RackResolver spams logs with messages at info level. 
+(Wilfred Spiegelenburg via junping_du)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1e2f99/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java
index cc2a56c..c44c2cf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/RackResolver.java
@@ -102,11 +102,15 @@ public class RackResolver {
 String rName = null;
 if (rNameList == null || rNameList.get(0) == null) {
   rName = NetworkTopology.DEFAULT_RACK;
-  LOG.info("Couldn't resolve " + hostName + ". Falling back to "
-  + NetworkTopology.DEFAULT_RACK);
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Couldn't resolve " + hostName + ". Falling back to "
++ NetworkTopology.DEFAULT_RACK);
+  }
 } else {
   rName = rNameList.get(0);
-  LOG.info("Resolved " + hostName + " to " + rName);
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Resolved " + hostName + " to " + rName);
+  }
 }
 return new NodeBase(hostName, rName);
   }
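As context for the change above: wrapping the calls in LOG.isDebugEnabled() is the standard commons-logging guard idiom, which skips both the string concatenation and the log call when debug output is off. A minimal self-contained sketch of the idiom (the class name is illustrative, not from the patch):

  import org.apache.commons.logging.Log;
  import org.apache.commons.logging.LogFactory;

  public class GuardedLogExample {
    private static final Log LOG = LogFactory.getLog(GuardedLogExample.class);

    void report(String hostName, String rack) {
      // Without the guard, "Resolved " + hostName + ... is concatenated on
      // every call even when debug logging is disabled.
      if (LOG.isDebugEnabled()) {
        LOG.debug("Resolved " + hostName + " to " + rack);
      }
    }
  }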



[14/50] [abbrv] hadoop git commit: MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which causes counter limits not to be reset correctly. Contributed by Zhihai Xu.

2015-03-23 Thread zhz
MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which causes 
counter limits not to be reset correctly. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43354290
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43354290
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43354290

Branch: refs/heads/HDFS-7285
Commit: 433542904aba5ddebf9bd9d299378647351eb13a
Parents: e1e0905
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 02:51:02 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 02:51:02 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt | 4 
 .../org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java| 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/43354290/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 76180a3..fc42941 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -286,6 +286,10 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which
+causes counter limits not to be reset correctly.
+(Zhihai Xu via harsh)
+
 MAPREDUCE-6213. NullPointerException caused by job history server addr not
 resolvable. (Peng Zhang via harsh)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/43354290/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
index 43b2df2..f343d7c 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/HistoryViewer.java
@@ -93,7 +93,7 @@ public class HistoryViewer {
   final Configuration jobConf = new Configuration(conf);
   try {
 jobConf.addResource(fs.open(jobConfPath), jobConfPath.toString());
-Limits.reset(conf);
+Limits.reset(jobConf);
   } catch (FileNotFoundException fnf) {
 if (LOG.isWarnEnabled()) {
  LOG.warn("Missing job conf in history", fnf);
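The one-character fix above is easy to miss, so spelled out: the method builds jobConf, loads the job's stored configuration into it, but then reset the counter limits from the outer conf, discarding the per-job settings it had just loaded. A condensed sketch of the bug pattern (variable names follow the diff; surrounding code elided):

  Configuration jobConf = new Configuration(conf);            // copy of defaults
  jobConf.addResource(fs.open(jobConfPath), jobConfPath.toString());
  // Limits.reset(conf);   // the typo: re-applies the defaults, so loading
  //                       // jobConf above had no effect on counter limits
  Limits.reset(jobConf);   // the fix: apply the job's own limits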



[12/50] [abbrv] hadoop git commit: MAPREDUCE-6239. Consolidate TestJobConf classes in hadoop-mapreduce-client-jobclient and hadoop-mapreduce-client-core. Contributed by Varun Saxena.

2015-03-23 Thread zhz
MAPREDUCE-6239. Consolidate TestJobConf classes in 
hadoop-mapreduce-client-jobclient and hadoop-mapreduce-client-core. Contributed 
by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7a678db3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7a678db3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7a678db3

Branch: refs/heads/HDFS-7285
Commit: 7a678db3accf9480f3799dcf6fd7ffef09a311cc
Parents: e1feb4e
Author: Harsh J ha...@cloudera.com
Authored: Sat Mar 21 09:43:29 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sat Mar 21 09:43:29 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt|   4 +
 .../org/apache/hadoop/mapred/TestJobConf.java   | 173 
 .../org/apache/hadoop/conf/TestJobConf.java | 199 ---
 3 files changed, 177 insertions(+), 199 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a678db3/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 48eda8b..4f80411 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -256,6 +256,10 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6239. Consolidate TestJobConf classes in
+hadoop-mapreduce-client-jobclient and hadoop-mapreduce-client-core
+(Varun Saxena via harsh)
+
 MAPREDUCE-5807. Print usage by TeraSort job. (Rohith via harsh)
 
 MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7a678db3/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
index 3d924e1..0612ade 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
@@ -22,6 +22,7 @@ import java.util.regex.Pattern;
 import static org.junit.Assert.*;
 
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -188,4 +189,176 @@ public class TestJobConf {
 Assert.assertEquals(2048, configuration.getLong(
 JobConf.MAPREDUCE_JOB_REDUCE_MEMORY_MB_PROPERTY, -1));
   }
+
+
+  @Test
+  public void testProfileParamsDefaults() {
+JobConf configuration = new JobConf();
+String result = configuration.getProfileParams();
+Assert.assertNotNull(result);
+Assert.assertTrue(result.contains("file=%s"));
+Assert.assertTrue(result.startsWith("-agentlib:hprof"));
+  }
+
+  @Test
+  public void testProfileParamsSetter() {
+JobConf configuration = new JobConf();
+
+configuration.setProfileParams("test");
+Assert.assertEquals("test",
+configuration.get(MRJobConfig.TASK_PROFILE_PARAMS));
+  }
+
+  @Test
+  public void testProfileParamsGetter() {
+JobConf configuration = new JobConf();
+
+configuration.set(MRJobConfig.TASK_PROFILE_PARAMS, "test");
+Assert.assertEquals("test", configuration.getProfileParams());
+  }
+
+  /**
+   * Testing mapred.task.maxvmem replacement with new values
+   *
+   */
+  @Test
+  public void testMemoryConfigForMapOrReduceTask(){
+JobConf configuration = new JobConf();
+configuration.set(MRJobConfig.MAP_MEMORY_MB,String.valueOf(300));
+configuration.set(MRJobConfig.REDUCE_MEMORY_MB,String.valueOf(300));
+Assert.assertEquals(configuration.getMemoryForMapTask(),300);
+Assert.assertEquals(configuration.getMemoryForReduceTask(),300);
+
+configuration.set("mapred.task.maxvmem", String.valueOf(2 * 1024 * 1024));
+configuration.set(MRJobConfig.MAP_MEMORY_MB,String.valueOf(300));
+configuration.set(MRJobConfig.REDUCE_MEMORY_MB,String.valueOf(300));
+Assert.assertEquals(configuration.getMemoryForMapTask(),2);
+Assert.assertEquals(configuration.getMemoryForReduceTask(),2);
+
+configuration = new JobConf();
+configuration.set("mapred.task.maxvmem", String.valueOf(-1));
+configuration.set(MRJobConfig.MAP_MEMORY_MB,String.valueOf(300));
+configuration.set(MRJobConfig.REDUCE_MEMORY_MB,String.valueOf(400));
+Assert.assertEquals(configuration.getMemoryForMapTask(), 300);
+
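A note on the arithmetic those assertions rely on: mapred.task.maxvmem is expressed in bytes, while getMemoryForMapTask()/getMemoryForReduceTask() report megabytes, so 2*1024*1024 bytes comes back as 2 and overrides the 300 MB settings. Illustratively:

  long maxVmemBytes = 2L * 1024 * 1024;           // value the test sets
  long reportedMb = maxVmemBytes / (1024 * 1024); // == 2, what the asserts expect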

[02/50] [abbrv] hadoop git commit: HDFS-7957. Truncate should verify quota before making changes. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7957. Truncate should verify quota before making changes. Contributed by 
Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d368d364
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d368d364
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d368d364

Branch: refs/heads/HDFS-7285
Commit: d368d3647a858644b9fcd3be33d9fea2a6962f69
Parents: a6a5aae
Author: Jing Zhao ji...@apache.org
Authored: Fri Mar 20 11:50:24 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Fri Mar 20 11:50:24 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   2 +
 .../hdfs/server/namenode/FSDirectory.java   |  44 +++-
 .../hdfs/server/namenode/FSNamesystem.java  |  23 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  38 +++
 .../namenode/TestDiskspaceQuotaUpdate.java  |  43 +++-
 .../namenode/TestTruncateQuotaUpdate.java   | 248 +++
 6 files changed, 380 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d368d364/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 418eee6..0ab14f2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1227,6 +1227,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7930. commitBlockSynchronization() does not remove locations. (yliu)
 
+HDFS-7957. Truncate should verify quota before making changes. (jing9)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d368d364/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
index f6ab077..2f73627 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
@@ -61,6 +61,7 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 import org.apache.hadoop.hdfs.server.namenode.INode.BlocksMapUpdateInfo;
+import org.apache.hadoop.hdfs.server.namenode.snapshot.FileWithSnapshotFeature;
 import org.apache.hadoop.hdfs.util.ByteArray;
 import org.apache.hadoop.hdfs.util.EnumCounters;
 import org.apache.hadoop.security.AccessControlException;
@@ -677,7 +678,7 @@ public class FSDirectory implements Closeable {
* @param checkQuota if true then check if quota is exceeded
* @throws QuotaExceededException if the new count violates any quota limit
*/
-   void updateCount(INodesInPath iip, int numOfINodes,
+  void updateCount(INodesInPath iip, int numOfINodes,
 QuotaCounts counts, boolean checkQuota)
 throws QuotaExceededException {
 assert hasWriteLock();
@@ -1050,7 +1051,7 @@ public class FSDirectory implements Closeable {
 INodeFile file = iip.getLastINode().asFile();
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 boolean onBlockBoundary =
-unprotectedTruncate(iip, newLength, collectedBlocks, mtime);
+unprotectedTruncate(iip, newLength, collectedBlocks, mtime, null);
 
 if(! onBlockBoundary) {
   BlockInfoContiguous oldBlock = file.getLastBlock();
@@ -1073,11 +1074,11 @@ public class FSDirectory implements Closeable {
 
   boolean truncate(INodesInPath iip, long newLength,
BlocksMapUpdateInfo collectedBlocks,
-   long mtime)
+   long mtime, QuotaCounts delta)
   throws IOException {
 writeLock();
 try {
-  return unprotectedTruncate(iip, newLength, collectedBlocks, mtime);
+  return unprotectedTruncate(iip, newLength, collectedBlocks, mtime, 
delta);
 } finally {
   writeUnlock();
 }
@@ -1097,22 +1098,49 @@ public class FSDirectory implements Closeable {
*/
   boolean unprotectedTruncate(INodesInPath iip, long newLength,
   BlocksMapUpdateInfo collectedBlocks,
-  long mtime) throws IOException {
+  long mtime, QuotaCounts delta) throws 
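The shape of the fix, in one self-contained sketch: compute the space delta a truncate would cause, verify it against the quota, and only then mutate. All names below are illustrative, not the actual HDFS API:

  /** Illustrative verify-before-mutate pattern only; not the HDFS API. */
  public class QuotaCheckSketch {
    private long usedBytes;
    private final long limitBytes;

    public QuotaCheckSketch(long used, long limit) {
      this.usedBytes = used;
      this.limitBytes = limit;
    }

    public void truncate(long currentLen, long newLen) throws java.io.IOException {
      // With snapshots, truncate can temporarily *add* usage because old
      // blocks must be retained, so the delta is not always negative.
      long delta = newLen - currentLen;
      if (usedBytes + delta > limitBytes) {
        throw new java.io.IOException("truncate would exceed quota"); // verify first
      }
      usedBytes += delta; // mutate only after the check succeeds
    }
  }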

[21/50] [abbrv] hadoop git commit: YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. Contributed by Naganarasimha G R.

2015-03-23 Thread zhz
YARN-3384. TestLogAggregationService.verifyContainerLogs fails after YARN-2777. 
Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82eda771
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82eda771
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82eda771

Branch: refs/heads/HDFS-7285
Commit: 82eda771e05cf2b31788ee1582551e65f1c0f9aa
Parents: 0b9f12c
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 24 00:25:30 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 24 00:25:30 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  | 3 +++
 .../logaggregation/TestLogAggregationService.java| 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/82eda771/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f8c1a76..e04624e 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -816,6 +816,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3369. Missing NullPointer check in AppSchedulingInfo causes RM to die.
 (Brahma Reddy Battula via wangda)
 
+YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
+YARN-2777. (Naganarasimha G R via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/82eda771/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
index 9cbf153..b1de9cb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
@@ -804,7 +804,9 @@ public class TestLogAggregationService extends 
BaseContainerManagerTest {
 Map<String, String> thisContainerMap = logMap.remove(containerStr);
 Assert.assertEquals(numOfContainerLogs, thisContainerMap.size());
 for (String fileType : logFiles) {
-  String expectedValue = containerStr + " Hello " + fileType + "!";
+  String expectedValue =
+  containerStr + " Hello " + fileType + "!End of LogType:"
+  + fileType;
  LOG.info("Expected log-content : " + new String(expectedValue));
  String foundValue = thisContainerMap.remove(fileType);
  Assert.assertNotNull(cId + " " + fileType
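Why the expected string gains a suffix: after YARN-2777 the aggregated log reader's output carries a per-file trailer, so the verification helper must expect it. Illustratively (the container id and file name here are placeholders):

  String containerStr = "container_1_0001_01_000001";  // placeholder id
  String fileType = "stdout";                          // placeholder file name
  String expected = containerStr + " Hello " + fileType + "!"
      + "End of LogType:" + fileType;                  // trailer added by YARN-2777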



[42/50] [abbrv] hadoop git commit: Fixed a compiling issue introduced by HADOOP-11705.

2015-03-23 Thread zhz
Fixed a compiling issue introduced by HADOOP-11705.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/31d0e404
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/31d0e404
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/31d0e404

Branch: refs/heads/HDFS-7285
Commit: 31d0e40458e7f98bdd644e4e7ca2c81e1e6e7bd3
Parents: 09a1f7c
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 13 00:13:06 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:47 2015 -0700

--
 .../apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/31d0e404/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
index 36e061a..d911db9 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestErasureCoderBase.java
@@ -162,7 +162,7 @@ public abstract class TestErasureCoderBase extends 
TestCoderBase {
 }
 
 encoder.initialize(numDataUnits, numParityUnits, chunkSize);
-encoder.setConf(conf);
+((AbstractErasureCoder)encoder).setConf(conf);
 return encoder;
   }
 
@@ -179,7 +179,7 @@ public abstract class TestErasureCoderBase extends 
TestCoderBase {
 }
 
 decoder.initialize(numDataUnits, numParityUnits, chunkSize);
-decoder.setConf(conf);
+((AbstractErasureCoder)decoder).setConf(conf);
 return decoder;
   }
 



[49/50] [abbrv] hadoop git commit: HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng

2015-03-23 Thread zhz
HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7da69bb3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7da69bb3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7da69bb3

Branch: refs/heads/HDFS-7285
Commit: 7da69bb30af30e02e5beebd0f5a505c879ba2efb
Parents: 18f0bac
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 20 19:15:52 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:14:12 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  3 +
 .../hadoop/fs/CommonConfigurationKeys.java  | 15 
 .../erasurecode/coder/AbstractErasureCoder.java | 65 ++
 .../coder/AbstractErasureDecoder.java   |  6 +-
 .../coder/AbstractErasureEncoder.java   |  6 +-
 .../io/erasurecode/coder/RSErasureDecoder.java  | 83 ++
 .../io/erasurecode/coder/RSErasureEncoder.java  | 47 ++
 .../io/erasurecode/coder/XorErasureDecoder.java |  2 +-
 .../io/erasurecode/coder/XorErasureEncoder.java |  2 +-
 .../erasurecode/coder/TestRSErasureCoder.java   | 92 
 10 files changed, 315 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da69bb3/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index f566f0e..b69e69a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -26,3 +26,6 @@
 
 HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai 
Zheng
 ( Kai Zheng )
+
+HADOOP-11647. Reed-Solomon ErasureCoder. Contributed by Kai Zheng
+( Kai Zheng )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da69bb3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 7575496..70fea01 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -135,6 +135,21 @@ public class CommonConfigurationKeys extends 
CommonConfigurationKeysPublic {
   false;
 
   /**
+   * Erasure Coding configuration family
+   */
+
+  /** Supported erasure codec classes */
+  public static final String IO_ERASURECODE_CODECS_KEY =
+  "io.erasurecode.codecs";
+
+  /** Use XOR raw coder when possible for the RS codec */
+  public static final String IO_ERASURECODE_CODEC_RS_USEXOR_KEY =
+  "io.erasurecode.codec.rs.usexor";
+
+  /** Raw coder factory for the RS codec */
+  public static final String IO_ERASURECODE_CODEC_RS_RAWCODER_KEY =
+  "io.erasurecode.codec.rs.rawcoder";
+
+  /**
* Service Authorization
*/
   public static final String 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da69bb3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index 8d3bc34..0e4de89 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -17,7 +17,12 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoder;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderFactory;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
+import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;
 
 /**
  * A common class of basic facilities to be shared by encoder and decoder
@@ -31,6 +36,66 @@ public abstract class AbstractErasureCoder
   private int numParityUnits;
   private int chunkSize;
 
+  /**
+   * Create raw decoder using the factory specified by 
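The javadoc above is cut off, but it introduces factory-based creation of raw coders. A hypothetical sketch of how a configured factory could be used (the config key appears in the diff earlier in this message; the ReflectionUtils usage and the createDecoder() signature are assumptions, not quoted from the patch):

  Configuration conf = new Configuration();
  Class<? extends RawErasureCoderFactory> factoryClass = conf.getClass(
      CommonConfigurationKeys.IO_ERASURECODE_CODEC_RS_RAWCODER_KEY,
      null, RawErasureCoderFactory.class);
  if (factoryClass != null) { // a real implementation would fall back to a default
    RawErasureCoderFactory factory = ReflectionUtils.newInstance(factoryClass, conf);
    RawErasureDecoder rawDecoder = factory.createDecoder(); // assumed signature
  }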

[44/50] [abbrv] hadoop git commit: HDFS-7912. Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7912. Erasure Coding: track BlockInfo instead of Block in 
UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/42160763
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/42160763
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/42160763

Branch: refs/heads/HDFS-7285
Commit: 4216076343365246d15626dc134a40870e533ce8
Parents: 63d3ba1
Author: Jing Zhao ji...@apache.org
Authored: Tue Mar 17 10:18:50 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:48 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 47 -
 .../PendingReplicationBlocks.java   | 51 +--
 .../blockmanagement/UnderReplicatedBlocks.java  | 49 +-
 .../hdfs/server/namenode/FSDirAttrOp.java   | 10 ++--
 .../hdfs/server/namenode/FSNamesystem.java  | 21 
 .../hadoop/hdfs/server/namenode/INode.java  | 12 ++---
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  4 +-
 .../hdfs/server/namenode/NamenodeFsck.java  | 10 ++--
 .../hadoop/hdfs/server/namenode/SafeMode.java   |  3 +-
 .../blockmanagement/BlockManagerTestUtil.java   |  5 +-
 .../blockmanagement/TestBlockManager.java   |  8 +--
 .../server/blockmanagement/TestNodeCount.java   |  3 +-
 .../TestOverReplicatedBlocks.java   |  5 +-
 .../blockmanagement/TestPendingReplication.java | 19 ---
 .../TestRBWBlockInvalidation.java   |  4 +-
 .../blockmanagement/TestReplicationPolicy.java  | 53 +++-
 .../TestUnderReplicatedBlockQueues.java | 16 +++---
 .../datanode/TestReadOnlySharedStorage.java |  9 ++--
 .../namenode/TestProcessCorruptBlocks.java  |  5 +-
 19 files changed, 180 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/42160763/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 72cef37..ca24ab1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1335,7 +1335,7 @@ public class BlockManager {
* @return number of blocks scheduled for replication during this iteration.
*/
   int computeReplicationWork(int blocksToProcess) {
-List<List<Block>> blocksToReplicate = null;
+List<List<BlockInfo>> blocksToReplicate = null;
 namesystem.writeLock();
 try {
   // Choose the blocks to be replicated
@@ -1353,7 +1353,7 @@ public class BlockManager {
* @return the number of blocks scheduled for replication
*/
   @VisibleForTesting
-  int computeReplicationWorkForBlocks(List<List<Block>> blocksToReplicate) {
+  int computeReplicationWorkForBlocks(List<List<BlockInfo>> blocksToReplicate) {
 int requiredReplication, numEffectiveReplicas;
 List<DatanodeDescriptor> containingNodes;
 DatanodeDescriptor srcNode;
@@ -1367,7 +1367,7 @@ public class BlockManager {
 try {
   synchronized (neededReplications) {
for (int priority = 0; priority < blocksToReplicate.size(); priority++) {
-  for (Block block : blocksToReplicate.get(priority)) {
+  for (BlockInfo block : blocksToReplicate.get(priority)) {
 // block should belong to a file
 bc = blocksMap.getBlockCollection(block);
 // abandoned block or block reopened for append
@@ -1451,7 +1451,7 @@ public class BlockManager {
 }
 
 synchronized (neededReplications) {
-  Block block = rw.block;
+  BlockInfo block = rw.block;
   int priority = rw.priority;
   // Recheck since global lock was released
   // block should belong to a file
@@ -1709,7 +1709,7 @@ public class BlockManager {
* and put them back into the neededReplication queue
*/
   private void processPendingReplications() {
-Block[] timedOutItems = pendingReplications.getTimedOutBlocks();
+BlockInfo[] timedOutItems = pendingReplications.getTimedOutBlocks();
 if (timedOutItems != null) {
   namesystem.writeLock();
   try {
@@ -2832,13 +2832,13 @@ public class BlockManager {
   
   /** Set replication for the blocks. */
   public void setReplication(final short oldRepl, final short newRepl,
-  final String src, final Block... blocks) {
+  final String src, 

[41/50] [abbrv] hadoop git commit: HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng

2015-03-23 Thread zhz
HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/09a1f7c5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/09a1f7c5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/09a1f7c5

Branch: refs/heads/HDFS-7285
Commit: 09a1f7c5b1959cc4742a92091e28efcc7377ebce
Parents: acfe5db
Author: drankye kai.zh...@intel.com
Authored: Thu Mar 12 23:35:22 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:47 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  4 +++
 .../erasurecode/coder/AbstractErasureCoder.java |  5 ++-
 .../rawcoder/AbstractRawErasureCoder.java   |  5 ++-
 .../hadoop/io/erasurecode/TestCoderBase.java|  6 
 .../erasurecode/coder/TestErasureCoderBase.java | 36 +---
 .../erasurecode/rawcoder/TestRawCoderBase.java  | 13 +--
 6 files changed, 60 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a1f7c5/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index c17a1bd..a97dc34 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -18,3 +18,7 @@
 HADOOP-11646. Erasure Coder API for encoding and decoding of block group
 ( Kai Zheng via vinayakumarb )
 
+HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng
+( Kai Zheng )
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a1f7c5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
index f2cc041..8d3bc34 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/AbstractErasureCoder.java
@@ -17,12 +17,15 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configured;
+
 /**
  * A common class of basic facilities to be shared by encoder and decoder
  *
  * It implements the {@link ErasureCoder} interface.
  */
-public abstract class AbstractErasureCoder implements ErasureCoder {
+public abstract class AbstractErasureCoder
+extends Configured implements ErasureCoder {
 
   private int numDataUnits;
   private int numParityUnits;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a1f7c5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 74d2ab6..e6f3d92 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
@@ -17,12 +17,15 @@
  */
 package org.apache.hadoop.io.erasurecode.rawcoder;
 
+import org.apache.hadoop.conf.Configured;
+
 /**
  * A common class of basic facilities to be shared by encoder and decoder
  *
  * It implements the {@link RawErasureCoder} interface.
  */
-public abstract class AbstractRawErasureCoder implements RawErasureCoder {
+public abstract class AbstractRawErasureCoder
+extends Configured implements RawErasureCoder {
 
   private int numDataUnits;
   private int numParityUnits;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/09a1f7c5/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 3c4288c..194413a 100644
--- 
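What extending Configured buys the coder classes: an inherited setConf()/getConf() pair, so callers and tests can hand a Configuration to any coder. A minimal illustration (the class name is made up; the key is the RS usexor key added elsewhere in this digest):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;

  class ConfigurableCoderExample extends Configured {
    boolean preferXor() {
      // getConf() comes from Configured; guard against it being unset.
      Configuration c = getConf();
      return c != null && c.getBoolean("io.erasurecode.codec.rs.usexor", false);
    }
  }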

[24/50] [abbrv] hadoop git commit: HDFS-7652. Process block reports for erasure coded blocks. Contributed by Zhe Zhang

2015-03-23 Thread zhz
HDFS-7652. Process block reports for erasure coded blocks. Contributed by Zhe 
Zhang


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/acc8e001
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/acc8e001
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/acc8e001

Branch: refs/heads/HDFS-7285
Commit: acc8e001f2404cf2e50e65e3bb5bcc59dc77e79f
Parents: 54c6526
Author: Zhe Zhang z...@apache.org
Authored: Mon Feb 9 10:27:14 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:44 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java|  8 
 .../hdfs/server/blockmanagement/BlockManager.java | 18 +-
 2 files changed, 21 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/acc8e001/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index c8b9d20..e7f8a05 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -211,4 +211,12 @@ public class BlockIdManager {
   .LAST_RESERVED_BLOCK_ID);
 generationStampV1Limit = GenerationStamp.GRANDFATHER_GENERATION_STAMP;
   }
+
+  public static boolean isStripedBlockID(long id) {
+return id < 0;
+  }
+
+  public static long convertToGroupID(long id) {
+return id & (~(HdfsConstants.MAX_BLOCKS_IN_GROUP - 1));
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/acc8e001/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 674c0ea..b53f05e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1874,7 +1874,7 @@ public class BlockManager {
   break;
 }
 
-BlockInfoContiguous bi = blocksMap.getStoredBlock(b);
+BlockInfoContiguous bi = getStoredBlock(b);
 if (bi == null) {
   if (LOG.isDebugEnabled()) {
LOG.debug("BLOCK* rescanPostponedMisreplicatedBlocks: " +
@@ -2017,7 +2017,7 @@ public class BlockManager {
 continue;
   }
   
-  BlockInfoContiguous storedBlock = blocksMap.getStoredBlock(iblk);
+  BlockInfoContiguous storedBlock = getStoredBlock(iblk);
   // If block does not belong to any file, we are done.
   if (storedBlock == null) continue;
   
@@ -2157,7 +2157,7 @@ public class BlockManager {
 }
 
 // find block by blockId
-BlockInfoContiguous storedBlock = blocksMap.getStoredBlock(block);
+BlockInfoContiguous storedBlock = getStoredBlock(block);
 if(storedBlock == null) {
   // If blocksMap does not contain reported block id,
   // the replica should be removed from the data-node.
@@ -2448,7 +2448,7 @@ public class BlockManager {
 DatanodeDescriptor node = storageInfo.getDatanodeDescriptor();
 if (block instanceof BlockInfoContiguousUnderConstruction) {
   //refresh our copy in case the block got completed in another thread
-  storedBlock = blocksMap.getStoredBlock(block);
+  storedBlock = getStoredBlock(block);
 } else {
   storedBlock = block;
 }
@@ -3311,7 +3311,15 @@ public class BlockManager {
   }
 
   public BlockInfoContiguous getStoredBlock(Block block) {
-return blocksMap.getStoredBlock(block);
+BlockInfoContiguous info = null;
+if (BlockIdManager.isStripedBlockID(block.getBlockId())) {
+  info = blocksMap.getStoredBlock(
+  new Block(BlockIdManager.convertToGroupID(block.getBlockId())));
+}
+if (info == null) {
+  info = blocksMap.getStoredBlock(block);
+}
+return info;
   }
 
   /** updates a block in under replication queue */
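A worked example of the ID scheme behind getStoredBlock(): striped block IDs are negative, and every internal block of a group shares the same high bits, so masking off the low bits recovers the group's ID. Assuming MAX_BLOCKS_IN_GROUP is 16 (a power of two, which the mask arithmetic requires):

  long MAX_BLOCKS_IN_GROUP = 16;                        // assumed group size
  long blockId = -1024 + 5;                             // 6th internal block, id -1019
  boolean striped = blockId < 0;                        // true for striped blocks
  long groupId = blockId & ~(MAX_BLOCKS_IN_GROUP - 1);  // == -1024, the group's id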



[37/50] [abbrv] hadoop git commit: HDFS-7837. Erasure Coding: allocate and persist striped blocks in NameNode. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7837. Erasure Coding: allocate and persist striped blocks in NameNode. 
Contributed by Jing Zhao.

 Conflicts:
 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4b4c7e30
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4b4c7e30
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4b4c7e30

Branch: refs/heads/HDFS-7285
Commit: 4b4c7e30082aa0856badb264ffa7c290a92e3895
Parents: f05e27e
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 2 13:44:33 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:12:35 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java  |  31 +++-
 .../hdfs/server/blockmanagement/BlockInfo.java  |   4 +-
 .../blockmanagement/BlockInfoContiguous.java|   5 +
 .../blockmanagement/BlockInfoStriped.java   |   8 +-
 .../server/blockmanagement/BlockManager.java|  44 --
 .../hdfs/server/blockmanagement/BlocksMap.java  |  20 ++-
 .../blockmanagement/DecommissionManager.java|   9 +-
 .../hdfs/server/namenode/FSDirectory.java   |  27 +++-
 .../hdfs/server/namenode/FSEditLogLoader.java   |  69 ++---
 .../hdfs/server/namenode/FSImageFormat.java |  12 +-
 .../server/namenode/FSImageFormatPBINode.java   |   5 +-
 .../server/namenode/FSImageFormatProtobuf.java  |   9 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  39 ++---
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  25 +++-
 .../server/namenode/NameNodeLayoutVersion.java  |   3 +-
 .../hadoop-hdfs/src/main/proto/fsimage.proto|   1 +
 .../hdfs/server/namenode/TestAddBlockgroup.java |  85 ---
 .../server/namenode/TestAddStripedBlocks.java   | 146 +++
 18 files changed, 354 insertions(+), 188 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b4c7e30/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 3ae54ce..1d69d74 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -103,21 +103,38 @@ public class BlockIdManager {
   }
 
   /**
-   * Sets the maximum allocated block ID for this filesystem. This is
+   * Sets the maximum allocated contiguous block ID for this filesystem. This 
is
* the basis for allocating new block IDs.
*/
-  public void setLastAllocatedBlockId(long blockId) {
+  public void setLastAllocatedContiguousBlockId(long blockId) {
 blockIdGenerator.skipTo(blockId);
   }
 
   /**
-   * Gets the maximum sequentially allocated block ID for this filesystem
+   * Gets the maximum sequentially allocated contiguous block ID for this
+   * filesystem
*/
-  public long getLastAllocatedBlockId() {
+  public long getLastAllocatedContiguousBlockId() {
 return blockIdGenerator.getCurrentValue();
   }
 
   /**
+   * Sets the maximum allocated striped block ID for this filesystem. This is
+   * the basis for allocating new block IDs.
+   */
+  public void setLastAllocatedStripedBlockId(long blockId) {
+blockGroupIdGenerator.skipTo(blockId);
+  }
+
+  /**
+   * Gets the maximum sequentially allocated striped block ID for this
+   * filesystem
+   */
+  public long getLastAllocatedStripedBlockId() {
+return blockGroupIdGenerator.getCurrentValue();
+  }
+
+  /**
* Sets the current generation stamp for legacy blocks
*/
   public void setGenerationStampV1(long stamp) {
@@ -188,11 +205,11 @@ public class BlockIdManager {
   /**
* Increments, logs and then returns the block ID
*/
-  public long nextBlockId() {
+  public long nextContiguousBlockId() {
 return blockIdGenerator.nextValue();
   }
 
-  public long nextBlockGroupId() {
+  public long nextStripedBlockId() {
 return blockGroupIdGenerator.nextValue();
   }
 
@@ -216,7 +233,7 @@ public class BlockIdManager {
return id < 0;
   }
 
-  public static long convertToGroupID(long id) {
+  public static long convertToStripedID(long id) {
return id & (~HdfsConstants.BLOCK_GROUP_INDEX_MASK);
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4b4c7e30/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
--
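The renames above reflect that the block ID space is now split in two: contiguous blocks keep the positive, upward-counting sequence, while striped groups are allocated in negative space with the group's internal index carried in the low bits. A rough illustration (the seed values here are made up):

  long contiguousId = 1073741825L;              // example positive block id
  long stripedGroupId = -1024L;                 // example negative group id
  long thirdInternalBlock = stripedGroupId + 2; // low bits carry the index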

[11/50] [abbrv] hadoop git commit: YARN-3345. Add non-exclusive node label API. Contributed by Wangda Tan

2015-03-23 Thread zhz
YARN-3345. Add non-exclusive node label API. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e1feb4ea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e1feb4ea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e1feb4ea

Branch: refs/heads/HDFS-7285
Commit: e1feb4ea1a532d680d6ca69b55ffcae1552d64f0
Parents: 7f1e2f9
Author: Jian He jia...@apache.org
Authored: Fri Mar 20 19:04:38 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Mar 20 19:04:38 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   2 +
 .../hadoop/yarn/api/records/NodeLabel.java  |  55 
 .../ResourceManagerAdministrationProtocol.java  |  12 +-
 .../UpdateNodeLabelsRequest.java|  49 +++
 .../UpdateNodeLabelsResponse.java   |  37 +++
 ...esourcemanager_administration_protocol.proto |   1 +
 ..._server_resourcemanager_service_protos.proto |   8 +
 .../src/main/proto/yarn_protos.proto|   5 +
 .../api/records/impl/pb/NodeLabelPBImpl.java| 106 +++
 .../nodelabels/CommonNodeLabelsManager.java |  75 -
 .../nodelabels/FileSystemNodeLabelsStore.java   |  28 +-
 .../hadoop/yarn/nodelabels/NodeLabel.java   | 113 ---
 .../hadoop/yarn/nodelabels/NodeLabelsStore.java |  11 +-
 .../hadoop/yarn/nodelabels/RMNodeLabel.java | 122 
 .../event/NodeLabelsStoreEventType.java |   3 +-
 .../event/StoreUpdateNodeLabelsEvent.java   |  36 +++
 ...nagerAdministrationProtocolPBClientImpl.java |  19 ++
 ...agerAdministrationProtocolPBServiceImpl.java |  23 ++
 .../impl/pb/UpdateNodeLabelsRequestPBImpl.java  | 145 +
 .../impl/pb/UpdateNodeLabelsResponsePBImpl.java |  67 
 .../hadoop/yarn/api/TestPBImplRecords.java  | 302 ++-
 .../DummyCommonNodeLabelsManager.java   |   9 +
 .../nodelabels/TestCommonNodeLabelsManager.java |  28 ++
 .../TestFileSystemNodeLabelsStore.java  |  15 +-
 .../server/resourcemanager/AdminService.java|  30 +-
 .../nodelabels/RMNodeLabelsManager.java |  24 +-
 .../resourcemanager/webapp/NodeLabelsPage.java  |   4 +-
 .../nodelabels/NullRMNodeLabelsManager.java |   8 +
 .../nodelabels/TestRMNodeLabelsManager.java |   8 +-
 29 files changed, 1189 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1feb4ea/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 177d587..f8c1a76 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -54,6 +54,8 @@ Release 2.8.0 - UNRELEASED
 
   NEW FEATURES
 
+YARN-3345. Add non-exclusive node label API. (Wangda Tan via jianhe)
+
   IMPROVEMENTS
 
 YARN-3243. CapacityScheduler should pass headroom from parent to children

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1feb4ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeLabel.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeLabel.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeLabel.java
new file mode 100644
index 000..23da1f4
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/NodeLabel.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.api.records;
+
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Stable;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.yarn.util.Records;
+
+@Public
+@Unstable
+public abstract class NodeLabel {
+  @Public
+  @Unstable
+  public static 
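The record's body is cut off above; for orientation, a hypothetical usage consistent with the non-exclusive node label feature this JIRA adds (the factory signature below is assumed, not quoted from the patch):

  // Assumed factory shape: a label name plus an exclusivity flag.
  NodeLabel gpuLabel = NodeLabel.newInstance("gpu", false); // non-exclusive label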

[04/50] [abbrv] hadoop git commit: HDFS-6841. Use Time.monotonicNow() wherever applicable instead of Time.now(). Contributed by Vinayakumar B

2015-03-23 Thread zhz
HDFS-6841. Use Time.monotonicNow() wherever applicable instead of Time.now(). 
Contributed by Vinayakumar B


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/75ead273
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/75ead273
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/75ead273

Branch: refs/heads/HDFS-7285
Commit: 75ead273bea8a7dad61c4f99c3a16cab2697c498
Parents: d368d36
Author: Kihwal Lee kih...@apache.org
Authored: Fri Mar 20 13:31:16 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Fri Mar 20 14:02:09 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  6 +--
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 40 ++--
 .../org/apache/hadoop/hdfs/LeaseRenewer.java| 14 +++
 .../hadoop/hdfs/protocol/DatanodeInfo.java  | 38 +++
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  5 ++-
 .../hadoop/hdfs/server/balancer/Balancer.java   |  8 ++--
 .../BlockInfoContiguousUnderConstruction.java   |  3 +-
 .../server/blockmanagement/BlockManager.java| 13 ---
 .../BlockPlacementPolicyDefault.java|  8 ++--
 .../blockmanagement/DatanodeDescriptor.java |  5 ++-
 .../server/blockmanagement/DatanodeManager.java | 12 +++---
 .../blockmanagement/DecommissionManager.java|  4 +-
 .../blockmanagement/HeartbeatManager.java   |  2 +-
 .../PendingReplicationBlocks.java   |  8 ++--
 .../hdfs/server/datanode/BPServiceActor.java| 35 +
 .../hdfs/server/datanode/DataXceiver.java   |  6 +--
 .../hdfs/server/namenode/Checkpointer.java  | 10 ++---
 .../server/namenode/EditLogOutputStream.java|  6 +--
 .../hadoop/hdfs/server/namenode/FSEditLog.java  | 14 +++
 .../hdfs/server/namenode/FSEditLogLoader.java   | 10 ++---
 .../hdfs/server/namenode/FSImageFormat.java | 16 
 .../hdfs/server/namenode/FSNamesystem.java  | 24 +++-
 .../hdfs/server/namenode/LeaseManager.java  |  8 ++--
 .../hdfs/server/namenode/NamenodeFsck.java  |  6 +--
 .../hdfs/server/namenode/ha/EditLogTailer.java  | 16 
 .../org/apache/hadoop/hdfs/web/JsonUtil.java|  2 +
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |  1 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java | 27 +++--
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |  2 +-
 .../org/apache/hadoop/hdfs/TestGetBlocks.java   | 14 +++
 .../hdfs/TestInjectionForSimulatedStorage.java  |  4 +-
 .../java/org/apache/hadoop/hdfs/TestLease.java  |  4 +-
 .../apache/hadoop/hdfs/TestLeaseRenewer.java| 10 ++---
 .../hadoop/hdfs/TestParallelReadUtil.java   |  4 +-
 .../org/apache/hadoop/hdfs/TestReplication.java |  4 +-
 .../hdfs/server/balancer/TestBalancer.java  |  8 ++--
 .../blockmanagement/BlockManagerTestUtil.java   |  2 +-
 .../TestBlockInfoUnderConstruction.java | 31 +++
 .../blockmanagement/TestHeartbeatHandling.java  | 20 +-
 .../blockmanagement/TestHostFileManager.java|  3 +-
 .../server/blockmanagement/TestNodeCount.java   |  4 +-
 .../TestOverReplicatedBlocks.java   | 11 +++---
 .../blockmanagement/TestReplicationPolicy.java  | 34 +
 .../server/datanode/BlockReportTestBase.java|  8 ++--
 .../server/datanode/TestBlockReplacement.java   |  8 ++--
 .../namenode/TestNamenodeCapacityReport.java|  5 ++-
 .../namenode/metrics/TestNameNodeMetrics.java   | 15 +---
 48 files changed, 304 insertions(+), 237 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/75ead273/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 0ab14f2..e82c4c4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1229,6 +1229,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7957. Truncate should verify quota before making changes. (jing9)
 
+HDFS-6841. Use Time.monotonicNow() wherever applicable instead of 
Time.now()
+(Vinayakumar B via kihwal)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75ead273/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 3236771..70f66bd 100644
--- 
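The rationale for the sweep above, stated once: System.currentTimeMillis() follows the wall clock and can jump when the system time is adjusted, while Time.monotonicNow() only moves forward, making it the safe basis for measuring elapsed intervals. A minimal sketch using the real org.apache.hadoop.util.Time:

  import org.apache.hadoop.util.Time;

  public class ElapsedExample {
    public static void main(String[] args) throws InterruptedException {
      long start = Time.monotonicNow();  // immune to wall-clock adjustments
      Thread.sleep(100);
      long elapsedMs = Time.monotonicNow() - start;
      System.out.println("elapsed ~" + elapsedMs + " ms");
    }
  }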

[19/50] [abbrv] hadoop git commit: MAPREDUCE-6287. Deprecated methods in org.apache.hadoop.examples.Sort. Contributed by Chao Zhang.

2015-03-23 Thread zhz
MAPREDUCE-6287. Deprecated methods in org.apache.hadoop.examples.Sort. 
Contributed by Chao Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b375d1fc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b375d1fc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b375d1fc

Branch: refs/heads/HDFS-7285
Commit: b375d1fc936913edf4a75212559f160c41043906
Parents: 4cd54d9
Author: Harsh J ha...@cloudera.com
Authored: Mon Mar 23 03:48:36 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Mon Mar 23 03:48:36 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/examples/Sort.java| 7 ---
 2 files changed, 7 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b375d1fc/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b75d8aa..20505b6 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -256,6 +256,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6287. Deprecated methods in org.apache.hadoop.examples.Sort
+(Chao Zhang via harsh)
+
 MAPREDUCE-5190. Unnecessary condition test in RandomSampler.
 (Jingguo Yao via harsh)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b375d1fc/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/Sort.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/Sort.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/Sort.java
index a90c02b..0382c09 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/Sort.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/Sort.java
@@ -24,7 +24,7 @@ import java.util.*;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.conf.Configured;
-import org.apache.hadoop.mapreduce.filecache.DistributedCache;
+import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.BytesWritable;
 import org.apache.hadoop.io.Writable;
@@ -160,13 +160,14 @@ public class Sort<K,V> extends Configured implements Tool {
  System.out.println("Sampling input to effect total-order sort...");
   job.setPartitionerClass(TotalOrderPartitioner.class);
   Path inputDir = FileInputFormat.getInputPaths(job)[0];
-  inputDir = inputDir.makeQualified(inputDir.getFileSystem(conf));
+  FileSystem fs = inputDir.getFileSystem(conf);
+  inputDir = inputDir.makeQualified(fs.getUri(), fs.getWorkingDirectory());
  Path partitionFile = new Path(inputDir, "_sortPartitioning");
  TotalOrderPartitioner.setPartitionFile(conf, partitionFile);
  InputSampler.<K,V>writePartitionFile(job, sampler);
  URI partitionUri = new URI(partitionFile.toString() +
 "#" + "_sortPartitioning");
-  DistributedCache.addCacheFile(partitionUri, conf);
+  job.addCacheFile(partitionUri);
 }
 
System.out.println("Running on " +
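For reference, the two non-deprecated replacements above in one compact sketch (paths and the job name are placeholders; exception handling elided):

  Configuration conf = new Configuration();
  Job job = Job.getInstance(conf, "sort");              // placeholder name
  Path input = new Path("/user/example/input");         // placeholder path
  FileSystem fs = input.getFileSystem(conf);
  input = input.makeQualified(fs.getUri(), fs.getWorkingDirectory());
  job.addCacheFile(new URI(input + "#_alias"));         // replaces DistributedCache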



[06/50] [abbrv] hadoop git commit: YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to a fully qualified path. Contributed by Xuan Gong

2015-03-23 Thread zhz
YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to a
fully qualified path. Contributed by Xuan Gong


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d81109e5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d81109e5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d81109e5

Branch: refs/heads/HDFS-7285
Commit: d81109e588493cef31e68508a3d671203bd23e12
Parents: d4f7e25
Author: Junping Du junping...@apache.org
Authored: Fri Mar 20 13:41:22 2015 -0700
Committer: Junping Du junping...@apache.org
Committed: Fri Mar 20 13:41:22 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java  | 5 +++--
 .../containermanager/logaggregation/AppLogAggregatorImpl.java   | 2 +-
 .../containermanager/logaggregation/LogAggregationService.java  | 2 +-
 .../logaggregation/TestLogAggregationService.java   | 4 +++-
 5 files changed, 11 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d81109e5/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 00b2c19..bbd018a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -83,6 +83,9 @@ Release 2.8.0 - UNRELEASED
 
 YARN-3351. AppMaster tracking URL is broken in HA. (Anubhav Dhoot via 
kasha)
 
+YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to
+a fully qualified path. (Xuan Gong via junping_du)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d81109e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
index ad2ee50..57f655b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
@@ -379,7 +379,7 @@ public class AggregatedLogFormat {
userUgi.doAs(new PrivilegedExceptionAction<FSDataOutputStream>() {
   @Override
   public FSDataOutputStream run() throws Exception {
-fc = FileContext.getFileContext(conf);
+fc = FileContext.getFileContext(remoteAppLogFile.toUri(), 
conf);
 fc.setUMask(APP_LOG_FILE_UMASK);
 return fc.create(
 remoteAppLogFile,
@@ -471,7 +471,8 @@ public class AggregatedLogFormat {
 
 public LogReader(Configuration conf, Path remoteAppLogFile)
 throws IOException {
-  FileContext fileContext = FileContext.getFileContext(conf);
+  FileContext fileContext =
+  FileContext.getFileContext(remoteAppLogFile.toUri(), conf);
   this.fsDataIStream = fileContext.open(remoteAppLogFile);
   reader =
   new TFile.Reader(this.fsDataIStream, fileContext.getFileStatus(

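Both hunks replace FileContext.getFileContext(conf), which always binds to
the default filesystem, with the overload that resolves the filesystem from
the path's own URI, so a fully qualified yarn.nodemanager.remote-app-log-dir
now works. A minimal sketch of the same call, with a hypothetical log path:

  // Resolve the FileContext against the log file's own scheme/authority
  Path remoteAppLogFile = new Path("hdfs://logs-cluster:8020/app-logs/app_1/host1");
  FileContext fc = FileContext.getFileContext(remoteAppLogFile.toUri(), conf);
  FSDataInputStream in = fc.open(remoteAppLogFile);
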
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d81109e5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
index ff70a68..393576b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
@@ -303,7 +303,7 @@ public class AppLogAggregatorImpl implements 
AppLogAggregator {
 userUgi.doAs(new 

[28/50] [abbrv] hadoop git commit: HADOOP-11541. Raw XOR coder

2015-03-23 Thread zhz
HADOOP-11541. Raw XOR coder


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aab05c40
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aab05c40
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aab05c40

Branch: refs/heads/HDFS-7285
Commit: aab05c40ad6869142df48b6985964207add0e0df
Parents: 0119597
Author: Kai Zheng dran...@apache.org
Authored: Sun Feb 8 01:40:27 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:45 2015 -0700

--
 .../io/erasurecode/rawcoder/XorRawDecoder.java  |  81 ++
 .../io/erasurecode/rawcoder/XorRawEncoder.java  |  61 +
 .../hadoop/io/erasurecode/TestCoderBase.java| 262 +++
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  96 +++
 .../erasurecode/rawcoder/TestXorRawCoder.java   |  52 
 5 files changed, 552 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aab05c40/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
new file mode 100644
index 000..98307a7
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawDecoder.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A raw decoder in XOR code scheme in pure Java, adapted from HDFS-RAID.
+ */
+public class XorRawDecoder extends AbstractRawErasureDecoder {
+
+  @Override
+  protected void doDecode(ByteBuffer[] inputs, int[] erasedIndexes,
+  ByteBuffer[] outputs) {
+assert(erasedIndexes.length == outputs.length);
+assert(erasedIndexes.length <= 1);
+
+int bufSize = inputs[0].remaining();
+int erasedIdx = erasedIndexes[0];
+
+// Set the output to zeros.
+for (int j = 0; j < bufSize; j++) {
+  outputs[0].put(j, (byte) 0);
+}
+
+// Process the inputs.
+for (int i = 0; i < inputs.length; i++) {
+  // Skip the erased location.
+  if (i == erasedIdx) {
+continue;
+  }
+
+for (int j = 0; j < bufSize; j++) {
+outputs[0].put(j, (byte) (outputs[0].get(j) ^ inputs[i].get(j)));
+  }
+}
+  }
+
+  @Override
+  protected void doDecode(byte[][] inputs, int[] erasedIndexes,
+  byte[][] outputs) {
+assert(erasedIndexes.length == outputs.length);
+assert(erasedIndexes.length <= 1);
+
+int bufSize = inputs[0].length;
+int erasedIdx = erasedIndexes[0];
+
+// Set the output to zeros.
+for (int j = 0; j < bufSize; j++) {
+  outputs[0][j] = 0;
+}
+
+// Process the inputs.
+for (int i = 0; i < inputs.length; i++) {
+  // Skip the erased location.
+  if (i == erasedIdx) {
+continue;
+  }
+
+for (int j = 0; j < bufSize; j++) {
+outputs[0][j] ^= inputs[i][j];
+  }
+}
+  }
+
+}

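XOR parity can regenerate at most one missing unit, which is why doDecode
asserts erasedIndexes.length <= 1. A self-contained illustration of the
same arithmetic, independent of the Hadoop classes above:

  // Parity of two data units; XOR-ing parity with the survivor rebuilds the loss
  byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6};
  byte[] parity = new byte[3], rebuilt = new byte[3];
  for (int j = 0; j < 3; j++) parity[j] = (byte) (d0[j] ^ d1[j]);
  for (int j = 0; j < 3; j++) rebuilt[j] = (byte) (parity[j] ^ d0[j]); // == d1[j]
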
http://git-wip-us.apache.org/repos/asf/hadoop/blob/aab05c40/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
new file mode 100644
index 000..99b20b9
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XorRawEncoder.java
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the 

[18/50] [abbrv] hadoop git commit: MAPREDUCE-5448. Addendum fix to remove deprecation warning by junit.Assert import in TestFileOutputCommitter.

2015-03-23 Thread zhz
MAPREDUCE-5448. Addendum fix to remove deprecation warning by junit.Assert 
import in TestFileOutputCommitter.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4cd54d9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4cd54d9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4cd54d9a

Branch: refs/heads/HDFS-7285
Commit: 4cd54d9a297435150ab61803284eb05603f114e2
Parents: 8770c82
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 10:33:15 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 10:33:15 2015 +0530

--
 .../hadoop/mapreduce/lib/output/TestFileOutputCommitter.java  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4cd54d9a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
index 5c4428b..7678f35 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
@@ -27,7 +27,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 
-import junit.framework.Assert;
 import junit.framework.TestCase;
 
 import org.apache.commons.logging.Log;
@@ -315,7 +314,7 @@ public class TestFileOutputCommitter extends TestCase {
 try {
   MapFileOutputFormat.getReaders(outDir, conf);
 } catch (Exception e) {
-  Assert.fail("Fail to read from MapFileOutputFormat: " + e);
+  fail("Fail to read from MapFileOutputFormat: " + e);
   e.printStackTrace();
 }
 



[32/50] [abbrv] hadoop git commit: HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng

2015-03-23 Thread zhz
HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90f073f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90f073f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90f073f3

Branch: refs/heads/HDFS-7285
Commit: 90f073f3c0e29e75903259837c71823df333d4fc
Parents: 0d515e7
Author: drankye dran...@gmail.com
Authored: Thu Feb 12 21:12:44 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:08:09 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   4 +
 .../io/erasurecode/rawcoder/JRSRawDecoder.java  |  69 +++
 .../io/erasurecode/rawcoder/JRSRawEncoder.java  |  78 +++
 .../erasurecode/rawcoder/RawErasureCoder.java   |   2 +-
 .../erasurecode/rawcoder/util/GaloisField.java  | 497 +++
 .../io/erasurecode/rawcoder/util/RSUtil.java|  22 +
 .../hadoop/io/erasurecode/TestCoderBase.java|  28 +-
 .../erasurecode/rawcoder/TestJRSRawCoder.java   |  93 
 .../erasurecode/rawcoder/TestRawCoderBase.java  |   5 +-
 .../erasurecode/rawcoder/TestXorRawCoder.java   |   1 -
 10 files changed, 786 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/90f073f3/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 9728f97..7bbacf7 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -8,3 +8,7 @@
 
 HADOOP-11541. Raw XOR coder
 ( Kai Zheng )
+
+HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng
+( Kai Zheng )
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/90f073f3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
new file mode 100644
index 000..dbb689e
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawDecoder.java
@@ -0,0 +1,69 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import org.apache.hadoop.io.erasurecode.rawcoder.util.RSUtil;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A raw erasure decoder in RS code scheme in pure Java in case native one
+ * isn't available in some environment. Please always use native 
implementations
+ * when possible.
+ */
+public class JRSRawDecoder extends AbstractRawErasureDecoder {
+  // To describe and calculate the needed Vandermonde matrix
+  private int[] errSignature;
+  private int[] primitivePower;
+
+  @Override
+  public void initialize(int numDataUnits, int numParityUnits, int chunkSize) {
+super.initialize(numDataUnits, numParityUnits, chunkSize);
+assert (getNumDataUnits() + getNumParityUnits() <
RSUtil.GF.getFieldSize());
+
+this.errSignature = new int[getNumParityUnits()];
+this.primitivePower = RSUtil.getPrimitivePower(getNumDataUnits(),
+getNumParityUnits());
+  }
+
+  @Override
+  protected void doDecode(ByteBuffer[] inputs, int[] erasedIndexes,
+  ByteBuffer[] outputs) {
+for (int i = 0; i < erasedIndexes.length; i++) {
+  errSignature[i] = primitivePower[erasedIndexes[i]];
+  RSUtil.GF.substitute(inputs, outputs[i], primitivePower[i]);
+}
+
+int dataLen = inputs[0].remaining();
+RSUtil.GF.solveVandermondeSystem(errSignature, outputs,
+erasedIndexes.length, dataLen);
+  }
+
+  @Override
+  protected void 

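The decoder above leans on GF(2^8) arithmetic from the new GaloisField
class listed in the diffstat. As a generic illustration only (not the
GaloisField implementation in this commit), carry-less multiplication in
GF(2^8) with the common reducing polynomial 0x11D looks like:

  // Russian-peasant GF(2^8) multiply; 0x11D = x^8 + x^4 + x^3 + x^2 + 1
  static int gfMul(int a, int b) {
    int p = 0;
    while (b != 0) {
      if ((b & 1) != 0) p ^= a;         // add (XOR) when the low bit of b is set
      a <<= 1;
      if ((a & 0x100) != 0) a ^= 0x11D; // reduce on overflow past 8 bits
      b >>= 1;
    }
    return p & 0xFF;
  }
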
[27/50] [abbrv] hadoop git commit: HADOOP-11534. Minor improvements for raw erasure coders ( Contributed by Kai Zheng )

2015-03-23 Thread zhz
HADOOP-11534. Minor improvements for raw erasure coders ( Contributed by Kai 
Zheng )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/01195978
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/01195978
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/01195978

Branch: refs/heads/HDFS-7285
Commit: 011959785bb6d24598151ac37619677948fe8bcf
Parents: 257d22d
Author: Vinayakumar B vinayakuma...@intel.com
Authored: Mon Feb 2 14:39:53 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:45 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt   |  5 -
 .../org/apache/hadoop/io/erasurecode/ECChunk.java| 15 +--
 .../rawcoder/AbstractRawErasureCoder.java| 12 ++--
 3 files changed, 23 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/01195978/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 8ce5a89..2124800 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -1,4 +1,7 @@
   BREAKDOWN OF HADOOP-11264 SUBTASKS AND RELATED JIRAS (Common part of 
HDFS-7285)
 
 HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding
-(Kai Zheng via umamahesh)
\ No newline at end of file
+(Kai Zheng via umamahesh)
+
+HADOOP-11534. Minor improvements for raw erasure coders
+( Kai Zheng via vinayakumarb )
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01195978/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
index f84eb11..01e8f35 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -66,15 +66,26 @@ public class ECChunk {
   }
 
   /**
-   * Convert an array of this chunks to an array of byte array
+   * Convert an array of this chunks to an array of byte array.
+   * Note the chunk buffers are not affected.
* @param chunks
* @return an array of byte array
*/
   public static byte[][] toArray(ECChunk[] chunks) {
 byte[][] bytesArr = new byte[chunks.length][];
 
+ByteBuffer buffer;
+for (int i = 0; i < chunks.length; i++) {
-  bytesArr[i] = chunks[i].getBuffer().array();
+  buffer = chunks[i].getBuffer();
+  if (buffer.hasArray()) {
+bytesArr[i] = buffer.array();
+  } else {
+bytesArr[i] = new byte[buffer.remaining()];
+// Avoid affecting the original one
+buffer.mark();
+buffer.get(bytesArr[i]);
+buffer.reset();
+  }
 }
 
 return bytesArr;

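The fix matters for direct ByteBuffers, where array() throws, and for heap
buffers, where array() exposes the whole backing array rather than a copy;
mark()/get()/reset() copies the bytes out without disturbing the buffer's
position. A minimal sketch of the same pattern:

  ByteBuffer buffer = ByteBuffer.allocateDirect(16); // hasArray() == false
  byte[] copy = new byte[buffer.remaining()];
  buffer.mark();
  buffer.get(copy);   // advances position while copying
  buffer.reset();     // restores position so the chunk is unaffected
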
http://git-wip-us.apache.org/repos/asf/hadoop/blob/01195978/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
index 474542b..74d2ab6 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
@@ -24,26 +24,26 @@ package org.apache.hadoop.io.erasurecode.rawcoder;
  */
 public abstract class AbstractRawErasureCoder implements RawErasureCoder {
 
-  private int dataSize;
-  private int paritySize;
+  private int numDataUnits;
+  private int numParityUnits;
   private int chunkSize;
 
   @Override
   public void initialize(int numDataUnits, int numParityUnits,
  int chunkSize) {
-this.dataSize = numDataUnits;
-this.paritySize = numParityUnits;
+this.numDataUnits = numDataUnits;
+this.numParityUnits = numParityUnits;
 this.chunkSize = chunkSize;
   }
 
   @Override
   public int getNumDataUnits() {
-return dataSize;
+return numDataUnits;
   }
 
   @Override
   public int 

[30/50] [abbrv] hadoop git commit: HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by Jing Zhao.

2015-03-23 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d515e7f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
index c4612a3..3a5e66e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
@@ -24,6 +24,7 @@ import java.util.List;
 import com.google.common.annotations.VisibleForTesting;
 
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
@@ -80,10 +81,10 @@ public class DatanodeStorageInfo {
   /**
* Iterates over the list of blocks belonging to the data-node.
*/
-  class BlockIterator implements Iterator<BlockInfoContiguous> {
-private BlockInfoContiguous current;
+  class BlockIterator implements Iterator<BlockInfo> {
+private BlockInfo current;
 
-BlockIterator(BlockInfoContiguous head) {
+BlockIterator(BlockInfo head) {
   this.current = head;
 }
 
@@ -91,8 +92,8 @@ public class DatanodeStorageInfo {
   return current != null;
 }
 
-public BlockInfoContiguous next() {
-  BlockInfoContiguous res = current;
+public BlockInfo next() {
+  BlockInfo res = current;
   current = 
current.getNext(current.findStorageInfo(DatanodeStorageInfo.this));
   return res;
 }
@@ -112,7 +113,7 @@ public class DatanodeStorageInfo {
   private volatile long remaining;
   private long blockPoolUsed;
 
-  private volatile BlockInfoContiguous blockList = null;
+  private volatile BlockInfo blockList = null;
   private int numBlocks = 0;
 
   /** The number of block reports received */
@@ -215,7 +216,7 @@ public class DatanodeStorageInfo {
 return blockPoolUsed;
   }
 
-  public AddBlockResult addBlock(BlockInfoContiguous b) {
+  public AddBlockResult addBlock(BlockInfo b, Block reportedBlock) {
 // First check whether the block belongs to a different storage
 // on the same DN.
 AddBlockResult result = AddBlockResult.ADDED;
@@ -234,13 +235,21 @@ public class DatanodeStorageInfo {
 }
 
 // add to the head of the data-node list
-b.addStorage(this);
+b.addStorage(this, reportedBlock);
+insertToList(b);
+return result;
+  }
+
+  AddBlockResult addBlock(BlockInfoContiguous b) {
+return addBlock(b, b);
+  }
+
+  public void insertToList(BlockInfo b) {
 blockList = b.listInsert(blockList, this);
 numBlocks++;
-return result;
   }
 
-  public boolean removeBlock(BlockInfoContiguous b) {
+  public boolean removeBlock(BlockInfo b) {
 blockList = b.listRemove(blockList, this);
 if (b.removeStorage(this)) {
   numBlocks--;
@@ -254,16 +263,15 @@ public class DatanodeStorageInfo {
 return numBlocks;
   }
   
-  Iterator<BlockInfoContiguous> getBlockIterator() {
+  Iterator<BlockInfo> getBlockIterator() {
 return new BlockIterator(blockList);
-
   }
 
   /**
* Move block to the head of the list of blocks belonging to the data-node.
* @return the index of the head of the blockList
*/
-  int moveBlockToHead(BlockInfoContiguous b, int curIndex, int headIndex) {
+  int moveBlockToHead(BlockInfo b, int curIndex, int headIndex) {
 blockList = b.moveBlockToHead(blockList, this, curIndex, headIndex);
 return curIndex;
   }
@@ -273,7 +281,7 @@ public class DatanodeStorageInfo {
* @return the head of the blockList
*/
   @VisibleForTesting
-  BlockInfoContiguous getBlockListHeadForTesting(){
+  BlockInfo getBlockListHeadForTesting(){
 return blockList;
   }
 
@@ -360,6 +368,6 @@ public class DatanodeStorageInfo {
   }
 
   static enum AddBlockResult {
-ADDED, REPLACED, ALREADY_EXIST;
+ADDED, REPLACED, ALREADY_EXIST
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d515e7f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicaUnderConstruction.java
new file mode 100644
index 000..f4600cb7
--- /dev/null
+++ 

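The BlockIterator above walks an intrusive linked list: each BlockInfo
carries, for every storage it lives on, a pointer to the next block on that
same storage. A simplified sketch of the traversal using the accessors
visible in the hunk (blockListHead, storage and process() are hypothetical
locals, not names from this commit):

  BlockInfo cur = blockListHead;
  while (cur != null) {
    process(cur);
    // follow the link that belongs to this particular storage
    cur = cur.getNext(cur.findStorageInfo(storage));
  }
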
[31/50] [abbrv] hadoop git commit: HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7716. Erasure Coding: extend BlockInfo to handle EC info. Contributed by 
Jing Zhao.

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0d515e7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0d515e7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0d515e7f

Branch: refs/heads/HDFS-7285
Commit: 0d515e7fa49f0755938181410c07fcf1526d2abc
Parents: 71de30a
Author: Jing Zhao ji...@apache.org
Authored: Tue Feb 10 17:54:10 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:07:52 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsConstants.java |   1 +
 .../server/blockmanagement/BlockCollection.java |  13 +-
 .../server/blockmanagement/BlockIdManager.java  |   7 +-
 .../hdfs/server/blockmanagement/BlockInfo.java  | 339 +
 .../blockmanagement/BlockInfoContiguous.java| 363 +++
 .../BlockInfoContiguousUnderConstruction.java   | 137 +--
 .../blockmanagement/BlockInfoStriped.java   | 179 +
 .../server/blockmanagement/BlockManager.java| 188 +-
 .../hdfs/server/blockmanagement/BlocksMap.java  |  46 +--
 .../CacheReplicationMonitor.java|  10 +-
 .../blockmanagement/DatanodeDescriptor.java |  22 +-
 .../blockmanagement/DatanodeStorageInfo.java|  38 +-
 .../ReplicaUnderConstruction.java   | 119 ++
 .../hdfs/server/namenode/FSDirectory.java   |   4 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  24 +-
 .../hdfs/server/namenode/NamenodeFsck.java  |   3 +-
 .../snapshot/FSImageFormatPBSnapshot.java   |   4 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |   4 +-
 .../server/blockmanagement/TestBlockInfo.java   |   6 +-
 .../blockmanagement/TestBlockInfoStriped.java   | 219 +++
 .../blockmanagement/TestBlockManager.java   |   4 +-
 .../blockmanagement/TestReplicationPolicy.java  |   2 +-
 22 files changed, 1125 insertions(+), 607 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d515e7f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index de60b6e..245b630 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -184,5 +184,6 @@ public class HdfsConstants {
 
   public static final byte NUM_DATA_BLOCKS = 3;
   public static final byte NUM_PARITY_BLOCKS = 2;
+  public static final long BLOCK_GROUP_INDEX_MASK = 15;
   public static final byte MAX_BLOCKS_IN_GROUP = 16;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0d515e7f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
index 1547611..974cac3 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
@@ -39,12 +39,12 @@ public interface BlockCollection {
   public ContentSummary computeContentSummary();
 
   /**
-   * @return the number of blocks
+   * @return the number of blocks or block groups
*/ 
   public int numBlocks();
 
   /**
-   * Get the blocks.
+   * Get the blocks or block groups.
*/
   public BlockInfoContiguous[] getBlocks();
 
@@ -55,8 +55,8 @@ public interface BlockCollection {
   public long getPreferredBlockSize();
 
   /**
-   * Get block replication for the collection 
-   * @return block replication value
+   * Get block replication for the collection.
+   * @return block replication value. Return 0 if the file is erasure coded.
*/
   public short getBlockReplication();
 
@@ -71,7 +71,7 @@ public interface BlockCollection {
   public String getName();
 
   /**
-   * Set the block at the given index.
+   * Set the block/block-group at the given index.
*/
   public void setBlock(int index, BlockInfoContiguous blk);
 
@@ -79,7 +79,8 @@ public interface 

[45/50] [abbrv] hadoop git commit: HADOOP-11706 Refine a little bit erasure coder API

2015-03-23 Thread zhz
HADOOP-11706 Refine a little bit erasure coder API


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fce132f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fce132f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fce132f9

Branch: refs/heads/HDFS-7285
Commit: fce132f98ecc6805506b35ce83dca67e407fde27
Parents: 4216076
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:21:37 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:48 2015 -0700

--
 .../io/erasurecode/coder/ErasureCoder.java  |  4 +++-
 .../erasurecode/rawcoder/RawErasureCoder.java   |  4 +++-
 .../hadoop/io/erasurecode/TestCoderBase.java| 17 +---
 .../erasurecode/coder/TestErasureCoderBase.java | 21 +++-
 .../erasurecode/rawcoder/TestJRSRawCoder.java   | 12 +--
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  2 ++
 6 files changed, 31 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fce132f9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
index 68875c0..c5922f3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * An erasure coder to perform encoding or decoding given a group. Generally it
  * involves calculating necessary internal steps according to codec logic. For
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.coder;
  * of multiple coding steps.
  *
  */
-public interface ErasureCoder {
+public interface ErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.

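Extending Configurable gives every coder the standard setConf()/getConf()
pair, so call sites can hand a Hadoop Configuration to a coder before
initialize(). A minimal sketch, with a hypothetical coder class and
configuration key:

  Configuration conf = new Configuration();
  conf.setBoolean("io.erasurecode.coder.usenative", false); // hypothetical key
  RawErasureCoder coder = new SomeRawCoder();               // hypothetical impl
  coder.setConf(conf); // available once RawErasureCoder extends Configurable
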
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fce132f9/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
index 91a9abf..9af5b6c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.rawcoder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * RawErasureCoder is a common interface for {@link RawErasureEncoder} and
  * {@link RawErasureDecoder} as both encoder and decoder share some properties.
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.rawcoder;
  * low level constructs, since it only takes care of the math calculation with
  * a group of byte buffers.
  */
-public interface RawErasureCoder {
+public interface RawErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fce132f9/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 194413a..22fd98d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
@@ -17,11 +17,12 @@
  */
 package org.apache.hadoop.io.erasurecode;
 
+import org.apache.hadoop.conf.Configuration;
+
 import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.Random;
 
-import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertTrue;
 
 /**
@@ -31,6 +32,7 @@ import static org.junit.Assert.assertTrue;
 public abstract class TestCoderBase {
   protected static Random RAND = new Random();
 
+  private 

[33/50] [abbrv] hadoop git commit: HDFS-7749. Erasure Coding: Add striped block support in INodeFile. Contributed by Jing Zhao.

2015-03-23 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f05e27ee/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
new file mode 100644
index 000..47445be
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileWithStripedBlocksFeature.java
@@ -0,0 +1,112 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
+
+/**
+ * Feature for file with striped blocks
+ */
+class FileWithStripedBlocksFeature implements INode.Feature {
+  private BlockInfoStriped[] blocks;
+
+  FileWithStripedBlocksFeature() {
+blocks = new BlockInfoStriped[0];
+  }
+
+  FileWithStripedBlocksFeature(BlockInfoStriped[] blocks) {
+Preconditions.checkArgument(blocks != null);
+this.blocks = blocks;
+  }
+
+  BlockInfoStriped[] getBlocks() {
+return this.blocks;
+  }
+
+  void setBlock(int index, BlockInfoStriped blk) {
+blocks[index] = blk;
+  }
+
+  BlockInfoStriped getLastBlock() {
+return blocks == null || blocks.length == 0 ?
+null : blocks[blocks.length - 1];
+  }
+
+  int numBlocks() {
+return blocks == null ? 0 : blocks.length;
+  }
+
+  void updateBlockCollection(INodeFile file) {
+if (blocks != null) {
+  for (BlockInfoStriped blk : blocks) {
+blk.setBlockCollection(file);
+  }
+}
+  }
+
+  private void setBlocks(BlockInfoStriped[] blocks) {
+this.blocks = blocks;
+  }
+
+  void addBlock(BlockInfoStriped newBlock) {
+if (this.blocks == null) {
+  this.setBlocks(new BlockInfoStriped[]{newBlock});
+} else {
+  int size = this.blocks.length;
+  BlockInfoStriped[] newlist = new BlockInfoStriped[size + 1];
+  System.arraycopy(this.blocks, 0, newlist, 0, size);
+  newlist[size] = newBlock;
+  this.setBlocks(newlist);
+}
+  }
+
+  boolean removeLastBlock(Block oldblock) {
+if (blocks == null || blocks.length == 0) {
+  return false;
+}
+int newSize = blocks.length - 1;
+if (!blocks[newSize].equals(oldblock)) {
+  return false;
+}
+
+//copy to a new list
+BlockInfoStriped[] newlist = new BlockInfoStriped[newSize];
+System.arraycopy(blocks, 0, newlist, 0, newSize);
+setBlocks(newlist);
+return true;
+  }
+
+  void truncateStripedBlocks(int n) {
+final BlockInfoStriped[] newBlocks;
+if (n == 0) {
+  newBlocks = new BlockInfoStriped[0];
+} else {
+  newBlocks = new BlockInfoStriped[n];
+  System.arraycopy(getBlocks(), 0, newBlocks, 0, n);
+}
+// set new blocks
+setBlocks(newBlocks);
+  }
+
+  void clear() {
+this.blocks = null;
+  }
+}

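addBlock, removeLastBlock and truncateStripedBlocks all follow a
copy-on-write pattern: the blocks array is never resized in place; a fresh
array is built and then published through setBlocks(). A sketch of the
append case from above (names taken from the new class):

  BlockInfoStriped[] old = getBlocks();
  BlockInfoStriped[] grown = new BlockInfoStriped[old.length + 1];
  System.arraycopy(old, 0, grown, 0, old.length);
  grown[old.length] = newBlock; // then publish via setBlocks(grown)
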
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f05e27ee/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 3772690..d3c5e3e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.hdfs.server.namenode;
 
+import static 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState.UNDER_CONSTRUCTION;
 import static 

[26/50] [abbrv] hadoop git commit: Fix Compilation Error in TestAddBlockgroup.java after the merge

2015-03-23 Thread zhz
Fix Compilation Error in TestAddBlockgroup.java after the merge


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e892417d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e892417d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e892417d

Branch: refs/heads/HDFS-7285
Commit: e892417d8fdbcb7dbbd2ae45d1bddfb4e2760825
Parents: acc8e00
Author: Jing Zhao ji...@apache.org
Authored: Sun Feb 8 16:01:03 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:44 2015 -0700

--
 .../apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e892417d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
index 95133ce..06dfade 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockgroup.java
@@ -26,7 +26,7 @@ import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -75,7 +75,7 @@ public class TestAddBlockgroup {
 final Path file1 = new Path(/file1);
 DFSTestUtil.createFile(fs, file1, BLOCKSIZE * 2, REPLICATION, 0L);
 INodeFile file1Node = fsdir.getINode4Write(file1.toString()).asFile();
-BlockInfo[] file1Blocks = file1Node.getBlocks();
+BlockInfoContiguous[] file1Blocks = file1Node.getBlocks();
 assertEquals(2, file1Blocks.length);
 assertEquals(GROUP_SIZE, file1Blocks[0].numNodes());
 assertEquals(HdfsConstants.MAX_BLOCKS_IN_GROUP,



[17/50] [abbrv] hadoop git commit: MAPREDUCE-6286. Amend commit to CHANGES.txt for backport into 2.7.0.

2015-03-23 Thread zhz
MAPREDUCE-6286. Amend commit to CHANGES.txt for backport into 2.7.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8770c82a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8770c82a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8770c82a

Branch: refs/heads/HDFS-7285
Commit: 8770c82acc948bc5127afb1c59072718fd04630c
Parents: 1d5c796
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 10:15:52 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 10:15:52 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8770c82a/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index e98aacd..b75d8aa 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -292,10 +292,6 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-5448. MapFileOutputFormat#getReaders bug with hidden
 files/folders. (Maysam Yabandeh via harsh)
 
-MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which
-causes counter limits are not reset correctly.
-(Zhihai Xu via harsh)
-
 MAPREDUCE-6213. NullPointerException caused by job history server addr not
 resolvable. (Peng Zhang via harsh)
 
@@ -398,6 +394,10 @@ Release 2.7.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which
+causes counter limits are not reset correctly.
+(Zhihai Xu via harsh)
+
 MAPREDUCE-6210. Use getApplicationAttemptId() instead of getApplicationId()
 for logging AttemptId in RMContainerAllocator.java (Leitao Guo via 
aajisaka)
 



[29/50] [abbrv] hadoop git commit: Added the missed entry for commit of HADOOP-11541

2015-03-23 Thread zhz
Added the missed entry for commit of HADOOP-11541


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/71de30a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/71de30a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/71de30a7

Branch: refs/heads/HDFS-7285
Commit: 71de30a720336190e1425556de33ed60f7cb808f
Parents: aab05c4
Author: drankye dran...@gmail.com
Authored: Mon Feb 9 22:04:08 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:45 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/71de30a7/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 2124800..9728f97 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -4,4 +4,7 @@
 (Kai Zheng via umamahesh)
 
 HADOOP-11534. Minor improvements for raw erasure coders
-( Kai Zheng via vinayakumarb )
\ No newline at end of file
+( Kai Zheng via vinayakumarb )
+
+HADOOP-11541. Raw XOR coder
+( Kai Zheng )



[10/50] [abbrv] hadoop git commit: YARN-3345. Add non-exclusive node label API. Contributed by Wangda Tan

2015-03-23 Thread zhz
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1feb4ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java
index d05c75c..1e2326b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestCommonNodeLabelsManager.java
@@ -29,7 +29,9 @@ import java.util.Set;
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.NodeLabel;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -536,4 +538,30 @@ public class TestCommonNodeLabelsManager extends 
NodeLabelTestBase {
 Assert.assertTrue("Should failed when #labels > 1 on a host after add",
 failed);
   }
+
+  @Test (timeout = 5000)
+  public void testUpdateNodeLabels() throws Exception {
+boolean failed = false;
+
+// should fail: label isn't exist
+try {
+  mgr.updateNodeLabels(Arrays.asList(NodeLabel.newInstance(
+"p1", false)));
+} catch (YarnException e) {
+  failed = true;
+}
+Assert.assertTrue("Should fail since the node label doesn't exist",
failed);
+
+mgr.addToCluserNodeLabels(toSet("p1", "p2", "p3"));
+
+mgr.updateNodeLabels(Arrays.asList(
+NodeLabel.newInstance("p1", false), NodeLabel.newInstance("p2",
true)));
+Assert.assertEquals("p1", mgr.lastUpdatedNodeLabels.get(0).getNodeLabel());
+Assert.assertFalse(mgr.lastUpdatedNodeLabels.get(0).getIsExclusive());
+Assert.assertTrue(mgr.lastUpdatedNodeLabels.get(1).getIsExclusive());
+
+// Check exclusive for p1/p2
+Assert.assertFalse(mgr.isExclusiveNodeLabel("p1"));
+Assert.assertTrue(mgr.isExclusiveNodeLabel("p2"));
+  }
 }

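The new test exercises NodeLabel.newInstance(name, isExclusive): an
exclusive label keeps its nodes for matching requests only, while a
non-exclusive label lets idle labeled capacity be shared with requests that
did not ask for the label. A small sketch with hypothetical label names:

  NodeLabel shared = NodeLabel.newInstance("gpu", false);      // non-exclusive
  NodeLabel dedicated = NodeLabel.newInstance("secure", true); // exclusive
  // labels must already exist in the cluster before updating:
  // mgr.updateNodeLabels(Arrays.asList(shared, dedicated));
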
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1feb4ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
index 5cc026a..6694290 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/nodelabels/TestFileSystemNodeLabelsStore.java
@@ -24,6 +24,7 @@ import java.util.Arrays;
 import java.util.Map;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.NodeLabel;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.event.InlineDispatcher;
 import org.junit.After;
@@ -188,7 +189,7 @@ public class TestFileSystemNodeLabelsStore extends 
NodeLabelTestBase {
   }
   
   @SuppressWarnings({ unchecked, rawtypes })
-  @Test//(timeout = 1)
+  @Test (timeout = 1)
   public void testSerilizationAfterRecovery() throws Exception {
 mgr.addToCluserNodeLabels(toSet("p1", "p2", "p3"));
 mgr.addToCluserNodeLabels(toSet("p4"));
@@ -218,6 +219,14 @@ public class TestFileSystemNodeLabelsStore extends 
NodeLabelTestBase {
  * p4: n4 
  * p6: n6, n7
  */
+
+mgr.updateNodeLabels(Arrays.asList(NodeLabel.newInstance("p2", false)));
+mgr.updateNodeLabels(Arrays.asList(NodeLabel.newInstance("p6", false)));
+
+/*
+ * Set p2/p6 to be exclusive
+ */
+
 // shutdown mgr and start a new mgr
 mgr.stop();
 
@@ -239,6 +248,10 @@ public class TestFileSystemNodeLabelsStore extends 
NodeLabelTestBase {
 "p4", toSet(toNodeId("n4")),
 "p2", toSet(toNodeId("n2")));
 
+Assert.assertFalse(mgr.isExclusiveNodeLabel("p2"));
+Assert.assertTrue(mgr.isExclusiveNodeLabel("p4"));
+Assert.assertFalse(mgr.isExclusiveNodeLabel("p6"));
+
 /*
  * Add label p7,p8 then shutdown
  */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e1feb4ea/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java

[07/50] [abbrv] hadoop git commit: YARN-3356. Capacity Scheduler FiCaSchedulerApp should use ResourceUsage to track used-resources-by-label. Contributed by Wangda Tan

2015-03-23 Thread zhz
YARN-3356. Capacity Scheduler FiCaSchedulerApp should use ResourceUsage to 
track used-resources-by-label. Contributed by Wangda Tan


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/586348e4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/586348e4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/586348e4

Branch: refs/heads/HDFS-7285
Commit: 586348e4cbf197188057d6b843a6701cfffdaff3
Parents: d81109e
Author: Jian He jia...@apache.org
Authored: Fri Mar 20 13:54:01 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Fri Mar 20 13:54:01 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/AbstractYarnScheduler.java|   5 +-
 .../scheduler/AppSchedulingInfo.java|  27 ++-
 .../server/resourcemanager/scheduler/Queue.java |  20 +++
 .../scheduler/ResourceUsage.java|  19 ++-
 .../scheduler/SchedulerApplicationAttempt.java  |  50 +++---
 .../scheduler/SchedulerNode.java|  14 ++
 .../scheduler/capacity/AbstractCSQueue.java |  24 +++
 .../scheduler/capacity/LeafQueue.java   |  29 +++-
 .../scheduler/common/fica/FiCaSchedulerApp.java |  17 +-
 .../scheduler/fair/FSAppAttempt.java|  11 +-
 .../resourcemanager/scheduler/fair/FSQueue.java |   8 +
 .../scheduler/fifo/FifoScheduler.java   |  12 +-
 .../yarn/server/resourcemanager/MockAM.java |  24 +--
 .../capacity/TestCapacityScheduler.java | 167 ++-
 .../scheduler/capacity/TestChildQueueOrder.java |   3 +-
 .../capacity/TestContainerAllocation.java   |  70 +---
 .../scheduler/capacity/TestReservations.java|   6 +-
 .../scheduler/capacity/TestUtils.java   | 129 ++
 19 files changed, 509 insertions(+), 129 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/586348e4/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index bbd018a..046b7b1 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -65,6 +65,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3357. Move TestFifoScheduler to FIFO package. (Rohith Sharmaks 
 via devaraj)
 
+YARN-3356. Capacity Scheduler FiCaSchedulerApp should use ResourceUsage to
+track used-resources-by-label. (Wangda Tan via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/586348e4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 968a767..e1f94cf 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -358,14 +358,15 @@ public abstract class AbstractYarnScheduler
 container));
 
   // recover scheduler node
-  nodes.get(nm.getNodeID()).recoverContainer(rmContainer);
+  SchedulerNode schedulerNode = nodes.get(nm.getNodeID());
+  schedulerNode.recoverContainer(rmContainer);
 
   // recover queue: update headroom etc.
   Queue queue = schedulerAttempt.getQueue();
   queue.recoverContainer(clusterResource, schedulerAttempt, rmContainer);
 
   // recover scheduler attempt
-  schedulerAttempt.recoverContainer(rmContainer);
+  schedulerAttempt.recoverContainer(schedulerNode, rmContainer);
 
   // set master container for the current running AMContainer for this
   // attempt.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/586348e4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
--
diff --git 

[25/50] [abbrv] hadoop git commit: HDFS-7339. Allocating and persisting block groups in NameNode. Contributed by Zhe Zhang

2015-03-23 Thread zhz
HDFS-7339. Allocating and persisting block groups in NameNode. Contributed by 
Zhe Zhang

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

Conflicts:

hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54c65267
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54c65267
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54c65267

Branch: refs/heads/HDFS-7285
Commit: 54c6526721959a5ae5b70a66e04076e63eb3df5a
Parents: 39e7d4e
Author: Zhe Zhang z...@apache.org
Authored: Fri Jan 30 16:16:26 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:44 2015 -0700

--
 .../org/apache/hadoop/hdfs/DFSConfigKeys.java   |  2 +
 .../hadoop/hdfs/protocol/HdfsConstants.java |  4 +
 .../server/blockmanagement/BlockIdManager.java  |  8 +-
 .../SequentialBlockGroupIdGenerator.java| 82 +++
 .../SequentialBlockIdGenerator.java |  6 +-
 .../hdfs/server/namenode/FSDirectory.java   |  8 +-
 .../hdfs/server/namenode/FSNamesystem.java  | 34 +---
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 11 +++
 .../hdfs/server/namenode/TestAddBlockgroup.java | 84 
 9 files changed, 223 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54c65267/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 9ecf242..2b62744 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -219,6 +219,8 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final int DFS_NAMENODE_REPLICATION_INTERVAL_DEFAULT = 3;
   public static final String  DFS_NAMENODE_REPLICATION_MIN_KEY = 
"dfs.namenode.replication.min";
   public static final int DFS_NAMENODE_REPLICATION_MIN_DEFAULT = 1;
+  public static final String  DFS_NAMENODE_STRIPE_MIN_KEY = 
"dfs.namenode.stripe.min";
+  public static final int DFS_NAMENODE_STRIPE_MIN_DEFAULT = 1;
   public static final String  DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY 
= "dfs.namenode.replication.pending.timeout-sec";
   public static final int 
DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT = -1;
   public static final String  DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY = 
"dfs.namenode.replication.max-streams";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54c65267/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 54c650b..de60b6e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -181,4 +181,8 @@ public class HdfsConstants {
   public static final byte WARM_STORAGE_POLICY_ID = 5;
   public static final byte EC_STORAGE_POLICY_ID = 4;
   public static final byte COLD_STORAGE_POLICY_ID = 2;
+
+  public static final byte NUM_DATA_BLOCKS = 3;
+  public static final byte NUM_PARITY_BLOCKS = 2;
+  public static final byte MAX_BLOCKS_IN_GROUP = 16;
 }

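With NUM_DATA_BLOCKS = 3, NUM_PARITY_BLOCKS = 2 and MAX_BLOCKS_IN_GROUP = 16,
a block group can reserve the low four bits of its ID for the member index
(compare BLOCK_GROUP_INDEX_MASK = 15 added by HDFS-7716 later in this
thread), so member block IDs are groupId | index. An illustrative sketch of
that scheme (IDs hypothetical):

  long mask = 15L;                  // low 4 bits, enough for 16 members
  long groupId = 0x7000L;           // a group ID ends in 0b0000
  long thirdDataBlock = groupId | 2L;
  int indexInGroup = (int) (thirdDataBlock & mask); // == 2
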
http://git-wip-us.apache.org/repos/asf/hadoop/blob/54c65267/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 1c69203..c8b9d20 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -53,10 +53,12 @@ public 

[15/50] [abbrv] hadoop git commit: MAPREDUCE-5448. MapFileOutputFormat#getReaders bug with invisible files/folders. Contributed by Maysam Yabandeh.

2015-03-23 Thread zhz
MAPREDUCE-5448. MapFileOutputFormat#getReaders bug with invisible 
files/folders. Contributed by Maysam Yabandeh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b46c2bb5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b46c2bb5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b46c2bb5

Branch: refs/heads/HDFS-7285
Commit: b46c2bb51ae524e6640756620f70e5925cda7592
Parents: 4335429
Author: Harsh J ha...@cloudera.com
Authored: Sun Mar 22 09:45:48 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Sun Mar 22 09:45:48 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +++
 .../mapreduce/lib/output/MapFileOutputFormat.java   | 12 +++-
 .../mapreduce/lib/output/TestFileOutputCommitter.java   | 10 ++
 3 files changed, 24 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b46c2bb5/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index fc42941..2920811 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -286,6 +286,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+MAPREDUCE-5448. MapFileOutputFormat#getReaders bug with hidden
+files/folders. (Maysam Yabandeh via harsh)
+
 MAPREDUCE-6286. A typo in HistoryViewer makes some code useless, which
 causes counter limits are not reset correctly.
 (Zhihai Xu via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b46c2bb5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
index b8cb997..da33770 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MapFileOutputFormat.java
@@ -24,6 +24,7 @@ import java.util.Arrays;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.PathFilter;
 
 import org.apache.hadoop.io.MapFile;
 import org.apache.hadoop.io.WritableComparable;
@@ -88,7 +89,16 @@ public class MapFileOutputFormat
   public static MapFile.Reader[] getReaders(Path dir,
   Configuration conf) throws IOException {
 FileSystem fs = dir.getFileSystem(conf);
-Path[] names = FileUtil.stat2Paths(fs.listStatus(dir));
+PathFilter filter = new PathFilter() {
+  @Override
+  public boolean accept(Path path) {
+String name = path.getName();
+if (name.startsWith("_") || name.startsWith("."))
+  return false;
+return true;
+  }
+};
+Path[] names = FileUtil.stat2Paths(fs.listStatus(dir, filter));
 
 // sort names, so that hash partitioning works
 Arrays.sort(names);

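With the filter in place, side files that frameworks drop into a job's output directory (for example _SUCCESS markers or hidden .crc checksum files) are no longer mistaken for MapFiles. A sketch of the call site this protects (the path is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;

public class GetReadersExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Output directory of a finished job; may contain _SUCCESS, _logs, .crc files.
    Path outputDir = new Path("/user/example/job-output");
    MapFile.Reader[] readers = MapFileOutputFormat.getReaders(outputDir, conf);
    System.out.println("opened " + readers.length + " partition readers");
    for (MapFile.Reader reader : readers) {
      reader.close();
    }
  }
}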
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b46c2bb5/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
index 0d4ab98..5c4428b 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
@@ -27,6 +27,7 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.TimeUnit;
 
+import junit.framework.Assert;
 import junit.framework.TestCase;
 
 import org.apache.commons.logging.Log;
@@ -309,6 +310,15 @@ public class TestFileOutputCommitter extends 

[01/50] [abbrv] hadoop git commit: HDFS-7829. Code clean up for LocatedBlock. Contributed by Takanobu Asanuma.

2015-03-23 Thread zhz
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 a42ffa8b7 - edbe633c0 (forced update)


HDFS-7829. Code clean up for LocatedBlock. Contributed by Takanobu Asanuma.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a6a5aae4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a6a5aae4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a6a5aae4

Branch: refs/heads/HDFS-7285
Commit: a6a5aae472d015d2ea5cd746719485dff93873a8
Parents: 6bc7710
Author: Jing Zhao ji...@apache.org
Authored: Fri Mar 20 10:50:03 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Fri Mar 20 10:50:19 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../hadoop/hdfs/protocol/LocatedBlock.java  | 20 ++--
 .../hadoop/hdfs/protocol/LocatedBlocks.java |  2 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  2 +-
 .../server/protocol/BlockRecoveryCommand.java   |  4 ++--
 .../hadoop/hdfs/TestDFSClientRetries.java   |  5 ++---
 .../org/apache/hadoop/hdfs/TestDFSUtil.java |  8 ++--
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  3 ++-
 8 files changed, 22 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6a5aae4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 52fbeff..418eee6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -326,6 +326,8 @@ Release 2.8.0 - UNRELEASED
 HDFS-7835. make initial sleeptime in locateFollowingBlock configurable for
 DFSClient. (Zhihai Xu via Yongjun Zhang)
 
+HDFS-7829. Code clean up for LocatedBlock. (Takanobu Asanuma via jing9)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6a5aae4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
index 0d52191..e729869 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
@@ -44,9 +44,9 @@ public class LocatedBlock {
   private long offset;  // offset of the first byte of the block in the file
   private final DatanodeInfoWithStorage[] locs;
   /** Cached storage ID for each replica */
-  private String[] storageIDs;
+  private final String[] storageIDs;
   /** Cached storage type for each replica, if reported. */
-  private StorageType[] storageTypes;
+  private final StorageType[] storageTypes;
   // corrupt flag is true if all of the replicas of a block are corrupt.
   // else false. If block has few corrupt replicas, they are filtered and 
   // their locations are not part of this object
@@ -62,16 +62,8 @@ public class LocatedBlock {
   new DatanodeInfoWithStorage[0];
 
   public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs) {
-this(b, locs, -1, false); // startOffset is unknown
-  }
-
-  public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, long startOffset, 
-  boolean corrupt) {
-this(b, locs, null, null, startOffset, corrupt, EMPTY_LOCS);
-  }
-
-  public LocatedBlock(ExtendedBlock b, DatanodeStorageInfo[] storages) {
-this(b, storages, -1, false); // startOffset is unknown
+// By default, startOffset is unknown(-1) and corrupt is false.
+this(b, locs, null, null, -1, false, EMPTY_LOCS);
   }
 
   public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs,
@@ -170,11 +162,11 @@ public class LocatedBlock {
 return b.getNumBytes();
   }
 
-  void setStartOffset(long value) {
+  public void setStartOffset(long value) {
 this.offset = value;
   }
 
-  void setCorrupt(boolean corrupt) {
+  public void setCorrupt(boolean corrupt) {
 this.corrupt = corrupt;
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a6a5aae4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
index fc739cf..e35a431 100644
--- 

[08/50] [abbrv] hadoop git commit: HADOOP-11447. Add a more meaningful toString method to SampleStat and MutableStat. (kasha)

2015-03-23 Thread zhz
HADOOP-11447. Add a more meaningful toString method to SampleStat and 
MutableStat. (kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fe5c23b6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fe5c23b6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fe5c23b6

Branch: refs/heads/HDFS-7285
Commit: fe5c23b670c773145b87fecfaf9191536e9f1c51
Parents: 586348e
Author: Karthik Kambatla ka...@apache.org
Authored: Fri Mar 20 17:03:03 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Fri Mar 20 17:03:03 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  3 +++
 .../org/apache/hadoop/metrics2/lib/MutableStat.java|  4 
 .../org/apache/hadoop/metrics2/util/SampleStat.java| 13 +
 3 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5c23b6/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 823a36b..4cd2154 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -455,6 +455,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11709. Time.NANOSECONDS_PER_MILLISECOND - use class-level final
 constant instead of method variable (Ajith S via ozawa)
 
+HADOOP-11447. Add a more meaningful toString method to SampleStat and 
+MutableStat. (kasha)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5c23b6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java
index ba37757..d794e8e 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableStat.java
@@ -151,4 +151,8 @@ public class MutableStat extends MutableMetric {
 minMax.reset();
   }
 
+  @Override
+  public String toString() {
+return lastStat().toString();
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fe5c23b6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleStat.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleStat.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleStat.java
index 589062a..cd9aaa4 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleStat.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/util/SampleStat.java
@@ -137,6 +137,19 @@ public class SampleStat {
 return minmax.max();
   }
 
+  @Override
+  public String toString() {
+try {
+  return "Samples = " + numSamples() +
+      "  Min = " + min() +
+      "  Mean = " + mean() +
+      "  Std Dev = " + stddev() +
+      "  Max = " + max();
+} catch (Throwable t) {
+  return super.toString();
+}
+  }
+
   /**
* Helper to keep running min/max
*/

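A small illustration of what the new toString buys when debugging metrics (the metric names below are made up for the example):

import org.apache.hadoop.metrics2.lib.MutableStat;

public class StatToStringExample {
  public static void main(String[] args) {
    MutableStat rpcTime =
        new MutableStat("rpcTime", "RPC processing time", "ops", "millis");
    rpcTime.add(12);
    rpcTime.add(30);
    rpcTime.add(18);
    // Previously this printed the default Object representation; now it
    // prints the sample count, min, mean, std dev and max of the interval.
    System.out.println(rpcTime);
  }
}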


[43/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903 and HDFS-7435. Contributed by Zhe Zhang.

2015-03-23 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903 and 
HDFS-7435. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b6bc9882
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b6bc9882
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b6bc9882

Branch: refs/heads/HDFS-7285
Commit: b6bc988206768c21313952076aed7694443627ee
Parents: 31d0e40
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 16 14:27:21 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:47 2015 -0700

--
 .../hadoop/hdfs/server/blockmanagement/DecommissionManager.java | 2 +-
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java| 2 +-
 .../hadoop/hdfs/server/namenode/snapshot/FileDiffList.java  | 3 ++-
 .../src/test/java/org/apache/hadoop/hdfs/TestDecommission.java  | 5 ++---
 .../hadoop/hdfs/server/namenode/TestAddStripedBlocks.java   | 4 ++--
 5 files changed, 8 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6bc9882/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
index 0faf3ad..df31d6e 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
@@ -536,7 +536,7 @@ public class DecommissionManager {
  */
 private void processBlocksForDecomInternal(
 final DatanodeDescriptor datanode,
-final Iterator<BlockInfoContiguous> it,
+final Iterator<? extends BlockInfo> it,
 final List<BlockInfoContiguous> insufficientlyReplicated,
 boolean pruneSufficientlyReplicated) {
   boolean firstReplicationLog = true;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6bc9882/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 773f918..b1e2c7f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1982,7 +1982,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 }
 
 // Check if the file is already being truncated with the same length
-final BlockInfoContiguous last = file.getLastBlock();
+final BlockInfo last = file.getLastBlock();
 if (last != null && last.getBlockUCState() == BlockUCState.UNDER_RECOVERY) {
   final Block truncateBlock
   = ((BlockInfoContiguousUnderConstruction)last).getTruncateBlock();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b6bc9882/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
index a1263c5..d0248eb 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
@@ -21,6 +21,7 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
@@ -132,7 +133,7 @@ public class FileDiffList extends
   break;
 }
 // Check if last block is part of truncate recovery
-BlockInfoContiguous lastBlock = file.getLastBlock();
+BlockInfo lastBlock = file.getLastBlock();
 Block 

[40/50] [abbrv] hadoop git commit: HDFS-7826. Erasure Coding: Update INodeFile quota computation for striped blocks. Contributed by Kai Sasaki.

2015-03-23 Thread zhz
HDFS-7826. Erasure Coding: Update INodeFile quota computation for striped 
blocks. Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/63d3ba16
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/63d3ba16
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/63d3ba16

Branch: refs/heads/HDFS-7285
Commit: 63d3ba1638207c43c40755d4142eeec0a4cfe40a
Parents: b6bc988
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 16 16:37:08 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:47 2015 -0700

--
 .../hadoop/hdfs/protocol/HdfsConstants.java |  3 +
 .../blockmanagement/BlockInfoStriped.java   | 12 ++-
 .../hadoop/hdfs/server/namenode/INodeFile.java  | 89 +---
 3 files changed, 90 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/63d3ba16/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 245b630..07b72e6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -186,4 +186,7 @@ public class HdfsConstants {
   public static final byte NUM_PARITY_BLOCKS = 2;
   public static final long BLOCK_GROUP_INDEX_MASK = 15;
   public static final byte MAX_BLOCKS_IN_GROUP = 16;
+
+  // The chunk size for striped block which is used by erasure coding
+  public static final int BLOCK_STRIPED_CHUNK_SIZE = 64 * 1024;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/63d3ba16/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index 84c3be6..cef8318 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
 
 /**
@@ -34,6 +35,7 @@ import 
org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
  * array to record the block index for each triplet.
  */
 public class BlockInfoStriped extends BlockInfo {
+  private final int   chunkSize = HdfsConstants.BLOCK_STRIPED_CHUNK_SIZE;
   private final short dataBlockNum;
   private final short parityBlockNum;
   /**
@@ -56,7 +58,7 @@ public class BlockInfoStriped extends BlockInfo {
 this.setBlockCollection(b.getBlockCollection());
   }
 
-  short getTotalBlockNum() {
+  public short getTotalBlockNum() {
 return (short) (dataBlockNum + parityBlockNum);
   }
 
@@ -178,6 +180,14 @@ public class BlockInfoStriped extends BlockInfo {
 }
   }
 
+  public long spaceConsumed() {
+// For striped blocks, the total usage should cover both the data blocks
+// and the parity blocks, because `getNumBytes` only accounts for the
+// actual data block size.
+return ((getNumBytes() - 1) / (dataBlockNum * chunkSize) + 1)
+* chunkSize * parityBlockNum + getNumBytes();
+  }
+
   @Override
   public final boolean isStriped() {
 return true;

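To make the quota formula concrete, here is the same arithmetic applied to a hypothetical 1 MiB striped file under the 3-data/2-parity, 64 KiB-chunk schema (standalone sketch, not part of the patch):

public class SpaceConsumedExample {
  public static void main(String[] args) {
    final long numBytes = 1024 * 1024; // actual data bytes in the block group
    final int dataBlockNum = 3;
    final int parityBlockNum = 2;
    final int chunkSize = 64 * 1024;   // BLOCK_STRIPED_CHUNK_SIZE

    // Stripes needed: ceil(numBytes / (dataBlockNum * chunkSize)).
    long stripes = (numBytes - 1) / ((long) dataBlockNum * chunkSize) + 1; // 6
    // Every stripe carries parityBlockNum parity cells of chunkSize bytes.
    long parityBytes = stripes * chunkSize * parityBlockNum; // 786432 (768 KiB)
    long total = parityBytes + numBytes;                     // 1835008 bytes

    System.out.println("space consumed = " + total + " bytes");
  }
}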
http://git-wip-us.apache.org/repos/asf/hadoop/blob/63d3ba16/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 8a6bb69..a8ab3ce 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -42,6 +42,7 @@ import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import 

[05/50] [abbrv] hadoop git commit: MAPREDUCE-6282. Reuse historyFileAbsolute.getFileSystem in CompletedJob#loadFullHistoryData for code optimization. (zxu via rkanter)

2015-03-23 Thread zhz
MAPREDUCE-6282. Reuse historyFileAbsolute.getFileSystem in 
CompletedJob#loadFullHistoryData for code optimization. (zxu via rkanter)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d4f7e250
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d4f7e250
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d4f7e250

Branch: refs/heads/HDFS-7285
Commit: d4f7e2507f4bb02d172f94e74431bc2f319c
Parents: 75ead27
Author: Robert Kanter rkan...@apache.org
Authored: Fri Mar 20 13:11:58 2015 -0700
Committer: Robert Kanter rkan...@apache.org
Committed: Fri Mar 20 13:11:58 2015 -0700

--
 hadoop-mapreduce-project/CHANGES.txt | 4 
 .../java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java | 4 +---
 2 files changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4f7e250/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 2a4bf0c..48eda8b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -274,6 +274,10 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-5755. MapTask.MapOutputBuffer#compare/swap should have
 @Override annotation. (ozawa)
 
+MAPREDUCE-6282. Reuse historyFileAbsolute.getFileSystem in
+CompletedJob#loadFullHistoryData for code optimization.
+(zxu via rkanter)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d4f7e250/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
index 1cf63d4..6df8261 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
@@ -345,9 +345,7 @@ public class CompletedJob implements 
org.apache.hadoop.mapreduce.v2.app.job.Job
   JobHistoryParser parser = null;
   try {
 final FileSystem fs = historyFileAbsolute.getFileSystem(conf);
-parser =
-new JobHistoryParser(historyFileAbsolute.getFileSystem(conf),
-historyFileAbsolute);
+parser = new JobHistoryParser(fs, historyFileAbsolute);
 final Path jobConfPath = new Path(historyFileAbsolute.getParent(),
 JobHistoryUtils.getIntermediateConfFileName(jobId));
 final Configuration conf = new Configuration();



[50/50] [abbrv] hadoop git commit: HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, HDFS-7435 and HDFS-7930 (this commit is for HDFS-7930 only)

2015-03-23 Thread zhz
HDFS-7936. Erasure coding: resolving conflicts when merging with HDFS-7903, 
HDFS-7435 and HDFS-7930 (this commit is for HDFS-7930 only)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edbe633c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edbe633c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edbe633c

Branch: refs/heads/HDFS-7285
Commit: edbe633c061a3955f4f944ae5c97ba6624cf1040
Parents: 7da69bb
Author: Zhe Zhang z...@apache.org
Authored: Mon Mar 23 11:25:40 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:25:40 2015 -0700

--
 .../hadoop/hdfs/server/blockmanagement/BlockManager.java  | 7 ---
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java  | 7 ---
 .../org/apache/hadoop/hdfs/server/namenode/INodeFile.java | 2 +-
 3 files changed, 9 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edbe633c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 65ffd1d..058ab4a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -2061,17 +2061,18 @@ public class BlockManager {
* Mark block replicas as corrupt except those on the storages in 
* newStorages list.
*/
-  public void markBlockReplicasAsCorrupt(BlockInfoContiguous block, 
+  public void markBlockReplicasAsCorrupt(Block oldBlock,
+  BlockInfo block,
   long oldGenerationStamp, long oldNumBytes, 
   DatanodeStorageInfo[] newStorages) throws IOException {
 assert namesystem.hasWriteLock();
 BlockToMarkCorrupt b = null;
 if (block.getGenerationStamp() != oldGenerationStamp) {
-  b = new BlockToMarkCorrupt(block, oldGenerationStamp,
+  b = new BlockToMarkCorrupt(oldBlock, block, oldGenerationStamp,
   "genstamp does not match " + oldGenerationStamp
   + " : " + block.getGenerationStamp(), Reason.GENSTAMP_MISMATCH);
 } else if (block.getNumBytes() != oldNumBytes) {
-  b = new BlockToMarkCorrupt(block,
+  b = new BlockToMarkCorrupt(oldBlock, block,
   "length does not match " + oldNumBytes
   + " : " + block.getNumBytes(), Reason.SIZE_MISMATCH);
 } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edbe633c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index c8d1488..5d3531c 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -2776,7 +2776,7 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   /** Compute quota change for converting a complete block to a UC block */
   private QuotaCounts computeQuotaDeltaForUCBlock(INodeFile file) {
 final QuotaCounts delta = new QuotaCounts.Builder().build();
-final BlockInfoContiguous lastBlock = file.getLastBlock();
+final BlockInfo lastBlock = file.getLastBlock();
 if (lastBlock != null) {
   final long diff = file.getPreferredBlockSize() - lastBlock.getNumBytes();
   final short repl = file.getBlockReplication();
@@ -4371,8 +4371,9 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
 } else {
   iFile.convertLastBlockToUC(storedBlock, trimmedStorageInfos);
   if (closeFile) {
-blockManager.markBlockReplicasAsCorrupt(storedBlock,
-oldGenerationStamp, oldNumBytes, trimmedStorageInfos);
+blockManager.markBlockReplicasAsCorrupt(oldBlock.getLocalBlock(),
+storedBlock, oldGenerationStamp, oldNumBytes,
+trimmedStorageInfos);
   }
 }
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edbe633c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 

[22/50] [abbrv] hadoop git commit: HDFS-7347. Configurable erasure coding policy for individual files and directories ( Contributed by Zhe Zhang )

2015-03-23 Thread zhz
HDFS-7347. Configurable erasure coding policy for individual files and 
directories ( Contributed by Zhe Zhang )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/39e7d4e7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/39e7d4e7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/39e7d4e7

Branch: refs/heads/HDFS-7285
Commit: 39e7d4e7ec1f58fc5234e35ab4fc71478f9e2c47
Parents: 82eda77
Author: Vinayakumar B vinayakum...@apache.org
Authored: Thu Nov 6 10:03:26 2014 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:43 2015 -0700

--
 .../hadoop-hdfs/CHANGES-HDFS-EC-7285.txt|  4 ++
 .../hadoop/hdfs/protocol/HdfsConstants.java |  2 +
 .../BlockStoragePolicySuite.java|  5 ++
 .../hadoop/hdfs/TestBlockStoragePolicy.java | 12 +++-
 .../TestBlockInitialEncoding.java   | 75 
 5 files changed, 95 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/39e7d4e7/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
new file mode 100644
index 000..2ef8527
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES-HDFS-EC-7285.txt
@@ -0,0 +1,4 @@
+  BREAKDOWN OF HDFS-7285 SUBTASKS AND RELATED JIRAS
+
+HDFS-7347. Configurable erasure coding policy for individual files and
+directories ( Zhe Zhang via vinayakumarb )
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/39e7d4e7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 7cf8a47..54c650b 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -171,6 +171,7 @@ public class HdfsConstants {
   public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";
   public static final String HOT_STORAGE_POLICY_NAME = "HOT";
   public static final String WARM_STORAGE_POLICY_NAME = "WARM";
+  public static final String EC_STORAGE_POLICY_NAME = "EC";
   public static final String COLD_STORAGE_POLICY_NAME = "COLD";
 
   public static final byte MEMORY_STORAGE_POLICY_ID = 15;
@@ -178,5 +179,6 @@ public class HdfsConstants {
   public static final byte ONESSD_STORAGE_POLICY_ID = 10;
   public static final byte HOT_STORAGE_POLICY_ID = 7;
   public static final byte WARM_STORAGE_POLICY_ID = 5;
+  public static final byte EC_STORAGE_POLICY_ID = 4;
   public static final byte COLD_STORAGE_POLICY_ID = 2;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/39e7d4e7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
index 020cb5f..3d121cc 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
@@ -78,6 +78,11 @@ public class BlockStoragePolicySuite {
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE},
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE},
 new StorageType[]{StorageType.DISK, StorageType.ARCHIVE});
+final byte ecId = HdfsConstants.EC_STORAGE_POLICY_ID;
+policies[ecId] = new BlockStoragePolicy(ecId,
+HdfsConstants.EC_STORAGE_POLICY_NAME,
+new StorageType[]{StorageType.DISK}, StorageType.EMPTY_ARRAY,
+new StorageType[]{StorageType.ARCHIVE});
 final byte coldId = HdfsConstants.COLD_STORAGE_POLICY_ID;
 policies[coldId] = new BlockStoragePolicy(coldId,
 HdfsConstants.COLD_STORAGE_POLICY_NAME,

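Assuming the stock DistributedFileSystem#setStoragePolicy API applies here (a sketch under that assumption, not taken from this patch), a directory could be tagged with the new policy so that files created under it are treated as erasure-coded:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SetEcPolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The cast only succeeds against an HDFS URI; the path is illustrative.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    dfs.setStoragePolicy(new Path("/ec-data"),
        HdfsConstants.EC_STORAGE_POLICY_NAME);
  }
}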
http://git-wip-us.apache.org/repos/asf/hadoop/blob/39e7d4e7/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
--
diff --git 

[36/50] [abbrv] hadoop git commit: HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports to print striped blocks. Contributed by Takuya Fukudome.

2015-03-23 Thread zhz
HDFS-7872. Erasure Coding: INodeFile.dumpTreeRecursively() supports to print 
striped blocks. Contributed by Takuya Fukudome.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/803d4da9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/803d4da9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/803d4da9

Branch: refs/heads/HDFS-7285
Commit: 803d4da95ce5c655cf37941bfa293114cd1b08b4
Parents: 7905e3c
Author: Jing Zhao ji...@apache.org
Authored: Thu Mar 5 16:44:38 2015 -0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:12:35 2015 -0700

--
 .../java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/803d4da9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
index 2906996..8a6bb69 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
@@ -862,8 +862,8 @@ public class INodeFile extends INodeWithAdditionalFields
 out.print(", fileSize=" + computeFileSize(snapshotId));
 // only compare the first block
 out.print(", blocks=");
-out.print(blocks == null || blocks.length == 0? null: blocks[0]);
-// TODO print striped blocks
+BlockInfo[] blks = getBlocks();
+out.print(blks == null || blks.length == 0? null: blks[0]);
 out.println();
   }
 



[38/50] [abbrv] hadoop git commit: HADOOP-11646. Erasure Coder API for encoding and decoding of block group ( Contributed by Kai Zheng )

2015-03-23 Thread zhz
HADOOP-11646. Erasure Coder API for encoding and decoding of block group ( 
Contributed by Kai Zheng )


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a5804bf3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a5804bf3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a5804bf3

Branch: refs/heads/HDFS-7285
Commit: a5804bf3fb6684a940339ef11f1a705fa4ae87d0
Parents: 803d4da
Author: Vinayakumar B vinayakum...@apache.org
Authored: Mon Mar 9 12:32:26 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:12:35 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   2 +
 .../apache/hadoop/io/erasurecode/ECBlock.java   |  80 ++
 .../hadoop/io/erasurecode/ECBlockGroup.java |  82 ++
 .../erasurecode/coder/AbstractErasureCoder.java |  63 +
 .../coder/AbstractErasureCodingStep.java|  59 
 .../coder/AbstractErasureDecoder.java   | 152 +++
 .../coder/AbstractErasureEncoder.java   |  50 
 .../io/erasurecode/coder/ErasureCoder.java  |  77 ++
 .../io/erasurecode/coder/ErasureCodingStep.java |  55 
 .../io/erasurecode/coder/ErasureDecoder.java|  41 +++
 .../erasurecode/coder/ErasureDecodingStep.java  |  52 
 .../io/erasurecode/coder/ErasureEncoder.java|  39 +++
 .../erasurecode/coder/ErasureEncodingStep.java  |  49 
 .../io/erasurecode/coder/XorErasureDecoder.java |  78 ++
 .../io/erasurecode/coder/XorErasureEncoder.java |  45 
 .../erasurecode/rawcoder/RawErasureCoder.java   |   2 +-
 .../erasurecode/coder/TestErasureCoderBase.java | 266 +++
 .../io/erasurecode/coder/TestXorCoder.java  |  50 
 18 files changed, 1241 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a5804bf3/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index ee42c84..c17a1bd 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -15,4 +15,6 @@
 HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai 
Zheng
 ( Kai Zheng )
 
+HADOOP-11646. Erasure Coder API for encoding and decoding of block group
+( Kai Zheng via vinayakumarb )
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a5804bf3/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
new file mode 100644
index 000..956954a
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECBlock.java
@@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+/**
+ * A wrapper of block level data source/output that {@link ECChunk}s can be
+ * extracted from. For HDFS, it can be an HDFS block (250MB). Note it only 
cares
+ * about erasure coding specific logic thus avoids coupling with any HDFS block
+ * details. We can have something like HdfsBlock extend it.
+ */
+public class ECBlock {
+
+  private boolean isParity;
+  private boolean isErased;
+
+  /**
+   * A default constructor. isParity and isErased are false by default.
+   */
+  public ECBlock() {
+this(false, false);
+  }
+
+  /**
+   * A constructor specifying isParity and isErased.
+   * @param isParity
+   * @param isErased
+   */
+  public ECBlock(boolean isParity, boolean isErased) {
+this.isParity = isParity;
+this.isErased = isErased;
+  }
+
+  /**
+   * Set true if it's for a parity block.
+   * @param isParity
+   */

[48/50] [abbrv] hadoop git commit: HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai Zheng

2015-03-23 Thread zhz
HADOOP-11707. Add factory to create raw erasure coder.  Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/18f0bac7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/18f0bac7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/18f0bac7

Branch: refs/heads/HDFS-7285
Commit: 18f0bac7386d1cf2cc0c8f9c329f52535ecf3062
Parents: fb5edec
Author: Kai Zheng kai.zh...@intel.com
Authored: Fri Mar 20 15:07:00 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:14:12 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  3 +-
 .../rawcoder/JRSRawErasureCoderFactory.java | 34 ++
 .../rawcoder/RawErasureCoderFactory.java| 38 
 .../rawcoder/XorRawErasureCoderFactory.java | 34 ++
 4 files changed, 108 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/18f0bac7/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index e27ff5c..f566f0e 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -24,4 +24,5 @@
 HADOOP-11706. Refine a little bit erasure coder API. Contributed by Kai 
Zheng
 ( Kai Zheng )
 
-
+HADOOP-11707. Add factory to create raw erasure coder. Contributed by Kai 
Zheng
+( Kai Zheng )

http://git-wip-us.apache.org/repos/asf/hadoop/blob/18f0bac7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
new file mode 100644
index 000..d6b40aa
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/JRSRawErasureCoderFactory.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+/**
+ * A raw coder factory for raw Reed-Solomon coder in Java.
+ */
+public class JRSRawErasureCoderFactory implements RawErasureCoderFactory {
+
+  @Override
+  public RawErasureEncoder createEncoder() {
+return new JRSRawEncoder();
+  }
+
+  @Override
+  public RawErasureDecoder createDecoder() {
+return new JRSRawDecoder();
+  }
+}

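A minimal sketch of the factory pattern this change introduces: callers obtain a matched encoder/decoder pair without naming concrete coder classes.

import org.apache.hadoop.io.erasurecode.rawcoder.JRSRawErasureCoderFactory;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderFactory;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureDecoder;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class CoderFactoryExample {
  public static void main(String[] args) {
    // Swap in XorRawErasureCoderFactory to switch codecs without touching callers.
    RawErasureCoderFactory factory = new JRSRawErasureCoderFactory();
    RawErasureEncoder encoder = factory.createEncoder();
    RawErasureDecoder decoder = factory.createDecoder();
    System.out.println("encoder: " + encoder.getClass().getSimpleName());
    System.out.println("decoder: " + decoder.getClass().getSimpleName());
  }
}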
http://git-wip-us.apache.org/repos/asf/hadoop/blob/18f0bac7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
new file mode 100644
index 000..95a1cfe
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in 

[23/50] [abbrv] hadoop git commit: HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding (Kai Zheng via umamahesh)

2015-03-23 Thread zhz
HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding (Kai 
Zheng via umamahesh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/257d22d0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/257d22d0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/257d22d0

Branch: refs/heads/HDFS-7285
Commit: 257d22d00c41103101a5a0ea539c7922b9ea7b80
Parents: e892417
Author: Uma Maheswara Rao G umamah...@apache.org
Authored: Thu Jan 29 14:15:13 2015 +0530
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:06:44 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |  4 +
 .../apache/hadoop/io/erasurecode/ECChunk.java   | 82 +
 .../rawcoder/AbstractRawErasureCoder.java   | 63 +
 .../rawcoder/AbstractRawErasureDecoder.java | 93 
 .../rawcoder/AbstractRawErasureEncoder.java | 93 
 .../erasurecode/rawcoder/RawErasureCoder.java   | 78 
 .../erasurecode/rawcoder/RawErasureDecoder.java | 55 
 .../erasurecode/rawcoder/RawErasureEncoder.java | 54 
 8 files changed, 522 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/257d22d0/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
new file mode 100644
index 000..8ce5a89
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -0,0 +1,4 @@
+  BREAKDOWN OF HADOOP-11264 SUBTASKS AND RELATED JIRAS (Common part of 
HDFS-7285)
+
+HADOOP-11514. Raw Erasure Coder API for concrete encoding and decoding
+(Kai Zheng via umamahesh)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/257d22d0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
new file mode 100644
index 000..f84eb11
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECChunk.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+import java.nio.ByteBuffer;
+
+/**
+ * A wrapper for ByteBuffer or bytes array for an erasure code chunk.
+ */
+public class ECChunk {
+
+  private ByteBuffer chunkBuffer;
+
+  /**
+   * Wrapping a ByteBuffer
+   * @param buffer
+   */
+  public ECChunk(ByteBuffer buffer) {
+this.chunkBuffer = buffer;
+  }
+
+  /**
+   * Wrapping a bytes array
+   * @param buffer
+   */
+  public ECChunk(byte[] buffer) {
+this.chunkBuffer = ByteBuffer.wrap(buffer);
+  }
+
+  /**
+   * Convert to ByteBuffer
+   * @return ByteBuffer
+   */
+  public ByteBuffer getBuffer() {
+return chunkBuffer;
+  }
+
+  /**
+   * Convert an array of this chunks to an array of ByteBuffers
+   * @param chunks
+   * @return an array of ByteBuffers
+   */
+  public static ByteBuffer[] toBuffers(ECChunk[] chunks) {
+ByteBuffer[] buffers = new ByteBuffer[chunks.length];
+
+for (int i = 0; i < chunks.length; i++) {
+  buffers[i] = chunks[i].getBuffer();
+}
+
+return buffers;
+  }
+
+  /**
+   * Convert an array of this chunks to an array of byte array
+   * @param chunks
+   * @return an array of byte array
+   */
+  public static byte[][] toArray(ECChunk[] chunks) {
+byte[][] bytesArr = new byte[chunks.length][];
+
+for (int i = 0; i < chunks.length; i++) {
+  bytesArr[i] = chunks[i].getBuffer().array();
+}
+
+return bytesArr;
+  }
+}

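An illustrative use of the new wrapper: wrap raw byte arrays or ByteBuffers, then convert a chunk array back to ByteBuffers before handing it to a raw coder.

import java.nio.ByteBuffer;
import org.apache.hadoop.io.erasurecode.ECChunk;

public class ECChunkExample {
  public static void main(String[] args) {
    ECChunk[] chunks = new ECChunk[] {
        new ECChunk(new byte[64]),           // wraps a bytes array
        new ECChunk(ByteBuffer.allocate(64)) // wraps a heap ByteBuffer
    };
    ByteBuffer[] buffers = ECChunk.toBuffers(chunks);
    System.out.println("converted " + buffers.length + " chunks to buffers");
  }
}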

[47/50] [abbrv] hadoop git commit: HDFS-7369. Erasure coding: distribute recovery work for striped blocks to DataNode. Contributed by Zhe Zhang.

2015-03-23 Thread zhz
HDFS-7369. Erasure coding: distribute recovery work for striped blocks to 
DataNode. Contributed by Zhe Zhang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fb5edec3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fb5edec3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fb5edec3

Branch: refs/heads/HDFS-7285
Commit: fb5edec3017b7e139a48149fe4d310afb296610c
Parents: f10d580
Author: Zhe Zhang z...@apache.org
Authored: Wed Mar 18 15:52:36 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:49 2015 -0700

--
 .../server/blockmanagement/BlockCollection.java |   5 +
 .../server/blockmanagement/BlockManager.java| 296 +--
 .../blockmanagement/DatanodeDescriptor.java |  72 -
 .../server/blockmanagement/DatanodeManager.java |  20 +-
 .../hadoop/hdfs/server/namenode/INodeFile.java  |   9 +-
 .../server/protocol/BlockECRecoveryCommand.java |  63 
 .../hdfs/server/protocol/DatanodeProtocol.java  |   1 +
 .../blockmanagement/BlockManagerTestUtil.java   |   2 +-
 .../blockmanagement/TestBlockManager.java   |  22 +-
 .../TestRecoverStripedBlocks.java   | 107 +++
 10 files changed, 486 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb5edec3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
index 1c753de..62a5781 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockCollection.java
@@ -86,4 +86,9 @@ public interface BlockCollection {
* @return whether the block collection is under construction.
*/
   public boolean isUnderConstruction();
+
+  /**
+   * @return whether the block collection is in striping format
+   */
+  public boolean isStriped();
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fb5edec3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index ca24ab1..65ffd1d 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HAUtil;
@@ -530,9 +531,9 @@ public class BlockManager {
 
 NumberReplicas numReplicas = new NumberReplicas();
 // source node returned is not used
-chooseSourceDatanode(block, containingNodes,
+chooseSourceDatanodes(getStoredBlock(block), containingNodes,
 containingLiveReplicasNodes, numReplicas,
-UnderReplicatedBlocks.LEVEL);
+null, 1, UnderReplicatedBlocks.LEVEL);
 
 // containingLiveReplicasNodes can include READ_ONLY_SHARED replicas which 
are 
 // not included in the numReplicas.liveReplicas() count
@@ -1326,15 +1327,15 @@ public class BlockManager {
   }
 
   /**
-   * Scan blocks in {@link #neededReplications} and assign replication
-   * work to data-nodes they belong to.
+   * Scan blocks in {@link #neededReplications} and assign recovery
+   * (replication or erasure coding) work to data-nodes they belong to.
*
* The number of process blocks equals either twice the number of live
* data-nodes or the number of under-replicated blocks whichever is less.
*
* @return number of blocks scheduled for replication during this iteration.
*/
-  int computeReplicationWork(int blocksToProcess) {
+  int computeBlockRecoveryWork(int blocksToProcess) {
 List<List<BlockInfo>> blocksToReplicate = null;
 namesystem.writeLock();
 try {
@@ -1344,30 +1345,32 @@ public class BlockManager {
 

[39/50] [abbrv] hadoop git commit: HDFS-7853. Erasure coding: extend LocatedBlocks to support reading from striped files. Contributed by Jing Zhao.

2015-03-23 Thread zhz
HDFS-7853. Erasure coding: extend LocatedBlocks to support reading from striped 
files. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/acfe5db9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/acfe5db9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/acfe5db9

Branch: refs/heads/HDFS-7285
Commit: acfe5db903a2128ff5e1d2d7a62fc17e4739f5c2
Parents: a5804bf
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 9 14:59:58 2015 -0700
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:34 2015 -0700

--
 .../hadoop/hdfs/protocol/LocatedBlock.java  |   5 +-
 .../hdfs/protocol/LocatedStripedBlock.java  |  68 +
 ...tNamenodeProtocolServerSideTranslatorPB.java |  14 +-
 .../ClientNamenodeProtocolTranslatorPB.java |  13 +-
 .../DatanodeProtocolClientSideTranslatorPB.java |   2 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |   2 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |  80 +++
 .../blockmanagement/BlockInfoStriped.java   |   5 +
 .../BlockInfoStripedUnderConstruction.java  |  99 +++--
 .../server/blockmanagement/BlockManager.java|  51 ---
 .../blockmanagement/DatanodeDescriptor.java |   4 +-
 .../blockmanagement/DatanodeStorageInfo.java|   3 +-
 .../server/namenode/FSImageFormatPBINode.java   |  21 +--
 .../hdfs/server/namenode/FSNamesystem.java  |  34 +++--
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |   1 +
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |  12 ++
 .../hadoop/hdfs/protocolPB/TestPBHelper.java|  16 +--
 .../datanode/TestIncrementalBrVariations.java   |  14 +-
 .../server/namenode/TestAddStripedBlocks.java   | 141 +++
 .../hdfs/server/namenode/TestFSImage.java   |   5 +-
 20 files changed, 444 insertions(+), 146 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/acfe5db9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
index e729869..a38e8f2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
 import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
-import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
 import org.apache.hadoop.security.token.Token;
 
 import com.google.common.collect.Lists;
@@ -51,14 +50,14 @@ public class LocatedBlock {
   // else false. If block has few corrupt replicas, they are filtered and 
   // their locations are not part of this object
   private boolean corrupt;
-  private Token<BlockTokenIdentifier> blockToken = new 
Token<BlockTokenIdentifier>();
+  private Token<BlockTokenIdentifier> blockToken = new Token<>();
   /**
* List of cached datanode locations
*/
   private DatanodeInfo[] cachedLocs;
 
   // Used when there are no locations
-  private static final DatanodeInfoWithStorage[] EMPTY_LOCS =
+  static final DatanodeInfoWithStorage[] EMPTY_LOCS =
   new DatanodeInfoWithStorage[0];
 
   public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/acfe5db9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
new file mode 100644
index 000..97e3a69
--- /dev/null
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedStripedBlock.java
@@ -0,0 +1,68 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required 
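The new file is cut off above (the diffstat lists it at 68 lines). As a hedged sketch only: a striped located block pairs each returned location with the index of the internal block (data or parity) it holds. The names below are assumptions for illustration, not the committed API.

// Hedged sketch only: each location additionally carries the index of the
// internal block it stores within the striped group. Names are illustrative.
public class StripedBlockSketch {
  private final String[] locations;   // stand-in for DatanodeInfo[]
  private final int[] blockIndices;   // blockIndices[i] = index held by locations[i]

  public StripedBlockSketch(String[] locations, int[] blockIndices) {
    assert locations.length == blockIndices.length;
    this.locations = locations;
    this.blockIndices = blockIndices;
  }

  public int indexOf(int i) { return blockIndices[i]; }

  public static void main(String[] args) {
    StripedBlockSketch b = new StripedBlockSketch(
        new String[]{"dn1", "dn2", "dn3"}, new int[]{0, 1, 2});
    System.out.println(b.indexOf(2)); // 2: dn3 holds the third internal block
  }
}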

[20/50] [abbrv] hadoop git commit: HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be specified. Contributed by Kengo Seki.

2015-03-23 Thread zhz
HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be 
specified. Contributed by Kengo Seki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0b9f12c8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0b9f12c8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0b9f12c8

Branch: refs/heads/HDFS-7285
Commit: 0b9f12c847e26103bc2304cf7114e6d103264669
Parents: b375d1f
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Mon Mar 23 13:56:24 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Mon Mar 23 13:56:24 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 3 +++
 hadoop-common-project/hadoop-nfs/pom.xml| 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b9f12c8/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 4cd2154..430015d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -458,6 +458,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11447. Add a more meaningful toString method to SampleStat and 
 MutableStat. (kasha)
 
+HADOOP-11737. mockito's version in hadoop-nfs’ pom.xml shouldn't be
+specified. (Kengo Seki via ozawa)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/0b9f12c8/hadoop-common-project/hadoop-nfs/pom.xml
--
diff --git a/hadoop-common-project/hadoop-nfs/pom.xml 
b/hadoop-common-project/hadoop-nfs/pom.xml
index 409ed75..e8156d9 100644
--- a/hadoop-common-project/hadoop-nfs/pom.xml
+++ b/hadoop-common-project/hadoop-nfs/pom.xml
@@ -55,7 +55,6 @@
     <dependency>
       <groupId>org.mockito</groupId>
       <artifactId>mockito-all</artifactId>
-      <version>1.8.5</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>



[46/50] [abbrv] hadoop git commit: Updated CHANGES-HDFS-EC-7285.txt accordingly

2015-03-23 Thread zhz
Updated CHANGES-HDFS-EC-7285.txt accordingly


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f10d580a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f10d580a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f10d580a

Branch: refs/heads/HDFS-7285
Commit: f10d580abe33ba10a349c3a9c7ccd529a0d9e843
Parents: fce132f
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:24:24 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:13:48 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f10d580a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index a97dc34..e27ff5c 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -19,6 +19,9 @@
 ( Kai Zheng via vinayakumarb )
 
 HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng
-( Kai Zheng )
+( Kai Zheng )
+
+HADOOP-11706. Refine a little bit erasure coder API. Contributed by Kai 
Zheng
+( Kai Zheng )
 
 



[35/50] [abbrv] hadoop git commit: HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai Zheng

2015-03-23 Thread zhz
HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai Zheng


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7905e3ce
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7905e3ce
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7905e3ce

Branch: refs/heads/HDFS-7285
Commit: 7905e3cef85e00e126e6e19175822770c11138f2
Parents: 4b4c7e3
Author: drankye kai.zh...@intel.com
Authored: Thu Mar 5 22:51:52 2015 +0800
Committer: Zhe Zhang z...@apache.org
Committed: Mon Mar 23 11:12:35 2015 -0700

--
 .../hadoop-common/CHANGES-HDFS-EC-7285.txt  |   4 +
 .../apache/hadoop/io/erasurecode/ECSchema.java  | 203 +++
 .../hadoop/io/erasurecode/TestECSchema.java |  54 +
 3 files changed, 261 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7905e3ce/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index 7bbacf7..ee42c84 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -12,3 +12,7 @@
 HADOOP-11542. Raw Reed-Solomon coder in pure Java. Contributed by Kai Zheng
 ( Kai Zheng )
 
+HADOOP-11643. Define EC schema API for ErasureCodec. Contributed by Kai 
Zheng
+( Kai Zheng )
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7905e3ce/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
new file mode 100644
index 000..8dc3f45
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ECSchema.java
@@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode;
+
+import java.util.Collections;
+import java.util.Map;
+
+/**
+ * Erasure coding schema to housekeep relevant information.
+ */
+public class ECSchema {
+  public static final String NUM_DATA_UNITS_KEY = "k";
+  public static final String NUM_PARITY_UNITS_KEY = "m";
+  public static final String CODEC_NAME_KEY = "codec";
+  public static final String CHUNK_SIZE_KEY = "chunkSize";
+  public static final int DEFAULT_CHUNK_SIZE = 64 * 1024; // 64K
+
+  private String schemaName;
+  private String codecName;
+  private Map<String, String> options;
+  private int numDataUnits;
+  private int numParityUnits;
+  private int chunkSize;
+
+  /**
+   * Constructor with schema name and provided options. Note the options may
+   * contain additional information for the erasure codec to interpret further.
+   * @param schemaName schema name
+   * @param options schema options
+   */
+  public ECSchema(String schemaName, Map<String, String> options) {
+    assert (schemaName != null && !schemaName.isEmpty());
+
+    this.schemaName = schemaName;
+
+    if (options == null || options.isEmpty()) {
+      throw new IllegalArgumentException("No schema options are provided");
+    }
+
+    String codecName = options.get(CODEC_NAME_KEY);
+    if (codecName == null || codecName.isEmpty()) {
+      throw new IllegalArgumentException("No codec option is provided");
+    }
+
+    int dataUnits = 0, parityUnits = 0;
+    try {
+      if (options.containsKey(NUM_DATA_UNITS_KEY)) {
+        dataUnits = Integer.parseInt(options.get(NUM_DATA_UNITS_KEY));
+      }
+    } catch (NumberFormatException e) {
+      throw new IllegalArgumentException("Option value " +
+          options.get(CHUNK_SIZE_KEY) + " for " + CHUNK_SIZE_KEY +
+          " is found. It should be an integer");
+    }
+
+    try {
+      if 
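The constructor above is driven entirely by its options map. A minimal usage sketch of that contract follows; the "rs" codec name and the 6+3 unit counts are illustrative values, not defaults taken from this patch.

import java.util.HashMap;
import java.util.Map;

// Usage sketch for the options-map constructor shown above. Keys mirror
// the constants in the diff ("k", "m", "codec"); the 6+3 Reed-Solomon
// values are illustrative only.
public class ECSchemaUsage {
  public static void main(String[] args) {
    Map<String, String> options = new HashMap<>();
    options.put("codec", "rs");
    options.put("k", "6");   // NUM_DATA_UNITS_KEY
    options.put("m", "3");   // NUM_PARITY_UNITS_KEY
    // new ECSchema("RS-6-3", options) would accept this map; an empty map
    // or a missing "codec" entry is rejected with IllegalArgumentException.
    System.out.println("schema options: " + options);
  }
}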

hadoop git commit: YARN-3241. FairScheduler handles invalid queue names inconsistently. (Zhihai Xu via kasha)

2015-03-23 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk 6ca1f1202 - 2bc097cd1


YARN-3241. FairScheduler handles invalid queue names inconsistently. (Zhihai Xu 
via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2bc097cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2bc097cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2bc097cd

Branch: refs/heads/trunk
Commit: 2bc097cd14692e6ceb06bff959f28531534eb307
Parents: 6ca1f12
Author: Karthik Kambatla ka...@apache.org
Authored: Mon Mar 23 13:22:03 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Mon Mar 23 13:22:03 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../fair/AllocationFileLoaderService.java   |  8 +-
 .../scheduler/fair/FairScheduler.java   |  2 +
 .../fair/InvalidQueueNameException.java | 39 ++
 .../scheduler/fair/QueueManager.java| 16 
 .../fair/TestAllocationFileLoaderService.java   | 25 ++-
 .../scheduler/fair/TestFairScheduler.java   | 78 
 .../scheduler/fair/TestQueueManager.java| 13 +++-
 8 files changed, 181 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bc097cd/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b90109c..b716064 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -94,6 +94,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to 
 fully qualified path. (Xuan Gong via junping_du)
 
+YARN-3241. FairScheduler handles invalid queue names inconsistently. 
+(Zhihai Xu via kasha)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bc097cd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
index 76fa588..dab6d9f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
@@ -426,13 +426,19 @@ public class AllocationFileLoaderService extends 
AbstractService {
      Map<FSQueueType, Set<String>> configuredQueues,
      Set<String> reservableQueues)
   throws AllocationConfigurationException {
-    String queueName = element.getAttribute("name");
+    String queueName = element.getAttribute("name").trim();
 
     if (queueName.contains(".")) {
       throw new AllocationConfigurationException("Bad fair scheduler config "
           + "file: queue name (" + queueName + ") shouldn't contain period.");
     }
 
+    if (queueName.isEmpty()) {
+      throw new AllocationConfigurationException("Bad fair scheduler config "
+          + "file: queue name shouldn't be empty or "
+          + "consist only of whitespace.");
+    }
+
     if (parentName != null) {
       queueName = parentName + "." + queueName;
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2bc097cd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index 1d97983..98a8de2 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
+++ 
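The ordering in the hunk above matters: the name is trimmed first, so a whitespace-only name trims to empty and is rejected, and an embedded period is rejected before the parent prefix adds one. A standalone sketch of the same checks, with the exception type simplified to IllegalArgumentException:

// Standalone sketch of the validation order in the hunk above; the real
// code throws AllocationConfigurationException instead.
public class QueueNameCheck {
  static String validate(String raw, String parentName) {
    String queueName = raw.trim();
    if (queueName.contains(".")) {
      throw new IllegalArgumentException(
          "queue name (" + queueName + ") shouldn't contain period");
    }
    if (queueName.isEmpty()) {
      throw new IllegalArgumentException(
          "queue name shouldn't be empty or consist only of whitespace");
    }
    return parentName != null ? parentName + "." + queueName : queueName;
  }
  public static void main(String[] args) {
    System.out.println(validate("  queueA ", "root")); // root.queueA
    // validate("   ", "root") and validate("a.b", "root") both throw.
  }
}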

hadoop git commit: YARN-2868. FairScheduler: Metric for latency to allocate first container for an application. (Ray Chiang via kasha)

2015-03-23 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2bc097cd1 - 972f1f1ab


YARN-2868. FairScheduler: Metric for latency to allocate first container for an 
application. (Ray Chiang via kasha)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/972f1f1a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/972f1f1a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/972f1f1a

Branch: refs/heads/trunk
Commit: 972f1f1ab94a26ec446a272ad030fe13f03ed442
Parents: 2bc097c
Author: Karthik Kambatla ka...@apache.org
Authored: Mon Mar 23 14:07:05 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Mon Mar 23 14:07:05 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt|  3 +++
 .../resourcemanager/scheduler/QueueMetrics.java|  8 +++-
 .../scheduler/SchedulerApplicationAttempt.java | 17 +
 .../scheduler/fair/FairScheduler.java  | 11 ++-
 4 files changed, 37 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/972f1f1a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b716064..e7d4f59 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -73,6 +73,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3350. YARN RackResolver spams logs with messages at info level. 
 (Wilfred Spiegelenburg via junping_du)
 
+YARN-2868. FairScheduler: Metric for latency to allocate first container 
+for an application. (Ray Chiang via kasha)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/972f1f1a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
index 507b798..58b1ed1 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/QueueMetrics.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MutableCounterInt;
 import org.apache.hadoop.metrics2.lib.MutableCounterLong;
 import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
+import org.apache.hadoop.metrics2.lib.MutableRate;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
@@ -74,6 +75,7 @@ public class QueueMetrics implements MetricsSource {
   @Metric("# of reserved containers") MutableGaugeInt reservedContainers;
   @Metric("# of active users") MutableGaugeInt activeUsers;
   @Metric("# of active applications") MutableGaugeInt activeApplications;
+  @Metric("App Attempt First Container Allocation Delay") MutableRate appAttemptFirstContainerAllocationDelay;
   private final MutableGaugeInt[] runningTime;
   private TimeBucketMetrics<ApplicationId> runBuckets;
 
@@ -462,7 +464,11 @@ public class QueueMetrics implements MetricsSource {
   parent.deactivateApp(user);
 }
   }
-  
+
+  public void addAppAttemptFirstContainerAllocationDelay(long latency) {
+appAttemptFirstContainerAllocationDelay.add(latency);
+  }
+
   public int getAppsSubmitted() {
 return appsSubmitted.value();
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/972f1f1a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
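MutableRate tracks the count and mean of the samples added to it, so the new metric above reports average first-allocation latency rather than a single gauge. A hedged sketch of the recording path, with a plain accumulator standing in for the metrics2 class:

// Hedged sketch: a plain accumulator standing in for metrics2's
// MutableRate, which similarly tracks sample count and mean.
public class FirstAllocationDelay {
  private long samples;
  private long totalMillis;

  public void add(long latencyMillis) { // mirrors MutableRate.add(long)
    samples++;
    totalMillis += latencyMillis;
  }
  public double mean() { return samples == 0 ? 0 : (double) totalMillis / samples; }

  public static void main(String[] args) {
    FirstAllocationDelay m = new FirstAllocationDelay();
    m.add(120); // ms between attempt registration and first container
    m.add(80);
    System.out.println(m.mean()); // 100.0
  }
}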
 

hadoop git commit: HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2. Contributed by Brahma Reddy Battula.

2015-03-23 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 57e297208 - 2742f12b5


HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2. Contributed by Brahma 
Reddy Battula.

(cherry picked from commit fad8c78173c4b7c55324033720f04a09943deac7)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2742f12b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2742f12b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2742f12b

Branch: refs/heads/branch-2.7
Commit: 2742f12b58f27892374c4fcf9dfb60772365da1e
Parents: 57e2972
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 24 06:21:14 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Mar 24 06:25:21 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hadoop/hdfs/web/ByteRangeInputStream.java   | 38 
 2 files changed, 35 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2742f12b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b83c9a6..9155772 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -906,6 +906,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts 
(brandonli)
 
+HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2.
+(Brahma Reddy Battula via aajisaka)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2742f12b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
index 395c9f6..9e3b29a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
@@ -28,6 +28,7 @@ import java.util.StringTokenizer;
 
 import org.apache.commons.io.input.BoundedInputStream;
 import org.apache.hadoop.fs.FSInputStream;
+import org.apache.http.HttpStatus;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.net.HttpHeaders;
@@ -127,12 +128,7 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
   fileLength = null;
 } else {
   // for non-chunked transfer-encoding, get content-length
-      final String cl = connection.getHeaderField(HttpHeaders.CONTENT_LENGTH);
-      if (cl == null) {
-        throw new IOException(HttpHeaders.CONTENT_LENGTH + " is missing: "
-            + headers);
-      }
-      final long streamlength = Long.parseLong(cl);
+  long streamlength = getStreamLength(connection, headers);
   fileLength = startPos + streamlength;
 
   // Java has a bug with 2GB request streams.  It won't bounds check
@@ -143,6 +139,36 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 return in;
   }
 
+  private static long getStreamLength(HttpURLConnection connection,
+      Map<String, List<String>> headers) throws IOException {
+    String cl = connection.getHeaderField(HttpHeaders.CONTENT_LENGTH);
+    if (cl == null) {
+      // Try to get the content length by parsing the content range
+      // because HftpFileSystem does not return the content length
+      // if the content is partial.
+      if (connection.getResponseCode() == HttpStatus.SC_PARTIAL_CONTENT) {
+        cl = connection.getHeaderField(HttpHeaders.CONTENT_RANGE);
+        return getLengthFromRange(cl);
+      } else {
+        throw new IOException(HttpHeaders.CONTENT_LENGTH + " is missing: "
+            + headers);
+      }
+    }
+    return Long.parseLong(cl);
+  }
+
+  private static long getLengthFromRange(String cl) throws IOException {
+    try {
+
+      String[] str = cl.substring(6).split("[-/]");
+      return Long.parseLong(str[1]) - Long.parseLong(str[0]) + 1;
+    } catch (Exception e) {
+      throw new IOException(
+          "failed to get content length by parsing the content range: " + cl
+          + " " + e.getMessage());
+    }
+  }
+
   private static boolean isChunkedTransferEncoding(
       final Map<String, List<String>> headers) {
     return contains(headers, HttpHeaders.TRANSFER_ENCODING, "chunked")
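getLengthFromRange above leans on the fixed six-character "bytes " prefix of a Content-Range header: substring(6) drops it, and splitting on [-/] yields start, end, and total. The same arithmetic as a standalone sketch:

// Standalone copy of the Content-Range arithmetic in getLengthFromRange above.
public class ContentRangeLength {
  static long lengthFromRange(String cl) {
    String[] str = cl.substring(6).split("[-/]"); // drop "bytes ", split start-end/total
    return Long.parseLong(str[1]) - Long.parseLong(str[0]) + 1;
  }
  public static void main(String[] args) {
    // A 206 response covering bytes 100..299 of a 1000-byte file:
    System.out.println(lengthFromRange("bytes 100-299/1000")); // 200
  }
}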



hadoop git commit: HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2. Contributed by Brahma Reddy Battula.

2015-03-23 Thread aajisaka
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 4e0c48703 - fad8c7817


HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2. Contributed by Brahma 
Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fad8c781
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fad8c781
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fad8c781

Branch: refs/heads/branch-2
Commit: fad8c78173c4b7c55324033720f04a09943deac7
Parents: 4e0c487
Author: Akira Ajisaka aajis...@apache.org
Authored: Tue Mar 24 06:21:14 2015 +0900
Committer: Akira Ajisaka aajis...@apache.org
Committed: Tue Mar 24 06:24:29 2015 +0900

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../hadoop/hdfs/web/ByteRangeInputStream.java   | 38 
 2 files changed, 35 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad8c781/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 98ea260..9981d4f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -931,6 +931,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts 
(brandonli)
 
+HDFS-7881. TestHftpFileSystem#testSeek fails in branch-2.
+(Brahma Reddy Battula via aajisaka)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fad8c781/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
index 395c9f6..9e3b29a 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
@@ -28,6 +28,7 @@ import java.util.StringTokenizer;
 
 import org.apache.commons.io.input.BoundedInputStream;
 import org.apache.hadoop.fs.FSInputStream;
+import org.apache.http.HttpStatus;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.net.HttpHeaders;
@@ -127,12 +128,7 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
   fileLength = null;
 } else {
   // for non-chunked transfer-encoding, get content-length
-      final String cl = connection.getHeaderField(HttpHeaders.CONTENT_LENGTH);
-      if (cl == null) {
-        throw new IOException(HttpHeaders.CONTENT_LENGTH + " is missing: "
-            + headers);
-      }
-      final long streamlength = Long.parseLong(cl);
+  long streamlength = getStreamLength(connection, headers);
   fileLength = startPos + streamlength;
 
   // Java has a bug with 2GB request streams.  It won't bounds check
@@ -143,6 +139,36 @@ public abstract class ByteRangeInputStream extends 
FSInputStream {
 return in;
   }
 
+  private static long getStreamLength(HttpURLConnection connection,
+      Map<String, List<String>> headers) throws IOException {
+    String cl = connection.getHeaderField(HttpHeaders.CONTENT_LENGTH);
+    if (cl == null) {
+      // Try to get the content length by parsing the content range
+      // because HftpFileSystem does not return the content length
+      // if the content is partial.
+      if (connection.getResponseCode() == HttpStatus.SC_PARTIAL_CONTENT) {
+        cl = connection.getHeaderField(HttpHeaders.CONTENT_RANGE);
+        return getLengthFromRange(cl);
+      } else {
+        throw new IOException(HttpHeaders.CONTENT_LENGTH + " is missing: "
+            + headers);
+      }
+    }
+    return Long.parseLong(cl);
+  }
+
+  private static long getLengthFromRange(String cl) throws IOException {
+    try {
+
+      String[] str = cl.substring(6).split("[-/]");
+      return Long.parseLong(str[1]) - Long.parseLong(str[0]) + 1;
+    } catch (Exception e) {
+      throw new IOException(
+          "failed to get content length by parsing the content range: " + cl
+          + " " + e.getMessage());
+    }
+  }
+
   private static boolean isChunkedTransferEncoding(
       final Map<String, List<String>> headers) {
     return contains(headers, HttpHeaders.TRANSFER_ENCODING, "chunked")



hadoop git commit: YARN-3241. FairScheduler handles invalid queue names inconsistently. (Zhihai Xu via kasha)

2015-03-23 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 342c525ea - 75591e413


YARN-3241. FairScheduler handles invalid queue names inconsistently. (Zhihai Xu 
via kasha)

(cherry picked from commit 2bc097cd14692e6ceb06bff959f28531534eb307)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/75591e41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/75591e41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/75591e41

Branch: refs/heads/branch-2
Commit: 75591e4131b5303e2daff0255059392f97299dbe
Parents: 342c525
Author: Karthik Kambatla ka...@apache.org
Authored: Mon Mar 23 13:22:03 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Mon Mar 23 13:24:22 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../fair/AllocationFileLoaderService.java   |  8 +-
 .../scheduler/fair/FairScheduler.java   |  2 +
 .../fair/InvalidQueueNameException.java | 39 ++
 .../scheduler/fair/QueueManager.java| 16 
 .../fair/TestAllocationFileLoaderService.java   | 25 ++-
 .../scheduler/fair/TestFairScheduler.java   | 78 
 .../scheduler/fair/TestQueueManager.java| 13 +++-
 8 files changed, 181 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/75591e41/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f5b04d3..7eb7390 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -46,6 +46,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3269. Yarn.nodemanager.remote-app-log-dir could not be configured to 
 fully qualified path. (Xuan Gong via junping_du)
 
+YARN-3241. FairScheduler handles invalid queue names inconsistently. 
+(Zhihai Xu via kasha)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75591e41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
index 76fa588..dab6d9f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
@@ -426,13 +426,19 @@ public class AllocationFileLoaderService extends 
AbstractService {
      Map<FSQueueType, Set<String>> configuredQueues,
      Set<String> reservableQueues)
   throws AllocationConfigurationException {
-    String queueName = element.getAttribute("name");
+    String queueName = element.getAttribute("name").trim();
 
     if (queueName.contains(".")) {
       throw new AllocationConfigurationException("Bad fair scheduler config "
           + "file: queue name (" + queueName + ") shouldn't contain period.");
     }
 
+    if (queueName.isEmpty()) {
+      throw new AllocationConfigurationException("Bad fair scheduler config "
+          + "file: queue name shouldn't be empty or "
+          + "consist only of whitespace.");
+    }
+
     if (parentName != null) {
       queueName = parentName + "." + queueName;
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/75591e41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index 1d97983..98a8de2 100644
--- 

hadoop git commit: HDFS-7827. Erasure Coding: support striped blocks in non-protobuf fsimage. Contributed by Hui Zheng.

2015-03-23 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 273cbc296 - cbc9f1109


HDFS-7827. Erasure Coding: support striped blocks in non-protobuf fsimage. 
Contributed by Hui Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbc9f110
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbc9f110
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbc9f110

Branch: refs/heads/HDFS-7285
Commit: cbc9f110978b4ec56d09619d6f5a9a3e08391d70
Parents: 273cbc2
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 23 15:10:10 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon Mar 23 15:10:10 2015 -0700

--
 .../blockmanagement/BlockInfoStriped.java   |  11 +-
 .../hdfs/server/namenode/FSImageFormat.java |  62 ++--
 .../server/namenode/FSImageSerialization.java   |  78 +++---
 .../blockmanagement/TestBlockInfoStriped.java   |  34 +
 .../hdfs/server/namenode/TestFSImage.java   | 148 ++-
 5 files changed, 300 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbc9f110/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
index cef8318..30b5ee7 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoStriped.java
@@ -20,6 +20,8 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
+import java.io.DataOutput;
+import java.io.IOException;
 
 /**
  * Subclass of {@link BlockInfo}, presenting a block group in erasure coding.
@@ -206,6 +208,13 @@ public class BlockInfoStriped extends BlockInfo {
 return num;
   }
 
+  @Override
+  public void write(DataOutput out) throws IOException {
+out.writeShort(dataBlockNum);
+out.writeShort(parityBlockNum);
+super.write(out);
+  }
+
   /**
* Convert a complete block to an under construction block.
* @return BlockInfoUnderConstruction -  an under construction block.
@@ -215,7 +224,7 @@ public class BlockInfoStriped extends BlockInfo {
 final BlockInfoStripedUnderConstruction ucBlock;
 if(isComplete()) {
   ucBlock = new BlockInfoStripedUnderConstruction(this, getDataBlockNum(),
-  getParityBlockNum(),  s, targets);
+  getParityBlockNum(), s, targets);
   ucBlock.setBlockCollection(getBlockCollection());
 } else {
   // the block is already under construction

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbc9f110/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
index 2e6e741..ad96863 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
@@ -47,13 +47,16 @@ import org.apache.hadoop.fs.PathIsNotDirectoryException;
 import org.apache.hadoop.fs.UnresolvedLinkException;
 import org.apache.hadoop.fs.permission.PermissionStatus;
 import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
 import org.apache.hadoop.hdfs.protocol.LayoutFlags;
 import org.apache.hadoop.hdfs.protocol.LayoutVersion;
 import org.apache.hadoop.hdfs.protocol.LayoutVersion.Feature;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped;
 import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
+import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
 import 
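The write() override earlier in this diff prefixes the base block fields with the striped group's data and parity counts, written as shorts. A standalone sketch of that byte layout; the trailing long stands in for the base Block fields:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Standalone sketch of the serialization order added in write() above:
// data/parity counts as shorts, then the base block fields.
public class StripedBlockWriteSketch {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeShort(6);      // dataBlockNum
    out.writeShort(3);      // parityBlockNum
    out.writeLong(1001L);   // stand-in for the base Block fields (id, ...)
    System.out.println(bytes.size()); // 12 bytes: 2 + 2 + 8
  }
}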

hadoop git commit: HDFS-7864. Erasure Coding: Update safemode calculation for striped blocks. Contributed by GAO Rui.

2015-03-23 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 edbe633c0 - 273cbc296


HDFS-7864. Erasure Coding: Update safemode calculation for striped blocks. 
Contributed by GAO Rui.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/273cbc29
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/273cbc29
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/273cbc29

Branch: refs/heads/HDFS-7285
Commit: 273cbc29637558fc971e3f511434b90ef0afe4c0
Parents: edbe633
Author: Jing Zhao ji...@apache.org
Authored: Mon Mar 23 15:06:53 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Mon Mar 23 15:06:53 2015 -0700

--
 .../server/blockmanagement/BlockIdManager.java |  6 ++
 .../hdfs/server/blockmanagement/BlockManager.java  | 12 +++-
 .../hdfs/server/blockmanagement/BlocksMap.java |  2 +-
 .../hadoop/hdfs/server/namenode/FSNamesystem.java  | 17 -
 .../hadoop/hdfs/server/namenode/SafeMode.java  |  5 +++--
 .../java/org/apache/hadoop/hdfs/TestSafeMode.java  | 15 +--
 6 files changed, 42 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/273cbc29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
index 1d69d74..187f8c9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java
@@ -233,6 +233,12 @@ public class BlockIdManager {
    return id < 0;
   }
 
+  /**
+   * The last 4 bits of HdfsConstants.BLOCK_GROUP_INDEX_MASK(15) is 1111,
+   * so the last 4 bits of (~HdfsConstants.BLOCK_GROUP_INDEX_MASK) is 0000
+   * and the other 60 bits are 1. Group ID is the first 60 bits of any
+   * data/parity block id in the same striped block group.
+   */
   public static long convertToStripedID(long id) {
     return id & (~HdfsConstants.BLOCK_GROUP_INDEX_MASK);
   }
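With a 4-bit group index mask of 15, clearing the low bits maps any internal block id back to its group id, as the javadoc above describes. A standalone sketch with an illustrative id:

// Standalone sketch of the masking in convertToStripedID above.
public class StripedId {
  static final long BLOCK_GROUP_INDEX_MASK = 15;
  static long convertToStripedID(long id) {
    return id & (~BLOCK_GROUP_INDEX_MASK);
  }
  public static void main(String[] args) {
    long groupId = 0xABCD0L;             // illustrative group id, low 4 bits zero
    long internalBlockId = groupId | 0x7; // internal block with index 7 in the group
    System.out.println(convertToStripedID(internalBlockId) == groupId); // true
  }
}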

http://git-wip-us.apache.org/repos/asf/hadoop/blob/273cbc29/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 058ab4a..394c0ee 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -683,8 +683,10 @@ public class BlockManager {
 // a forced completion when a file is getting closed by an
 // OP_CLOSE edit on the standby).
 namesystem.adjustSafeModeBlockTotals(0, 1);
+final int minStorage = curBlock.isStriped() ?
+((BlockInfoStriped) curBlock).getDataBlockNum() : minReplication;
 namesystem.incrementSafeBlockCount(
-Math.min(numNodes, minReplication));
+Math.min(numNodes, minStorage), curBlock);
 
 // replace block in the blocksMap
 return blocksMap.replaceBlock(completeBlock);
@@ -2155,7 +2157,7 @@ public class BlockManager {
 // refer HDFS-5283
 if (namesystem.isInSnapshot(storedBlock.getBlockCollection())) {
   int numOfReplicas = BlockInfo.getNumExpectedLocations(storedBlock);
-  namesystem.incrementSafeBlockCount(numOfReplicas);
+  namesystem.incrementSafeBlockCount(numOfReplicas, storedBlock);
 }
 //and fall through to next clause
   }  
@@ -2536,14 +2538,14 @@ public class BlockManager {
   // only complete blocks are counted towards that.
   // In the case that the block just became complete above, completeBlock()
   // handles the safe block count maintenance.
-  namesystem.incrementSafeBlockCount(numCurrentReplica);
+  namesystem.incrementSafeBlockCount(numCurrentReplica, storedBlock);
 }
   }
 
   /**
* Modify (block--datanode) map. Remove block from set of
* needed replications if this takes care of the problem.
-   * @return the block that is stored in blockMap.
+   * @return the block that is stored in blocksMap.
*/
   private Block addStoredBlock(final BlockInfo block,

hadoop git commit: MAPREDUCE-6242. Progress report log is incredibly excessive in application master. Contributed by Varun Saxena.

2015-03-23 Thread devaraj
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 503d8e416 - 943d9ee60


MAPREDUCE-6242. Progress report log is incredibly excessive in application
master. Contributed by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/943d9ee6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/943d9ee6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/943d9ee6

Branch: refs/heads/branch-2
Commit: 943d9ee603b28a769805f94b83ea90ccba717813
Parents: 503d8e4
Author: Devaraj K deva...@apache.org
Authored: Mon Mar 23 22:48:00 2015 +0530
Committer: Devaraj K deva...@apache.org
Committed: Mon Mar 23 22:48:00 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt|   3 +
 .../java/org/apache/hadoop/mapred/Task.java |  15 +-
 .../apache/hadoop/mapreduce/MRJobConfig.java|   5 +
 .../hadoop/mapred/TestTaskProgressReporter.java | 147 +++
 4 files changed, 166 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/943d9ee6/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index e399d3e..06aba95 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -52,6 +52,9 @@ Release 2.8.0 - UNRELEASED
 
 MAPREDUCE-6281. Fix javadoc in Terasort. (Albert Chu via ozawa)
 
+MAPREDUCE-6242. Progress report log is incredibly excessive in 
+application master. (Varun Saxena via devaraj)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/943d9ee6/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 1ea1666..9fab545 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -70,6 +70,8 @@ import org.apache.hadoop.util.ShutdownHookManager;
 import org.apache.hadoop.util.StringInterner;
 import org.apache.hadoop.util.StringUtils;
 
+import com.google.common.annotations.VisibleForTesting;
+
 /**
  * Base class for tasks.
  */
@@ -227,6 +229,11 @@ abstract public class Task implements Writable, 
Configurable {
 gcUpdater = new GcTimeUpdater();
   }
 
+  @VisibleForTesting
+  void setTaskDone() {
+taskDone.set(true);
+  }
+
   
   // Accessors
   
@@ -534,9 +541,6 @@ abstract public class Task implements Writable, 
Configurable {
   public abstract void run(JobConf job, TaskUmbilicalProtocol umbilical)
 throws IOException, ClassNotFoundException, InterruptedException;
 
-  /** The number of milliseconds between progress reports. */
-  public static final int PROGRESS_INTERVAL = 3000;
-
   private transient Progress taskProgress = new Progress();
 
   // Current counters
@@ -711,6 +715,9 @@ abstract public class Task implements Writable, 
Configurable {
   int remainingRetries = MAX_RETRIES;
   // get current flag value and reset it as well
   boolean sendProgress = resetProgressFlag();
+  long taskProgressInterval =
+  conf.getLong(MRJobConfig.TASK_PROGRESS_REPORT_INTERVAL,
+   MRJobConfig.DEFAULT_TASK_PROGRESS_REPORT_INTERVAL);
   while (!taskDone.get()) {
 synchronized (lock) {
   done = false;
@@ -722,7 +729,7 @@ abstract public class Task implements Writable, 
Configurable {
 if (taskDone.get()) {
   break;
 }
-lock.wait(PROGRESS_INTERVAL);
+lock.wait(taskProgressInterval);
   }
   if (taskDone.get()) {
 break;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/943d9ee6/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
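The patch drops the hard-coded 3000 ms PROGRESS_INTERVAL in favor of a per-job value read with conf.getLong, as the Task.java hunk above shows. A hedged sketch of that lookup against Hadoop's Configuration; the literal key and the 3000 ms default below are assumptions standing in for the new MRJobConfig constants:

import org.apache.hadoop.conf.Configuration;

// Hedged sketch of the new lookup in the reporter loop: the wait interval
// now comes from job configuration instead of a compile-time constant.
// Key string and default here are assumptions, not verbatim constants.
public class ProgressIntervalLookup {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    long taskProgressInterval =
        conf.getLong("mapreduce.task.progress-report.interval", 3000L);
    System.out.println(taskProgressInterval); // 3000 unless overridden per job
  }
}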
 

hadoop git commit: HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. Contributed by Brandon Li

2015-03-23 Thread brandonli
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 d2e19160d - 6b9f2d9f3


HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li

(cherry picked from commit 36af4a913c97113bd0486c48e1cb864c5cba46fd)
(cherry picked from commit 503d8e4164ff3da29fcaf56436fe6fab6a450105)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6b9f2d9f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6b9f2d9f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6b9f2d9f

Branch: refs/heads/branch-2.7
Commit: 6b9f2d9f39d0f32a4ca44a730d71e4f0f48d7e7f
Parents: d2e1916
Author: Brandon Li brando...@apache.org
Authored: Mon Mar 23 10:06:47 2015 -0700
Committer: Brandon Li brando...@apache.org
Committed: Mon Mar 23 10:13:11 2015 -0700

--
 .../java/org/apache/hadoop/nfs/NfsExports.java  |  2 +-
 .../org/apache/hadoop/nfs/TestNfsExports.java   | 22 ++--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/site/markdown/HdfsNfsGateway.md |  8 ---
 4 files changed, 28 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b9f2d9f/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
--
diff --git 
a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
 
b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
index 8b6b46a..af96565 100644
--- 
a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
+++ 
b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
@@ -391,7 +391,7 @@ public class NfsExports {
   return new CIDRMatch(privilege,
   new SubnetUtils(pair[0], pair[1]).getInfo());
     } else if (host.contains("*") || host.contains("?") || host.contains("[")
-        || host.contains("]")) {
+        || host.contains("]") || host.contains("(") || host.contains(")")) {
       if (LOG.isDebugEnabled()) {
         LOG.debug("Using Regex match for '" + host + "' and " + privilege);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b9f2d9f/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
--
diff --git 
a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
 
b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
index 349e82a..542975d 100644
--- 
a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
+++ 
b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
@@ -23,8 +23,8 @@ import org.junit.Test;
 
 public class TestNfsExports {
 
-  private final String address1 = "192.168.0.1";
-  private final String address2 = "10.0.0.1";
+  private final String address1 = "192.168.0.12";
+  private final String address2 = "10.0.0.12";
   private final String hostname1 = "a.b.com";
   private final String hostname2 = "a.b.org";
   
@@ -165,6 +165,24 @@ public class TestNfsExports {
   }
   
   @Test
+  public void testRegexGrouping() {
+    NfsExports matcher = new NfsExports(CacheSize, ExpirationPeriod,
+        "192.168.0.(12|34)");
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege(address1, hostname1));
+    // address1 will hit the cache
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege(address1, hostname2));
+
+    matcher = new NfsExports(CacheSize, ExpirationPeriod, "\\w*.a.b.com");
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege("1.2.3.4", "web.a.b.com"));
+    // address 1.2.3.4 will hit the cache
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege("1.2.3.4", "email.a.b.org"));
+  }
+  
+  @Test
   public void testMultiMatchers() throws Exception {
 long shortExpirationPeriod = 1 * 1000 * 1000 * 1000; // 1s
 NfsExports matcher = new NfsExports(CacheSize, shortExpirationPeriod, 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6b9f2d9f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 95cfb2d..b83c9a6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -904,6 +904,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-6841. Use Time.monotonicNow() wherever applicable instead of 
Time.now()
 (Vinayakumar B via kihwal)
 
+HDFS-7942. NFS: support regexp grouping in 
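The NfsExports change above only widens regex detection to hosts containing parentheses; matching itself is plain java.util.regex, so a grouped entry behaves as in this standalone sketch (the caching path of NfsExports is omitted):

import java.util.regex.Pattern;

// Standalone sketch: how a grouped allowed-hosts entry matches once
// NfsExports routes it to regex matching.
public class RegexGroupingDemo {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("192.168.0.(12|34)");
    System.out.println(p.matcher("192.168.0.12").matches()); // true
    System.out.println(p.matcher("192.168.0.34").matches()); // true
    System.out.println(p.matcher("192.168.0.56").matches()); // false
  }
}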

hadoop git commit: HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. Contributed by Brandon Li

2015-03-23 Thread brandonli
Repository: hadoop
Updated Branches:
  refs/heads/trunk 82eda771e - 36af4a913


HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts. 
Contributed by Brandon Li


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/36af4a91
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/36af4a91
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/36af4a91

Branch: refs/heads/trunk
Commit: 36af4a913c97113bd0486c48e1cb864c5cba46fd
Parents: 82eda77
Author: Brandon Li brando...@apache.org
Authored: Mon Mar 23 10:06:47 2015 -0700
Committer: Brandon Li brando...@apache.org
Committed: Mon Mar 23 10:06:47 2015 -0700

--
 .../java/org/apache/hadoop/nfs/NfsExports.java  |  2 +-
 .../org/apache/hadoop/nfs/TestNfsExports.java   | 22 ++--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/site/markdown/HdfsNfsGateway.md |  8 ---
 4 files changed, 28 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/36af4a91/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
--
diff --git 
a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
 
b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
index 8b6b46a..af96565 100644
--- 
a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
+++ 
b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
@@ -391,7 +391,7 @@ public class NfsExports {
   return new CIDRMatch(privilege,
   new SubnetUtils(pair[0], pair[1]).getInfo());
     } else if (host.contains("*") || host.contains("?") || host.contains("[")
-        || host.contains("]")) {
+        || host.contains("]") || host.contains("(") || host.contains(")")) {
       if (LOG.isDebugEnabled()) {
         LOG.debug("Using Regex match for '" + host + "' and " + privilege);
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/36af4a91/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
--
diff --git 
a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
 
b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
index 349e82a..542975d 100644
--- 
a/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
+++ 
b/hadoop-common-project/hadoop-nfs/src/test/java/org/apache/hadoop/nfs/TestNfsExports.java
@@ -23,8 +23,8 @@ import org.junit.Test;
 
 public class TestNfsExports {
 
-  private final String address1 = "192.168.0.1";
-  private final String address2 = "10.0.0.1";
+  private final String address1 = "192.168.0.12";
+  private final String address2 = "10.0.0.12";
   private final String hostname1 = "a.b.com";
   private final String hostname2 = "a.b.org";
   
@@ -165,6 +165,24 @@ public class TestNfsExports {
   }
   
   @Test
+  public void testRegexGrouping() {
+    NfsExports matcher = new NfsExports(CacheSize, ExpirationPeriod,
+        "192.168.0.(12|34)");
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege(address1, hostname1));
+    // address1 will hit the cache
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege(address1, hostname2));
+
+    matcher = new NfsExports(CacheSize, ExpirationPeriod, "\\w*.a.b.com");
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege("1.2.3.4", "web.a.b.com"));
+    // address 1.2.3.4 will hit the cache
+    Assert.assertEquals(AccessPrivilege.READ_ONLY,
+        matcher.getAccessPrivilege("1.2.3.4", "email.a.b.org"));
+  }
+  
+  @Test
   public void testMultiMatchers() throws Exception {
 long shortExpirationPeriod = 1 * 1000 * 1000 * 1000; // 1s
 NfsExports matcher = new NfsExports(CacheSize, shortExpirationPeriod, 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/36af4a91/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index e82c4c4..8c99876 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1232,6 +1232,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-6841. Use Time.monotonicNow() wherever applicable instead of 
Time.now()
 (Vinayakumar B via kihwal)
 
+HDFS-7942. NFS: support regexp grouping in nfs.exports.allowed.hosts 
(brandonli)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and 

hadoop git commit: MAPREDUCE-6242. Progress report log is incredibly excessive in application master. Contributed by Varun Saxena.

2015-03-23 Thread devaraj
Repository: hadoop
Updated Branches:
  refs/heads/trunk 36af4a913 - 7e6f384dd


MAPREDUCE-6242. Progress report log is incredibly excessive in application
master. Contributed by Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7e6f384d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7e6f384d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7e6f384d

Branch: refs/heads/trunk
Commit: 7e6f384dd742de21f29e96ee76df5316529c9019
Parents: 36af4a9
Author: Devaraj K deva...@apache.org
Authored: Mon Mar 23 22:51:20 2015 +0530
Committer: Devaraj K deva...@apache.org
Committed: Mon Mar 23 22:51:20 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt|   3 +
 .../java/org/apache/hadoop/mapred/Task.java |  13 +-
 .../apache/hadoop/mapreduce/MRJobConfig.java|   5 +
 .../hadoop/mapred/TestTaskProgressReporter.java | 160 +++
 4 files changed, 177 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e6f384d/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 20505b6..b8a2a1c 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -300,6 +300,9 @@ Release 2.8.0 - UNRELEASED
 
 MAPREDUCE-6281. Fix javadoc in Terasort. (Albert Chu via ozawa)
 
+MAPREDUCE-6242. Progress report log is incredibly excessive in 
+application master. (Varun Saxena via devaraj)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e6f384d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 7fa5d02..bf5ca22 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -229,6 +229,11 @@ abstract public class Task implements Writable, Configurable {
 gcUpdater = new GcTimeUpdater();
   }
 
+  @VisibleForTesting
+  void setTaskDone() {
+taskDone.set(true);
+  }
+
   
   // Accessors
   
@@ -536,9 +541,6 @@ abstract public class Task implements Writable, Configurable {
   public abstract void run(JobConf job, TaskUmbilicalProtocol umbilical)
 throws IOException, ClassNotFoundException, InterruptedException;
 
-  /** The number of milliseconds between progress reports. */
-  public static final int PROGRESS_INTERVAL = 3000;
-
   private transient Progress taskProgress = new Progress();
 
   // Current counters
@@ -714,6 +716,9 @@ abstract public class Task implements Writable, Configurable {
   int remainingRetries = MAX_RETRIES;
   // get current flag value and reset it as well
   boolean sendProgress = resetProgressFlag();
+  long taskProgressInterval =
+  conf.getLong(MRJobConfig.TASK_PROGRESS_REPORT_INTERVAL,
+   MRJobConfig.DEFAULT_TASK_PROGRESS_REPORT_INTERVAL);
   while (!taskDone.get()) {
 synchronized (lock) {
   done = false;
@@ -726,7 +731,7 @@ abstract public class Task implements Writable, Configurable {
 if (taskDone.get()) {
   break;
 }
-lock.wait(PROGRESS_INTERVAL);
+lock.wait(taskProgressInterval);
   }
   if (taskDone.get()) {
 break;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7e6f384d/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index f0a6ddf..947c814 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ 
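
Since the MRJobConfig hunk is truncated here, a hedged sketch of using the new
setting, relying only on the two constants visible in the Task.java hunk
above; the 30-second value is an arbitrary example, not from the patch.

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    // Sketch only: widen the progress-report interval so long-running
    // tasks log far fewer progress updates in the application master.
    public class ProgressIntervalSketch {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Task.java reads this via conf.getLong(..., DEFAULT_TASK_PROGRESS_REPORT_INTERVAL)
        // and passes it to lock.wait(), so the unit is milliseconds.
        conf.setLong(MRJobConfig.TASK_PROGRESS_REPORT_INTERVAL, 30000L);
        System.out.println(conf.getLong(
            MRJobConfig.TASK_PROGRESS_REPORT_INTERVAL,
            MRJobConfig.DEFAULT_TASK_PROGRESS_REPORT_INTERVAL)); // 30000
      }
    }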

[2/3] hadoop git commit: YARN-3336. FileSystem memory leak in DelegationTokenRenewer.

2015-03-23 Thread cnauroth
YARN-3336. FileSystem memory leak in DelegationTokenRenewer.

(cherry picked from commit 6ca1f12024fd7cec7b01df0f039ca59f3f365dc1)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/342c525e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/342c525e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/342c525e

Branch: refs/heads/branch-2
Commit: 342c525eaa0749175f0e3827d245642776d043a5
Parents: 943d9ee
Author: cnauroth cnaur...@apache.org
Authored: Mon Mar 23 10:45:50 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Mon Mar 23 10:46:06 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../security/DelegationTokenRenewer.java| 13 +++--
 .../security/TestDelegationTokenRenewer.java| 30 ++--
 3 files changed, 41 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/342c525e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1b3ed2c..f5b04d3 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -774,6 +774,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
 YARN-2777. (Naganarasimha G R via ozawa)
 
+YARN-3336. FileSystem memory leak in DelegationTokenRenewer.
+(Zhihai Xu via cnauroth)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/342c525e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index cb456d8..2619971 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -605,6 +605,7 @@ public class DelegationTokenRenewer extends AbstractService {
     rmContext.getSystemCredentialsForApps().put(applicationId, byteBuffer);
   }
 
+  @VisibleForTesting
   protected Token<?>[] obtainSystemTokensForUser(String user,
       final Credentials credentials) throws IOException, InterruptedException {
     // Get new hdfs tokens on behalf of this user
@@ -615,8 +616,16 @@ public class DelegationTokenRenewer extends AbstractService {
         proxyUser.doAs(new PrivilegedExceptionAction<Token<?>[]>() {
           @Override
           public Token<?>[] run() throws Exception {
-            return FileSystem.get(getConfig()).addDelegationTokens(
-              UserGroupInformation.getLoginUser().getUserName(), credentials);
+            FileSystem fs = FileSystem.get(getConfig());
+            try {
+              return fs.addDelegationTokens(
+                  UserGroupInformation.getLoginUser().getUserName(),
+                  credentials);
+            } finally {
+              // Close the FileSystem created by the new proxy user,
+              // So that we don't leave an entry in the FileSystem cache
+              fs.close();
+            }
           }
         });
     return newTokens;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/342c525e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
index 5d31404..99a506a 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
+++ 
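
The TestDelegationTokenRenewer hunk is truncated above. For context, a hedged
illustration of the leak the patch closes: FileSystem.get() returns an
instance cached per scheme, authority, and UGI, so a FileSystem obtained
inside proxyUser.doAs() adds a cache entry for that proxy user which only
close() evicts. None of the code below is from the patch.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    // Illustration only: the general close-or-leak pattern for cached
    // FileSystem instances, mirroring the try/finally added by YARN-3336.
    public class FsCacheSketch {
      static void useAndRelease(Configuration conf) throws IOException {
        FileSystem fs = FileSystem.get(conf); // cached per scheme/authority/UGI
        try {
          fs.getUri(); // stand-in for real work such as addDelegationTokens(...)
        } finally {
          fs.close(); // removes this instance from the FileSystem cache
        }
      }

      public static void main(String[] args) throws IOException {
        useAndRelease(new Configuration());
      }
    }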

[3/3] hadoop git commit: YARN-3336. FileSystem memory leak in DelegationTokenRenewer.

2015-03-23 Thread cnauroth
YARN-3336. FileSystem memory leak in DelegationTokenRenewer.

(cherry picked from commit 6ca1f12024fd7cec7b01df0f039ca59f3f365dc1)
(cherry picked from commit 342c525eaa0749175f0e3827d245642776d043a5)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/57e29720
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/57e29720
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/57e29720

Branch: refs/heads/branch-2.7
Commit: 57e297208d1b6f5b12b1d9251dc0081b4b603ed0
Parents: 6b9f2d9
Author: cnauroth cnaur...@apache.org
Authored: Mon Mar 23 10:45:50 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Mon Mar 23 10:46:22 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../security/DelegationTokenRenewer.java| 13 +++--
 .../security/TestDelegationTokenRenewer.java| 30 ++--
 3 files changed, 41 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/57e29720/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index ef816fc..1cb95ae 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -732,6 +732,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3384. TestLogAggregationService.verifyContainerLogs fails after
 YARN-2777. (Naganarasimha G R via ozawa)
 
+YARN-3336. FileSystem memory leak in DelegationTokenRenewer.
+(Zhihai Xu via cnauroth)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/57e29720/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
index cb456d8..2619971 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java
@@ -605,6 +605,7 @@ public class DelegationTokenRenewer extends AbstractService {
     rmContext.getSystemCredentialsForApps().put(applicationId, byteBuffer);
   }
 
+  @VisibleForTesting
   protected Token<?>[] obtainSystemTokensForUser(String user,
       final Credentials credentials) throws IOException, InterruptedException {
     // Get new hdfs tokens on behalf of this user
@@ -615,8 +616,16 @@ public class DelegationTokenRenewer extends AbstractService {
         proxyUser.doAs(new PrivilegedExceptionAction<Token<?>[]>() {
           @Override
           public Token<?>[] run() throws Exception {
-            return FileSystem.get(getConfig()).addDelegationTokens(
-              UserGroupInformation.getLoginUser().getUserName(), credentials);
+            FileSystem fs = FileSystem.get(getConfig());
+            try {
+              return fs.addDelegationTokens(
+                  UserGroupInformation.getLoginUser().getUserName(),
+                  credentials);
+            } finally {
+              // Close the FileSystem created by the new proxy user,
+              // So that we don't leave an entry in the FileSystem cache
+              fs.close();
+            }
           }
         });
     return newTokens;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/57e29720/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestDelegationTokenRenewer.java
index 5d31404..99a506a 100644
--- 

[2/3] hadoop git commit: HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. Contributed by Lei (Eddy) Xu.

2015-03-23 Thread cnauroth
HDFS-7917. Use file to replace data dirs in test to simulate a disk failure. 
Contributed by Lei (Eddy) Xu.

(cherry picked from commit 2c238ae4e00371ef76582b007bb0e20ac8455d9c)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/01c0bcb1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/01c0bcb1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/01c0bcb1

Branch: refs/heads/branch-2
Commit: 01c0bcb176e22ddefbc8086e382dd1ebd105f9c6
Parents: fad8c78
Author: cnauroth cnaur...@apache.org
Authored: Mon Mar 23 16:29:51 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Mon Mar 23 16:30:33 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../hdfs/server/datanode/DataNodeTestUtils.java | 61 +++-
 .../datanode/TestDataNodeHotSwapVolumes.java| 29 --
 .../datanode/TestDataNodeVolumeFailure.java | 11 +---
 .../TestDataNodeVolumeFailureReporting.java | 46 ---
 .../TestDataNodeVolumeFailureToleration.java|  8 +--
 6 files changed, 88 insertions(+), 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/01c0bcb1/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9981d4f..febec02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -469,6 +469,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7962. Remove duplicated logs in BlockManager. (yliu)
 
+HDFS-7917. Use file to replace data dirs in test to simulate a disk 
failure.
+(Lei (Eddy) Xu via cnauroth)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/01c0bcb1/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
index fd51e52..f9a2ba1 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
@@ -40,7 +40,9 @@ import com.google.common.base.Preconditions;
  * Utility class for accessing package-private DataNode information during tests.
  *
  */
-public class DataNodeTestUtils {  
+public class DataNodeTestUtils {
+  private static final String DIR_FAILURE_SUFFIX = ".origin";
+
   public static DatanodeRegistration 
   getDNRegistrationForBP(DataNode dn, String bpid) throws IOException {
 return dn.getDNRegistrationForBP(bpid);
@@ -159,4 +161,61 @@ public class DataNodeTestUtils {
   final String bpid, final long blkId) {
 return FsDatasetTestUtil.fetchReplicaInfo(dn.getFSDataset(), bpid, blkId);
   }
+
+  /**
+   * It injects disk failures to data dirs by replacing these data dirs with
+   * regular files.
+   *
+   * @param dirs data directories.
+   * @throws IOException on I/O error.
+   */
+  public static void injectDataDirFailure(File... dirs) throws IOException {
+    for (File dir : dirs) {
+      File renamedTo = new File(dir.getPath() + DIR_FAILURE_SUFFIX);
+      if (renamedTo.exists()) {
+        throw new IOException(String.format(
+            "Can not inject failure to dir: %s because %s exists.",
+            dir, renamedTo));
+      }
+      if (!dir.renameTo(renamedTo)) {
+        throw new IOException(String.format("Failed to rename %s to %s.",
+            dir, renamedTo));
+      }
+      if (!dir.createNewFile()) {
+        throw new IOException(String.format(
+            "Failed to create file %s to inject disk failure.", dir));
+      }
+    }
+  }
+
+  /**
+   * Restore the injected data dir failures.
+   *
+   * @see {@link #injectDataDirFailure}.
+   * @param dirs data directories.
+   * @throws IOException
+   */
+  public static void restoreDataDirFromFailure(File... dirs)
+      throws IOException {
+    for (File dir : dirs) {
+      File renamedDir = new File(dir.getPath() + DIR_FAILURE_SUFFIX);
+      if (renamedDir.exists()) {
+        if (dir.exists()) {
+          if (!dir.isFile()) {
+            throw new IOException(
+                "Injected failure data dir is supposed to be file: " + dir);
+          }
+          if (!dir.delete()) {
+            throw new IOException(
+                "Failed to delete injected failure data dir: " + dir);
+          }
+
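
The restore helper is truncated above by the archive. A hedged usage sketch
for the two helpers; the data-dir path is illustrative only.

    import java.io.File;
    import java.io.IOException;

    import org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils;

    // Sketch only: inject a disk failure for a test, then undo it.
    public class DiskFailureSketch {
      public static void main(String[] args) throws IOException {
        File dataDir = new File("/tmp/dfs/data1"); // illustrative path
        // The dir is renamed aside and replaced by a regular file, so the
        // DataNode's next attempt to use it as a directory fails like a bad disk.
        DataNodeTestUtils.injectDataDirFailure(dataDir);
        // ... run the volume-failure assertions here ...
        DataNodeTestUtils.restoreDataDirFromFailure(dataDir);
      }
    }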

hadoop git commit: YARN-3393. Getting application(s) goes wrong when app finishes before starting the attempt. Contributed by Zhijie Shen

2015-03-23 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/trunk 2c238ae4e -> 9fae455e2


YARN-3393. Getting application(s) goes wrong when app finishes before
starting the attempt. Contributed by Zhijie Shen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9fae455e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9fae455e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9fae455e

Branch: refs/heads/trunk
Commit: 9fae455e26e0230107e1c6db58a49a5b6b296cf4
Parents: 2c238ae
Author: Xuan xg...@apache.org
Authored: Mon Mar 23 20:33:16 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Mon Mar 23 20:33:16 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 ...pplicationHistoryManagerOnTimelineStore.java | 13 +++
 ...pplicationHistoryManagerOnTimelineStore.java | 39 +---
 3 files changed, 42 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9fae455e/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e7d4f59..3d9f271 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -828,6 +828,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3336. FileSystem memory leak in DelegationTokenRenewer.
 (Zhihai Xu via cnauroth)
 
+YARN-3393. Getting application(s) goes wrong when app finishes before
+starting the attempt. (Zhijie Shen via xgong)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9fae455e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 1010f62..49041c7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -517,15 +517,14 @@ public class ApplicationHistoryManagerOnTimelineStore extends AbstractService
       if (app.appReport.getCurrentApplicationAttemptId() != null) {
         ApplicationAttemptReport appAttempt =
             getApplicationAttempt(app.appReport.getCurrentApplicationAttemptId());
-        if (appAttempt != null) {
-          app.appReport.setHost(appAttempt.getHost());
-          app.appReport.setRpcPort(appAttempt.getRpcPort());
-          app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
-          app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
-        }
+        app.appReport.setHost(appAttempt.getHost());
+        app.appReport.setRpcPort(appAttempt.getRpcPort());
+        app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
+        app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
       }
-    } catch (AuthorizationException e) {
+    } catch (AuthorizationException | ApplicationAttemptNotFoundException e) {
       // AuthorizationException is thrown because the user doesn't have access
+      // It's possible that the app is finished before the first attempt is created.
       app.appReport.setDiagnostics(null);
       app.appReport.setCurrentApplicationAttemptId(null);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9fae455e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
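
The test hunk is truncated here and in the two branch cherry-picks that
follow. As a self-contained illustration of why the removed null check no
longer protected this path (stand-in types, not the YARN classes): once
getApplicationAttempt throws on a missing attempt instead of returning null,
only a catch clause can provide the fallback.

    // Illustration only, with simplified stand-in types.
    public class AttemptFallbackSketch {
      static class AttemptNotFoundException extends Exception {}

      static String getAttempt(String id) throws AttemptNotFoundException {
        // Simulates an app that finished before its first attempt was recorded.
        throw new AttemptNotFoundException();
      }

      public static void main(String[] args) {
        String trackingUrl;
        try {
          trackingUrl = getAttempt("appattempt_1");
        } catch (AttemptNotFoundException e) {
          trackingUrl = null; // mirrors the patch's fallback branch
        }
        System.out.println(trackingUrl); // prints: null
      }
    }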
 

hadoop git commit: YARN-3393. Getting application(s) goes wrong when app finishes before starting the attempt. Contributed by Zhijie Shen

2015-03-23 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 8e1c33e70 -> 4dfd84ec0


YARN-3393. Getting application(s) goes wrong when app finishes before
starting the attempt. Contributed by Zhijie Shen

(cherry picked from commit 9fae455e26e0230107e1c6db58a49a5b6b296cf4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4dfd84ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4dfd84ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4dfd84ec

Branch: refs/heads/branch-2.7
Commit: 4dfd84ec0829bab8c4acde5aa26b884eca084b52
Parents: 8e1c33e
Author: Xuan xg...@apache.org
Authored: Mon Mar 23 20:33:16 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Mon Mar 23 20:34:59 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 ...pplicationHistoryManagerOnTimelineStore.java | 13 +++
 ...pplicationHistoryManagerOnTimelineStore.java | 39 +---
 3 files changed, 42 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4dfd84ec/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1cb95ae..ff5adcd 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -735,6 +735,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3336. FileSystem memory leak in DelegationTokenRenewer.
 (Zhihai Xu via cnauroth)
 
+YARN-3393. Getting application(s) goes wrong when app finishes before
+starting the attempt. (Zhijie Shen via xgong)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4dfd84ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 1010f62..49041c7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -517,15 +517,14 @@ public class ApplicationHistoryManagerOnTimelineStore extends AbstractService
       if (app.appReport.getCurrentApplicationAttemptId() != null) {
         ApplicationAttemptReport appAttempt =
             getApplicationAttempt(app.appReport.getCurrentApplicationAttemptId());
-        if (appAttempt != null) {
-          app.appReport.setHost(appAttempt.getHost());
-          app.appReport.setRpcPort(appAttempt.getRpcPort());
-          app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
-          app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
-        }
+        app.appReport.setHost(appAttempt.getHost());
+        app.appReport.setRpcPort(appAttempt.getRpcPort());
+        app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
+        app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
       }
-    } catch (AuthorizationException e) {
+    } catch (AuthorizationException | ApplicationAttemptNotFoundException e) {
       // AuthorizationException is thrown because the user doesn't have access
+      // It's possible that the app is finished before the first attempt is created.
       app.appReport.setDiagnostics(null);
       app.appReport.setCurrentApplicationAttemptId(null);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4dfd84ec/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
 

hadoop git commit: YARN-3393. Getting application(s) goes wrong when app finishes before starting the attempt. Contributed by Zhijie Shen

2015-03-23 Thread xgong
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 01c0bcb17 -> cbdcdfad6


YARN-3393. Getting application(s) goes wrong when app finishes before
starting the attempt. Contributed by Zhijie Shen

(cherry picked from commit 9fae455e26e0230107e1c6db58a49a5b6b296cf4)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbdcdfad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbdcdfad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbdcdfad

Branch: refs/heads/branch-2
Commit: cbdcdfad6de81e17fb586bc2a53b37da43defd79
Parents: 01c0bcb
Author: Xuan xg...@apache.org
Authored: Mon Mar 23 20:33:16 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Mon Mar 23 20:34:29 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 ++
 ...pplicationHistoryManagerOnTimelineStore.java | 13 +++
 ...pplicationHistoryManagerOnTimelineStore.java | 39 +---
 3 files changed, 42 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbdcdfad/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0a09e0a..107f5db 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -783,6 +783,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3336. FileSystem memory leak in DelegationTokenRenewer.
 (Zhihai Xu via cnauroth)
 
+YARN-3393. Getting application(s) goes wrong when app finishes before
+starting the attempt. (Zhijie Shen via xgong)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbdcdfad/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
index 1010f62..49041c7 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
@@ -517,15 +517,14 @@ public class ApplicationHistoryManagerOnTimelineStore extends AbstractService
       if (app.appReport.getCurrentApplicationAttemptId() != null) {
         ApplicationAttemptReport appAttempt =
             getApplicationAttempt(app.appReport.getCurrentApplicationAttemptId());
-        if (appAttempt != null) {
-          app.appReport.setHost(appAttempt.getHost());
-          app.appReport.setRpcPort(appAttempt.getRpcPort());
-          app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
-          app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
-        }
+        app.appReport.setHost(appAttempt.getHost());
+        app.appReport.setRpcPort(appAttempt.getRpcPort());
+        app.appReport.setTrackingUrl(appAttempt.getTrackingUrl());
+        app.appReport.setOriginalTrackingUrl(appAttempt.getOriginalTrackingUrl());
       }
-    } catch (AuthorizationException e) {
+    } catch (AuthorizationException | ApplicationAttemptNotFoundException e) {
       // AuthorizationException is thrown because the user doesn't have access
+      // It's possible that the app is finished before the first attempt is created.
       app.appReport.setDiagnostics(null);
       app.appReport.setCurrentApplicationAttemptId(null);
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbdcdfad/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java