[15/50] hadoop git commit: HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Jean-Pierre Matsumoto.

2015-03-17 Thread zjshen
HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Jean-Pierre 
Matsumoto.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/79426f33
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/79426f33
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/79426f33

Branch: refs/heads/YARN-2928
Commit: 79426f3334ade5850fbf169764f540ede00fe366
Parents: b308a8d
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sun Mar 15 14:17:35 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sun Mar 15 14:29:49 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../src/site/markdown/SchedulerLoadSimulator.md   |  2 +-
 .../src/site/markdown/HadoopStreaming.md.vm   | 14 +++---
 3 files changed, 11 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/79426f33/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 55028cb..bb08cfe 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1100,6 +1100,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt
 synchronization. (Sean Busbey via yliu)
 
+HADOOP-11558. Fix dead links to doc of hadoop-tools. (Jean-Pierre 
+Matsumoto via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79426f33/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
--
diff --git 
a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md 
b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
index ca179ee..2cffc86 100644
--- a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
+++ b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
@@ -43,7 +43,7 @@ The Yarn Scheduler Load Simulator (SLS) is such a tool, which can simulate large
 The simulator will exercise the real Yarn `ResourceManager` removing the 
network factor by simulating `NodeManagers` and `ApplicationMasters` via 
handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. 
To keep tracking of scheduler behavior and performance, a scheduler wrapper 
will wrap the real scheduler.
 
-The size of the cluster and the application load can be loaded from configuration files, which are generated from job history files directly by adopting [Apache Rumen](https://hadoop.apache.org/docs/stable/rumen.html).
+The size of the cluster and the application load can be loaded from configuration files, which are generated from job history files directly by adopting [Apache Rumen](../hadoop-rumen/Rumen.html).
 
 The simulator will produce real time metrics while executing, including:
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/79426f33/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
--
diff --git 
a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm 
b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
index 0b64586..b4c5e38 100644
--- a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
+++ b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
@@ -201,7 +201,7 @@ To specify additional local temp directories use:
  -D mapred.system.dir=/tmp/system
  -D mapred.temp.dir=/tmp/temp
 
-**Note:** For more details on job configuration parameters see: [mapred-default.xml](./mapred-default.xml)
+**Note:** For more details on job configuration parameters see: [mapred-default.xml](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml)
 
 $H4 Specifying Map-Only Jobs
 
@@ -322,7 +322,7 @@ More Usage Examples
 
 $H3 Hadoop Partitioner Class
 
-Hadoop has a library class, [KeyFieldBasedPartitioner](../../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html), that is useful for many applications. This class allows the Map/Reduce framework to partition the map outputs based on certain key fields, not the whole keys. For example:
+Hadoop has a library class, [KeyFieldBasedPartitioner](../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html), that is useful for many applications. This class allows the Map/Reduce framework to partition the map outputs based on certain key fields, not the whole keys. For example:
 
 hadoop jar hadoop-streaming-${project.version}.jar \
   -D 
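The partition-on-key-prefix behavior described above can be sketched in plain Java. Note this is only an illustration of the idea: `KeyFieldPartitionDemo` and its dot-separated key format are hypothetical stand-ins, not the actual `KeyFieldBasedPartitioner` API.

```java
import java.util.Arrays;

public class KeyFieldPartitionDemo {
    // Illustrates the idea behind KeyFieldBasedPartitioner: partition on
    // the first numFields separator-delimited key fields, not the whole key.
    static int partitionFor(String key, int numFields, int numPartitions) {
        String[] parts = key.split("\\.");
        String prefix = String.join(".",
            Arrays.copyOfRange(parts, 0, Math.min(numFields, parts.length)));
        // Mask the sign bit so the modulo result is a valid partition index.
        return (prefix.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Keys sharing the first two fields land in the same partition.
        int p1 = partitionFor("11.12.1", 2, 4);
        int p2 = partitionFor("11.12.3", 2, 4);
        System.out.println(p1 == p2); // true
    }
}
```

With `-k1,2`-style options the real partitioner does the same kind of prefix hashing over the configured key fields.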

[06/50] hadoop git commit: HDFS-7903. Cannot recover block after truncate and delete snapshot. Contributed by Plamen Jeliazkov.

2015-03-17 Thread zjshen
HDFS-7903. Cannot recover block after truncate and delete snapshot. Contributed 
by Plamen Jeliazkov.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6acb7f21
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6acb7f21
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6acb7f21

Branch: refs/heads/YARN-2928
Commit: 6acb7f2110897264241df44d564db2f85260348f
Parents: d324164
Author: Konstantin V Shvachko s...@apache.org
Authored: Fri Mar 13 12:39:01 2015 -0700
Committer: Konstantin V Shvachko s...@apache.org
Committed: Fri Mar 13 13:12:51 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../server/namenode/snapshot/FileDiffList.java  | 19 +++--
 .../hdfs/server/namenode/TestFileTruncate.java  | 30 
 3 files changed, 49 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6acb7f21/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ac7e096..a149f18 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1148,6 +1148,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
 idempotent (Tsz Wo Nicholas Sze via brandonli)
 
+HDFS-7903. Cannot recover block after truncate and delete snapshot.
+(Plamen Jeliazkov via shv)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6acb7f21/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
index 0c94554..5c9e121 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java
@@ -20,8 +20,11 @@ package org.apache.hadoop.hdfs.server.namenode.snapshot;
 import java.util.Collections;
 import java.util.List;
 
+import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguous;
+import 
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoContiguousUnderConstruction;
 import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
 import org.apache.hadoop.hdfs.server.namenode.INode;
 import org.apache.hadoop.hdfs.server.namenode.INode.BlocksMapUpdateInfo;
 import org.apache.hadoop.hdfs.server.namenode.INodeFile;
@@ -125,9 +128,19 @@ public class FileDiffList extends
 continue;
   break;
 }
-// Collect the remaining blocks of the file
-while(i < removedBlocks.length) {
-  collectedBlocks.addDeleteBlock(removedBlocks[i++]);
+// Check if last block is part of truncate recovery
+BlockInfoContiguous lastBlock = file.getLastBlock();
+Block dontRemoveBlock = null;
+if(lastBlock != null && lastBlock.getBlockUCState().equals(
+HdfsServerConstants.BlockUCState.UNDER_RECOVERY)) {
+  dontRemoveBlock = ((BlockInfoContiguousUnderConstruction) lastBlock)
+  .getTruncateBlock();
+}
+// Collect the remaining blocks of the file, ignoring truncate block
+for(; i < removedBlocks.length; i++) {
+  if(dontRemoveBlock == null || !removedBlocks[i].equals(dontRemoveBlock)) {
+collectedBlocks.addDeleteBlock(removedBlocks[i]);
+  }
 }
   }
 }
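The control flow of the fix above can be isolated into a small, self-contained sketch. The names here are hypothetical (the real code works on `BlockInfoContiguous` objects, not strings): when the file's last block is under truncate recovery, that block must survive snapshot deletion, so it is skipped while collecting blocks to delete.

```java
import java.util.ArrayList;
import java.util.List;

public class TruncateBlockSkip {
    // Collect blocks for deletion, skipping the one block (if any) that is
    // still referenced by an in-progress truncate recovery.
    static List<String> collectDeletable(String[] removedBlocks,
                                         String dontRemoveBlock) {
        List<String> collected = new ArrayList<>();
        for (String b : removedBlocks) {
            if (dontRemoveBlock == null || !b.equals(dontRemoveBlock)) {
                collected.add(b);
            }
        }
        return collected;
    }

    public static void main(String[] args) {
        // blk_3 is under recovery, so it must not be scheduled for deletion.
        List<String> out = collectDeletable(
            new String[] {"blk_1", "blk_2", "blk_3"}, "blk_3");
        System.out.println(out); // the recovery block is retained
    }
}
```

Before the fix, the unconditional `while` loop deleted every remaining block, which is what made the block unrecoverable after truncate plus snapshot deletion.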

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6acb7f21/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
index 260d8bb..3b6e107 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
@@ -178,6 +178,36 @@ public class TestFileTruncate {
 

[09/50] hadoop git commit: HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error (cmccabe)

2015-03-17 Thread zjshen
HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail 
to tell the DFSClient about it because of a network error (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5aa892ed
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5aa892ed
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5aa892ed

Branch: refs/heads/YARN-2928
Commit: 5aa892ed486d42ae6b94c4866b92cd2b382ea640
Parents: 6fdef76
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Fri Mar 13 18:29:49 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Fri Mar 13 18:29:49 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/BlockReaderFactory.java  | 23 -
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +
 .../datatransfer/DataTransferProtocol.java  |  5 +-
 .../hdfs/protocol/datatransfer/Receiver.java|  2 +-
 .../hdfs/protocol/datatransfer/Sender.java  |  4 +-
 .../hdfs/server/datanode/DataXceiver.java   | 95 
 .../server/datanode/ShortCircuitRegistry.java   | 13 ++-
 .../src/main/proto/datatransfer.proto   | 11 +++
 .../shortcircuit/TestShortCircuitCache.java | 63 +
 10 files changed, 178 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5aa892ed/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c3f9367..ff00b0c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1177,6 +1177,9 @@ Release 2.7.0 - UNRELEASED
   HDFS-7722. DataNode#checkDiskError should also remove Storage when error
   is found. (Lei Xu via Colin P. McCabe)
 
+  HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
+  fail to tell the DFSClient about it because of a network error (cmccabe)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5aa892ed/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
index ba48c79..1e915b2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitFdResponse.USE_RECEIPT_VERIFICATION;
+
 import java.io.BufferedOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
@@ -69,6 +71,12 @@ import com.google.common.base.Preconditions;
 public class BlockReaderFactory implements ShortCircuitReplicaCreator {
   static final Log LOG = LogFactory.getLog(BlockReaderFactory.class);
 
+  public static class FailureInjector {
+public void injectRequestFileDescriptorsFailure() throws IOException {
+  // do nothing
+}
+  }
+
   @VisibleForTesting
   static ShortCircuitReplicaCreator
   createShortCircuitReplicaInfoCallback = null;
@@ -76,6 +84,11 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   private final DFSClient.Conf conf;
 
   /**
+   * Injects failures into specific operations during unit tests.
+   */
+  private final FailureInjector failureInjector;
+
+  /**
* The file name, for logging and debugging purposes.
*/
   private String fileName;
@@ -169,6 +182,7 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 
   public BlockReaderFactory(DFSClient.Conf conf) {
 this.conf = conf;
+this.failureInjector = conf.brfFailureInjector;
 this.remainingCacheTries = conf.nCachedConnRetry;
   }
 
@@ -518,11 +532,12 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 final DataOutputStream out =
 new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
 SlotId slotId = slot == null ? null : slot.getSlotId();
-new Sender(out).requestShortCircuitFds(block, token, slotId, 1);
+new Sender(out).requestShortCircuitFds(block, token, slotId, 1, true);
 DataInputStream in = new DataInputStream(peer.getInputStream());
 BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
 PBHelper.vintPrefixed(in));
 DomainSocket sock = 
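The `FailureInjector` added by this patch follows a common test-hook pattern: production code calls a no-op injector at the vulnerable point, and unit tests substitute a subclass that throws. A minimal sketch of the pattern, with hypothetical method and return values (the real injector sits inside `BlockReaderFactory` and the failure is a network error mid-handshake):

```java
public class FailureInjectorDemo {
    // No-op in production; tests override to simulate a failure at the
    // exact point where the DataNode has allocated a shm slot but the
    // client has not yet learned about it.
    static class FailureInjector {
        public void injectRequestFileDescriptorsFailure() throws Exception {
            // do nothing
        }
    }

    static String requestFileDescriptors(FailureInjector injector) {
        try {
            injector.injectRequestFileDescriptorsFailure();
            return "fds-received";
        } catch (Exception e) {
            return "request-failed"; // caller must release the slot here
        }
    }

    public static void main(String[] args) {
        FailureInjector prod = new FailureInjector();
        FailureInjector failing = new FailureInjector() {
            @Override
            public void injectRequestFileDescriptorsFailure() throws Exception {
                throw new Exception("simulated network error");
            }
        };
        System.out.println(requestFileDescriptors(prod));    // fds-received
        System.out.println(requestFileDescriptors(failing)); // request-failed
    }
}
```

Wiring the injector through configuration (as `conf.brfFailureInjector` does above) lets tests exercise the error path deterministically without real network faults.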

[33/50] hadoop git commit: MAPREDUCE-6100. replace mapreduce.job.credentials.binary with MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability. Contributed by Zhihai Xu.

2015-03-17 Thread zjshen
MAPREDUCE-6100. replace mapreduce.job.credentials.binary with 
MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability. 
Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f222bde2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f222bde2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f222bde2

Branch: refs/heads/YARN-2928
Commit: f222bde273cc10a38945dc31e85206a0c4f06a12
Parents: 046521c
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 11:06:35 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 11:06:35 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt | 4 
 .../src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java  | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f222bde2/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 52880f6..ee21b70 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -253,6 +253,10 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6100. replace mapreduce.job.credentials.binary with
+MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability.
+(Zhihai Xu via harsh)
+
 MAPREDUCE-6105. Inconsistent configuration in property
 mapreduce.reduce.shuffle.merge.percent. (Ray Chiang via harsh)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f222bde2/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
index 30a87c7..023bd63 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
@@ -383,7 +383,7 @@ class JobSubmitter {
   throws IOException {
 // add tokens and secrets coming from a token storage file
 String binaryTokenFilename =
-  conf.get("mapreduce.job.credentials.binary");
+  conf.get(MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY);
 if (binaryTokenFilename != null) {
   Credentials binary = Credentials.readTokenStorageFile(
   FileSystem.getLocal(conf).makeQualified(
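The change itself is mechanical, but the pattern it enforces is worth a sketch: keeping configuration keys as named constants gives compile-time checking and IDE discoverability instead of repeated string literals. This is a hedged illustration only; the constant name mirrors `MRJobConfig`, but the class here is hypothetical.

```java
public class ConfigKeys {
    // One authoritative definition of the key; call sites reference the
    // constant instead of retyping (and possibly mistyping) the literal.
    public static final String MAPREDUCE_JOB_CREDENTIALS_BINARY =
        "mapreduce.job.credentials.binary";

    public static void main(String[] args) {
        // Both forms resolve to the same key, but only the constant is
        // checked by the compiler and found by "find usages".
        System.out.println(MAPREDUCE_JOB_CREDENTIALS_BINARY);
    }
}
```

A typo in a string literal silently reads a nonexistent key (usually yielding `null` or a default); a typo in a constant name fails to compile.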



[19/50] hadoop git commit: YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA, Andrew Purtell, and Allen Wittenauer.

2015-03-17 Thread zjshen
YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc 
comments. Contributed by Akira AJISAKA, Andrew Purtell, and Allen Wittenauer.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3da9a97c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3da9a97c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3da9a97c

Branch: refs/heads/YARN-2928
Commit: 3da9a97cfbcc3a1c50aaf85b1a129d4d269cd5fd
Parents: 3ff1ba2
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Mon Mar 16 23:19:05 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Mon Mar 16 23:19:05 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../yarn/api/ApplicationBaseProtocol.java   | 44 -
 .../yarn/api/ApplicationClientProtocol.java |  3 -
 .../api/protocolrecords/AllocateRequest.java| 25 ---
 .../api/protocolrecords/AllocateResponse.java   | 68 ++--
 .../FinishApplicationMasterRequest.java | 25 ---
 .../FinishApplicationMasterResponse.java|  7 +-
 .../protocolrecords/GetApplicationsRequest.java |  1 -
 .../GetClusterMetricsResponse.java  |  4 +-
 .../GetContainerStatusesRequest.java|  2 -
 .../GetContainerStatusesResponse.java   |  2 -
 .../protocolrecords/GetQueueInfoRequest.java|  2 +-
 .../protocolrecords/GetQueueInfoResponse.java   | 11 ++--
 .../KillApplicationResponse.java|  9 ++-
 .../RegisterApplicationMasterRequest.java   | 33 +-
 .../RegisterApplicationMasterResponse.java  | 11 ++--
 .../protocolrecords/StartContainerRequest.java  |  9 +--
 .../api/records/ApplicationAttemptReport.java   | 23 +++
 .../yarn/api/records/ApplicationReport.java | 47 +++---
 .../records/ApplicationSubmissionContext.java   | 50 +++---
 .../hadoop/yarn/api/records/Container.java  | 49 +++---
 .../api/records/ContainerLaunchContext.java | 35 +-
 .../yarn/api/records/ContainerReport.java   | 29 -
 .../yarn/api/records/ContainerStatus.java   | 21 +++---
 .../yarn/api/records/LocalResourceType.java | 32 -
 .../api/records/LocalResourceVisibility.java| 31 +
 .../yarn/api/records/LogAggregationContext.java | 39 ++-
 .../hadoop/yarn/api/records/NodeReport.java | 25 ---
 .../yarn/api/records/PreemptionMessage.java | 32 -
 .../hadoop/yarn/api/records/QueueACL.java   | 13 ++--
 .../hadoop/yarn/api/records/QueueInfo.java  | 25 ---
 .../hadoop/yarn/api/records/QueueState.java | 15 ++---
 .../yarn/api/records/ReservationRequest.java| 17 ++---
 .../records/ReservationRequestInterpreter.java  | 38 +--
 .../yarn/api/records/ResourceRequest.java   | 51 +++
 .../hadoop/yarn/conf/YarnConfiguration.java |  5 +-
 .../UpdateNodeResourceRequest.java  |  4 +-
 .../hadoop/yarn/client/api/AHSClient.java   | 24 +++
 .../hadoop/yarn/client/api/AMRMClient.java  |  4 +-
 .../apache/hadoop/yarn/client/api/NMClient.java |  4 +-
 .../hadoop/yarn/client/api/NMTokenCache.java| 58 -
 .../hadoop/yarn/client/api/YarnClient.java  | 23 ---
 .../nodelabels/CommonNodeLabelsManager.java |  6 +-
 .../hadoop/yarn/nodelabels/NodeLabelsStore.java |  3 +-
 .../server/security/ApplicationACLsManager.java |  1 -
 .../apache/hadoop/yarn/util/StringHelper.java   |  6 +-
 .../org/apache/hadoop/yarn/webapp/WebApps.java  |  4 +-
 .../registry/client/binding/RegistryUtils.java  |  8 +--
 .../client/impl/RegistryOperationsClient.java   |  2 +-
 .../client/impl/zk/ZookeeperConfigOptions.java  |  3 +-
 .../server/services/MicroZookeeperService.java  | 10 +--
 .../registry/server/services/package-info.java  |  9 ++-
 ...TimelineAuthenticationFilterInitializer.java | 13 ++--
 .../org/apache/hadoop/yarn/lib/ZKClient.java|  2 +-
 .../RegisterNodeManagerRequest.java |  3 +-
 .../server/api/records/NodeHealthStatus.java| 24 ---
 .../server/nodemanager/ContainerExecutor.java   |  8 ++-
 .../util/NodeManagerHardwareUtils.java  |  8 +--
 .../rmapp/attempt/RMAppAttempt.java | 11 ++--
 .../scheduler/SchedulerNode.java|  2 +-
 .../scheduler/SchedulerUtils.java   |  3 +-
 .../fair/policies/ComputeFairShares.java| 19 +++---
 .../security/DelegationTokenRenewer.java|  2 -
 .../yarn/server/webproxy/ProxyUriUtils.java |  2 +-
 64 files changed, 517 insertions(+), 585 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt

[31/50] hadoop git commit: HDFS-7838. Expose truncate API for libhdfs. (yliu)

2015-03-17 Thread zjshen
HDFS-7838. Expose truncate API for libhdfs. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c2db34
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c2db34
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c2db34

Branch: refs/heads/YARN-2928
Commit: 48c2db34eff376c0f3a72587a5540b1e3dffafd2
Parents: ef9946c
Author: yliu y...@apache.org
Authored: Tue Mar 17 07:22:17 2015 +0800
Committer: yliu y...@apache.org
Committed: Tue Mar 17 07:22:17 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |  6 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  | 37 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  | 15 
 4 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9339b97..ad3e880 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -364,6 +364,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6488. Support HDFS superuser in NFS gateway. (brandonli)
 
+HDFS-7838. Expose truncate API for libhdfs. (yliu)
+
   IMPROVEMENTS
 
 HDFS-7752. Improve description for

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
index deb11ef..86b4faf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
@@ -1124,6 +1124,12 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+errno = ENOTSUP;
+return -1;
+}
+
 tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer, tSize length)
 {
 if (length == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
index 34a..504d47e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
@@ -1037,6 +1037,43 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+jobject jFS = (jobject)fs;
+jthrowable jthr;
+jvalue jVal;
+jobject jPath = NULL;
+
+JNIEnv *env = getJNIEnv();
+
+if (!env) {
+errno = EINTERNAL;
+return -1;
+}
+
+/* Create an object of org.apache.hadoop.fs.Path */
+jthr = constructNewObjectOfPath(env, path, &jPath);
+if (jthr) {
+errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+"hdfsTruncateFile(%s): constructNewObjectOfPath", path);
+return -1;
+}
+
+jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
+"truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
+jPath, newlength);
+destroyLocalReference(env, jPath);
+if (jthr) {
+errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+"hdfsTruncateFile(%s): FileSystem#truncate", path);
+return -1;
+}
+return -1;
+}
+if (jVal.z == JNI_TRUE) {
+return 1;
+}
+return 0;
+}
+
 int hdfsUnbufferFile(hdfsFile file)
 {
 int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
index 64889ed..5b7bc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
@@ -396,6 +396,21 @@ extern "C" {
   int bufferSize, short replication, tSize blocksize);
 
 /**
+ * hdfsTruncateFile - Truncate an hdfs file to the given length.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+ * @param newlength The size the file is to be truncated to
+ * @return 1 if the file has been 

[18/50] hadoop git commit: YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA, Andrew Purtell, and Allen Wittenauer.

2015-03-17 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
index 9923806..bfe10d6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
@@ -349,7 +349,7 @@ public abstract class AMRMClient<T extends AMRMClient.ContainerRequest> extends
* Set the NM token cache for the <code>AMRMClient</code>. This cache must
* be shared with the {@link NMClient} used to manage containers for the
* <code>AMRMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
@@ -363,7 +363,7 @@ public abstract class AMRMClient<T extends AMRMClient.ContainerRequest> extends
* Get the NM token cache of the <code>AMRMClient</code>. This cache must be
* shared with the {@link NMClient} used to manage containers for the
* <code>AMRMClient</code>.
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
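JDK 8's doclint rejects the self-closing `<p/>` form as invalid HTML, which is why this patch rewrites every occurrence to a bare `<p>`. A hypothetical helper showing the mechanical rewrite (the real change was done by hand across the files listed in the stat section):

```java
public class PTagFix {
    // Mirrors the rewrite applied throughout this patch: JDK 8 doclint
    // rejects the self-closing <p/> in Javadoc, so it becomes a bare <p>.
    static String fixPTags(String javadoc) {
        return javadoc.replace("<p/>", "<p>");
    }

    public static void main(String[] args) {
        String before = "First paragraph.<p/>Second paragraph.";
        System.out.println(fixPTags(before)); // First paragraph.<p>Second paragraph.
    }
}
```

Under `javadoc -Xdoclint:all` (the JDK 8 default for errors), an unfixed `<p/>` fails the build, so these edits are required, not cosmetic.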

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
index 721728e..08b911b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
@@ -125,7 +125,7 @@ public abstract class NMClient extends AbstractService {
* Set the NM Token cache of the <code>NMClient</code>. This cache must be
* shared with the {@link AMRMClient} that requested the containers managed
* by this <code>NMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
@@ -139,7 +139,7 @@ public abstract class NMClient extends AbstractService {
* Get the NM token cache of the <code>NMClient</code>. This cache must be
* shared with the {@link AMRMClient} that requested the containers managed
* by this <code>NMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
index 0e7356f..0c349cc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
@@ -34,26 +34,26 @@ import com.google.common.annotations.VisibleForTesting;
 /**
  * NMTokenCache manages NMTokens required for an Application Master
  * communicating with individual NodeManagers.
- * <p/>
+ * <p>
 * By default Yarn client libraries {@link AMRMClient} and {@link NMClient} use
 * {@link #getSingleton()} instance of the cache.
 * <ul>
- * <li>Using the singleton instance of the cache is appropriate when running a
- * single ApplicationMaster in the same JVM.</li>
- * <li>When using the singleton, users don't need to do anything special,
- * {@link AMRMClient} and {@link NMClient} are already set up to use the default
- * singleton {@link NMTokenCache}</li>
+ *   <li>
+ * Using the singleton instance of the cache is appropriate when running a
+ * single ApplicationMaster in the same JVM.
+ *   </li>
+ *   <li>
+ * When using the singleton, users don't need to do anything special,
+ * {@link AMRMClient} and {@link NMClient} are already set up to use the
+ * default singleton {@link NMTokenCache}
+ * </li>
 * </ul>
- * <p/>
  

[42/50] hadoop git commit: HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin P. McCabe)

2015-03-17 Thread zjshen
HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin 
P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8846707
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8846707
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8846707

Branch: refs/heads/YARN-2928
Commit: d8846707c58c5c3ec542128df13a82ddc05fb347
Parents: 487374b
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 17 10:47:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 17 10:47:21 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8846707/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bbe1f02..3e11356 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -756,6 +756,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
 (Allen Wittenauer via shv)
 
+HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via
+Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8846707/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index f970fef..3c8fd31 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3089,6 +3089,7 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
       throw new IllegalArgumentException("Don't support Quota for storage type : "
           + type.toString());
     }
+    TraceScope scope = getPathTraceScope("setQuotaByStorageType", src);
     try {
       namenode.setQuota(src, HdfsConstants.QUOTA_DONT_SET, quota, type);
     } catch (RemoteException re) {
@@ -3097,6 +3098,8 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
         QuotaByStorageTypeExceededException.class,
         UnresolvedPathException.class,
         SnapshotAccessControlException.class);
+    } finally {
+      scope.close();
     }
   }
   /**


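The patch above follows the usual scope-per-operation tracing shape: open a scope before the RPC and close it in `finally` so the span ends on both the success and failure paths. A minimal self-contained sketch, with a hypothetical `TraceScope` stand-in rather than the real HTrace class:

```java
// Sketch of the open-scope / try / finally-close tracing pattern.
public class TraceScopeSketch {
    static class TraceScope implements AutoCloseable {
        final String name;
        TraceScope(String name) {
            this.name = name;
            System.out.println("begin " + name);
        }
        @Override
        public void close() {
            System.out.println("end " + name);
        }
    }

    static void setQuota(boolean fail) {
        TraceScope scope = new TraceScope("setQuotaByStorageType");
        try {
            if (fail) {
                throw new RuntimeException("remote error");
            }
            System.out.println("rpc ok");
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        } finally {
            scope.close(); // the span is closed on both paths
        }
    }

    public static void main(String[] args) {
        setQuota(false);
        setQuota(true);
    }
}
```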

[10/50] hadoop git commit: Revert HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error (cmccabe) (jenkins didn't r

2015-03-17 Thread zjshen
Revert HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot 
and fail to tell the DFSClient about it because of a network error (cmccabe) 
(jenkins didn't run yet)

This reverts commit 5aa892ed486d42ae6b94c4866b92cd2b382ea640.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32741cf3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32741cf3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32741cf3

Branch: refs/heads/YARN-2928
Commit: 32741cf3d25d85a92e3deb11c302cc2a718d71dd
Parents: 5aa892e
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Fri Mar 13 18:40:20 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Fri Mar 13 18:40:20 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 -
 .../apache/hadoop/hdfs/BlockReaderFactory.java  | 23 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 -
 .../datatransfer/DataTransferProtocol.java  |  5 +-
 .../hdfs/protocol/datatransfer/Receiver.java|  2 +-
 .../hdfs/protocol/datatransfer/Sender.java  |  4 +-
 .../hdfs/server/datanode/DataXceiver.java   | 95 
 .../server/datanode/ShortCircuitRegistry.java   | 13 +--
 .../src/main/proto/datatransfer.proto   | 11 ---
 .../shortcircuit/TestShortCircuitCache.java | 63 -
 10 files changed, 43 insertions(+), 178 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32741cf3/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ff00b0c..c3f9367 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1177,9 +1177,6 @@ Release 2.7.0 - UNRELEASED
   HDFS-7722. DataNode#checkDiskError should also remove Storage when error
   is found. (Lei Xu via Colin P. McCabe)
 
-  HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
-  fail to tell the DFSClient about it because of a network error (cmccabe)
-
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32741cf3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
index 1e915b2..ba48c79 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
@@ -17,8 +17,6 @@
  */
 package org.apache.hadoop.hdfs;
 
-import static 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitFdResponse.USE_RECEIPT_VERIFICATION;
-
 import java.io.BufferedOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
@@ -71,12 +69,6 @@ import com.google.common.base.Preconditions;
 public class BlockReaderFactory implements ShortCircuitReplicaCreator {
   static final Log LOG = LogFactory.getLog(BlockReaderFactory.class);
 
-  public static class FailureInjector {
-public void injectRequestFileDescriptorsFailure() throws IOException {
-  // do nothing
-}
-  }
-
   @VisibleForTesting
   static ShortCircuitReplicaCreator
   createShortCircuitReplicaInfoCallback = null;
@@ -84,11 +76,6 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   private final DFSClient.Conf conf;
 
   /**
-   * Injects failures into specific operations during unit tests.
-   */
-  private final FailureInjector failureInjector;
-
-  /**
* The file name, for logging and debugging purposes.
*/
   private String fileName;
@@ -182,7 +169,6 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 
   public BlockReaderFactory(DFSClient.Conf conf) {
 this.conf = conf;
-this.failureInjector = conf.brfFailureInjector;
 this.remainingCacheTries = conf.nCachedConnRetry;
   }
 
@@ -532,12 +518,11 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 final DataOutputStream out =
 new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
 SlotId slotId = slot == null ? null : slot.getSlotId();
-new Sender(out).requestShortCircuitFds(block, token, slotId, 1, true);
+new Sender(out).requestShortCircuitFds(block, token, slotId, 1);
 DataInputStream in = new DataInputStream(peer.getInputStream());
 BlockOpResponseProto resp = 

[46/50] hadoop git commit: YARN-3205. FileSystemRMStateStore should disable FileSystem Cache to avoid get a Filesystem with an old configuration. Contributed by Zhihai Xu.

2015-03-17 Thread zjshen
YARN-3205. FileSystemRMStateStore should disable FileSystem Cache to avoid get 
a Filesystem with an old configuration. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bc72cc1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bc72cc1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bc72cc1

Branch: refs/heads/YARN-2928
Commit: 3bc72cc16d8c7b8addd8f565523001dfcc32b891
Parents: fc90bf7
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Wed Mar 18 11:53:14 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Wed Mar 18 11:53:19 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../recovery/FileSystemRMStateStore.java| 22 +++-
 .../recovery/TestFSRMStateStore.java|  5 +
 3 files changed, 25 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index bb752ab..c869113 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -72,6 +72,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3305. Normalize AM resource request on app submission. (Rohith 
Sharmaks
 via jianhe)
 
+YARN-3205 FileSystemRMStateStore should disable FileSystem Cache to avoid
+get a Filesystem with an old configuration. (Zhihai Xu via ozawa)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
index 8147597..7652a07 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
@@ -84,7 +84,10 @@ public class FileSystemRMStateStore extends RMStateStore {
   protected static final String AMRMTOKEN_SECRET_MANAGER_NODE =
   AMRMTokenSecretManagerNode;
 
+  @VisibleForTesting
   protected FileSystem fs;
+  @VisibleForTesting
+  protected Configuration fsConf;
 
   private Path rootDirPath;
   @Private
@@ -121,14 +124,23 @@ public class FileSystemRMStateStore extends RMStateStore {
 // create filesystem only now, as part of service-start. By this time, RM 
is
 // authenticated with kerberos so we are good to create a file-system
 // handle.
-    Configuration conf = new Configuration(getConfig());
-    conf.setBoolean("dfs.client.retry.policy.enabled", true);
+    fsConf = new Configuration(getConfig());
+    fsConf.setBoolean("dfs.client.retry.policy.enabled", true);
     String retryPolicy =
-        conf.get(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
+        fsConf.get(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
           YarnConfiguration.DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC);
-    conf.set("dfs.client.retry.policy.spec", retryPolicy);
+    fsConf.set("dfs.client.retry.policy.spec", retryPolicy);
+
+    String scheme = fsWorkingPath.toUri().getScheme();
+    if (scheme == null) {
+      scheme = FileSystem.getDefaultUri(fsConf).getScheme();
+    }
+    if (scheme != null) {
+      String disableCacheName = String.format("fs.%s.impl.disable.cache", scheme);
+      fsConf.setBoolean(disableCacheName, true);
+    }
 
-    fs = fsWorkingPath.getFileSystem(conf);
+    fs = fsWorkingPath.getFileSystem(fsConf);
 mkdirsWithRetries(rmDTSecretManagerRoot);
 mkdirsWithRetries(rmAppRoot);
 mkdirsWithRetries(amrmTokenSecretManagerRoot);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
--
diff --git 

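The key derivation in the patch above can be sketched in isolation: take the scheme from the state-store URI, fall back to the default filesystem's scheme when the path is relative, and format it into the per-scheme `fs.<scheme>.impl.disable.cache` property name. The URIs below are illustrative only.

```java
import java.net.URI;

// Sketch of deriving the per-scheme "disable FileSystem cache" property name.
public class DisableCacheKeySketch {
    static String disableCacheKey(URI workingPath, String defaultFsScheme) {
        String scheme = workingPath.getScheme();
        if (scheme == null) {
            scheme = defaultFsScheme; // e.g. taken from fs.defaultFS
        }
        return scheme == null
            ? null
            : String.format("fs.%s.impl.disable.cache", scheme);
    }

    public static void main(String[] args) {
        System.out.println(disableCacheKey(URI.create("hdfs://nn:8020/rmstore"), "hdfs"));
        // A relative path has no scheme, so the default FS scheme is used.
        System.out.println(disableCacheKey(URI.create("/rmstore"), "file"));
    }
}
```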
[44/50] hadoop git commit: YARN-3305. Normalize AM resource request on app submission. Contributed by Rohith Sharmaks

2015-03-17 Thread zjshen
YARN-3305. Normalize AM resource request on app submission. Contributed by 
Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/968425e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/968425e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/968425e9

Branch: refs/heads/YARN-2928
Commit: 968425e9f7b850ff9c2ab8ca37a64c3fdbe77dbf
Parents: 32b4330
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 13:49:59 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 13:49:59 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  7 +++--
 .../server/resourcemanager/RMAppManager.java|  6 -
 .../server/resourcemanager/TestAppManager.java  |  5 
 .../resourcemanager/TestClientRMService.java|  5 
 .../capacity/TestCapacityScheduler.java | 27 
 5 files changed, 47 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fee0ce0..bb752ab 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -66,8 +66,11 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
- YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
- via devaraj)
+YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
+via devaraj)
+
+YARN-3305. Normalize AM resource request on app submission. (Rohith 
Sharmaks
+via jianhe)
 
 Release 2.7.0 - UNRELEASED
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index 8dcfe67..9197630 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -390,7 +390,11 @@ public class RMAppManager implements EventHandler<RMAppManagerEvent>,
             + " for application " + submissionContext.getApplicationId(), e);
         throw e;
       }
-
+  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
+  scheduler.getClusterResource(),
+  scheduler.getMinimumResourceCapability(),
+  scheduler.getMaximumResourceCapability(),
+  scheduler.getMinimumResourceCapability());
   return amReq;
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
index d2ac4ef..5ebc68c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
@@ -67,6 +67,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.YarnScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.ClientToAMTokenSecretManagerInRM;
 import org.apache.hadoop.yarn.server.security.ApplicationACLsManager;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
 import org.junit.Before;
@@ -604,6 +605,10 @@ public class TestAppManager{
 
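Conceptually, the normalization call added above rounds the AM's resource request up to a multiple of the scheduler's increment and clamps it into the configured `[minimum, maximum]` range. A simplified numeric sketch (illustrative values, not the actual `SchedulerUtils` code):

```java
// Sketch of resource-request normalization: round up to the increment, clamp
// to the scheduler's minimum and maximum allocation. Units are MB.
public class NormalizeSketch {
    static int normalizeMemory(int requested, int minimum, int maximum, int increment) {
        int raised = Math.max(requested, minimum);           // raise to the floor
        int rounded = ((raised + increment - 1) / increment) // round up to a
            * increment;                                     // multiple of increment
        return Math.min(rounded, maximum);                   // cap at the ceiling
    }

    public static void main(String[] args) {
        // 1000 MB rounds up to the next 512 MB step.
        System.out.println(normalizeMemory(1000, 512, 8192, 512));  // 1024
        // A request below the minimum is raised to it.
        System.out.println(normalizeMemory(100, 512, 8192, 512));   // 512
        // A request above the maximum is capped.
        System.out.println(normalizeMemory(10000, 512, 8192, 512)); // 8192
    }
}
```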

[02/50] hadoop git commit: HADOOP-11711. Provide a default value for AES/CTR/NoPadding CryptoCodec classes.

2015-03-17 Thread zjshen
HADOOP-11711. Provide a default value for AES/CTR/NoPadding CryptoCodec classes.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/387f271c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/387f271c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/387f271c

Branch: refs/heads/YARN-2928
Commit: 387f271c81f7b3bf53bddc5368d5f4486530c2e1
Parents: a852910
Author: Andrew Wang w...@apache.org
Authored: Thu Mar 12 21:40:58 2015 -0700
Committer: Andrew Wang w...@apache.org
Committed: Thu Mar 12 21:40:58 2015 -0700

--
 .../org/apache/hadoop/crypto/CryptoCodec.java   | 10 +-
 .../fs/CommonConfigurationKeysPublic.java   | 11 ++
 .../crypto/TestCryptoStreamsForLocalFS.java |  8 +
 ...stCryptoStreamsWithJceAesCtrCryptoCodec.java | 38 
 ...yptoStreamsWithOpensslAesCtrCryptoCodec.java |  7 
 5 files changed, 66 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/387f271c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
index c5ac2ae..493e23d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.util.PerformanceAdvisory;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.slf4j.Logger;
@@ -105,7 +106,14 @@ public abstract class CryptoCodec implements Configurable {
     List<Class<? extends CryptoCodec>> result = Lists.newArrayList();
     String configName = HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_KEY_PREFIX +
         cipherSuite.getConfigSuffix();
-    String codecString = conf.get(configName);
+    String codecString;
+    if (configName.equals(CommonConfigurationKeysPublic
+        .HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_AES_CTR_NOPADDING_KEY)) {
+      codecString = conf.get(configName, CommonConfigurationKeysPublic
+          .HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_AES_CTR_NOPADDING_DEFAULT);
+    } else {
+      codecString = conf.get(configName);
+    }
     if (codecString == null) {
       PerformanceAdvisory.LOG.debug(
           "No crypto codec classes with cipher suite configured.");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/387f271c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
index 470b4d0..87c2aba 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
@@ -19,6 +19,9 @@
 package org.apache.hadoop.fs;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.crypto.CipherSuite;
+import org.apache.hadoop.crypto.JceAesCtrCryptoCodec;
+import org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec;
 
 /** 
  * This class contains constants for configuration keys used
@@ -299,6 +302,14 @@ public class CommonConfigurationKeysPublic {
     "hadoop.security.saslproperties.resolver.class";
   public static final String HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_KEY_PREFIX =
     "hadoop.security.crypto.codec.classes";
+  public static final String
+      HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_AES_CTR_NOPADDING_KEY =
+      HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_KEY_PREFIX
+      + CipherSuite.AES_CTR_NOPADDING.getConfigSuffix();
+  public static final String
+      HADOOP_SECURITY_CRYPTO_CODEC_CLASSES_AES_CTR_NOPADDING_DEFAULT =
+      OpensslAesCtrCryptoCodec.class.getName() + "," +
+      JceAesCtrCryptoCodec.class.getName();
   /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
   public static final String HADOOP_SECURITY_CRYPTO_CIPHER_SUITE_KEY =
     "hadoop.security.crypto.cipher.suite";


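The per-key fallback this patch adds can be modeled without Hadoop at all: when the AES/CTR/NoPadding key is unset, a built-in default list of codec class names is returned instead of `null`, while other cipher-suite keys keep the old behavior. Here a plain `Map` stands in for Hadoop's `Configuration`, and the key/class names are taken from the patch above.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of CryptoCodec's config lookup with a default for one specific key.
public class CodecConfigSketch {
    static final String KEY =
        "hadoop.security.crypto.codec.classes.aes.ctr.nopadding";
    static final String DEFAULT =
        "org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,"
        + "org.apache.hadoop.crypto.JceAesCtrCryptoCodec";

    static String codecClasses(Map<String, String> conf, String configName) {
        if (KEY.equals(configName)) {
            // Unset key falls back to the built-in default codec list.
            return conf.getOrDefault(configName, DEFAULT);
        }
        return conf.get(configName); // other suites: null when unset
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(codecClasses(conf, KEY));
        System.out.println(codecClasses(conf, "hadoop.security.crypto.codec.classes.other"));
    }
}
```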
[48/50] hadoop git commit: YARN-3039. Implemented the app-level timeline aggregator discovery service. Contributed by Junping Du.

2015-03-17 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a637914/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
index cec1d71..dd64629 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
@@ -32,6 +32,7 @@ import java.util.concurrent.Future;
 
 import com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.junit.Test;
 
 public class TestTimelineAggregatorsCollection {
@@ -45,11 +46,11 @@ public class TestTimelineAggregatorsCollection {
     final int NUM_APPS = 5;
     List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
     for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
+      final ApplicationId appId = ApplicationId.newInstance(0L, i);
       Callable<Boolean> task = new Callable<Boolean>() {
         public Boolean call() {
           AppLevelTimelineAggregator aggregator =
-              new AppLevelTimelineAggregator(appId);
+              new AppLevelTimelineAggregator(appId.toString());
           return (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
         }
       };
@@ -79,14 +80,14 @@ public class TestTimelineAggregatorsCollection {
     final int NUM_APPS = 5;
     List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
     for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
+      final ApplicationId appId = ApplicationId.newInstance(0L, i);
       Callable<Boolean> task = new Callable<Boolean>() {
         public Boolean call() {
           AppLevelTimelineAggregator aggregator =
-              new AppLevelTimelineAggregator(appId);
+              new AppLevelTimelineAggregator(appId.toString());
           boolean successPut =
               (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
-          return successPut && aggregatorCollection.remove(appId);
+          return successPut && aggregatorCollection.remove(appId.toString());
         }
       };
       tasks.add(task);


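The race that test exercises, many threads registering an aggregator for the same app id with exactly one instance winning, can be reproduced with the JDK's own `ConcurrentMap.putIfAbsent` (note the JDK method returns `null` on success, unlike the collection's helper above, which returns the kept aggregator):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of concurrent registration: exactly one putIfAbsent call wins.
public class PutIfAbsentSketch {
    public static void main(String[] args) throws Exception {
        final ConcurrentHashMap<String, Object> collection = new ConcurrentHashMap<>();
        final String appId = "application_0_0001"; // illustrative id
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Boolean>> results = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            results.add(pool.submit(new Callable<Boolean>() {
                public Boolean call() {
                    Object aggregator = new Object();
                    // JDK semantics: null return means this instance was kept.
                    return collection.putIfAbsent(appId, aggregator) == null;
                }
            }));
        }
        int winners = 0;
        for (Future<Boolean> f : results) {
            if (f.get()) {
                winners++;
            }
        }
        pool.shutdown();
        System.out.println("winners=" + winners); // exactly one thread wins
        System.out.println("size=" + collection.size());
    }
}
```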

[50/50] hadoop git commit: YARN-3039. Implemented the app-level timeline aggregator discovery service. Contributed by Junping Du.

2015-03-17 Thread zjshen
YARN-3039. Implemented the app-level timeline aggregator discovery service. 
Contributed by Junping Du.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a637914
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a637914
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a637914

Branch: refs/heads/YARN-2928
Commit: 8a637914c13baae6749b481551901cfac94694f4
Parents: 5de4026
Author: Zhijie Shen zjs...@apache.org
Authored: Tue Mar 17 20:23:49 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Tue Mar 17 20:23:49 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../api/protocolrecords/AllocateResponse.java   |  33 ++
 .../hadoop/yarn/conf/YarnConfiguration.java |  12 +
 .../src/main/proto/yarn_service_protos.proto|   1 +
 .../distributedshell/ApplicationMaster.java |  82 -
 .../hadoop/yarn/client/api/AMRMClient.java  |  18 +
 .../yarn/client/api/async/AMRMClientAsync.java  |  17 +
 .../api/async/impl/AMRMClientAsyncImpl.java |  15 +-
 .../impl/pb/AllocateResponsePBImpl.java |  17 +
 .../hadoop/yarn/client/api/TimelineClient.java  |   6 +-
 .../client/api/impl/TimelineClientImpl.java | 133 ++-
 .../hadoop/yarn/webapp/util/WebAppUtils.java|   2 +-
 .../src/main/resources/yarn-default.xml |  13 +
 .../hadoop/yarn/TestContainerLaunchRPC.java |  16 +-
 .../java/org/apache/hadoop/yarn/TestRPC.java| 247 -
 .../hadoop/yarn/api/TestAllocateResponse.java   |  17 +
 .../hadoop-yarn-server-common/pom.xml   |   1 +
 .../api/AggregatorNodemanagerProtocol.java  |  56 +++
 .../api/AggregatorNodemanagerProtocolPB.java|  33 ++
 ...gregatorNodemanagerProtocolPBClientImpl.java |  94 +
 ...regatorNodemanagerProtocolPBServiceImpl.java |  61 
 .../protocolrecords/NodeHeartbeatRequest.java   |  23 ++
 .../protocolrecords/NodeHeartbeatResponse.java  |   4 +
 .../ReportNewAggregatorsInfoRequest.java|  53 +++
 .../ReportNewAggregatorsInfoResponse.java   |  32 ++
 .../impl/pb/NodeHeartbeatRequestPBImpl.java |  61 
 .../impl/pb/NodeHeartbeatResponsePBImpl.java|  47 +++
 .../ReportNewAggregatorsInfoRequestPBImpl.java  | 142 
 .../ReportNewAggregatorsInfoResponsePBImpl.java |  74 
 .../server/api/records/AppAggregatorsMap.java   |  33 ++
 .../impl/pb/AppAggregatorsMapPBImpl.java| 151 
 .../proto/aggregatornodemanager_protocol.proto  |  29 ++
 .../yarn_server_common_service_protos.proto |  21 ++
 .../java/org/apache/hadoop/yarn/TestRPC.java| 345 +++
 .../hadoop/yarn/TestYarnServerApiClasses.java   |  17 +
 .../hadoop/yarn/server/nodemanager/Context.java |  13 +
 .../yarn/server/nodemanager/NodeManager.java|  46 ++-
 .../nodemanager/NodeStatusUpdaterImpl.java  |   7 +-
 .../aggregatormanager/NMAggregatorService.java  | 113 ++
 .../application/ApplicationImpl.java|   4 +
 .../ApplicationMasterService.java   |   6 +
 .../resourcemanager/ResourceTrackerService.java |  68 +++-
 .../server/resourcemanager/rmapp/RMApp.java |  17 +
 .../rmapp/RMAppAggregatorUpdateEvent.java   |  36 ++
 .../resourcemanager/rmapp/RMAppEventType.java   |   3 +
 .../server/resourcemanager/rmapp/RMAppImpl.java |  51 ++-
 .../applicationsmanager/MockAsm.java|  12 +
 .../server/resourcemanager/rmapp/MockRMApp.java |  15 +
 .../PerNodeTimelineAggregatorsAuxService.java   |   5 +-
 .../TimelineAggregatorsCollection.java  |  78 -
 .../TestTimelineAggregatorsCollection.java  |  11 +-
 51 files changed, 2103 insertions(+), 291 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a637914/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e62bcf9..47351c6 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -29,6 +29,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
 YARN-3264. Created backing storage write interface and a POC only FS based
 storage implementation. (Vrushali C via zjshen)
 
+YARN-3039. Implemented the app-level timeline aggregator discovery service.
+(Junping Du via zjshen)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a637914/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
 

[39/50] hadoop git commit: HADOOP-11721. switch jenkins patch tester to use git clean instead of mvn clean (temp commit)

2015-03-17 Thread zjshen
HADOOP-11721. switch jenkins patch tester to use git clean instead of mvn clean 
(temp commit)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a89b087c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a89b087c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a89b087c

Branch: refs/heads/YARN-2928
Commit: a89b087c45e549e1f5b5fc953de4657fcbb97195
Parents: 7179f94
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 17 21:39:14 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 17 21:39:14 2015 +0530

--
 dev-support/test-patch.sh | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a89b087c/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index b0fbb80..574a4fd 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -292,6 +292,10 @@ prebuildWithoutPatch () {
     cd -
   fi
   echo "Compiling $(pwd)"
+  if [[ -d $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs ]]; then
+    echo "Changing permission $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs to avoid broken builds"
+    chmod +x -R $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
+  fi
   echo "$MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1"
   $MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
   if [[ $? != 0 ]] ; then



[45/50] hadoop git commit: HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager go down when old token cannot be deleted. Contributed by Arun Suresh.

2015-03-17 Thread zjshen
HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager 
go down when old token cannot be deleted. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc90bf7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc90bf7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc90bf7b

Branch: refs/heads/YARN-2928
Commit: fc90bf7b27cc20486f2806670a14fd7d654b0a31
Parents: 968425e
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 17 19:41:36 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 17 19:41:36 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../ZKDelegationTokenSecretManager.java | 21 ++--
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3817054..a6bd68d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -,6 +,10 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
 tags in hadoop-tools. (Akira AJISAKA via ozawa)
 
+HADOOP-11722. Some Instances of Services using
+ZKDelegationTokenSecretManager go down when old token cannot be deleted.
+(Arun Suresh via atm)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ec522dcf..73c3ab8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.client.ZooKeeperSaslClient;
 import org.apache.zookeeper.data.ACL;
@@ -709,7 +710,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
LOG.debug("Attempted to delete a non-existing znode " + nodeRemovePath);
@@ -761,7 +770,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
LOG.debug("Attempted to remove a non-existing znode " + nodeRemovePath);



[34/50] hadoop git commit: MAPREDUCE-5755. MapTask.MapOutputBuffer#compare/swap should have @Override annotation. (ozawa)

2015-03-17 Thread zjshen
MAPREDUCE-5755. MapTask.MapOutputBuffer#compare/swap should have @Override 
annotation. (ozawa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bb243cea
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bb243cea
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bb243cea

Branch: refs/heads/YARN-2928
Commit: bb243cea93b6872ef8956311a2290ca0b83ebb22
Parents: f222bde
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 14:55:15 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 14:55:15 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../src/main/java/org/apache/hadoop/mapred/MapTask.java   | 2 ++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb243cea/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ee21b70..b5baf51 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -263,6 +263,9 @@ Release 2.8.0 - UNRELEASED
 MAPREDUCE-4414. Add main methods to JobConf and YarnConfiguration,
 for debug purposes. (Plamen Jeliazkov via harsh)
 
+MAPREDUCE-5755. MapTask.MapOutputBuffer#compare/swap should have
+@Override annotation. (ozawa)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bb243cea/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
index 8094317..c4957b7 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
@@ -1255,6 +1255,7 @@ public class MapTask extends Task {
  * Compare by partition, then by key.
  * @see IndexedSortable#compare
  */
+@Override
 public int compare(final int mi, final int mj) {
   final int kvi = offsetFor(mi % maxRec);
   final int kvj = offsetFor(mj % maxRec);
@@ -1278,6 +1279,7 @@ public class MapTask extends Task {
  * Swap metadata for items i, j
  * @see IndexedSortable#swap
  */
+@Override
 public void swap(final int mi, final int mj) {
   int iOff = (mi % maxRec) * METASIZE;
   int jOff = (mj % maxRec) * METASIZE;
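The annotation added above is a compile-time safeguard only: a method marked `@Override` whose signature does not actually match a supertype method is rejected by `javac`, instead of silently becoming an unused overload. A minimal sketch, using a toy `IndexedSortable` modeled on the interface named in the hunk headers (the `data` field and comparison logic are illustrative, not the `MapOutputBuffer` implementation):

```java
// Toy sorter interface, mirroring IndexedSortable's compare/swap contract.
interface IndexedSortable {
    int compare(int i, int j);
    void swap(int i, int j);
}

public class OverrideDemo implements IndexedSortable {
    private final int[] data = {3, 1, 2};

    // With @Override, misspelling the name or changing a parameter type
    // (e.g. compare(long, long)) is a compile error, not a latent bug.
    @Override
    public int compare(final int i, final int j) {
        return Integer.compare(data[i], data[j]);
    }

    @Override
    public void swap(final int i, final int j) {
        int tmp = data[i];
        data[i] = data[j];
        data[j] = tmp;
    }

    public static void main(String[] args) {
        OverrideDemo d = new OverrideDemo();
        if (d.compare(0, 1) > 0) {
            d.swap(0, 1); // {3,1,2} becomes {1,3,2}
        }
        System.out.println(d.data[0]); // prints 1
    }
}
```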



[03/50] hadoop git commit: YARN-3267. Timelineserver applies the ACL rules after applying the limit on the number of records (Chang Li via jeagles)

2015-03-17 Thread zjshen
YARN-3267. Timelineserver applies the ACL rules after applying the limit on the 
number of records (Chang Li via jeagles)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8180e676
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8180e676
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8180e676

Branch: refs/heads/YARN-2928
Commit: 8180e676abb2bb500a48b3a0c0809d2a807ab235
Parents: 387f271
Author: Jonathan Eagles jeag...@gmail.com
Authored: Fri Mar 13 12:04:30 2015 -0500
Committer: Jonathan Eagles jeag...@gmail.com
Committed: Fri Mar 13 12:04:30 2015 -0500

--
 .../jobhistory/TestJobHistoryEventHandler.java  | 14 +++---
 .../mapred/TestMRTimelineEventHandling.java | 12 ++---
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../distributedshell/TestDistributedShell.java  |  4 +-
 .../server/timeline/LeveldbTimelineStore.java   | 18 +--
 .../server/timeline/MemoryTimelineStore.java| 12 -
 .../server/timeline/TimelineDataManager.java| 50 +++-
 .../yarn/server/timeline/TimelineReader.java|  3 +-
 .../timeline/TestLeveldbTimelineStore.java  | 16 +++
 .../timeline/TestTimelineDataManager.java   | 26 +-
 .../server/timeline/TimelineStoreTestUtils.java | 33 +
 11 files changed, 126 insertions(+), 65 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8180e676/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
index de35d84..43e3dbe 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
@@ -464,7 +464,7 @@ public class TestJobHistoryEventHandler {
   t.appAttemptId, 200, t.containerId, nmhost, 3000, 4000),
   currentTime - 10));
  TimelineEntities entities = ts.getEntities("MAPREDUCE_JOB", null, null,
-  null, null, null, null, null, null);
+  null, null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   TimelineEntity tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -480,7 +480,7 @@ public class TestJobHistoryEventHandler {
  new HashMap<JobACL, AccessControlList>(), "default"),
   currentTime + 10));
  entities = ts.getEntities("MAPREDUCE_JOB", null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -498,7 +498,7 @@ public class TestJobHistoryEventHandler {
  new JobQueueChangeEvent(TypeConverter.fromYarn(t.jobId), "q2"),
   currentTime - 20));
  entities = ts.getEntities("MAPREDUCE_JOB", null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -520,7 +520,7 @@ public class TestJobHistoryEventHandler {
   new JobFinishedEvent(TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0,
   0, new Counters(), new Counters(), new Counters()), 
currentTime));
  entities = ts.getEntities("MAPREDUCE_JOB", null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -546,7 +546,7 @@ public class TestJobHistoryEventHandler {
 new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId),
 0, 0, 0, JobStateInternal.KILLED.toString()), currentTime + 20));
  entities = ts.getEntities("MAPREDUCE_JOB", null, null, null,
-  null, 

[07/50] hadoop git commit: HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt. Contributed by Allen Wittenauer.

2015-03-17 Thread zjshen
HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt. 
Contributed by Allen Wittenauer.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dfd32017
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dfd32017
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dfd32017

Branch: refs/heads/YARN-2928
Commit: dfd32017001e6902829671dc8cc68afbca61e940
Parents: 6acb7f2
Author: Konstantin V Shvachko s...@apache.org
Authored: Fri Mar 13 13:32:45 2015 -0700
Committer: Konstantin V Shvachko s...@apache.org
Committed: Fri Mar 13 13:32:45 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dfd32017/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a149f18..c3f9367 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -746,6 +746,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7435. PB encoding of block reports is very inefficient.
 (Daryn Sharp via kihwal)
 
+HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
+(Allen Wittenauer via shv)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.
@@ -10299,8 +10302,6 @@ Release 0.22.0 - 2011-11-29
 
 HDFS-2287. TestParallelRead has a small off-by-one bug. (todd)
 
-Release 0.21.1 - Unreleased
-
 HDFS-1466. TestFcHdfsSymlink relies on /tmp/test not existing. (eli)
 
 HDFS-874. TestHDFSFileContextMainOperations fails on weirdly 



[32/50] hadoop git commit: HDFS-2360. Ugly stacktrace when quota exceeds. (harsh)

2015-03-17 Thread zjshen
HDFS-2360. Ugly stacktrace when quota exceeds. (harsh)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/046521cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/046521cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/046521cd

Branch: refs/heads/YARN-2928
Commit: 046521cd6511b7fc6d9478cb2bed90d8e75fca20
Parents: 5608520
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 00:59:50 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 10:28:17 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 2 ++
 .../main/java/org/apache/hadoop/hdfs/DFSOutputStream.java   | 9 -
 2 files changed, 10 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/046521cd/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d313b6c..9339b97 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -321,6 +321,8 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HDFS-2360. Ugly stacktrace when quota exceeds. (harsh)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/046521cd/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index 130bb6e..286ae7d 100755
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -57,6 +57,7 @@ import org.apache.hadoop.fs.Syncable;
 import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;
 import org.apache.hadoop.hdfs.client.HdfsDataOutputStream.SyncFlag;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
@@ -551,7 +552,13 @@ public class DFSOutputStream extends FSOutputSummer
 } catch (Throwable e) {
   // Log warning if there was a real error.
   if (restartingNodeIndex.get() == -1) {
-DFSClient.LOG.warn("DataStreamer Exception", e);
+// Since their messages are descriptive enough, do not always
+// log a verbose stack-trace WARN for quota exceptions.
+if (e instanceof QuotaExceededException) {
+  DFSClient.LOG.debug("DataStreamer Quota Exception", e);
+} else {
+  DFSClient.LOG.warn("DataStreamer Exception", e);
+}
   }
   if (e instanceof IOException) {
 setLastException((IOException)e);

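The patch above chooses the log level from the exception type: a quota error's message is already self-explanatory, so it is demoted to DEBUG, while unexpected streamer failures keep the WARN with a full stack trace. A hedged sketch of that pattern follows; it uses `java.util.logging` for self-containedness (HDFS itself uses commons-logging), and the `QuotaExceededException` here is a local stand-in, not the HDFS class.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuotaLogDemo {
    // Stand-in for HDFS's QuotaExceededException: its message already says
    // what went wrong, so a full stack trace only adds noise.
    static class QuotaExceededException extends Exception {
        QuotaExceededException(String msg) { super(msg); }
    }

    static final Logger LOG = Logger.getLogger("DFSClient");

    // Pick the log level from the exception type, as the patch does.
    static Level levelFor(Throwable e) {
        return (e instanceof QuotaExceededException) ? Level.FINE : Level.WARNING;
    }

    static void logStreamerFailure(Throwable e) {
        LOG.log(levelFor(e), "DataStreamer Exception", e);
    }

    public static void main(String[] args) {
        // FINE (debug): hidden at the default log level.
        logStreamerFailure(new QuotaExceededException(
            "The DiskSpace quota of /user/alice is exceeded"));
        // WARNING: surfaced with its stack trace.
        logStreamerFailure(new RuntimeException("unexpected failure"));
    }
}
```

The design choice is the same either way: expected, self-describing failures stay quiet; anything else stays loud.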


[26/50] hadoop git commit: MAPREDUCE-6105. Inconsistent configuration in property mapreduce.reduce.shuffle.merge.percent. Contributed by Ray Chiang.

2015-03-17 Thread zjshen
MAPREDUCE-6105. Inconsistent configuration in property
mapreduce.reduce.shuffle.merge.percent. Contributed by Ray Chiang.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/685dbafb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/685dbafb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/685dbafb

Branch: refs/heads/YARN-2928
Commit: 685dbafbe2154e5bf4b638da0668ce32d8c879b0
Parents: ce5de93
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 01:17:34 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 02:28:09 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt| 3 +++
 .../src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java  | 1 +
 .../apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java   | 5 +++--
 3 files changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/685dbafb/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index d02d725..52880f6 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -253,6 +253,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-6105. Inconsistent configuration in property
+mapreduce.reduce.shuffle.merge.percent. (Ray Chiang via harsh)
+
 MAPREDUCE-4414. Add main methods to JobConf and YarnConfiguration,
 for debug purposes. (Plamen Jeliazkov via harsh)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/685dbafb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index 3aa304a..f0a6ddf 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
@@ -305,6 +305,7 @@ public interface MRJobConfig {
  = "mapreduce.reduce.shuffle.memory.limit.percent";
 
  public static final String SHUFFLE_MERGE_PERCENT = "mapreduce.reduce.shuffle.merge.percent";
+  public static final float DEFAULT_SHUFFLE_MERGE_PERCENT = 0.66f;
 
  public static final String REDUCE_FAILURES_MAXPERCENT = "mapreduce.reduce.failures.maxpercent";
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/685dbafb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
index a4b1aa8..8bf17ef 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
@@ -191,8 +191,9 @@ public class MergeManagerImpl<K, V> implements MergeManager<K, V> {
 this.memToMemMergeOutputsThreshold = 
 jobConf.getInt(MRJobConfig.REDUCE_MEMTOMEM_THRESHOLD, 
ioSortFactor);
 this.mergeThreshold = (long)(this.memoryLimit * 
-  jobConf.getFloat(MRJobConfig.SHUFFLE_MERGE_PERCENT, 
-   0.90f));
+  jobConf.getFloat(
+MRJobConfig.SHUFFLE_MERGE_PERCENT,
+MRJobConfig.DEFAULT_SHUFFLE_MERGE_PERCENT));
 LOG.info("MergerManager: memoryLimit=" + memoryLimit + ", " +
  "maxSingleShuffleLimit=" + maxSingleShuffleLimit + ", " +
  "mergeThreshold=" + mergeThreshold + ", " +

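The substance of the fix is replacing the hard-coded `0.90f` fallback with the named `DEFAULT_SHUFFLE_MERGE_PERCENT = 0.66f`, so the code default matches mapred-default.xml. The lookup can be sketched as follows; a `java.util.Properties` stands in for Hadoop's `JobConf#getFloat`, and only the two constant values are taken from the patch itself.

```java
import java.util.Properties;

public class MergeThresholdDemo {
    // Constants mirroring MRJobConfig (values as added by the patch).
    static final String SHUFFLE_MERGE_PERCENT =
        "mapreduce.reduce.shuffle.merge.percent";
    static final float DEFAULT_SHUFFLE_MERGE_PERCENT = 0.66f;

    // Properties stands in for JobConf#getFloat(key, default).
    static float getFloat(Properties conf, String key, float dflt) {
        String v = conf.getProperty(key);
        return (v == null) ? dflt : Float.parseFloat(v);
    }

    // Same computation shape as MergeManagerImpl's mergeThreshold.
    static long mergeThreshold(Properties conf, long memoryLimit) {
        return (long) (memoryLimit *
            getFloat(conf, SHUFFLE_MERGE_PERCENT, DEFAULT_SHUFFLE_MERGE_PERCENT));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();               // key unset: use default
        System.out.println(mergeThreshold(conf, 1000));   // prints 660
        conf.setProperty(SHUFFLE_MERGE_PERCENT, "0.90");  // explicit override
        System.out.println(mergeThreshold(conf, 1000));   // prints 900
    }
}
```

An unset key now yields the documented 0.66 rather than a silent 0.90, while explicit configuration behaves exactly as before.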


[16/50] hadoop git commit: HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error (cmccabe)

2015-03-17 Thread zjshen
HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail 
to tell the DFSClient about it because of a network error (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc9cb3e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc9cb3e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc9cb3e2

Branch: refs/heads/YARN-2928
Commit: bc9cb3e271b22069a15ca110cd60c860250aaab2
Parents: 79426f3
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Sat Mar 14 22:36:46 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Sat Mar 14 22:36:46 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/BlockReaderFactory.java  | 23 -
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +
 .../datatransfer/DataTransferProtocol.java  |  5 +-
 .../hdfs/protocol/datatransfer/Receiver.java|  2 +-
 .../hdfs/protocol/datatransfer/Sender.java  |  4 +-
 .../hdfs/server/datanode/DataXceiver.java   | 95 
 .../server/datanode/ShortCircuitRegistry.java   | 13 ++-
 .../src/main/proto/datatransfer.proto   | 11 +++
 .../shortcircuit/TestShortCircuitCache.java | 63 +
 10 files changed, 178 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc9cb3e2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c3f9367..93237af 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1154,6 +1154,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7903. Cannot recover block after truncate and delete snapshot.
 (Plamen Jeliazkov via shv)
 
+HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
+fail to tell the DFSClient about it because of a network error (cmccabe)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc9cb3e2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
index ba48c79..1e915b2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitFdResponse.USE_RECEIPT_VERIFICATION;
+
 import java.io.BufferedOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
@@ -69,6 +71,12 @@ import com.google.common.base.Preconditions;
 public class BlockReaderFactory implements ShortCircuitReplicaCreator {
   static final Log LOG = LogFactory.getLog(BlockReaderFactory.class);
 
+  public static class FailureInjector {
+public void injectRequestFileDescriptorsFailure() throws IOException {
+  // do nothing
+}
+  }
+
   @VisibleForTesting
   static ShortCircuitReplicaCreator
   createShortCircuitReplicaInfoCallback = null;
@@ -76,6 +84,11 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   private final DFSClient.Conf conf;
 
   /**
+   * Injects failures into specific operations during unit tests.
+   */
+  private final FailureInjector failureInjector;
+
+  /**
* The file name, for logging and debugging purposes.
*/
   private String fileName;
@@ -169,6 +182,7 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 
   public BlockReaderFactory(DFSClient.Conf conf) {
 this.conf = conf;
+this.failureInjector = conf.brfFailureInjector;
 this.remainingCacheTries = conf.nCachedConnRetry;
   }
 
@@ -518,11 +532,12 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 final DataOutputStream out =
 new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
 SlotId slotId = slot == null ? null : slot.getSlotId();
-new Sender(out).requestShortCircuitFds(block, token, slotId, 1);
+new Sender(out).requestShortCircuitFds(block, token, slotId, 1, true);
 DataInputStream in = new DataInputStream(peer.getInputStream());
 BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
 
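The `FailureInjector` added to `BlockReaderFactory` is a test seam: production code calls a no-op hook at the fault point, and unit tests substitute an injector that throws, making the "slot allocated but response never delivered" network failure reproducible. A generic sketch of that pattern follows; everything except the `FailureInjector` idea itself (the class name, `requestResource`, the exception message) is illustrative rather than the HDFS API.

```java
public class FailureInjectionDemo {
    // Default injector does nothing, so normal runs are unaffected.
    static class FailureInjector {
        public void injectRequestFailure() throws Exception {
            // do nothing
        }
    }

    final FailureInjector injector;

    FailureInjectionDemo(FailureInjector injector) {
        this.injector = injector;
    }

    String requestResource() throws Exception {
        injector.injectRequestFailure(); // test seam: may throw here
        return "resource";
    }

    public static void main(String[] args) throws Exception {
        // Normal path: no-op injector, request succeeds.
        System.out.println(new FailureInjectionDemo(new FailureInjector())
            .requestResource()); // prints resource

        // Test path: a subclass simulates the network error deterministically.
        FailureInjector failing = new FailureInjector() {
            @Override public void injectRequestFailure() throws Exception {
                throw new Exception("simulated network error");
            }
        };
        try {
            new FailureInjectionDemo(failing).requestResource();
        } catch (Exception e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Injecting the failure through a pluggable object, rather than sleeping or killing sockets, keeps the error path deterministic and cheap to exercise.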

[43/50] hadoop git commit: Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

2015-03-17 Thread zjshen
Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

This reverts commit c2b185def846f5577a130003a533b9c377b58fab.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32b43304
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32b43304
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32b43304

Branch: refs/heads/YARN-2928
Commit: 32b43304563c2430c00bc3e142a962d2bc5f4d58
Parents: d884670
Author: Karthik Kambatla ka...@apache.org
Authored: Tue Mar 17 12:31:15 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Tue Mar 17 12:31:15 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 --
 .../dev-support/findbugs-exclude.xml| 27 
 .../scheduler/fair/AllocationConfiguration.java | 13 +++---
 .../fair/AllocationFileLoaderService.java   |  2 +-
 .../scheduler/fair/FSOpDurations.java   |  3 ---
 5 files changed, 31 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f5b72d7..fee0ce0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -320,8 +320,6 @@ Release 2.7.0 - UNRELEASED
 YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager
 restart. (Jason Lowe via junping_du) 
 
-YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)
-
 YARN-3124. Fixed CS LeafQueue/ParentQueue to use QueueCapacities to track
 capacities-by-label. (Wangda Tan via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a89884a..943ecb0 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -152,12 +152,22 @@
     <Class name="org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService" />
+    <Field name="allocFile" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
   <!-- Inconsistent sync warning - minimumAllocation is only initialized once and never changed -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler" />
     <Field name="minimumAllocation" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode" />
+    <Method name="reserveResource" />
+    <Bug pattern="BC_UNCONFIRMED_CAST" />
+  </Match>
   <!-- Inconsistent sync warning - reinitialize read from other queue does not need sync-->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue" />
@@ -215,6 +225,18 @@
     <Field name="scheduleAsynchronously" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Inconsistent sync warning - updateInterval is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="updateInterval" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <!-- Inconsistent sync warning - callDurationMetrics is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="fsOpDurations" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
 
   <!-- Inconsistent sync warning - numRetries is only initialized once and never changed -->
   <Match>
@@ -415,6 +437,11 @@
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
   <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="allocConf" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode" />
     <Field name="numContainers" />
     <Bug pattern="VO_VOLATILE_INCREMENT" />

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java

[38/50] hadoop git commit: YARN-3197. Confusing log generated by CapacityScheduler. Contributed by Varun Saxena.

2015-03-17 Thread zjshen
YARN-3197. Confusing log generated by CapacityScheduler. Contributed by
Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7179f94f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7179f94f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7179f94f

Branch: refs/heads/YARN-2928
Commit: 7179f94f9d000fc52bd9ce5aa9741aba97ec3ee8
Parents: 018893e
Author: Devaraj K deva...@apache.org
Authored: Tue Mar 17 15:57:57 2015 +0530
Committer: Devaraj K deva...@apache.org
Committed: Tue Mar 17 15:57:57 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java   | 5 +++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7179f94f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cb68480..82934ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -63,6 +63,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+ YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
+ via devaraj)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7179f94f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 28ce264..756e537 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1279,7 +1279,8 @@ public class CapacityScheduler extends
   protected synchronized void completedContainer(RMContainer rmContainer,
   ContainerStatus containerStatus, RMContainerEventType event) {
 if (rmContainer == null) {
-  LOG.info("Null container completed...");
+  LOG.info("Container " + containerStatus.getContainerId() +
+    " completed with event " + event);
   return;
 }
 
@@ -1291,7 +1292,7 @@ public class CapacityScheduler extends
 ApplicationId appId =
 container.getId().getApplicationAttemptId().getApplicationId();
 if (application == null) {
-  LOG.info("Container " + container + " of" + " unknown application "
+  LOG.info("Container " + container + " of" + " finished application "
  + appId + " completed with event " + event);
   return;
 }



[47/50] hadoop git commit: Merge remote-tracking branch 'apache/trunk' into YARN-2928

2015-03-17 Thread zjshen
Merge remote-tracking branch 'apache/trunk' into YARN-2928


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5de4026d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5de4026d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5de4026d

Branch: refs/heads/YARN-2928
Commit: 5de4026d8aef6c3343d24aa3831da48ad7a1a87e
Parents: fb1b596 3bc72cc
Author: Zhijie Shen zjs...@apache.org
Authored: Tue Mar 17 20:22:11 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Tue Mar 17 20:22:11 2015 -0700

--
 dev-support/test-patch.sh   |   4 +
 hadoop-common-project/hadoop-common/CHANGES.txt |  36 +-
 .../org/apache/hadoop/crypto/CryptoCodec.java   |  10 +-
 .../hadoop/crypto/CryptoOutputStream.java   |  19 +-
 .../fs/CommonConfigurationKeysPublic.java   |  11 +
 .../ZKDelegationTokenSecretManager.java |  21 +-
 .../apache/hadoop/tracing/SpanReceiverHost.java |  13 +-
 .../hadoop/crypto/random/OpensslSecureRandom.c  |  18 +-
 .../crypto/TestCryptoStreamsForLocalFS.java |   8 +-
 ...stCryptoStreamsWithJceAesCtrCryptoCodec.java |  38 ++
 ...yptoStreamsWithOpensslAesCtrCryptoCodec.java |   7 +
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  29 +-
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |   6 +
 .../apache/hadoop/hdfs/BlockReaderFactory.java  |  23 +-
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   5 +
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |   9 +-
 .../hadoop/hdfs/protocol/BlockListAsLongs.java  | 660 +++
 .../datatransfer/DataTransferProtocol.java  |   5 +-
 .../hdfs/protocol/datatransfer/Receiver.java|   2 +-
 .../hdfs/protocol/datatransfer/Sender.java  |   4 +-
 .../DatanodeProtocolClientSideTranslatorPB.java |  22 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |  14 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |   6 +-
 .../BlockInfoContiguousUnderConstruction.java   |   1 +
 .../server/blockmanagement/BlockManager.java|  16 +-
 .../hdfs/server/datanode/BPServiceActor.java|  13 +-
 .../hdfs/server/datanode/DataXceiver.java   |  95 +--
 .../server/datanode/ShortCircuitRegistry.java   |  13 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  20 +-
 .../hdfs/server/namenode/FSNamesystem.java  |  15 +
 .../hdfs/server/namenode/NameNodeRpcServer.java |   2 +-
 .../server/namenode/snapshot/FileDiffList.java  |  19 +-
 .../server/protocol/DatanodeRegistration.java   |   9 +
 .../hdfs/server/protocol/NamespaceInfo.java |  52 ++
 .../server/protocol/StorageBlockReport.java |   8 +-
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  |  37 ++
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  |  15 +
 .../src/main/proto/DatanodeProtocol.proto   |   2 +
 .../src/main/proto/datatransfer.proto   |  11 +
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |   1 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |  76 ++-
 .../apache/hadoop/hdfs/TestFileCreation.java|   4 +-
 .../hdfs/protocol/TestBlockListAsLongs.java | 237 +++
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../server/datanode/BlockReportTestBase.java|  27 +-
 .../server/datanode/SimulatedFSDataset.java |  11 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   9 +-
 .../datanode/TestDataNodeVolumeFailure.java |   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |   2 +-
 .../server/namenode/NNThroughputBenchmark.java  |  23 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   3 +-
 .../hdfs/server/namenode/TestFSImage.java   |   2 +
 .../hdfs/server/namenode/TestFileTruncate.java  |  50 +-
 .../snapshot/TestRenameWithSnapshots.java   |   4 +-
 .../shortcircuit/TestShortCircuitCache.java |  63 ++
 .../TestOfflineEditsViewer.java |   9 +-
 hadoop-mapreduce-project/CHANGES.txt|  23 +-
 .../v2/app/launcher/ContainerLauncherImpl.java  |  14 +-
 .../jobhistory/TestJobHistoryEventHandler.java  |  14 +-
 .../v2/app/launcher/TestContainerLauncher.java  |  21 +-
 .../java/org/apache/hadoop/mapred/JobConf.java  |   5 +
 .../java/org/apache/hadoop/mapred/MapTask.java  |   2 +
 .../apache/hadoop/mapreduce/JobSubmitter.java   |   2 +-
 .../apache/hadoop/mapreduce/MRJobConfig.java|   9 +
 .../mapreduce/task/reduce/MergeManagerImpl.java |   5 +-
 .../src/main/resources/mapred-default.xml   |   8 +
 .../mapred/TestMRTimelineEventHandling.java |  12 +-
 .../java/org/apache/hadoop/ant/DfsTask.java |   6 +-
 .../org/apache/hadoop/fs/s3/S3FileSystem.java   |   4 +-
 .../hadoop/fs/s3a/S3AFastOutputStream.java  |   4 +-
 .../hadoop/fs/s3native/NativeS3FileSystem.java  |  24 +-
 .../fs/azure/AzureNativeFileSystemStore.java|  22 +-
 .../hadoop/fs/azure/NativeAzureFileSystem.java  |  16 +-
 

[40/50] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.

2015-03-17 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index a5a2e5f..972cabb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -350,8 +350,8 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(
 (int)(node_0.getTotalResource().getMemory() * a.getCapacity()) - 
(1*GB),
 a.getMetrics().getAvailableMB());
@@ -486,7 +486,7 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(1*GB, a.getUsedResources().getMemory());
 assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
@@ -497,7 +497,7 @@ public class TestLeafQueue {
 
 // Also 2nd -> minCapacity = 1024 since (.1 * 8G) > minAlloc, also
 // you can get one container more than user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -506,7 +506,7 @@ public class TestLeafQueue {
 assertEquals(2*GB, a.getMetrics().getAllocatedMB());
 
 // Can't allocate 3rd due to user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -516,7 +516,7 @@ public class TestLeafQueue {
 
 // Bump up user-limit-factor, now allocate should work
 a.setUserLimitFactor(10);
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -525,7 +525,7 @@ public class TestLeafQueue {
 assertEquals(3*GB, a.getMetrics().getAllocatedMB());
 
 // One more should work, for app_1, due to user-limit-factor
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -536,8 +536,8 @@ public class TestLeafQueue {
 // Test max-capacity
 // Now - no more allocs since we are at max-cap
 a.setMaxCapacity(0.5f);
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -652,21 +652,21 @@ public class TestLeafQueue {
 //recordFactory)));
 
 // 1 container to user_0
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
 // Again one to user_0 since he hasn't exceeded user limit yet
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, 
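
An illustrative sketch of the check behind the TestLeafQueue walkthrough above: a user keeps receiving containers until consumption reaches userLimit * userLimitFactor. The names and the simplified formula are assumptions for illustration, not the actual CapacityScheduler implementation:

```java
public class UserLimitSketch {

    // A user may be allocated another container while their current usage
    // is below the user limit scaled by the user-limit factor.
    static boolean canAllocate(int userUsedMB, int userLimitMB,
                               float userLimitFactor) {
        return userUsedMB < userLimitMB * userLimitFactor;
    }

    public static void main(String[] args) {
        // With a 1 GB user limit and factor 1, a user holding 2 GB is blocked;
        // bumping the factor to 10 (as the test does) unblocks the allocation.
        System.out.println(canAllocate(2048, 1024, 1.0f));
        System.out.println(canAllocate(2048, 1024, 10.0f));
    }
}
```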

[37/50] hadoop git commit: HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() (Contributed by Rakesh R)

2015-03-17 Thread zjshen
HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() 
(Contributed by Rakesh R)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/018893e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/018893e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/018893e8

Branch: refs/heads/YARN-2928
Commit: 018893e81ec1c43e6c79c77adec92c2edfb20cab
Parents: e537047
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 17 15:32:34 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 17 15:32:34 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 32 +---
 .../apache/hadoop/hdfs/TestFileCreation.java|  4 +--
 .../snapshot/TestRenameWithSnapshots.java   |  4 +--
 4 files changed, 35 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/018893e8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ad3e880..bbe1f02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -327,6 +327,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown()
+(Rakesh R via vinayakumarb)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/018893e8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 9208ed2..a6cc71f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -60,6 +60,7 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -118,6 +119,7 @@ import org.apache.hadoop.util.ToolRunner;
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
 
 /**
  * This class creates a single-process DFS cluster for junit testing.
@@ -523,7 +525,8 @@ public class MiniDFSCluster {
   private boolean federation;
   private boolean checkExitOnShutdown = true;
   protected final int storagesPerDatanode;
-  
private Set<FileSystem> fileSystems = Sets.newHashSet();
+
   /**
* A unique instance identifier for the cluster. This
* is used to disambiguate HA filesystems in the case where
@@ -1705,6 +1708,13 @@ public class MiniDFSCluster {
* Shutdown all the nodes in the cluster.
*/
   public void shutdown(boolean deleteDfsDir) {
+shutdown(deleteDfsDir, true);
+  }
+
+  /**
+   * Shutdown all the nodes in the cluster.
+   */
+  public void shutdown(boolean deleteDfsDir, boolean closeFileSystem) {
 LOG.info("Shutting down the Mini HDFS Cluster");
 if (checkExitOnShutdown)  {
   if (ExitUtil.terminateCalled()) {
@@ -1714,6 +1724,16 @@ public class MiniDFSCluster {
 throw new AssertionError("Test resulted in an unexpected exit");
   }
 }
+if (closeFileSystem) {
+  for (FileSystem fs : fileSystems) {
+try {
+  fs.close();
+} catch (IOException ioe) {
+  LOG.warn("Exception while closing file system", ioe);
+}
+  }
+  fileSystems.clear();
+}
 shutdownDataNodes();
 for (NameNodeInfo nnInfo : nameNodes) {
   if (nnInfo == null) continue;
@@ -2144,8 +2164,10 @@ public class MiniDFSCluster {
* Get a client handle to the DFS cluster for the namenode at given index.
*/
   public DistributedFileSystem getFileSystem(int nnIndex) throws IOException {
-return (DistributedFileSystem)FileSystem.get(getURI(nnIndex),
-nameNodes[nnIndex].conf);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
+getURI(nnIndex), nameNodes[nnIndex].conf);
+fileSystems.add(dfs);
+return dfs;
   }
 
   /**
@@ -2153,7 +2175,9 @@ public class MiniDFSCluster {
* This simulating different threads working on different FileSystem 
instances.
*/
   public FileSystem getNewFileSystemInstance(int nnIndex) throws IOException {
-return 
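
The pattern HDFS-5356 introduces above can be sketched standalone: every handle the cluster hands out is remembered in a set, and shutdown optionally closes them all. `java.io.Closeable` stands in here for `org.apache.hadoop.fs.FileSystem`; the class and method names are hypothetical:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class ClusterHandleTracker {
    private final Set<Closeable> handles = new HashSet<>();

    // Mirrors fileSystems.add(dfs) in getFileSystem(): remember each
    // handle we give out so shutdown can close it later.
    public <T extends Closeable> T track(T handle) {
        handles.add(handle);
        return handle;
    }

    // Mirrors shutdown(deleteDfsDir, closeFileSystem): close every tracked
    // handle, swallowing per-handle failures, and report how many closed.
    public int shutdown(boolean closeHandles) {
        int closed = 0;
        if (closeHandles) {
            for (Closeable c : handles) {
                try {
                    c.close();
                    closed++;
                } catch (IOException ioe) {
                    // MiniDFSCluster logs a warning and continues; same idea here.
                }
            }
            handles.clear();
        }
        return closed;
    }

    public static void main(String[] args) {
        ClusterHandleTracker tracker = new ClusterHandleTracker();
        tracker.track(() -> System.out.println("closing handle"));
        System.out.println("closed " + tracker.shutdown(true) + " handle(s)");
    }
}
```

Clearing the set after closing keeps a second `shutdown` call from double-closing handles, matching `fileSystems.clear()` in the patch.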

[22/50] hadoop git commit: YARN-2854. Addendum patch to fix the minor issue in the timeline service documentation.

2015-03-17 Thread zjshen
YARN-2854. Addendum patch to fix the minor issue in the timeline service 
documentation.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ed4e72a2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ed4e72a2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ed4e72a2

Branch: refs/heads/YARN-2928
Commit: ed4e72a20b75ffbd22deb0607dd8b94f6e437a84
Parents: d1eebd9
Author: Zhijie Shen zjs...@apache.org
Authored: Mon Mar 16 10:39:10 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Mon Mar 16 10:39:10 2015 -0700

--
 .../src/site/markdown/TimelineServer.md | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ed4e72a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
index 31fe4ac..cb8a5d3 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
@@ -43,7 +43,7 @@ Overview
 
 ### Current Status
 
-  Current version of Timeline sever has been completed. Essential 
functionality of the timeline server can work in both secure and non secure 
modes. The generic history service is also ridden on the timeline store. In 
subsequent releases we will be rolling out next generation timeline service 
which is scalable and reliable. Finally, the Application specific information 
is only available via RESTful APIs, using JSON type content - ability to 
install framework specific UIs in YARN isn't supported yet.
+  The essential functionality of the timeline server have been completed and 
it can work in both secure and non secure modes. The generic history service is 
also built on timeline store. In subsequent releases we will be rolling out 
next generation timeline service which is scalable and reliable. Currently, 
Application specific information is only available via RESTful APIs using JSON 
type content. The ability to install framework specific UIs in YARN is not 
supported yet.
 
 ### Timeline Structure
 
@@ -72,7 +72,7 @@ Deployment
 |: |: |
 | `yarn.timeline-service.enabled` | Indicate to clients whether Timeline 
service is enabled or not. If enabled, the TimelineClient library used by 
end-users will post entities and events to the Timeline server. Defaults to 
false. |
 | `yarn.resourcemanager.system-metrics-publisher.enabled` | The setting that 
controls whether yarn system metrics is published on the timeline server or not 
by RM. Defaults to false. |
-| `yarn.timeline-service.generic-application-history.enabled` | Indicate to 
clients whether to query generic application data from timeline 
history-service. If not enabled then application data is only queried from 
Resource Manager. Defaults to false. |
+| `yarn.timeline-service.generic-application-history.enabled` | Indicate to 
clients whether to query generic application data from timeline history-service 
or not. If not enabled then application data is queried only from Resource 
Manager. Defaults to false. |
 
  Advanced configuration
 
@@ -141,16 +141,16 @@ Deployment
</property>

<property>
-<description>The setting that controls whether yarn system metrics is
-published on the timeline server or not by RM.</description>
-<name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
-<value>true</value>
+  <description>The setting that controls whether yarn system metrics is
+  published on the timeline server or not by RM.</description>
+  <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
+  <value>true</value>
</property>

<property>
-  <description>Indicate to clients whether to query generic application data from
-  timeline history-service. If not enabled then application data is only queried 
-  from Resource Manager</description>
+  <description>Indicate to clients whether to query generic application
+  data from timeline history-service or not. If not enabled then application
+  data is queried only from Resource Manager.</description>
  <name>yarn.timeline-service.generic-application-history.enabled</name>
  <value>true</value>
</property>
@@ -167,7 +167,7 @@ Deployment
   Or users can start the Timeline server / history service as a daemon:
 
 ```
-  $ yarn --daemon start timelineserver
+  $ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start timelineserver
 ```
 
 ### Accessing generic-data via command-line



[36/50] hadoop git commit: MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed by Amir Sanjar.

2015-03-17 Thread zjshen
MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed 
by Amir Sanjar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5370477
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5370477
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5370477

Branch: refs/heads/YARN-2928
Commit: e5370477c2d00745e695507ecfdf86de59c5f5b9
Parents: 48c2db3
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 14:01:15 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 14:11:54 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5370477/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b5baf51..3936c9b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -253,6 +253,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement.
+(Amir Sanjar via harsh)
+
 MAPREDUCE-6100. replace mapreduce.job.credentials.binary with
 MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability.
 (Zhihai Xu via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5370477/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
--
diff --git 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
index cd55483..4e85ce2 100644
--- 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
+++ 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
@@ -30,8 +30,6 @@ import java.util.Set;
 
 import org.junit.Test;
 
-import com.sun.tools.javac.code.Attribute.Array;
-
 public class TestRandomAlgorithm {
   private static final int[][] parameters = new int[][] {
 {5, 1, 1}, 



[08/50] hadoop git commit: YARN-2854. Updated the documentation of the timeline service and the generic history service. Contributed by Naganarasimha G R.

2015-03-17 Thread zjshen
YARN-2854. Updated the documentation of the timeline service and the generic 
history service. Contributed by Naganarasimha G R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6fdef76c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6fdef76c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6fdef76c

Branch: refs/heads/YARN-2928
Commit: 6fdef76cc3e818856ddcc4d385c2899a8e6ba916
Parents: dfd3201
Author: Zhijie Shen zjs...@apache.org
Authored: Fri Mar 13 13:58:42 2015 -0700
Committer: Zhijie Shen zjs...@apache.org
Committed: Fri Mar 13 14:00:09 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../src/site/markdown/TimelineServer.md | 318 ++-
 .../resources/images/timeline_structure.jpg | Bin 0 - 23070 bytes
 3 files changed, 165 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fdef76c/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 94f992d..77f8819 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -387,6 +387,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3187. Documentation of Capacity Scheduler Queue mapping based on user
 or group. (Gururaj Shetty via jianhe)
 
+YARN-2854. Updated the documentation of the timeline service and the 
generic
+history service. (Naganarasimha G R via zjshen)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fdef76c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
index 4889936..31fe4ac 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
@@ -16,144 +16,122 @@ YARN Timeline Server
 
 
 * [Overview](#Overview)
-* [Current Status](#Current_Status)
-* [Basic Configuration](#Basic_Configuration)
-* [Advanced Configuration](#Advanced_Configuration)
-* [Generic-data related Configuration](#Generic-data_related_Configuration)
-* [Per-framework-date related 
Configuration](#Per-framework-date_related_Configuration)
-* [Running Timeline server](#Running_Timeline_server)
-* [Accessing generic-data via 
command-line](#Accessing_generic-data_via_command-line)
-* [Publishing of per-framework data by 
applications](#Publishing_of_per-framework_data_by_applications)
+* [Introduction](#Introduction)
+* [Current Status](#Current_Status)
+* [Timeline Structure](#Timeline_Structure)
+* [Deployment](#Deployment)
+* [Configurations](#Configurations)
+* [Running Timeline server](#Running_Timeline_server)
+* [Accessing generic-data via 
command-line](#Accessing_generic-data_via_command-line)
+* [Publishing of application specific 
data](#Publishing_of_application_specific_data)
 
 Overview
-
+-
 
-Storage and retrieval of applications' current as well as historic information 
in a generic fashion is solved in YARN through the Timeline Server (previously 
also called Generic Application History Server). This serves two 
responsibilities:
+### Introduction  
 
-* Generic information about completed applications
-
-Generic information includes application level data like queue-name, user 
information etc in the ApplicationSubmissionContext, list of 
application-attempts that ran for an application, information about each 
application-attempt, list of containers run under each application-attempt, and 
information about each container. Generic data is stored by ResourceManager to 
a history-store (default implementation on a file-system) and used by the 
web-UI to display information about completed applications.
+ Storage and retrieval of application's current as well as historic 
information in a generic fashion is solved in YARN through the Timeline Server. 
This serves two responsibilities:
 
-* Per-framework information of running and completed applications
-
-Per-framework information is completely specific to an application or 
framework. For example, Hadoop MapReduce framework can include pieces of 
information like number of map tasks, reduce tasks, counters etc. Application 
developers can publish the specific information to the Timeline server via 
TimelineClient from within a client, the 

[25/50] hadoop git commit: HDFS-7886. Fix TestFileTruncate falures. Contributed by Plamen Jeliazkov and Konstantin Shvachko.

2015-03-17 Thread zjshen
HDFS-7886. Fix TestFileTruncate falures. Contributed by Plamen Jeliazkov and 
Konstantin Shvachko.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce5de93a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce5de93a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce5de93a

Branch: refs/heads/YARN-2928
Commit: ce5de93a5837e115e1f0b7d3c5a67ace25385a63
Parents: 587d8be
Author: Konstantin V Shvachko s...@apache.org
Authored: Mon Mar 16 12:54:04 2015 -0700
Committer: Konstantin V Shvachko s...@apache.org
Committed: Mon Mar 16 12:54:04 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 44 ++--
 .../hdfs/server/namenode/TestFileTruncate.java  | 18 
 3 files changed, 51 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce5de93a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 93237af..d313b6c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1157,6 +1157,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
 fail to tell the DFSClient about it because of a network error (cmccabe)
 
+HDFS-7886. Fix TestFileTruncate falures. (Plamen Jeliazkov and shv)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce5de93a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 834eb32..9208ed2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -77,9 +77,12 @@ import org.apache.hadoop.hdfs.MiniDFSNNTopology.NNConf;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.common.Storage;
 import org.apache.hadoop.hdfs.server.common.Util;
@@ -1343,7 +1346,6 @@ public class MiniDFSCluster {
 }
 
 int curDatanodesNum = dataNodes.size();
-final int curDatanodesNumSaved = curDatanodesNum;
 // for mincluster's the default initialDelay for BRs is 0
 if (conf.get(DFS_BLOCKREPORT_INITIAL_DELAY_KEY) == null) {
   conf.setLong(DFS_BLOCKREPORT_INITIAL_DELAY_KEY, 0);
@@ -2022,7 +2024,23 @@ public class MiniDFSCluster {
*/
   public synchronized boolean restartDataNode(int i, boolean keepPort)
   throws IOException {
-DataNodeProperties dnprop = stopDataNode(i);
+return restartDataNode(i, keepPort, false);
+  }
+
+  /**
+   * Restart a particular DataNode.
+   * @param idn index of the DataNode
+   * @param keepPort true if should restart on the same port
+   * @param expireOnNN true if NameNode should expire the DataNode heartbeat
+   * @return
+   * @throws IOException
+   */
+  public synchronized boolean restartDataNode(
+  int idn, boolean keepPort, boolean expireOnNN) throws IOException {
+DataNodeProperties dnprop = stopDataNode(idn);
+if(expireOnNN) {
+  setDataNodeDead(dnprop.datanode.getDatanodeId());
+}
 if (dnprop == null) {
   return false;
 } else {
@@ -2030,6 +2048,24 @@ public class MiniDFSCluster {
 }
   }
 
+  /**
+   * Expire a DataNode heartbeat on the NameNode
+   * @param dnId
+   * @throws IOException
+   */
+  public void setDataNodeDead(DatanodeID dnId) throws IOException {
+DatanodeDescriptor dnd =
+NameNodeAdapter.getDatanode(getNamesystem(), dnId);
+dnd.setLastUpdate(0L);
+BlockManagerTestUtil.checkHeartbeat(getNamesystem().getBlockManager());
+  }
+
+  public void setDataNodesDead() throws IOException {

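
The heartbeat-expiry trick used by `setDataNodeDead()` above (zeroing a node's last-update timestamp so the next liveness check sees it as expired) can be sketched in isolation. `EXPIRY_MS` and the field names are illustrative values, not HDFS constants:

```java
public class HeartbeatExpirySketch {
    static final long EXPIRY_MS = 10_000L;

    long lastUpdateMs;

    // A node is considered dead when its last heartbeat is older than the
    // expiry window at check time.
    boolean isExpired(long nowMs) {
        return nowMs - lastUpdateMs > EXPIRY_MS;
    }

    // Mirrors dnd.setLastUpdate(0L): force the node to look dead on the
    // very next heartbeat check, without waiting out the real interval.
    void forceExpire() {
        lastUpdateMs = 0L;
    }

    public static void main(String[] args) {
        HeartbeatExpirySketch node = new HeartbeatExpirySketch();
        node.lastUpdateMs = System.currentTimeMillis();
        System.out.println("alive: " + !node.isExpired(System.currentTimeMillis()));
        node.forceExpire();
        System.out.println("alive: " + !node.isExpired(System.currentTimeMillis()));
    }
}
```

Forcing expiry rather than sleeping past the heartbeat interval is what keeps tests like TestFileTruncate fast and deterministic.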
[04/50] hadoop git commit: HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not idempotent. Contributed by Tsz Wo Nicholas Sze

2015-03-17 Thread zjshen
HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
idempotent. Contributed by Tsz Wo Nicholas Sze


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f446669a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f446669a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f446669a

Branch: refs/heads/YARN-2928
Commit: f446669afb5c3d31a00c65449f27088b39e11ae3
Parents: 8180e67
Author: Brandon Li brando...@apache.org
Authored: Fri Mar 13 10:42:22 2015 -0700
Committer: Brandon Li brando...@apache.org
Committed: Fri Mar 13 10:42:22 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../BlockInfoContiguousUnderConstruction.java|  1 +
 .../hadoop/hdfs/server/namenode/FSNamesystem.java| 15 +++
 .../hdfs/server/namenode/TestFileTruncate.java   |  2 ++
 4 files changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 153453c..909182b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1142,6 +1142,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-6833.  DirectoryScanner should not register a deleting block with
 memory of DataNode.  (Shinichi Yamashita via szetszwo)
 
+HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
+idempotent (Tsz Wo Nicholas Sze via brandonli)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
index 91b76cc..ae809a5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
@@ -383,6 +383,7 @@ public class BlockInfoContiguousUnderConstruction extends BlockInfoContiguous {
 
   private void appendUCParts(StringBuilder sb) {
  sb.append("{UCState=").append(blockUCState)
+  .append(", truncateBlock=" + truncateBlock)
    .append(", primaryNodeIndex=").append(primaryNodeIndex)
    .append(", replicas=[");
 if (replicas != null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 77b4a27..b384ce6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1966,6 +1966,21 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
   throw new UnsupportedOperationException(
       "Cannot truncate lazy persist file " + src);
 }
+
+// Check if the file is already being truncated with the same length
+final BlockInfoContiguous last = file.getLastBlock();
+if (last != null && last.getBlockUCState() == BlockUCState.UNDER_RECOVERY) {
+  final Block truncateBlock
+  = ((BlockInfoContiguousUnderConstruction)last).getTruncateBlock();
+  if (truncateBlock != null) {
+final long truncateLength = file.computeFileSize(false, false)
++ truncateBlock.getNumBytes();
+if (newLength == truncateLength) {
+  return false;
+}
+  }
+}
+
 // Opening an existing file for truncate. May need lease recovery.
 recoverLeaseInternal(RecoverLeaseOp.TRUNCATE_FILE,
 iip, src, clientName, clientMachine, false);
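The check added above makes a retried truncate RPC to the same length a no-op instead of scheduling a second block recovery. A minimal sketch of that idempotency pattern, with a hypothetical `PendingTruncate` standing in for the under-recovery block state (not the actual HDFS classes):

```java
import java.util.Optional;

/**
 * Sketch: a retried truncate to the same target length must not schedule a
 * second recovery. PendingTruncate is a hypothetical stand-in for the
 * under-recovery block state in the NameNode.
 */
public class TruncateIdempotencyDemo {

    static final class PendingTruncate {
        final long targetLength;
        PendingTruncate(long targetLength) { this.targetLength = targetLength; }
    }

    private Optional<PendingTruncate> pending = Optional.empty();

    /** Returns true if a new recovery was scheduled, false for a retried RPC. */
    public boolean truncate(long newLength) {
        // Same target length already under recovery: the client is retrying,
        // so report "truncate in progress" instead of starting over.
        if (pending.isPresent() && pending.get().targetLength == newLength) {
            return false;
        }
        pending = Optional.of(new PendingTruncate(newLength));
        return true;
    }

    public static void main(String[] args) {
        TruncateIdempotencyDemo fs = new TruncateIdempotencyDemo();
        System.out.println(fs.truncate(100)); // first call schedules recovery
        System.out.println(fs.truncate(100)); // retry is a no-op
    }
}
```

The real patch computes the in-progress target as `file.computeFileSize(false, false) + truncateBlock.getNumBytes()` and compares it to `newLength` the same way.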


[27/50] hadoop git commit: HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support FreeBSD and Solaris in addition to Linux. Contributed by Kiran Kumar M R.

2015-03-17 Thread zjshen
HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support FreeBSD 
and Solaris in addition to Linux. Contributed by Kiran Kumar M R.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/72cd4e4a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/72cd4e4a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/72cd4e4a

Branch: refs/heads/YARN-2928
Commit: 72cd4e4a4eb2a9f8695d4c67eb55dd2be36c52dc
Parents: 685dbaf
Author: cnauroth cnaur...@apache.org
Authored: Mon Mar 16 13:26:57 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Mon Mar 16 14:04:40 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../hadoop/crypto/random/OpensslSecureRandom.c| 18 +-
 2 files changed, 20 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/72cd4e4a/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index aa17841..2a2b916 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1105,6 +1105,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11558. Fix dead links to doc of hadoop-tools. (Jean-Pierre 
 Matsumoto via ozawa)
 
+HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support FreeBSD
+and Solaris in addition to Linux. (Kiran Kumar M R via cnauroth)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/72cd4e4a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
index 6c31d10..8f0c06d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
+++ 
b/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSecureRandom.c
@@ -29,6 +29,10 @@
#include <sys/types.h>
 #endif
 
+#if defined(__FreeBSD__)
+#include <pthread_np.h>
+#endif
+
 #ifdef WINDOWS
#include <windows.h>
 #endif
@@ -274,7 +278,19 @@ static void pthreads_locking_callback(int mode, int type, char *file, int line)
 
 static unsigned long pthreads_thread_id(void)
 {
-  return (unsigned long)syscall(SYS_gettid);
+  unsigned long thread_id = 0;
+#if defined(__linux__)
+  thread_id = (unsigned long)syscall(SYS_gettid);
+#elif defined(__FreeBSD__)
+  thread_id = (unsigned long)pthread_getthreadid_np();
+#elif defined(__sun)
+  thread_id = (unsigned long)pthread_self();
+#elif defined(__APPLE__)
+  (void)pthread_threadid_np(pthread_self(), &thread_id);
+#else
+#error "Platform not supported"
+#endif
+  return thread_id;
 }
 
 #endif /* UNIX */



[35/50] hadoop git commit: HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in hadoop-tools. Contributed by Akira AJISAKA.

2015-03-17 Thread zjshen
HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in 
hadoop-tools. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef9946cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef9946cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef9946cd

Branch: refs/heads/YARN-2928
Commit: ef9946cd52d54200c658987c1dbc3e6fce133f77
Parents: bb243ce
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 16:09:21 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 16:09:21 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/ant/DfsTask.java |  6 ++---
 .../org/apache/hadoop/fs/s3/S3FileSystem.java   |  4 +---
 .../hadoop/fs/s3a/S3AFastOutputStream.java  |  4 ++--
 .../hadoop/fs/s3native/NativeS3FileSystem.java  | 24 
 .../fs/azure/AzureNativeFileSystemStore.java| 22 ++
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 +
 .../hadoop/tools/CopyListingFileStatus.java |  8 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  2 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   |  4 ++--
 .../java/org/apache/hadoop/record/Buffer.java   |  8 +++
 .../java/org/apache/hadoop/record/Utils.java|  8 +++
 12 files changed, 51 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2e04cc1..3817054 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1108,6 +1108,9 @@ Release 2.7.0 - UNRELEASED
HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support FreeBSD
 and Solaris in addition to Linux. (Kiran Kumar M R via cnauroth)
 
+HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
+tags in hadoop-tools. (Akira AJISAKA via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
--
diff --git 
a/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java 
b/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
index 78cb360..9d0b3a4 100644
--- a/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
+++ b/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
@@ -41,8 +41,8 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 public class DfsTask extends Task {
 
   /**
-   * Default sink for {@link java.lang.System.out System.out}
-   * and {@link java.lang.System.err System.err}.
+   * Default sink for {@link java.lang.System#out}
+   * and {@link java.lang.System#err}.
*/
   private static final OutputStream nullOut = new OutputStream() {
   public void write(int b){ /* ignore */ }
@@ -171,7 +171,7 @@ public class DfsTask extends Task {
   }
 
   /**
-   * Invoke {@link org.apache.hadoop.fs.FsShell#doMain FsShell.doMain} after a
+   * Invoke {@link org.apache.hadoop.fs.FsShell#main} after a
* few cursory checks of the configuration.
*/
   public void execute() throws BuildException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
index dda3cf6..8bdfe9a 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
@@ -44,10 +44,9 @@ import org.apache.hadoop.io.retry.RetryProxy;
 import org.apache.hadoop.util.Progressable;
 
 /**
- * <p>
  * A block-based {@link FileSystem} backed by
  * <a href="http://aws.amazon.com/s3">Amazon S3</a>.
- * </p>
+ *
  * @see NativeS3FileSystem
  */
 @InterfaceAudience.Public
@@ -70,7 +69,6 @@ public class S3FileSystem extends FileSystem {
 
   /**
* Return the protocol scheme for the FileSystem.
-   * <p/>
 *
 * @return <code>s3</code>
*/

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFastOutputStream.java

[28/50] hadoop git commit: HADOOP-8059. Update CHANGES.txt to target 2.7.0.

2015-03-17 Thread zjshen
HADOOP-8059. Update CHANGES.txt to target 2.7.0.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2681ed96
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2681ed96
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2681ed96

Branch: refs/heads/YARN-2928
Commit: 2681ed96983577d2eae19e749e25bfc5fd0589c3
Parents: 72cd4e4
Author: cnauroth cnaur...@apache.org
Authored: Mon Mar 16 14:17:25 2015 -0700
Committer: cnauroth cnaur...@apache.org
Committed: Mon Mar 16 14:17:25 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2681ed96/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2a2b916..2e04cc1 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -68,9 +68,6 @@ Trunk (Unreleased)
 HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS
 filesystems (harsh)
 
-HADOOP-8059. Add javadoc to InterfaceAudience and InterfaceStability.
-(Brandon Li via suresh)
-
 HADOOP-8434. Add tests for Configuration setter methods.
 (Madhukara Phatak via suresh)
 
@@ -694,6 +691,9 @@ Release 2.7.0 - UNRELEASED
 
 HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)
 
+HADOOP-8059. Add javadoc to InterfaceAudience and InterfaceStability.
+(Brandon Li via suresh)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.



[17/50] hadoop git commit: YARN-3171. Sort by Application id, AppAttempt and ContainerID doesn't work in ATS / RM web ui. Contributed by Naganarasimha G R

2015-03-17 Thread zjshen
YARN-3171. Sort by Application id, AppAttempt and ContainerID doesn't
work in ATS / RM web ui. Contributed by Naganarasimha G R


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3ff1ba2a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3ff1ba2a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3ff1ba2a

Branch: refs/heads/YARN-2928
Commit: 3ff1ba2a7b00fdf06270d00b2193bde4b56b06b3
Parents: bc9cb3e
Author: Xuan xg...@apache.org
Authored: Sun Mar 15 20:26:10 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Sun Mar 15 20:26:10 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt| 3 +++
 .../org/apache/hadoop/yarn/server/webapp/WebPageUtils.java | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ff1ba2a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 77f8819..bcab88c 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -766,6 +766,9 @@ Release 2.7.0 - UNRELEASED
 YARN-3267. Timelineserver applies the ACL rules after applying the limit on
 the number of records (Chang Li via jeagles)
 
+YARN-3171. Sort by Application id, AppAttempt and ContainerID doesn't work
+in ATS / RM web ui. (Naganarasimha G R via xgong)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ff1ba2a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
index 384a976..5acabf5 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebPageUtils.java
@@ -44,7 +44,7 @@ public class WebPageUtils {
 StringBuilder sb = new StringBuilder();
 return sb
  .append("[\n")
-  .append("{'sType':'numeric', 'aTargets': [0]")
+  .append("{'sType':'string', 'aTargets': [0]")
  .append(", 'mRender': parseHadoopID }")
  .append("\n, {'sType':'numeric', 'aTargets': " +
  (isFairSchedulerPage ? "[6, 7]" : "[5, 6]"))
@@ -63,7 +63,7 @@ public class WebPageUtils {
 
   private static String getAttemptsTableColumnDefs() {
 StringBuilder sb = new StringBuilder();
-return sb.append("[\n").append("{'sType':'numeric', 'aTargets': [0]")
+return sb.append("[\n").append("{'sType':'string', 'aTargets': [0]")
  .append(", 'mRender': parseHadoopID }")
  .append("\n, {'sType':'numeric', 'aTargets': [1]")
  .append(", 'mRender': renderHadoopDate }]").toString();
@@ -79,7 +79,7 @@ public class WebPageUtils {
 
   private static String getContainersTableColumnDefs() {
 StringBuilder sb = new StringBuilder();
-return sb.append("[\n").append("{'sType':'numeric', 'aTargets': [0]")
+return sb.append("[\n").append("{'sType':'string', 'aTargets': [0]")
  .append(", 'mRender': parseHadoopID }]").toString();
   }
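The switch from `'sType':'numeric'` to `'sType':'string'` matters because Hadoop IDs such as `application_1426..._0001` are not parseable numbers: a numeric DataTables sort yields NaN for every cell and an undefined order, while a plain lexicographic sort keeps the zero-padded ID components ordered correctly. A small illustration (the sample IDs are made up):

```java
import java.util.Arrays;

/**
 * Sketch: lexicographic (string) ordering of zero-padded Hadoop-style IDs
 * is the order users expect; treating them as numbers fails to parse.
 */
public class IdSortDemo {
    public static void main(String[] args) {
        String[] ids = {
            "appattempt_1426000000000_0010_000001",
            "appattempt_1426000000000_0002_000001",
            "appattempt_1426000000000_0001_000001",
        };
        Arrays.sort(ids); // lexicographic, like DataTables' string sort
        System.out.println(Arrays.toString(ids)); // 0001, 0002, 0010 in order
    }
}
```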
 



[21/50] hadoop git commit: HADOOP-9477. Amendment to CHANGES.txt.

2015-03-17 Thread zjshen
HADOOP-9477. Amendment to CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1eebd9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1eebd9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1eebd9c

Branch: refs/heads/YARN-2928
Commit: d1eebd9c9c1fed5877ef2665959e9bd1485d080c
Parents: 03b77ed
Author: Yongjun Zhang yzh...@cloudera.com
Authored: Mon Mar 16 09:16:57 2015 -0700
Committer: Yongjun Zhang yzh...@cloudera.com
Committed: Mon Mar 16 09:16:57 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1eebd9c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e161d7d..a43a153 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -37,9 +37,6 @@ Trunk (Unreleased)
 
 HADOOP-11565. Add --slaves shell option (aw)
 
-HADOOP-9477. Add posixGroups support for LDAP groups mapping service.
-(Dapeng Sun via Yongjun Zhang)
-
   IMPROVEMENTS
 
 HADOOP-8017. Configure hadoop-main pom to get rid of M2E plugin execution
@@ -447,6 +444,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11226. Add a configuration to set ipc.Client's traffic class with
 IPTOS_LOWDELAY|IPTOS_RELIABILITY. (Gopal V via ozawa)
 
+HADOOP-9477. Add posixGroups support for LDAP groups mapping service.
+(Dapeng Sun via Yongjun Zhang)
+
   IMPROVEMENTS
 
 HADOOP-11692. Improve authentication failure WARN message to avoid user



[11/50] hadoop git commit: MAPREDUCE-6265. Make ContainerLauncherImpl.INITIAL_POOL_SIZE configurable to better control to launch/kill containers. Contributed by Zhihai Xu

2015-03-17 Thread zjshen
MAPREDUCE-6265. Make ContainerLauncherImpl.INITIAL_POOL_SIZE configurable to 
better control to launch/kill containers. Contributed by Zhihai Xu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9d38520c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9d38520c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9d38520c

Branch: refs/heads/YARN-2928
Commit: 9d38520c8e42530a817a7f69c9aa73a9ad40639c
Parents: 32741cf
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Mar 14 16:44:02 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Mar 14 16:44:02 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt|  3 +++
 .../v2/app/launcher/ContainerLauncherImpl.java  | 14 +
 .../v2/app/launcher/TestContainerLauncher.java  | 21 +++-
 .../apache/hadoop/mapreduce/MRJobConfig.java|  8 
 .../src/main/resources/mapred-default.xml   |  8 
 5 files changed, 45 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d38520c/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 0bbe85c..ab6eef5 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -340,6 +340,9 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-6263. Configurable timeout between YARNRunner terminate the 
 application and forcefully kill. (Eric Payne via junping_du)
 
+MAPREDUCE-6265. Make ContainerLauncherImpl.INITIAL_POOL_SIZE configurable 
+to better control to launch/kill containers. (Zhihai Xu via ozawa)
+
   OPTIMIZATIONS
 
 MAPREDUCE-6169. MergeQueue should release reference to the current item 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9d38520c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
index 666f757..9c1125d 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/launcher/ContainerLauncherImpl.java
@@ -70,7 +70,7 @@ public class ContainerLauncherImpl extends AbstractService implements
new ConcurrentHashMap<ContainerId, Container>();
   private final AppContext context;
   protected ThreadPoolExecutor launcherPool;
-  protected static final int INITIAL_POOL_SIZE = 10;
+  protected int initialPoolSize;
   private int limitOnPoolSize;
   private Thread eventHandlingThread;
  protected BlockingQueue<ContainerLauncherEvent> eventQueue =
@@ -246,6 +246,12 @@ public class ContainerLauncherImpl extends AbstractService implements
 MRJobConfig.MR_AM_CONTAINERLAUNCHER_THREAD_COUNT_LIMIT,
 MRJobConfig.DEFAULT_MR_AM_CONTAINERLAUNCHER_THREAD_COUNT_LIMIT);
 LOG.info("Upper limit on the thread pool size is " + this.limitOnPoolSize);
+
+this.initialPoolSize = conf.getInt(
+MRJobConfig.MR_AM_CONTAINERLAUNCHER_THREADPOOL_INITIAL_SIZE,
+MRJobConfig.DEFAULT_MR_AM_CONTAINERLAUNCHER_THREADPOOL_INITIAL_SIZE);
+LOG.info("The thread pool initial size is " + this.initialPoolSize);
+
 super.serviceInit(conf);
 cmProxy = new ContainerManagementProtocolProxy(conf);
   }
@@ -256,7 +262,7 @@ public class ContainerLauncherImpl extends AbstractService implements
 "ContainerLauncher #%d").setDaemon(true).build();
 
 // Start with a default core-pool size of 10 and change it dynamically.
-launcherPool = new ThreadPoolExecutor(INITIAL_POOL_SIZE,
+launcherPool = new ThreadPoolExecutor(initialPoolSize,
 Integer.MAX_VALUE, 1, TimeUnit.HOURS,
 new LinkedBlockingQueue<Runnable>(),
 tf);
@@ -289,11 +295,11 @@ public class ContainerLauncherImpl extends AbstractService implements
 int idealPoolSize = Math.min(limitOnPoolSize, numNodes);
 
 if (poolSize < idealPoolSize) {
-  // Bump up the pool size to idealPoolSize+INITIAL_POOL_SIZE, the
+  // Bump up the pool size to idealPoolSize+initialPoolSize, the
   // later is just a buffer so we 

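The patch above replaces the hard-coded `INITIAL_POOL_SIZE = 10` with a value read from configuration at `serviceInit` time. A minimal sketch of that pattern, using `java.util.Properties` as a stand-in for Hadoop's `Configuration` and an illustrative key name rather than the exact `MRJobConfig` constant:

```java
import java.util.Properties;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Sketch: configurable initial core size for an otherwise unbounded pool. */
public class LauncherPoolDemo {
    // Hypothetical key; the real constant lives in MRJobConfig.
    static final String INITIAL_SIZE_KEY = "containerlauncher.threadpool-initial-size";
    static final int DEFAULT_INITIAL_SIZE = 10;

    static ThreadPoolExecutor createLauncherPool(Properties conf) {
        int initialPoolSize = Integer.parseInt(
            conf.getProperty(INITIAL_SIZE_KEY, String.valueOf(DEFAULT_INITIAL_SIZE)));
        // Core pool starts at the configured size; the launcher later grows it
        // dynamically via setCorePoolSize, so the maximum stays effectively unbounded.
        return new ThreadPoolExecutor(initialPoolSize, Integer.MAX_VALUE,
            1, TimeUnit.HOURS, new LinkedBlockingQueue<Runnable>());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(INITIAL_SIZE_KEY, "4");
        ThreadPoolExecutor pool = createLauncherPool(conf);
        System.out.println(pool.getCorePoolSize()); // 4
        pool.shutdown();
    }
}
```

Making the initial size configurable lets an AM managing many containers start with a larger pool instead of paying thread-creation latency while the pool warms up.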
[24/50] hadoop git commit: MAPREDUCE-4414. Add main methods to JobConf and YarnConfiguration, for debug purposes. Contributed by Plamen Jeliazkov.

2015-03-17 Thread zjshen
MAPREDUCE-4414. Add main methods to JobConf and YarnConfiguration, for debug 
purposes. Contributed by Plamen Jeliazkov.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/587d8be1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/587d8be1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/587d8be1

Branch: refs/heads/YARN-2928
Commit: 587d8be17bb9e71bad2881e24e7372d3e15125d3
Parents: bf3275d
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 01:01:06 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 01:03:08 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt| 3 +++
 .../src/main/java/org/apache/hadoop/mapred/JobConf.java | 5 +
 .../java/org/apache/hadoop/yarn/conf/YarnConfiguration.java | 5 +
 3 files changed, 13 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/587d8be1/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index 28460d3..d02d725 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -253,6 +253,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-4414. Add main methods to JobConf and YarnConfiguration,
+for debug purposes. (Plamen Jeliazkov via harsh)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/587d8be1/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
index c388bda..9cac685 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
@@ -2140,5 +2140,10 @@ public class JobConf extends Configuration {
 }
   }
 
+  /* For debugging. Dump configurations to system output as XML format. */
+  public static void main(String[] args) throws Exception {
+new JobConf(new Configuration()).writeXml(System.out);
+  }
+
 }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/587d8be1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index f40c999..a527af4 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1808,4 +1808,9 @@ public class YarnConfiguration extends Configuration {
 }
 return clusterId;
   }
+
+  /* For debugging. Dump configurations to system output as XML format. */
+  public static void main(String[] args) throws Exception {
+new YarnConfiguration(new Configuration()).writeXml(System.out);
+  }
 }
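Both added `main()` methods simply write the fully resolved configuration to standard output as XML. A minimal analogue using `java.util.Properties#storeToXML` as a stand-in for `Configuration#writeXml` (which is not on the classpath here), with a made-up property:

```java
import java.io.ByteArrayOutputStream;
import java.util.Properties;

/** Sketch: dump a configuration to stdout as XML, as the new main() methods do. */
public class ConfDumpDemo {
    public static void main(String[] args) throws Exception {
        Properties conf = new Properties();
        conf.setProperty("mapreduce.task.timeout", "600000"); // illustrative entry
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        conf.storeToXML(out, "effective configuration");
        System.out.print(out.toString("UTF-8")); // an XML document listing every property
    }
}
```

With the real classes, running them with no arguments prints every resolved property, which is handy when debugging precedence between `*-default.xml` and `*-site.xml`.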



[30/50] hadoop git commit: YARN-3339. TestDockerContainerExecutor should pull a single image and not the entire centos repository. (Ravindra Kumar Naik via raviprak)

2015-03-17 Thread zjshen
YARN-3339. TestDockerContainerExecutor should pull a single image and not the 
entire centos repository. (Ravindra Kumar Naik via raviprak)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/56085203
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/56085203
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/56085203

Branch: refs/heads/YARN-2928
Commit: 56085203c43b8f2561bf3745910e03f8ac176a67
Parents: 7522a64
Author: Ravi Prakash ravip...@altiscale.com
Authored: Mon Mar 16 16:17:58 2015 -0700
Committer: Ravi Prakash ravip...@altiscale.com
Committed: Mon Mar 16 16:17:58 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt   | 3 +++
 .../yarn/server/nodemanager/TestDockerContainerExecutor.java  | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/56085203/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b8e07a0..cb68480 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -58,6 +58,9 @@ Release 2.8.0 - UNRELEASED
 
   OPTIMIZATIONS
 
+YARN-3339. TestDockerContainerExecutor should pull a single image and not
+the entire centos repository. (Ravindra Kumar Naik via raviprak)
+
   BUG FIXES
 
 Release 2.7.0 - UNRELEASED

http://git-wip-us.apache.org/repos/asf/hadoop/blob/56085203/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
index e43ac2e..ac02542 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDockerContainerExecutor.java
@@ -78,7 +78,7 @@ public class TestDockerContainerExecutor {
   private int id = 0;
   private String appSubmitter;
   private String dockerUrl;
-  private String testImage = "centos";
+  private String testImage = "centos:latest";
   private String dockerExec;
   private String containerIdStr;
 



[12/50] hadoop git commit: Moving CHANGES.txt entry for MAPREDUCE-4742 to branch-2.7.

2015-03-17 Thread zjshen
Moving CHANGES.txt entry for MAPREDUCE-4742 to branch-2.7.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bd0a9ba8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bd0a9ba8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bd0a9ba8

Branch: refs/heads/YARN-2928
Commit: bd0a9ba8e3b1c75dd2fc4cc65cb00c3e31d609ce
Parents: 9d38520
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sat Mar 14 16:53:50 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sat Mar 14 16:53:50 2015 +0900

--
 hadoop-mapreduce-project/CHANGES.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bd0a9ba8/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ab6eef5..28460d3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -257,8 +257,6 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
-MAPREDUCE-4742. Fix typo in nnbench#displayUsage. (Liang Xie via ozawa)
-
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES
@@ -440,6 +438,8 @@ Release 2.7.0 - UNRELEASED
 MAPREDUCE-5657. Fix Javadoc errors caused by incorrect or illegal tags in doc
 comments. (Akira AJISAKA and Andrew Purtell via ozawa)
 
+MAPREDUCE-4742. Fix typo in nnbench#displayUsage. (Liang Xie via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES



[01/50] hadoop git commit: HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt synchronization. (Sean Busbey via yliu)

2015-03-17 Thread zjshen
Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 fb1b59600 -> 8a637914c


HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt 
synchronization. (Sean Busbey via yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a8529100
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a8529100
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a8529100

Branch: refs/heads/YARN-2928
Commit: a85291003cf3e3fd79b6addcf59d4f43dc72d356
Parents: 8212877
Author: yliu y...@apache.org
Authored: Fri Mar 13 02:25:02 2015 +0800
Committer: yliu y...@apache.org
Committed: Fri Mar 13 02:25:02 2015 +0800

--
 hadoop-common-project/hadoop-common/CHANGES.txt  |  3 +++
 .../apache/hadoop/crypto/CryptoOutputStream.java | 19 ---
 2 files changed, 15 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a8529100/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6970bad..55028cb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1097,6 +1097,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11693. Azure Storage FileSystem rename operations are throttled too
 aggressively to complete HBase WAL archiving. (Duo Xu via cnauroth)
 
+HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt
+synchronization. (Sean Busbey via yliu)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/a8529100/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
index f1ea0fc..bc09b8c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
@@ -40,6 +40,9 @@ import com.google.common.base.Preconditions;
  * padding = pos%(algorithm blocksize); 
 * <p/>
  * The underlying stream offset is maintained as state.
+ *
+ * Note that while some of this class' methods are synchronized, this is just to
+ * match the threadsafety behavior of DFSOutputStream. See HADOOP-11710.
  */
 @InterfaceAudience.Private
 @InterfaceStability.Evolving
@@ -126,7 +129,7 @@ public class CryptoOutputStream extends FilterOutputStream 
implements
* @throws IOException
*/
   @Override
-  public void write(byte[] b, int off, int len) throws IOException {
+  public synchronized void write(byte[] b, int off, int len) throws IOException {
 checkStream();
 if (b == null) {
   throw new NullPointerException();
@@ -213,14 +216,16 @@ public class CryptoOutputStream extends 
FilterOutputStream implements
   }
   
   @Override
-  public void close() throws IOException {
+  public synchronized void close() throws IOException {
 if (closed) {
   return;
 }
-
-super.close();
-freeBuffers();
-closed = true;
+try {
+  super.close();
+  freeBuffers();
+} finally {
+  closed = true;
+}
   }
   
   /**
@@ -228,7 +233,7 @@ public class CryptoOutputStream extends FilterOutputStream 
implements
* underlying stream, then do the flush.
*/
   @Override
-  public void flush() throws IOException {
+  public synchronized void flush() throws IOException {
 checkStream();
 encrypt();
 super.flush();
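The close() rewrite above moves the `closed = true` assignment into a finally block, so the stream is marked closed even if `super.close()` or `freeBuffers()` throws, and repeated `close()` calls stay no-ops. A minimal sketch of the idiom (a standalone toy class, not the Hadoop stream itself):

```java
// Toy analogue of the patched CryptoOutputStream.close(): the underlying
// close work is simulated, and "closed" is set in a finally block.
class IdempotentCloseStream {
  private boolean closed = false;
  private int underlyingCloses = 0;
  private final boolean failOnClose;

  IdempotentCloseStream(boolean failOnClose) {
    this.failOnClose = failOnClose;
  }

  public synchronized void close() {
    if (closed) {
      return;                       // later calls are no-ops
    }
    try {
      underlyingCloses++;           // stands in for super.close(); freeBuffers();
      if (failOnClose) {
        throw new RuntimeException("underlying close failed");
      }
    } finally {
      closed = true;                // set even when the underlying close throws
    }
  }

  public synchronized boolean isClosed() { return closed; }
  public synchronized int underlyingCloses() { return underlyingCloses; }
}
```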



[41/50] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.

2015-03-17 Thread zjshen
YARN-3243. CapacityScheduler should pass headroom from parent to children to 
make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/487374b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/487374b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/487374b7

Branch: refs/heads/YARN-2928
Commit: 487374b7fe0c92fc7eb1406c568952722b5d5b15
Parents: a89b087
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 10:22:15 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 10:24:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/capacity/AbstractCSQueue.java | 112 ++-
 .../scheduler/capacity/CSQueue.java |   4 +-
 .../scheduler/capacity/CapacityScheduler.java   |  33 ++-
 .../scheduler/capacity/LeafQueue.java   | 292 +++
 .../scheduler/capacity/ParentQueue.java | 140 +++--
 .../scheduler/common/fica/FiCaSchedulerApp.java |  16 +-
 .../capacity/TestApplicationLimits.java |   8 +-
 .../capacity/TestCapacityScheduler.java |  59 
 .../scheduler/capacity/TestChildQueueOrder.java |  25 +-
 .../scheduler/capacity/TestLeafQueue.java   | 142 -
 .../scheduler/capacity/TestParentQueue.java |  97 +++---
 .../scheduler/capacity/TestReservations.java| 147 +-
 13 files changed, 561 insertions(+), 517 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 82934ad..f5b72d7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -56,6 +56,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+YARN-3243. CapacityScheduler should pass headroom from parent to children
+to make sure ParentQueue obey its capacity limits. (Wangda Tan via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index d800709..4e53060 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -20,10 +20,13 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import java.io.IOException;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
 
 import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
@@ -34,6 +37,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
 import org.apache.hadoop.yarn.security.AccessType;
 import org.apache.hadoop.yarn.security.PrivilegedEntity;
 import org.apache.hadoop.yarn.security.PrivilegedEntity.EntityType;
@@ -49,6 +53,7 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import com.google.common.collect.Sets;
 
 public abstract class AbstractCSQueue implements CSQueue {
+  private static final Log LOG = LogFactory.getLog(AbstractCSQueue.class);
   
   CSQueue parent;
   final String queueName;
@@ -406,21 +411,102 @@ public abstract class AbstractCSQueue implements CSQueue 
{
 parentQ.getPreemptionDisabled());
   }
   
-  protected 
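The patch's core idea, per the commit message, is that a parent queue hands its remaining headroom down when asking a child to allocate, and the child caps itself at the smaller of that headroom and its own limit. A toy sketch under simplified, single-resource assumptions — the real CapacityScheduler works on `Resource` objects and queue capacities, and the names here are illustrative:

```java
// Single-resource toy of "pass headroom from parent to children": the
// effective room a child may use is capped by both its own limit and the
// headroom its parent handed down, so the parent cannot be overcommitted.
class HeadroomSketch {
  static long childHeadroom(long parentHeadroom, long childLimit, long childUsed) {
    long ownRoom = Math.max(0L, childLimit - childUsed);
    return Math.min(parentHeadroom, ownRoom);
  }
}
```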

[29/50] hadoop git commit: YARN-3349. Treat all exceptions as failure in TestFSRMStateStore#testFSRMStateStoreClientRetry. Contributed by Zhihai Xu.

2015-03-17 Thread zjshen
YARN-3349. Treat all exceptions as failure in 
TestFSRMStateStore#testFSRMStateStoreClientRetry. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7522a643
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7522a643
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7522a643

Branch: refs/heads/YARN-2928
Commit: 7522a643faeea2d8a8e2c7409ae60e0973e7cf38
Parents: 2681ed9
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 08:09:55 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 08:09:55 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  |  3 +++
 .../resourcemanager/recovery/TestFSRMStateStore.java | 11 +++
 2 files changed, 6 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7522a643/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 26ef7d3..b8e07a0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -772,6 +772,9 @@ Release 2.7.0 - UNRELEASED
 YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags 
in 
 doc comments. (Akira AJISAKA, Andrew Purtell, and Allen Wittenauer via 
ozawa)
 
+YARN-3349. Treat all exceptions as failure in
+TestFSRMStateStore#testFSRMStateStoreClientRetry. (Zhihai Xu via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7522a643/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
index 675d73c..d2eddd6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
@@ -100,11 +100,11 @@ public class TestFSRMStateStore extends 
RMStateStoreTestBase {
   workingDirPathURI.toString());
   conf.set(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
  "100,6000");
-  conf.setInt(YarnConfiguration.FS_RM_STATE_STORE_NUM_RETRIES, 5);
+  conf.setInt(YarnConfiguration.FS_RM_STATE_STORE_NUM_RETRIES, 8);
   conf.setLong(YarnConfiguration.FS_RM_STATE_STORE_RETRY_INTERVAL_MS,
   900L);
   this.store = new TestFileSystemRMStore(conf);
-  Assert.assertEquals(store.getNumRetries(), 5);
+  Assert.assertEquals(store.getNumRetries(), 8);
   Assert.assertEquals(store.getRetryInterval(), 900L);
   return store;
 }
@@ -277,12 +277,7 @@ public class TestFSRMStateStore extends 
RMStateStoreTestBase {
 ApplicationStateData.newInstance(111, 111, "user", null,
 RMAppState.ACCEPTED, "diagnostics", 333));
   } catch (Exception e) {
-// TODO 0 datanode exception will not be retried by dfs client, fix
-// that separately.
-if (!e.getMessage().contains("could only be replicated" +
- " to 0 nodes instead of minReplication (=1)")) {
-  assertionFailedInThread.set(true);
-}
+assertionFailedInThread.set(true);
 e.printStackTrace();
   }
 }
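The test above configures the FileSystem state store with 8 retries at a 900 ms fixed interval. As an illustration only (hypothetical names, not the RMStateStore code), a fixed-interval retry loop of that shape looks like:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Illustrative fixed-interval retry: up to numRetries retries after the
// first attempt, sleeping intervalMs between attempts, rethrowing the
// last failure when every attempt fails.
class FixedIntervalRetry {
  static <T> T call(Callable<T> op, int numRetries, long intervalMs)
      throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= numRetries; attempt++) {
      try {
        return op.call();
      } catch (IOException e) {
        last = e;
        if (attempt < numRetries) {
          Thread.sleep(intervalMs);   // wait before the next attempt
        }
      }
    }
    throw last;
  }
}
```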



hadoop git commit: YARN-3273. Improve scheduler UI to facilitate scheduling analysis and debugging. Contributed Rohith Sharmaks (cherry picked from commit 658097d6da1b1aac8e01db459f0c3b456e99652f)

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 b00b216a9 -> 15ebacf03


YARN-3273. Improve scheduler UI to facilitate scheduling analysis and 
debugging. Contributed Rohith Sharmaks
(cherry picked from commit 658097d6da1b1aac8e01db459f0c3b456e99652f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/15ebacf0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/15ebacf0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/15ebacf0

Branch: refs/heads/branch-2
Commit: 15ebacf03a858d7c10d6d77bf53e307881e4c4de
Parents: b00b216
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 21:28:58 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 21:31:20 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../yarn/server/webapp/AppAttemptBlock.java | 21 +-
 .../rmapp/attempt/RMAppAttemptMetrics.java  | 10 +++
 .../scheduler/SchedulerApplicationAttempt.java  | 10 +++
 .../scheduler/capacity/LeafQueue.java   | 22 --
 .../scheduler/capacity/UserInfo.java| 15 +++-
 .../scheduler/common/fica/FiCaSchedulerApp.java |  4 +-
 .../scheduler/fair/FairScheduler.java   |  7 +-
 .../scheduler/fifo/FifoScheduler.java   |  7 +-
 .../webapp/CapacitySchedulerPage.java   | 76 +++-
 .../webapp/MetricsOverviewTable.java| 22 ++
 .../dao/CapacitySchedulerLeafQueueInfo.java | 12 +++-
 .../webapp/dao/SchedulerInfo.java   | 44 
 .../resourcemanager/TestFifoScheduler.java  | 22 ++
 .../capacity/TestCapacityScheduler.java | 76 
 .../scheduler/fair/FairSchedulerTestBase.java   | 22 +-
 .../fair/TestContinuousScheduling.java  |  3 +
 .../scheduler/fair/TestFairScheduler.java   | 25 +--
 .../scheduler/fifo/TestFifoScheduler.java   | 28 +++-
 .../resourcemanager/webapp/TestNodesPage.java   |  2 +-
 .../webapp/TestRMWebServicesCapacitySched.java  |  2 +-
 21 files changed, 371 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/15ebacf0/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 4b81d04..56c8b0a 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -11,6 +11,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3243. CapacityScheduler should pass headroom from parent to children
 to make sure ParentQueue obey its capacity limits. (Wangda Tan via jianhe)
 
+YARN-3273. Improve scheduler UI to facilitate scheduling analysis and
+debugging. (Rohith Sharmaks via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/15ebacf0/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
index 4a82c93..eeccf0f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
@@ -172,7 +172,7 @@ public class AppAttemptBlock extends HtmlBlock {
   ._("Diagnostics Info:", appAttempt.getDiagnosticsInfo() == null ?
"" : appAttempt.getDiagnosticsInfo());
 
-html._(InfoBlock.class);
+
 
 if (exceptionWhenGetContainerReports) {
   html
@@ -183,6 +183,19 @@ public class AppAttemptBlock extends HtmlBlock {
   return;
 }
 
+// TODO need to render applicationHeadRoom value from
+// ApplicationAttemptMetrics after YARN-3284
+if (webUiType.equals(YarnWebParams.RM_WEB_UI)) {
+  if (!isApplicationInFinalState(appAttempt.getAppAttemptState())) {
+DIV<Hamlet> pdiv = html._(InfoBlock.class).div(_INFO_WRAP);
+info("Application Attempt Overview").clear();
+info("Application Attempt Metrics")._(
+"Application Attempt Headroom : ", 0);
+pdiv._();
+  }
+}
+html._(InfoBlock.class);
+
 // Container Table
 TBODY<TABLE<Hamlet>> tbody =
 html.table("#containers").thead().tr().th(".id", "Container ID")
@@ -273,4 

[20/50] hadoop git commit: HADOOP-11692. Amendment to CHANGES.txt.

2015-03-17 Thread zjshen
HADOOP-11692. Amendment to CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/03b77ede
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/03b77ede
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/03b77ede

Branch: refs/heads/YARN-2928
Commit: 03b77ede9249d7d16654257035dfc01a7a0a8c50
Parents: 3da9a97
Author: Yongjun Zhang yzh...@cloudera.com
Authored: Mon Mar 16 09:08:41 2015 -0700
Committer: Yongjun Zhang yzh...@cloudera.com
Committed: Mon Mar 16 09:08:41 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/03b77ede/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index bb08cfe..e161d7d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -449,6 +449,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+HADOOP-11692. Improve authentication failure WARN message to avoid user
+confusion. (Yongjun Zhang)
+
   OPTIMIZATIONS
 
   BUG FIXES
@@ -1088,9 +1091,6 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11686. MiniKDC cannot change ORG_NAME or ORG_DOMAIN.
 (Duo Zhang via wheat9)
 
-HADOOP-11692. Improve authentication failure WARN message to avoid user
-confusion. (Yongjun Zhang)
-
 HADOOP-11618. DelegateToFileSystem erroneously uses default FS's port in
 constructor. (Brahma Reddy Battula via gera)
 



[23/50] hadoop git commit: HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)

2015-03-17 Thread zjshen
HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf3275db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf3275db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf3275db

Branch: refs/heads/YARN-2928
Commit: bf3275dbaa99105d49520e25f5a6eadd6fd5b7ed
Parents: ed4e72a
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Mar 16 12:02:10 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Mon Mar 16 12:02:10 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  2 ++
 .../org/apache/hadoop/tracing/SpanReceiverHost.java| 13 ++---
 2 files changed, 12 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf3275db/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a43a153..aa17841 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -692,6 +692,8 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11642. Upgrade azure sdk version from 0.6.0 to 2.0.0.
 (Shashank Khandelwal and Ivan Mitic via cnauroth)
 
+HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf3275db/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
index 01ba76d..f2de0a0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
@@ -134,20 +134,27 @@ public class SpanReceiverHost implements 
TraceAdminProtocol {
 String[] receiverNames =
 config.getTrimmedStrings(SPAN_RECEIVERS_CONF_KEY);
 if (receiverNames == null || receiverNames.length == 0) {
+  if (LOG.isTraceEnabled()) {
+LOG.trace("No span receiver names found in " +
+  SPAN_RECEIVERS_CONF_KEY + ".");
+  }
   return;
 }
 // It's convenient to have each daemon log to a random trace file when
 // testing.
 if (config.get(LOCAL_FILE_SPAN_RECEIVER_PATH) == null) {
-  config.set(LOCAL_FILE_SPAN_RECEIVER_PATH,
-  getUniqueLocalTraceFileName());
+  String uniqueFile = getUniqueLocalTraceFileName();
+  config.set(LOCAL_FILE_SPAN_RECEIVER_PATH, uniqueFile);
+  if (LOG.isTraceEnabled()) {
+LOG.trace("Set " + LOCAL_FILE_SPAN_RECEIVER_PATH + " to " + uniqueFile);
+  }
 }
 for (String className : receiverNames) {
   try {
 SpanReceiver rcvr = loadInstance(className, EMPTY);
 Trace.addReceiver(rcvr);
 receivers.put(highestId++, rcvr);
-LOG.info("SpanReceiver " + className + " was loaded successfully.");
+LOG.info("Loaded SpanReceiver " + className + " successfully.");
   } catch (IOException e) {
 LOG.error("Failed to load SpanReceiver", e);
   }
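The added trace statements follow the usual commons-logging pattern: guard string concatenation behind `LOG.isTraceEnabled()` so the message is only built when trace logging is on. A self-contained analogue using `java.util.logging` (FINEST standing in for trace; the class and counter are illustrative, not Hadoop code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative trace-guard: the expensive message string is only
// constructed when the level is enabled, mirroring the
// LOG.isTraceEnabled() checks added in the patch above.
class TraceGuardDemo {
  private static final Logger LOG =
      Logger.getLogger(TraceGuardDemo.class.getName());
  static int messagesBuilt = 0;

  static String expensiveMessage(String key) {
    messagesBuilt++;                // count how often the message is built
    return "No span receiver names found in " + key + ".";
  }

  static void maybeTrace(String key) {
    if (LOG.isLoggable(Level.FINEST)) {  // analogue of LOG.isTraceEnabled()
      LOG.finest(expensiveMessage(key));
    }
  }

  static void enableTrace() {
    LOG.setLevel(Level.FINEST);     // turn the guard on for the demo
  }
}
```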



[13/50] hadoop git commit: HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Masatake Iwasaki.

2015-03-17 Thread zjshen
HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Masatake 
Iwasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7da136ec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7da136ec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7da136ec

Branch: refs/heads/YARN-2928
Commit: 7da136ecca4dafc83ef69b5d9980fa5b67ada084
Parents: bd0a9ba
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sun Mar 15 14:17:35 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sun Mar 15 14:17:35 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 +++
 .../src/site/markdown/SchedulerLoadSimulator.md   |  2 +-
 .../src/site/markdown/HadoopStreaming.md.vm   | 14 +++---
 3 files changed, 11 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da136ec/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 55028cb..e386723 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1100,6 +1100,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt
 synchronization. (Sean Busbey via yliu)
 
+HADOOP-11558. Fix dead links to doc of hadoop-tools. (Masatake Iwasaki
+via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da136ec/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
--
diff --git 
a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md 
b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
index ca179ee..2cffc86 100644
--- a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
+++ b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
@@ -43,7 +43,7 @@ The Yarn Scheduler Load Simulator (SLS) is such a tool, which 
can simulate large
 o
 The simulator will exercise the real Yarn `ResourceManager` removing the 
network factor by simulating `NodeManagers` and `ApplicationMasters` via 
handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. 
To keep tracking of scheduler behavior and performance, a scheduler wrapper 
will wrap the real scheduler.
 
-The size of the cluster and the application load can be loaded from 
configuration files, which are generated from job history files directly by 
adopting [Apache Rumen](https://hadoop.apache.org/docs/stable/rumen.html).
+The size of the cluster and the application load can be loaded from 
configuration files, which are generated from job history files directly by 
adopting [Apache Rumen](../hadoop-rumen/Rumen.html).
 
 The simulator will produce real time metrics while executing, including:
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7da136ec/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
--
diff --git 
a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm 
b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
index 0b64586..b4c5e38 100644
--- a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
+++ b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
@@ -201,7 +201,7 @@ To specify additional local temp directories use:
  -D mapred.system.dir=/tmp/system
  -D mapred.temp.dir=/tmp/temp
 
-**Note:** For more details on job configuration parameters see: 
[mapred-default.xml](./mapred-default.xml)
+**Note:** For more details on job configuration parameters see: 
[mapred-default.xml](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml)
 
 $H4 Specifying Map-Only Jobs
 
@@ -322,7 +322,7 @@ More Usage Examples
 
 $H3 Hadoop Partitioner Class
 
-Hadoop has a library class, 
[KeyFieldBasedPartitioner](../../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html),
 that is useful for many applications. This class allows the Map/Reduce 
framework to partition the map outputs based on certain key fields, not the 
whole keys. For example:
+Hadoop has a library class, 
[KeyFieldBasedPartitioner](../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html),
 that is useful for many applications. This class allows the Map/Reduce 
framework to partition the map outputs based on certain key fields, not the 
whole keys. For example:
 
 hadoop jar hadoop-streaming-${project.version}.jar \
   -D 

[05/50] hadoop git commit: HDFS-7435. PB encoding of block reports is very inefficient. Contributed by Daryn Sharp.

2015-03-17 Thread zjshen
HDFS-7435. PB encoding of block reports is very inefficient. Contributed by 
Daryn Sharp.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d324164a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d324164a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d324164a

Branch: refs/heads/YARN-2928
Commit: d324164a51a43d72c02567248bd9f0f12b244a40
Parents: f446669
Author: Kihwal Lee kih...@apache.org
Authored: Fri Mar 13 14:13:55 2015 -0500
Committer: Kihwal Lee kih...@apache.org
Committed: Fri Mar 13 14:23:37 2015 -0500

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |   3 +
 .../hadoop/hdfs/protocol/BlockListAsLongs.java  | 660 +++
 .../DatanodeProtocolClientSideTranslatorPB.java |  22 +-
 .../DatanodeProtocolServerSideTranslatorPB.java |  14 +-
 .../apache/hadoop/hdfs/protocolPB/PBHelper.java |   6 +-
 .../server/blockmanagement/BlockManager.java|  16 +-
 .../hdfs/server/datanode/BPServiceActor.java|  13 +-
 .../datanode/fsdataset/impl/FsDatasetImpl.java  |  20 +-
 .../hdfs/server/namenode/NameNodeRpcServer.java |   2 +-
 .../server/protocol/DatanodeRegistration.java   |   9 +
 .../hdfs/server/protocol/NamespaceInfo.java |  52 ++
 .../server/protocol/StorageBlockReport.java |   8 +-
 .../src/main/proto/DatanodeProtocol.proto   |   2 +
 .../hadoop-hdfs/src/main/proto/hdfs.proto   |   1 +
 .../hdfs/protocol/TestBlockListAsLongs.java | 237 +++
 .../blockmanagement/TestBlockManager.java   |   8 +-
 .../server/datanode/BlockReportTestBase.java|  27 +-
 .../server/datanode/SimulatedFSDataset.java |  11 +-
 .../TestBlockHasMultipleReplicasOnSameDN.java   |   9 +-
 .../datanode/TestDataNodeVolumeFailure.java |   4 +-
 ...TestDnRespectsBlockReportSplitThreshold.java |   2 +-
 .../extdataset/ExternalDatasetImpl.java |   2 +-
 .../server/namenode/NNThroughputBenchmark.java  |  23 +-
 .../hdfs/server/namenode/TestDeadDatanode.java  |   3 +-
 .../hdfs/server/namenode/TestFSImage.java   |   2 +
 .../TestOfflineEditsViewer.java |   9 +-
 26 files changed, 811 insertions(+), 354 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d324164a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 909182b..ac7e096 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -743,6 +743,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7491. Add incremental blockreport latency to DN metrics.
 (Ming Ma via cnauroth)
 
+HDFS-7435. PB encoding of block reports is very inefficient.
+(Daryn Sharp via kihwal)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d324164a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
index 4389714..1c89ee4 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/BlockListAsLongs.java
@@ -17,342 +17,458 @@
  */
 package org.apache.hadoop.hdfs.protocol;
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
-import java.util.Random;
 
-import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs.BlockReportReplica;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
 import org.apache.hadoop.hdfs.server.datanode.Replica;
+import com.google.common.base.Preconditions;
+import com.google.protobuf.ByteString;
+import com.google.protobuf.CodedInputStream;
+import com.google.protobuf.CodedOutputStream;
 
-/**
- * This class provides an interface for accessing list of blocks that
- * has been implemented as long[].
- * This class is useful for block report. Rather than send block reports
- * as a Block[] we can send it as a long[].
- *
- * The structure of the array is as follows:
- * 0: the length of the finalized replica list;
- * 1: the length of the under-construction replica list;
- 

[14/50] hadoop git commit: Revert HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by Masatake Iwasaki.

2015-03-17 Thread zjshen
Revert HADOOP-11558. Fix dead links to doc of hadoop-tools. Contributed by 
Masatake Iwasaki.

This reverts commit 7da136ecca4dafc83ef69b5d9980fa5b67ada084.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b308a8d1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b308a8d1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b308a8d1

Branch: refs/heads/YARN-2928
Commit: b308a8d181416b5fe6bf77756e5f2c7b8fbd793c
Parents: 7da136e
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Sun Mar 15 14:27:38 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Sun Mar 15 14:27:38 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt   |  3 ---
 .../src/site/markdown/SchedulerLoadSimulator.md   |  2 +-
 .../src/site/markdown/HadoopStreaming.md.vm   | 14 +++---
 3 files changed, 8 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b308a8d1/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e386723..55028cb 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1100,9 +1100,6 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11710. Make CryptoOutputStream behave like DFSOutputStream wrt
 synchronization. (Sean Busbey via yliu)
 
-HADOOP-11558. Fix dead links to doc of hadoop-tools. (Masatake Iwasaki
-via ozawa)
-
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b308a8d1/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
--
diff --git 
a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md 
b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
index 2cffc86..ca179ee 100644
--- a/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
+++ b/hadoop-tools/hadoop-sls/src/site/markdown/SchedulerLoadSimulator.md
@@ -43,7 +43,7 @@ The Yarn Scheduler Load Simulator (SLS) is such a tool, which 
can simulate large
 The simulator will exercise the real Yarn `ResourceManager` removing the 
network factor by simulating `NodeManagers` and `ApplicationMasters` via 
handling and dispatching `NM`/`AMs` heartbeat events from within the same JVM. 
To keep tracking of scheduler behavior and performance, a scheduler wrapper 
will wrap the real scheduler.
 
-The size of the cluster and the application load can be loaded from 
configuration files, which are generated from job history files directly by 
adopting [Apache Rumen](../hadoop-rumen/Rumen.html).
+The size of the cluster and the application load can be loaded from 
configuration files, which are generated from job history files directly by 
adopting [Apache Rumen](https://hadoop.apache.org/docs/stable/rumen.html).
 
 The simulator will produce real time metrics while executing, including:
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b308a8d1/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
--
diff --git 
a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm 
b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
index b4c5e38..0b64586 100644
--- a/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
+++ b/hadoop-tools/hadoop-streaming/src/site/markdown/HadoopStreaming.md.vm
@@ -201,7 +201,7 @@ To specify additional local temp directories use:
  -D mapred.system.dir=/tmp/system
  -D mapred.temp.dir=/tmp/temp
 
-**Note:** For more details on job configuration parameters see: 
[mapred-default.xml](../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml)
+**Note:** For more details on job configuration parameters see: 
[mapred-default.xml](./mapred-default.xml)
 
 $H4 Specifying Map-Only Jobs
 
@@ -322,7 +322,7 @@ More Usage Examples
 
 $H3 Hadoop Partitioner Class
 
-Hadoop has a library class, 
[KeyFieldBasedPartitioner](../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html),
 that is useful for many applications. This class allows the Map/Reduce 
framework to partition the map outputs based on certain key fields, not the 
whole keys. For example:
+Hadoop has a library class, 
[KeyFieldBasedPartitioner](../../api/org/apache/hadoop/mapred/lib/KeyFieldBasedPartitioner.html),
 that is useful for many applications. This class allows the Map/Reduce 
framework to partition the map outputs based on certain key fields, not the 
whole keys. For example:
 
 hadoop jar 
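The partitioning idea described above — hash only the first few fields of a separator-delimited key so records sharing that prefix reach the same reducer — can be sketched as follows. This is a hypothetical helper illustrating the concept, not Hadoop's actual `KeyFieldBasedPartitioner` implementation:

```java
// Sketch of key-prefix partitioning, the idea behind
// KeyFieldBasedPartitioner: only the first numFields of a
// separator-delimited key feed the hash, so keys sharing that
// prefix land in the same partition. Hypothetical helper code.
public class KeyFieldPartitionSketch {
    static int partition(String key, String sep, int numFields, int numPartitions) {
        String[] parts = key.split(java.util.regex.Pattern.quote(sep), -1);
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < Math.min(numFields, parts.length); i++) {
            if (i > 0) prefix.append(sep);
            prefix.append(parts[i]);
        }
        // Mask the sign bit so the modulo result is non-negative.
        return (prefix.toString().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

With two key fields, `a.b.c` and `a.b.x` share the prefix `a.b` and therefore land in the same partition.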

[49/50] hadoop git commit: YARN-3039. Implemented the app-level timeline aggregator discovery service. Contributed by Junping Du.

2015-03-17 Thread zjshen
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a637914/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
new file mode 100644
index 000..eb7beef
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/ReportNewAggregatorsInfoRequestPBImpl.java
@@ -0,0 +1,142 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.AppAggregatorsMapProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.ReportNewAggregatorsInfoRequestProto;
+import 
org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos.ReportNewAggregatorsInfoRequestProtoOrBuilder;
+import 
org.apache.hadoop.yarn.server.api.protocolrecords.ReportNewAggregatorsInfoRequest;
+import org.apache.hadoop.yarn.server.api.records.AppAggregatorsMap;
+import 
org.apache.hadoop.yarn.server.api.records.impl.pb.AppAggregatorsMapPBImpl;
+
+public class ReportNewAggregatorsInfoRequestPBImpl extends
+ReportNewAggregatorsInfoRequest {
+
+  ReportNewAggregatorsInfoRequestProto proto = 
+  ReportNewAggregatorsInfoRequestProto.getDefaultInstance();
+  
+  ReportNewAggregatorsInfoRequestProto.Builder builder = null;
+  boolean viaProto = false;
+
+  private List<AppAggregatorsMap> aggregatorsList = null;
+
+  public ReportNewAggregatorsInfoRequestPBImpl() {
+builder = ReportNewAggregatorsInfoRequestProto.newBuilder();
+  }
+
+  public ReportNewAggregatorsInfoRequestPBImpl(
+  ReportNewAggregatorsInfoRequestProto proto) {
+this.proto = proto;
+viaProto = true;
+  }
+
+  public ReportNewAggregatorsInfoRequestProto getProto() {
+mergeLocalToProto();
+proto = viaProto ? proto : builder.build();
+viaProto = true;
+return proto;
+  }
+  
+  @Override
+  public int hashCode() {
+return getProto().hashCode();
+  }
+
+  @Override
+  public boolean equals(Object other) {
+if (other == null)
+  return false;
+if (other.getClass().isAssignableFrom(this.getClass())) {
+  return this.getProto().equals(this.getClass().cast(other).getProto());
+}
+return false;
+  }
+
+  private void mergeLocalToProto() {
+if (viaProto)
+  maybeInitBuilder();
+mergeLocalToBuilder();
+proto = builder.build();
+viaProto = true;
+  }
+
+  private void mergeLocalToBuilder() {
+if (aggregatorsList != null) {
+  addLocalAggregatorsToProto();
+}
+  }
+  
+  private void maybeInitBuilder() {
+if (viaProto || builder == null) {
+  builder = ReportNewAggregatorsInfoRequestProto.newBuilder(proto);
+}
+viaProto = false;
+  }
+
+  private void addLocalAggregatorsToProto() {
+maybeInitBuilder();
+builder.clearAppAggregators();
+List<AppAggregatorsMapProto> protoList =
+new ArrayList<AppAggregatorsMapProto>();
+for (AppAggregatorsMap m : this.aggregatorsList) {
+  protoList.add(convertToProtoFormat(m));
+}
+builder.addAllAppAggregators(protoList);
+  }
+
+  private void initLocalAggregatorsList() {
+ReportNewAggregatorsInfoRequestProtoOrBuilder p = viaProto ? proto : 
builder;
+List<AppAggregatorsMapProto> aggregatorsList =
+p.getAppAggregatorsList();
+this.aggregatorsList = new ArrayList<AppAggregatorsMap>();
+for (AppAggregatorsMapProto m : aggregatorsList) {
+  this.aggregatorsList.add(convertFromProtoFormat(m));
+}
+  }
+
+  @Override
+  public 
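The PBImpl class above follows YARN's lazy "viaProto" merge pattern: local mutations accumulate outside the immutable proto and are folded in only when `getProto()` is called. A minimal plain-Java sketch of that pattern, where `Msg` and `Msg.Builder` are hypothetical stand-ins for a generated protobuf message and its builder:

```java
// Plain-Java sketch of the "viaProto" lazy-merge pattern used by the
// YARN PBImpl record classes. Msg/Msg.Builder are hypothetical
// stand-ins for generated protobuf types.
import java.util.ArrayList;
import java.util.List;

public class ViaProtoSketch {
    static class Msg {
        final List<String> items;
        Msg(List<String> items) { this.items = items; }
        Builder toBuilder() { return new Builder(new ArrayList<>(items)); }
        static class Builder {
            final List<String> items;
            Builder(List<String> items) { this.items = items; }
            Msg build() { return new Msg(new ArrayList<>(items)); }
        }
    }

    Msg proto = new Msg(new ArrayList<>()); // immutable current message
    Msg.Builder builder = null;             // mutable staging area
    boolean viaProto = false;               // true => proto is up to date
    List<String> local = null;              // unsynced local state

    void add(String item) {
        if (local == null) local = new ArrayList<>();
        local.add(item);                    // mutate locally, merge lazily
    }

    Msg getProto() {
        // Mirrors mergeLocalToProto(): re-init the builder if needed,
        // fold local changes in, rebuild, and mark the proto current.
        if (viaProto || builder == null) builder = proto.toBuilder();
        if (local != null) builder.items.addAll(local);
        local = null;
        proto = builder.build();
        viaProto = true;
        return proto;
    }
}
```

The payoff is that repeated reads never copy, and writes never touch the proto until a caller actually needs the serialized form.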

hadoop git commit: YARN-3273. Improve scheduler UI to facilitate scheduling analysis and debugging. Contributed Rohith Sharmaks

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 3bc72cc16 -> 658097d6d


YARN-3273. Improve scheduler UI to facilitate scheduling analysis and 
debugging. Contributed Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/658097d6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/658097d6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/658097d6

Branch: refs/heads/trunk
Commit: 658097d6da1b1aac8e01db459f0c3b456e99652f
Parents: 3bc72cc
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 21:28:58 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 21:30:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  3 +
 .../yarn/server/webapp/AppAttemptBlock.java | 21 +-
 .../rmapp/attempt/RMAppAttemptMetrics.java  | 10 +++
 .../scheduler/SchedulerApplicationAttempt.java  | 10 +++
 .../scheduler/capacity/LeafQueue.java   | 22 --
 .../scheduler/capacity/UserInfo.java| 15 +++-
 .../scheduler/common/fica/FiCaSchedulerApp.java |  4 +-
 .../scheduler/fair/FairScheduler.java   |  7 +-
 .../scheduler/fifo/FifoScheduler.java   |  7 +-
 .../webapp/CapacitySchedulerPage.java   | 76 +++-
 .../webapp/MetricsOverviewTable.java| 22 ++
 .../dao/CapacitySchedulerLeafQueueInfo.java | 12 +++-
 .../webapp/dao/SchedulerInfo.java   | 44 
 .../resourcemanager/TestFifoScheduler.java  | 22 ++
 .../capacity/TestCapacityScheduler.java | 76 
 .../scheduler/fair/FairSchedulerTestBase.java   | 22 +-
 .../fair/TestContinuousScheduling.java  |  3 +
 .../scheduler/fair/TestFairScheduler.java   | 25 +--
 .../scheduler/fifo/TestFifoScheduler.java   | 28 +++-
 .../resourcemanager/webapp/TestNodesPage.java   |  2 +-
 .../webapp/TestRMWebServicesCapacitySched.java  |  2 +-
 21 files changed, 371 insertions(+), 62 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/658097d6/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c869113..e0ed568 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -59,6 +59,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3243. CapacityScheduler should pass headroom from parent to children
 to make sure ParentQueue obey its capacity limits. (Wangda Tan via jianhe)
 
+YARN-3273. Improve scheduler UI to facilitate scheduling analysis and
+debugging. (Rohith Sharmaks via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/658097d6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
index 4a82c93..eeccf0f 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
@@ -172,7 +172,7 @@ public class AppAttemptBlock extends HtmlBlock {
   ._("Diagnostics Info:", appAttempt.getDiagnosticsInfo() == null ? "" :
 appAttempt.getDiagnosticsInfo());
 
-html._(InfoBlock.class);
+
 
 if (exceptionWhenGetContainerReports) {
   html
@@ -183,6 +183,19 @@ public class AppAttemptBlock extends HtmlBlock {
   return;
 }
 
+// TODO need to render applicationHeadRoom value from
+// ApplicationAttemptMetrics after YARN-3284
+if (webUiType.equals(YarnWebParams.RM_WEB_UI)) {
+  if (!isApplicationInFinalState(appAttempt.getAppAttemptState())) {
+DIV<Hamlet> pdiv = html._(InfoBlock.class).div(_INFO_WRAP);
+info("Application Attempt Overview").clear();
+info("Application Attempt Metrics")._(
+"Application Attempt Headroom : ", 0);
+pdiv._();
+  }
+}
+html._(InfoBlock.class);
+
 // Container Table
 TBODY<TABLE<Hamlet>> tbody =
 html.table("#containers").thead().tr().th(".id", "Container ID")
@@ -273,4 +286,10 @@ public class AppAttemptBlock extends HtmlBlock {
 }
 return 

hadoop git commit: HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by Xiaoyu Yao)

2015-03-17 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 47e6fc2bf -> 26c35438f


HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by 
Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/26c35438
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/26c35438
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/26c35438

Branch: refs/heads/branch-2.7
Commit: 26c35438f3681c157aae86a5de135c03315969f9
Parents: 47e6fc2
Author: Arpit Agarwal a...@apache.org
Authored: Tue Mar 17 21:29:19 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Tue Mar 17 21:30:13 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 .../server/datanode/TestDataNodeVolumeFailureReporting.java | 5 -
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/26c35438/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2aff3b3..5c486b0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -849,6 +849,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
 fail to tell the DFSClient about it because of a network error (cmccabe)
 
+HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Xiaoyu Yao
+via Arpit Agarwal)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/26c35438/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
index 788ddb3..9842f25 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
@@ -91,6 +91,7 @@ public class TestDataNodeVolumeFailureReporting {
 // been simulated by denying execute access.  This is based on the maximum
 // number of datanodes and the maximum number of storages per data node 
used
 // throughout the tests in this suite.
+assumeTrue(!Path.WINDOWS);
 int maxDataNodes = 3;
 int maxStoragesPerDataNode = 4;
for (int i = 0; i < maxDataNodes; i++) {
@@ -100,7 +101,9 @@ public class TestDataNodeVolumeFailureReporting {
   }
 }
 IOUtils.cleanup(LOG, fs);
-cluster.shutdown();
+if (cluster != null) {
+  cluster.shutdown();
+}
   }
 
   /**
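The HDFS-7946 fix above encodes a general teardown rule: when setup may bail out early (here, skipped on Windows via `assumeTrue`), teardown must null-check every resource before releasing it. A minimal sketch outside JUnit, with a hypothetical `MiniCluster` standing in for `MiniDFSCluster`:

```java
// Sketch of the null-guarded teardown pattern from HDFS-7946.
// MiniCluster is a hypothetical stand-in for MiniDFSCluster.
public class TeardownSketch {
    static class MiniCluster {
        boolean running = true;
        void shutdown() { running = false; }
    }

    static MiniCluster cluster; // stays null if setup was skipped

    static void tearDown() {
        // Guard against NPE when setup bailed out (e.g. on Windows).
        if (cluster != null) {
            cluster.shutdown();
        }
    }

    public static void main(String[] args) {
        tearDown();                    // cluster == null: must not throw
        cluster = new MiniCluster();
        tearDown();                    // normal path: shuts the cluster down
    }
}
```

Without the guard, a skipped setup leaves `cluster` null and the unconditional `cluster.shutdown()` turns a clean skip into a spurious NPE.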



[1/2] hadoop git commit: HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by Xiaoyu Yao)

2015-03-17 Thread arp
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 15ebacf03 -> 7d0f84bd8
  refs/heads/trunk 658097d6d -> 5b322c6a8


HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by 
Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5b322c6a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5b322c6a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5b322c6a

Branch: refs/heads/trunk
Commit: 5b322c6a823208bbc64698379340343a72e8160a
Parents: 658097d
Author: Arpit Agarwal a...@apache.org
Authored: Tue Mar 17 21:29:19 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Tue Mar 17 21:34:31 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 .../server/datanode/TestDataNodeVolumeFailureReporting.java | 5 -
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5b322c6a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3e11356..db8741c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1169,6 +1169,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7886. Fix TestFileTruncate falures. (Plamen Jeliazkov and shv)
 
+HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Xiaoyu Yao
+via Arpit Agarwal)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5b322c6a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
index 788ddb3..9842f25 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
@@ -91,6 +91,7 @@ public class TestDataNodeVolumeFailureReporting {
 // been simulated by denying execute access.  This is based on the maximum
 // number of datanodes and the maximum number of storages per data node 
used
 // throughout the tests in this suite.
+assumeTrue(!Path.WINDOWS);
 int maxDataNodes = 3;
 int maxStoragesPerDataNode = 4;
for (int i = 0; i < maxDataNodes; i++) {
@@ -100,7 +101,9 @@ public class TestDataNodeVolumeFailureReporting {
   }
 }
 IOUtils.cleanup(LOG, fs);
-cluster.shutdown();
+if (cluster != null) {
+  cluster.shutdown();
+}
   }
 
   /**



[2/2] hadoop git commit: HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by Xiaoyu Yao)

2015-03-17 Thread arp
HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Contributed by 
Xiaoyu Yao)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d0f84bd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d0f84bd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d0f84bd

Branch: refs/heads/branch-2
Commit: 7d0f84bd8db55a331a20ac9c67ee6a94fb2d4cbd
Parents: 15ebacf
Author: Arpit Agarwal a...@apache.org
Authored: Tue Mar 17 21:29:19 2015 -0700
Committer: Arpit Agarwal a...@apache.org
Committed: Tue Mar 17 21:34:35 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 3 +++
 .../server/datanode/TestDataNodeVolumeFailureReporting.java | 5 -
 2 files changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d0f84bd/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 08d58b1..e95f9df 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -866,6 +866,9 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-7886. Fix TestFileTruncate falures. (Plamen Jeliazkov and shv)
 
+HDFS-7946. TestDataNodeVolumeFailureReporting NPE on Windows. (Xiaoyu Yao
+via Arpit Agarwal)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d0f84bd/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
index 788ddb3..9842f25 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureReporting.java
@@ -91,6 +91,7 @@ public class TestDataNodeVolumeFailureReporting {
 // been simulated by denying execute access.  This is based on the maximum
 // number of datanodes and the maximum number of storages per data node 
used
 // throughout the tests in this suite.
+assumeTrue(!Path.WINDOWS);
 int maxDataNodes = 3;
 int maxStoragesPerDataNode = 4;
for (int i = 0; i < maxDataNodes; i++) {
@@ -100,7 +101,9 @@ public class TestDataNodeVolumeFailureReporting {
   }
 }
 IOUtils.cleanup(LOG, fs);
-cluster.shutdown();
+if (cluster != null) {
+  cluster.shutdown();
+}
   }
 
   /**



hadoop git commit: Backport part of YARN-3273 to rename CapacitySchedulerLeafQueueInfo#aMResourceLimit to AMResourceLimit. Contributed by Rohith Sharmaks

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 26c35438f -> 54e63c52c


Backport part of YARN-3273 to rename 
CapacitySchedulerLeafQueueInfo#aMResourceLimit to AMResourceLimit. Contributed 
by Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/54e63c52
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/54e63c52
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/54e63c52

Branch: refs/heads/branch-2.7
Commit: 54e63c52c9741a59c9b1d1efe0814496b595df18
Parents: 26c3543
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 22:05:23 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 22:08:10 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt| 4 
 .../webapp/dao/CapacitySchedulerLeafQueueInfo.java | 6 +++---
 .../scheduler/capacity/TestApplicationLimits.java  | 2 +-
 3 files changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/54e63c52/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 0cdc7c4..fbd9c76 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -331,6 +331,10 @@ Release 2.7.0 - UNRELEASED
 YARN-2854. Updated the documentation of the timeline service and the 
generic
 history service. (Naganarasimha G R via zjshen)
 
+Backport part of YARN-3273 to rename
+CapacitySchedulerLeafQueueInfo#aMResourceLimit to AMResourceLimit.
+(Rohith via jianhe)
+
   OPTIMIZATIONS
 
 YARN-2990. FairScheduler's delay-scheduling always waits for node-local 
and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54e63c52/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
index a8b0d32..7a3a0be 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/CapacitySchedulerLeafQueueInfo.java
@@ -35,7 +35,7 @@ public class CapacitySchedulerLeafQueueInfo extends 
CapacitySchedulerQueueInfo {
   protected int userLimit;
   protected UsersInfo users; // To add another level in the XML
   protected float userLimitFactor;
-  protected ResourceInfo aMResourceLimit;
+  protected ResourceInfo AMResourceLimit;
   protected ResourceInfo userAMResourceLimit;
   protected boolean preemptionDisabled;
 
@@ -52,7 +52,7 @@ public class CapacitySchedulerLeafQueueInfo extends 
CapacitySchedulerQueueInfo {
 userLimit = q.getUserLimit();
 users = new UsersInfo(q.getUsers());
 userLimitFactor = q.getUserLimitFactor();
-aMResourceLimit = new ResourceInfo(q.getAMResourceLimit());
+AMResourceLimit = new ResourceInfo(q.getAMResourceLimit());
 userAMResourceLimit = new ResourceInfo(q.getUserAMResourceLimit());
 preemptionDisabled = q.getPreemptionDisabled();
   }
@@ -91,7 +91,7 @@ public class CapacitySchedulerLeafQueueInfo extends 
CapacitySchedulerQueueInfo {
   }
   
   public ResourceInfo getAMResourceLimit() {
-return aMResourceLimit;
+return AMResourceLimit;
   }
   
   public ResourceInfo getUserAMResourceLimit() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/54e63c52/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
index 8cad057..8e0ccf0 
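The rename in the commit above matters because the webapp dao classes are field-serialized: the declared field name, not the getter, becomes the element key that REST clients see, so `aMResourceLimit` vs `AMResourceLimit` is an observable API difference. A tiny reflection-based sketch of field-driven serialization; `QueueInfo` is a hypothetical stand-in for `CapacitySchedulerLeafQueueInfo`, and the value string is made up:

```java
// Sketch: a field-access binder derives keys from declared field
// names, so renaming the field renames the serialized element.
import java.lang.reflect.Field;

public class FieldNameSketch {
    static class QueueInfo {
        // Renamed field from YARN-3273; value is illustrative only.
        protected String AMResourceLimit = "<memory:8192, vCores:1>";
    }

    // Emits key=value pairs keyed by declared field names, the way a
    // field-access binder names elements.
    static String serialize(Object o) {
        StringBuilder sb = new StringBuilder();
        try {
            for (Field f : o.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                sb.append(f.getName()).append('=').append(f.get(o)).append(';');
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(serialize(new QueueInfo()));
    }
}
```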

hadoop git commit: HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in hadoop-tools. Contributed by Akira AJISAKA.

2015-03-17 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 861cc0509 -> 51c374ac1


HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in 
hadoop-tools. Contributed by Akira AJISAKA.

(cherry picked from commit ef9946cd52d54200c658987c1dbc3e6fce133f77)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/51c374ac
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/51c374ac
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/51c374ac

Branch: refs/heads/branch-2.7
Commit: 51c374ac1983239351c2d37eb6a9fd7332a35884
Parents: 861cc05
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 16:09:21 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 16:09:54 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/record/Buffer.java   |  8 +++
 .../java/org/apache/hadoop/record/Utils.java|  8 +++
 .../java/org/apache/hadoop/ant/DfsTask.java |  6 ++---
 .../org/apache/hadoop/fs/s3/S3FileSystem.java   |  4 +---
 .../hadoop/fs/s3a/S3AFastOutputStream.java  |  4 ++--
 .../hadoop/fs/s3native/NativeS3FileSystem.java  | 24 
 .../fs/azure/AzureNativeFileSystemStore.java| 22 ++
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 +
 .../hadoop/tools/CopyListingFileStatus.java |  8 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  2 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   |  4 ++--
 12 files changed, 51 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/51c374ac/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 37f4f3e..99cdf5c 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -670,6 +670,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support 
FreeBSD
 and Solaris in addition to Linux. (Kiran Kumar M R via cnauroth)
 
+HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
+tags in hadoop-tools. (Akira AJISAKA via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51c374ac/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
index 50cc1a1..737d63d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
@@ -25,10 +25,10 @@ import org.apache.hadoop.classification.InterfaceStability;
 
 /**
  * A byte sequence that is used as a Java native type for buffer.
- * It is resizable and distinguishes between the count of the seqeunce and
+ * It is resizable and distinguishes between the count of the sequence and
  * the current capacity.
  * 
- * @deprecated Replaced by <a href="http://hadoop.apache.org/avro/">Avro</a>.
+ * @deprecated Replaced by <a href="http://avro.apache.org/">Avro</a>.
  */
 @Deprecated
 @InterfaceAudience.Public
@@ -124,7 +124,7 @@ public class Buffer implements Comparable, Cloneable {
   
   /**
* Change the capacity of the backing storage.
-   * The data is preserved if newCapacity >= getCount().
+   * The data is preserved if newCapacity {@literal >=} getCount().
* @param newCapacity The new capacity in bytes.
*/
   public void setCapacity(int newCapacity) {
@@ -162,7 +162,7 @@ public class Buffer implements Comparable, Cloneable {
   public void truncate() {
 setCapacity(count);
   }
-  
+
   /**
* Append specified bytes to the buffer.
*
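The `{@literal}` changes above exist because JDK 8's doclint rejects bare `<` and `>` in javadoc as malformed HTML, failing the build. A standalone sketch of the corrected comment style; the `Buffer`-like class below is a simplified stand-in, not the real `org.apache.hadoop.record.Buffer`:

```java
/**
 * Simplified stand-in illustrating the HADOOP-11720 style: relational
 * operators in javadoc are wrapped in {@literal} so JDK 8 doclint does
 * not flag them as broken HTML.
 */
public class LiteralSketch {
    private byte[] data = new byte[0];
    private int count = 0;

    /**
     * Change the capacity of the backing storage.
     * The data is preserved if newCapacity {@literal >=} getCount().
     * @param newCapacity the new capacity in bytes
     */
    public void setCapacity(int newCapacity) {
        if (newCapacity != data.length) {
            byte[] next = new byte[newCapacity];
            System.arraycopy(data, 0, next, 0, Math.min(count, newCapacity));
            data = next;
            count = Math.min(count, newCapacity);
        }
    }

    public int getCount() { return count; }
}
```

`{@code ...}` works too and additionally renders in monospace; `{@literal}` is the lighter choice when only escaping is wanted.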

http://git-wip-us.apache.org/repos/asf/hadoop/blob/51c374ac/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
index d5be59c..59e2080 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
@@ -28,9 +28,9 @@ import org.apache.hadoop.io.WritableComparator;
 import 

hadoop git commit: HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in hadoop-tools. Contributed by Akira AJISAKA.

2015-03-17 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk bb243cea9 -> ef9946cd5


HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in 
hadoop-tools. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef9946cd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef9946cd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef9946cd

Branch: refs/heads/trunk
Commit: ef9946cd52d54200c658987c1dbc3e6fce133f77
Parents: bb243ce
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 16:09:21 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 16:09:21 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/ant/DfsTask.java |  6 ++---
 .../org/apache/hadoop/fs/s3/S3FileSystem.java   |  4 +---
 .../hadoop/fs/s3a/S3AFastOutputStream.java  |  4 ++--
 .../hadoop/fs/s3native/NativeS3FileSystem.java  | 24 
 .../fs/azure/AzureNativeFileSystemStore.java| 22 ++
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 +
 .../hadoop/tools/CopyListingFileStatus.java |  8 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  2 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   |  4 ++--
 .../java/org/apache/hadoop/record/Buffer.java   |  8 +++
 .../java/org/apache/hadoop/record/Utils.java|  8 +++
 12 files changed, 51 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 2e04cc1..3817054 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1108,6 +1108,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support 
FreeBSD
 and Solaris in addition to Linux. (Kiran Kumar M R via cnauroth)
 
+HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
+tags in hadoop-tools. (Akira AJISAKA via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
--
diff --git 
a/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java 
b/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
index 78cb360..9d0b3a4 100644
--- a/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
+++ b/hadoop-tools/hadoop-ant/src/main/java/org/apache/hadoop/ant/DfsTask.java
@@ -41,8 +41,8 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 public class DfsTask extends Task {
 
   /**
-   * Default sink for {@link java.lang.System.out System.out}
-   * and {@link java.lang.System.err System.err}.
+   * Default sink for {@link java.lang.System#out}
+   * and {@link java.lang.System#err}.
*/
   private static final OutputStream nullOut = new OutputStream() {
   public void write(int b){ /* ignore */ }
@@ -171,7 +171,7 @@ public class DfsTask extends Task {
   }
 
   /**
-   * Invoke {@link org.apache.hadoop.fs.FsShell#doMain FsShell.doMain} after a
+   * Invoke {@link org.apache.hadoop.fs.FsShell#main} after a
* few cursory checks of the configuration.
*/
   public void execute() throws BuildException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9946cd/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
--
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
index dda3cf6..8bdfe9a 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3/S3FileSystem.java
@@ -44,10 +44,9 @@ import org.apache.hadoop.io.retry.RetryProxy;
 import org.apache.hadoop.util.Progressable;
 
 /**
- * <p>
  * A block-based {@link FileSystem} backed by
  * <a href="http://aws.amazon.com/s3">Amazon S3</a>.
- * </p>
+ *
  * @see NativeS3FileSystem
  */
 @InterfaceAudience.Public
@@ -70,7 +69,6 @@ public class S3FileSystem extends FileSystem {
 
   /**
    * Return the protocol scheme for the FileSystem.
-   * <p/>
    *
    * @return <code>s3</code>
    */


hadoop git commit: MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed by Amir Sanjar.

2015-03-17 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/trunk 48c2db34e -> e5370477c


MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed 
by Amir Sanjar.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e5370477
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e5370477
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e5370477

Branch: refs/heads/trunk
Commit: e5370477c2d00745e695507ecfdf86de59c5f5b9
Parents: 48c2db3
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 14:01:15 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 14:11:54 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5370477/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index b5baf51..3936c9b 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -253,6 +253,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement.
+(Amir Sanjar via harsh)
+
 MAPREDUCE-6100. replace mapreduce.job.credentials.binary with
 MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability.
 (Zhihai Xu via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e5370477/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
--
diff --git 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
index cd55483..4e85ce2 100644
--- 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
+++ 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
@@ -30,8 +30,6 @@ import java.util.Set;
 
 import org.junit.Test;
 
-import com.sun.tools.javac.code.Attribute.Array;
-
 public class TestRandomAlgorithm {
   private static final int[][] parameters = new int[][] {
 {5, 1, 1}, 



hadoop git commit: MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed by Amir Sanjar.

2015-03-17 Thread harsh
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 991ac04af -> c58786794


MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement. Contributed 
by Amir Sanjar.

(cherry picked from commit 75e4670408a058efa95eaa768fedbe614008658f)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5878679
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5878679
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5878679

Branch: refs/heads/branch-2
Commit: c58786794b2b3fab71f0100709c5851248223556
Parents: 991ac04
Author: Harsh J ha...@cloudera.com
Authored: Tue Mar 17 14:01:15 2015 +0530
Committer: Harsh J ha...@cloudera.com
Committed: Tue Mar 17 14:13:23 2015 +0530

--
 hadoop-mapreduce-project/CHANGES.txt  | 3 +++
 .../org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java | 2 --
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5878679/hadoop-mapreduce-project/CHANGES.txt
--
diff --git a/hadoop-mapreduce-project/CHANGES.txt 
b/hadoop-mapreduce-project/CHANGES.txt
index ec0e49d..abcfe8c 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -8,6 +8,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+MAPREDUCE-4653. TestRandomAlgorithm has an unused import statement.
+(Amir Sanjar via harsh)
+
 MAPREDUCE-6100. replace mapreduce.job.credentials.binary with
 MRJobConfig.MAPREDUCE_JOB_CREDENTIALS_BINARY for better readability.
 (Zhihai Xu via harsh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5878679/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
--
diff --git 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
index cd55483..4e85ce2 100644
--- 
a/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
+++ 
b/hadoop-tools/hadoop-gridmix/src/test/java/org/apache/hadoop/mapred/gridmix/TestRandomAlgorithm.java
@@ -30,8 +30,6 @@ import java.util.Set;
 
 import org.junit.Test;
 
-import com.sun.tools.javac.code.Attribute.Array;
-
 public class TestRandomAlgorithm {
   private static final int[][] parameters = new int[][] {
 {5, 1, 1}, 



hadoop git commit: HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in hadoop-tools. Contributed by Akira AJISAKA.

2015-03-17 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 bb828565f -> 77297017d


HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal tags in 
hadoop-tools. Contributed by Akira AJISAKA.

(cherry picked from commit ef9946cd52d54200c658987c1dbc3e6fce133f77)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/77297017
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/77297017
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/77297017

Branch: refs/heads/branch-2
Commit: 77297017d88ff795622619375445b8aa9ee0c95d
Parents: bb82856
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 16:09:21 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 16:09:38 2015 +0900

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../java/org/apache/hadoop/record/Buffer.java   |  8 +++
 .../java/org/apache/hadoop/record/Utils.java|  8 +++
 .../java/org/apache/hadoop/ant/DfsTask.java |  6 ++---
 .../org/apache/hadoop/fs/s3/S3FileSystem.java   |  4 +---
 .../hadoop/fs/s3a/S3AFastOutputStream.java  |  4 ++--
 .../hadoop/fs/s3native/NativeS3FileSystem.java  | 24 
 .../fs/azure/AzureNativeFileSystemStore.java| 22 ++
 .../hadoop/fs/azure/NativeAzureFileSystem.java  | 16 +
 .../hadoop/tools/CopyListingFileStatus.java |  8 +++
 .../apache/hadoop/tools/SimpleCopyListing.java  |  2 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   |  4 ++--
 12 files changed, 51 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/77297017/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a1d2125..7f47197 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -697,6 +697,9 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11638. OpensslSecureRandom.c pthreads_thread_id should support 
FreeBSD
 and Solaris in addition to Linux. (Kiran Kumar M R via cnauroth)
 
+HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
+tags in hadoop-tools. (Akira AJISAKA via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77297017/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
index 50cc1a1..737d63d 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Buffer.java
@@ -25,10 +25,10 @@ import org.apache.hadoop.classification.InterfaceStability;
 
 /**
  * A byte sequence that is used as a Java native type for buffer.
- * It is resizable and distinguishes between the count of the seqeunce and
+ * It is resizable and distinguishes between the count of the sequence and
  * the current capacity.
  * 
- * @deprecated Replaced by <a href="http://hadoop.apache.org/avro/">Avro</a>.
+ * @deprecated Replaced by <a href="http://avro.apache.org/">Avro</a>.
  */
 @Deprecated
 @InterfaceAudience.Public
@@ -124,7 +124,7 @@ public class Buffer implements Comparable, Cloneable {
   
   /**
* Change the capacity of the backing storage.
-   * The data is preserved if newCapacity >= getCount().
+   * The data is preserved if newCapacity {@literal >=} getCount().
* @param newCapacity The new capacity in bytes.
*/
   public void setCapacity(int newCapacity) {
@@ -162,7 +162,7 @@ public class Buffer implements Comparable, Cloneable {
   public void truncate() {
 setCapacity(count);
   }
-  
+
   /**
* Append specified bytes to the buffer.
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/77297017/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
index d5be59c..59e2080 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/Utils.java
@@ -28,9 +28,9 @@ import org.apache.hadoop.io.WritableComparator;
 import 

hadoop git commit: HDFS-7838. Expose truncate API for libhdfs. (yliu)

2015-03-17 Thread yliu
Repository: hadoop
Updated Branches:
  refs/heads/trunk ef9946cd5 -> 48c2db34e


HDFS-7838. Expose truncate API for libhdfs. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c2db34
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c2db34
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c2db34

Branch: refs/heads/trunk
Commit: 48c2db34eff376c0f3a72587a5540b1e3dffafd2
Parents: ef9946c
Author: yliu y...@apache.org
Authored: Tue Mar 17 07:22:17 2015 +0800
Committer: yliu y...@apache.org
Committed: Tue Mar 17 07:22:17 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |  6 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  | 37 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  | 15 
 4 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9339b97..ad3e880 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -364,6 +364,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6488. Support HDFS superuser in NFS gateway. (brandonli)
 
+HDFS-7838. Expose truncate API for libhdfs. (yliu)
+
   IMPROVEMENTS
 
 HDFS-7752. Improve description for

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
index deb11ef..86b4faf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
@@ -1124,6 +1124,12 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    errno = ENOTSUP;
+    return -1;
+}
+
 tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer, tSize length)
 {
 if (length == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
index 34a..504d47e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
@@ -1037,6 +1037,43 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    jobject jFS = (jobject)fs;
+    jthrowable jthr;
+    jvalue jVal;
+    jobject jPath = NULL;
+
+    JNIEnv *env = getJNIEnv();
+
+    if (!env) {
+        errno = EINTERNAL;
+        return -1;
+    }
+
+    /* Create an object of org.apache.hadoop.fs.Path */
+    jthr = constructNewObjectOfPath(env, path, &jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): constructNewObjectOfPath", path);
+        return -1;
+    }
+
+    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
+        "truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
+        jPath, newlength);
+    destroyLocalReference(env, jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): FileSystem#truncate", path);
+        return -1;
+    }
+    if (jVal.z == JNI_TRUE) {
+        return 1;
+    }
+    return 0;
+}
+
 int hdfsUnbufferFile(hdfsFile file)
 {
 int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
index 64889ed..5b7bc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
@@ -396,6 +396,21 @@ extern "C" {
   int bufferSize, short replication, tSize blocksize);
 
 /**
+ * hdfsTruncateFile - Truncate a hdfs file to given length.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+ * @param newlength The 

hadoop git commit: HDFS-7838. Expose truncate API for libhdfs. (yliu)

2015-03-17 Thread yliu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 77297017d -> 991ac04af


HDFS-7838. Expose truncate API for libhdfs. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/991ac04a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/991ac04a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/991ac04a

Branch: refs/heads/branch-2
Commit: 991ac04afc3a7cea59993a304b7c6b1286ac8c4f
Parents: 7729701
Author: yliu y...@apache.org
Authored: Tue Mar 17 07:24:20 2015 +0800
Committer: yliu y...@apache.org
Committed: Tue Mar 17 07:24:20 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |  6 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  | 37 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  | 15 
 4 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/991ac04a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f788a9b..8e1a696 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -53,6 +53,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6488. Support HDFS superuser in NFS gateway. (brandonli)
 
+HDFS-7838. Expose truncate API for libhdfs. (yliu)
+
   IMPROVEMENTS
 
 HDFS-7752. Improve description for

http://git-wip-us.apache.org/repos/asf/hadoop/blob/991ac04a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
index deb11ef..86b4faf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
@@ -1124,6 +1124,12 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    errno = ENOTSUP;
+    return -1;
+}
+
 tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer, tSize length)
 {
 if (length == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/991ac04a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
index 27a2809..5c39dde 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
@@ -1037,6 +1037,43 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    jobject jFS = (jobject)fs;
+    jthrowable jthr;
+    jvalue jVal;
+    jobject jPath = NULL;
+
+    JNIEnv *env = getJNIEnv();
+
+    if (!env) {
+        errno = EINTERNAL;
+        return -1;
+    }
+
+    /* Create an object of org.apache.hadoop.fs.Path */
+    jthr = constructNewObjectOfPath(env, path, &jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): constructNewObjectOfPath", path);
+        return -1;
+    }
+
+    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
+        "truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
+        jPath, newlength);
+    destroyLocalReference(env, jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): FileSystem#truncate", path);
+        return -1;
+    }
+    if (jVal.z == JNI_TRUE) {
+        return 1;
+    }
+    return 0;
+}
+
 int hdfsUnbufferFile(hdfsFile file)
 {
 int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/991ac04a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
index 64889ed..5b7bc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
@@ -396,6 +396,21 @@ extern "C" {
   int bufferSize, short replication, tSize blocksize);
 
 /**
+ * hdfsTruncateFile - Truncate a hdfs file to given length.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+ * @param newlength The 

hadoop git commit: HDFS-7838. Expose truncate API for libhdfs. (yliu)

2015-03-17 Thread yliu
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 51c374ac1 -> ef9d46dcb


HDFS-7838. Expose truncate API for libhdfs. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ef9d46dc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ef9d46dc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ef9d46dc

Branch: refs/heads/branch-2.7
Commit: ef9d46dcb6bc71f1ad6ce5b2e439cd443b589224
Parents: 51c374a
Author: yliu y...@apache.org
Authored: Tue Mar 17 07:25:58 2015 +0800
Committer: yliu y...@apache.org
Committed: Tue Mar 17 07:25:58 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |  6 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  | 37 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  | 15 
 4 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9d46dc/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 91d3459..3f5da9b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -39,6 +39,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6488. Support HDFS superuser in NFS gateway. (brandonli)
 
+HDFS-7838. Expose truncate API for libhdfs. (yliu)
+
   IMPROVEMENTS
 
 HDFS-7752. Improve description for

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9d46dc/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
index deb11ef..86b4faf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
@@ -1124,6 +1124,12 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    errno = ENOTSUP;
+    return -1;
+}
+
 tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer, tSize length)
 {
 if (length == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9d46dc/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
index 27a2809..5c39dde 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
@@ -1037,6 +1037,43 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+    jobject jFS = (jobject)fs;
+    jthrowable jthr;
+    jvalue jVal;
+    jobject jPath = NULL;
+
+    JNIEnv *env = getJNIEnv();
+
+    if (!env) {
+        errno = EINTERNAL;
+        return -1;
+    }
+
+    /* Create an object of org.apache.hadoop.fs.Path */
+    jthr = constructNewObjectOfPath(env, path, &jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): constructNewObjectOfPath", path);
+        return -1;
+    }
+
+    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
+        "truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
+        jPath, newlength);
+    destroyLocalReference(env, jPath);
+    if (jthr) {
+        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsTruncateFile(%s): FileSystem#truncate", path);
+        return -1;
+    }
+    if (jVal.z == JNI_TRUE) {
+        return 1;
+    }
+    return 0;
+}
+
 int hdfsUnbufferFile(hdfsFile file)
 {
 int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ef9d46dc/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
index 64889ed..5b7bc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
@@ -396,6 +396,21 @@ extern "C" {
   int bufferSize, short replication, tSize blocksize);
 
 /**
+ * hdfsTruncateFile - Truncate a hdfs file to given length.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+ * @param newlength 

hadoop git commit: Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

2015-03-17 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/trunk d8846707c -> 32b433045


Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

This reverts commit c2b185def846f5577a130003a533b9c377b58fab.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32b43304
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32b43304
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32b43304

Branch: refs/heads/trunk
Commit: 32b43304563c2430c00bc3e142a962d2bc5f4d58
Parents: d884670
Author: Karthik Kambatla ka...@apache.org
Authored: Tue Mar 17 12:31:15 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Tue Mar 17 12:31:15 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 --
 .../dev-support/findbugs-exclude.xml| 27 
 .../scheduler/fair/AllocationConfiguration.java | 13 +++---
 .../fair/AllocationFileLoaderService.java   |  2 +-
 .../scheduler/fair/FSOpDurations.java   |  3 ---
 5 files changed, 31 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f5b72d7..fee0ce0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -320,8 +320,6 @@ Release 2.7.0 - UNRELEASED
 YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager
 restart. (Jason Lowe via junping_du) 
 
-YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)
-
 YARN-3124. Fixed CS LeafQueue/ParentQueue to use QueueCapacities to track
 capacities-by-label. (Wangda Tan via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a89884a..943ecb0 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -152,12 +152,22 @@
     <Class name="org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService" />
+    <Field name="allocFile" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
   <!-- Inconsistent sync warning - minimumAllocation is only initialized once and never changed -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler" />
     <Field name="minimumAllocation" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode" />
+    <Method name="reserveResource" />
+    <Bug pattern="BC_UNCONFIRMED_CAST" />
+  </Match>
   <!-- Inconsistent sync warning - reinitialize read from other queue does not need sync-->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue" />
@@ -215,6 +225,18 @@
     <Field name="scheduleAsynchronously" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Inconsistent sync warning - updateInterval is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="updateInterval" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <!-- Inconsistent sync warning - callDurationMetrics is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="fsOpDurations" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
 
   <!-- Inconsistent sync warning - numRetries is only initialized once and never changed -->
   <Match>
@@ -415,6 +437,11 @@
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
   <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="allocConf" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode" />
     <Field name="numContainers" />
     <Bug pattern="VO_VOLATILE_INCREMENT" />

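For context, each suppression this revert restores is a `Match` entry in the FindBugs exclusion filter; with the markup intact, one such entry (class and field names taken from the diff above) reads as a config fragment like:

```xml
<FindBugsFilter>
  <!-- Suppress the inconsistent-sync warning for a field that is only
       initialized once and never changed afterwards. -->
  <Match>
    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
    <Field name="updateInterval" />
    <Bug pattern="IS2_INCONSISTENT_SYNC" />
  </Match>
</FindBugsFilter>
```

A `Match` fires only when all of its child clauses (class, field/method, bug pattern) apply to the same warning.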

[45/50] [abbrv] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.

2015-03-17 Thread cmccabe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index a5a2e5f..972cabb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -350,8 +350,8 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(
 (int)(node_0.getTotalResource().getMemory() * a.getCapacity()) - 
(1*GB),
 a.getMetrics().getAvailableMB());
@@ -486,7 +486,7 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(1*GB, a.getUsedResources().getMemory());
 assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
@@ -497,7 +497,7 @@ public class TestLeafQueue {
 
 // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
 // you can get one container more than user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -506,7 +506,7 @@ public class TestLeafQueue {
 assertEquals(2*GB, a.getMetrics().getAllocatedMB());
 
 // Can't allocate 3rd due to user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -516,7 +516,7 @@ public class TestLeafQueue {
 
 // Bump up user-limit-factor, now allocate should work
 a.setUserLimitFactor(10);
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -525,7 +525,7 @@ public class TestLeafQueue {
 assertEquals(3*GB, a.getMetrics().getAllocatedMB());
 
 // One more should work, for app_1, due to user-limit-factor
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -536,8 +536,8 @@ public class TestLeafQueue {
 // Test max-capacity
 // Now - no more allocs since we are at max-cap
 a.setMaxCapacity(0.5f);
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -652,21 +652,21 @@ public class TestLeafQueue {
 //recordFactory)));
 
 // 1 container to user_0
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
 // Again one to user_0 since he hasn't exceeded user limit yet
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, 

hadoop git commit: YARN-3305. Normalize AM resource request on app submission. Contributed by Rohith Sharmaks

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 32b433045 -> 968425e9f


YARN-3305. Normalize AM resource request on app submission. Contributed by 
Rohith Sharmaks


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/968425e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/968425e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/968425e9

Branch: refs/heads/trunk
Commit: 968425e9f7b850ff9c2ab8ca37a64c3fdbe77dbf
Parents: 32b4330
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 13:49:59 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 13:49:59 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  7 +++--
 .../server/resourcemanager/RMAppManager.java|  6 -
 .../server/resourcemanager/TestAppManager.java  |  5 
 .../resourcemanager/TestClientRMService.java|  5 
 .../capacity/TestCapacityScheduler.java | 27 
 5 files changed, 47 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index fee0ce0..bb752ab 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -66,8 +66,11 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
- YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
- via devaraj)
+YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
+via devaraj)
+
+YARN-3305. Normalize AM resource request on app submission. (Rohith 
Sharmaks
+via jianhe)
 
 Release 2.7.0 - UNRELEASED
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
index 8dcfe67..9197630 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAppManager.java
@@ -390,7 +390,11 @@ public class RMAppManager implements 
EventHandler<RMAppManagerEvent>,
  + " for application " + submissionContext.getApplicationId(), e);
 throw e;
   }
-  
+  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
+  scheduler.getClusterResource(),
+  scheduler.getMinimumResourceCapability(),
+  scheduler.getMaximumResourceCapability(),
+  scheduler.getMinimumResourceCapability());
   return amReq;
 }
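The added SchedulerUtils.normalizeRequest call rounds the AM's resource request to the scheduler's allocation granularity before the app is admitted. A toy sketch of that rounding, under assumed names (NormalizeSketch and normalizeMemory are illustrative, not the real SchedulerUtils API):

```java
// Hypothetical sketch of "normalizing" a resource request: round the
// requested memory up to a multiple of the scheduler's minimum allocation
// and clamp it to the maximum. Names are illustrative only.
public class NormalizeSketch {
  static int normalizeMemory(int requested, int minAlloc, int maxAlloc) {
    // round up to the next multiple of minAlloc, never below minAlloc
    int steps = (Math.max(requested, minAlloc) + minAlloc - 1) / minAlloc;
    return Math.min(steps * minAlloc, maxAlloc);
  }

  public static void main(String[] args) {
    // An AM asking for 1500 MB with a 1024 MB minimum gets 2048 MB.
    System.out.println(normalizeMemory(1500, 1024, 8192)); // prints 2048
  }
}
```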
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/968425e9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
index d2ac4ef..5ebc68c 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
@@ -67,6 +67,7 @@ import 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.YarnScheduler;
 import 
org.apache.hadoop.yarn.server.resourcemanager.security.ClientToAMTokenSecretManagerInRM;
 import org.apache.hadoop.yarn.server.security.ApplicationACLsManager;
+import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
 import 

[2/2] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.

2015-03-17 Thread jianhe
YARN-3243. CapacityScheduler should pass headroom from parent to children to 
make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/487374b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/487374b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/487374b7

Branch: refs/heads/trunk
Commit: 487374b7fe0c92fc7eb1406c568952722b5d5b15
Parents: a89b087
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 10:22:15 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 10:24:23 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/capacity/AbstractCSQueue.java | 112 ++-
 .../scheduler/capacity/CSQueue.java |   4 +-
 .../scheduler/capacity/CapacityScheduler.java   |  33 ++-
 .../scheduler/capacity/LeafQueue.java   | 292 +++
 .../scheduler/capacity/ParentQueue.java | 140 +++--
 .../scheduler/common/fica/FiCaSchedulerApp.java |  16 +-
 .../capacity/TestApplicationLimits.java |   8 +-
 .../capacity/TestCapacityScheduler.java |  59 
 .../scheduler/capacity/TestChildQueueOrder.java |  25 +-
 .../scheduler/capacity/TestLeafQueue.java   | 142 -
 .../scheduler/capacity/TestParentQueue.java |  97 +++---
 .../scheduler/capacity/TestReservations.java| 147 +-
 13 files changed, 561 insertions(+), 517 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 82934ad..f5b72d7 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -56,6 +56,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+YARN-3243. CapacityScheduler should pass headroom from parent to children
+to make sure ParentQueue obey its capacity limits. (Wangda Tan via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index d800709..4e53060 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -20,10 +20,13 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import java.io.IOException;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
 
 import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
@@ -34,6 +37,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
 import org.apache.hadoop.yarn.security.AccessType;
 import org.apache.hadoop.yarn.security.PrivilegedEntity;
 import org.apache.hadoop.yarn.security.PrivilegedEntity.EntityType;
@@ -49,6 +53,7 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import com.google.common.collect.Sets;
 
 public abstract class AbstractCSQueue implements CSQueue {
+  private static final Log LOG = LogFactory.getLog(AbstractCSQueue.class);
   
   CSQueue parent;
   final String queueName;
@@ -406,21 +411,102 @@ public abstract class AbstractCSQueue implements CSQueue 
{
 parentQ.getPreemptionDisabled());
   }
   
-  protected 

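The AbstractCSQueue/ParentQueue changes in this patch are about computing headroom at the parent and handing it down, so a child never allocates past its parent's capacity. A minimal sketch of that idea with made-up names (childLimit is not the actual CapacityScheduler API):

```java
// Hypothetical sketch of the YARN-3243 headroom idea: a parent queue
// computes how much resource remains under its own limit and passes that
// down, so children also obey the parent's capacity. Names are illustrative.
public class HeadroomSketch {
  static int childLimit(int inheritedLimit, int parentCap, int parentUsed) {
    // the child may use at most what remains of the parent's capacity,
    // and never more than the limit the parent itself was handed
    return Math.min(inheritedLimit, parentCap - parentUsed);
  }

  public static void main(String[] args) {
    System.out.println(childLimit(100, 40, 25)); // prints 15
  }
}
```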
[1/2] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/trunk a89b087c4 -> 487374b7f


http://git-wip-us.apache.org/repos/asf/hadoop/blob/487374b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index a5a2e5f..972cabb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -350,8 +350,8 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(
 (int)(node_0.getTotalResource().getMemory() * a.getCapacity()) - 
(1*GB),
 a.getMetrics().getAvailableMB());
@@ -486,7 +486,7 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(1*GB, a.getUsedResources().getMemory());
 assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
@@ -497,7 +497,7 @@ public class TestLeafQueue {
 
 // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
 // you can get one container more than user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -506,7 +506,7 @@ public class TestLeafQueue {
 assertEquals(2*GB, a.getMetrics().getAllocatedMB());
 
 // Can't allocate 3rd due to user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -516,7 +516,7 @@ public class TestLeafQueue {
 
 // Bump up user-limit-factor, now allocate should work
 a.setUserLimitFactor(10);
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -525,7 +525,7 @@ public class TestLeafQueue {
 assertEquals(3*GB, a.getMetrics().getAllocatedMB());
 
 // One more should work, for app_1, due to user-limit-factor
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -536,8 +536,8 @@ public class TestLeafQueue {
 // Test max-capacity
 // Now - no more allocs since we are at max-cap
 a.setMaxCapacity(0.5f);
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -652,21 +652,21 @@ public class TestLeafQueue {
 //recordFactory)));
 
 // 1 container to user_0
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
 // Again one to user_0 since he hasn't exceeded user limit yet
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 

hadoop git commit: HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() (Contributed by Rakesh R)

2015-03-17 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 c58786794 -> 6ddb1bc85


HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() 
(Contributed by Rakesh R)

(cherry picked from commit 018893e81ec1c43e6c79c77adec92c2edfb20cab)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6ddb1bc8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6ddb1bc8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6ddb1bc8

Branch: refs/heads/branch-2
Commit: 6ddb1bc857b2ab85748171d1084882569f760c69
Parents: c587867
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 17 15:32:34 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 17 15:34:48 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 32 +---
 .../apache/hadoop/hdfs/TestFileCreation.java|  4 +--
 .../snapshot/TestRenameWithSnapshots.java   |  4 +--
 4 files changed, 35 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ddb1bc8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 8e1a696..f9d2d32 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -14,6 +14,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown()
+(Rakesh R via vinayakumarb)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/6ddb1bc8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 1076938..2113268 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -61,6 +61,7 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -120,6 +121,7 @@ import org.apache.hadoop.util.ToolRunner;
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
 
 /**
  * This class creates a single-process DFS cluster for junit testing.
@@ -525,7 +527,8 @@ public class MiniDFSCluster {
   private boolean federation;
   private boolean checkExitOnShutdown = true;
   protected final int storagesPerDatanode;
-  
+  private Set<FileSystem> fileSystems = Sets.newHashSet();
+
   /**
* A unique instance identifier for the cluster. This
* is used to disambiguate HA filesystems in the case where
@@ -1709,6 +1712,13 @@ public class MiniDFSCluster {
* Shutdown all the nodes in the cluster.
*/
   public void shutdown(boolean deleteDfsDir) {
+shutdown(deleteDfsDir, true);
+  }
+
+  /**
+   * Shutdown all the nodes in the cluster.
+   */
+  public void shutdown(boolean deleteDfsDir, boolean closeFileSystem) {
 LOG.info("Shutting down the Mini HDFS Cluster");
 if (checkExitOnShutdown)  {
   if (ExitUtil.terminateCalled()) {
@@ -1718,6 +1728,16 @@ public class MiniDFSCluster {
 throw new AssertionError("Test resulted in an unexpected exit");
   }
 }
+if (closeFileSystem) {
+  for (FileSystem fs : fileSystems) {
+try {
+  fs.close();
+} catch (IOException ioe) {
+  LOG.warn("Exception while closing file system", ioe);
+}
+  }
+  fileSystems.clear();
+}
 shutdownDataNodes();
 for (NameNodeInfo nnInfo : nameNodes) {
   if (nnInfo == null) continue;
@@ -2138,8 +2158,10 @@ public class MiniDFSCluster {
* Get a client handle to the DFS cluster for the namenode at given index.
*/
   public DistributedFileSystem getFileSystem(int nnIndex) throws IOException {
-return (DistributedFileSystem)FileSystem.get(getURI(nnIndex),
-nameNodes[nnIndex].conf);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
+getURI(nnIndex), nameNodes[nnIndex].conf);
+fileSystems.add(dfs);
+return dfs;
   }
 
   /**
@@ -2147,7 +2169,9 @@ public class MiniDFSCluster {
* This simulating different threads working on 

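The fix above keeps every FileSystem returned by getFileSystem(int) in a Set and closes them all in shutdown(), so tests no longer leak open clients. The same pattern in isolation, with hypothetical names (TrackedCloser is not MiniDFSCluster code):

```java
// Sketch of the HDFS-5356 pattern: remember each handle that is given out,
// then close them all on shutdown, swallowing per-handle close failures the
// way the patch does with LOG.warn(...). Names are illustrative.
import java.io.Closeable;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashSet;
import java.util.Set;

public class TrackedCloser {
  private final Set<Closeable> handles = new HashSet<>();

  <T extends Closeable> T track(T handle) {
    handles.add(handle); // mirrors fileSystems.add(dfs) in getFileSystem()
    return handle;
  }

  int closeAll() {
    int closed = 0;
    for (Closeable c : handles) {
      try {
        c.close();
        closed++;
      } catch (IOException ioe) {
        // log-and-continue: one bad handle must not block shutdown
      }
    }
    handles.clear();
    return closed;
  }

  public static void main(String[] args) {
    TrackedCloser closer = new TrackedCloser();
    closer.track(new StringReader("x"));
    System.out.println(closer.closeAll()); // prints 1
  }
}
```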
hadoop git commit: HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() (Contributed by Rakesh R)

2015-03-17 Thread vinayakumarb
Repository: hadoop
Updated Branches:
  refs/heads/trunk e5370477c -> 018893e81


HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown() 
(Contributed by Rakesh R)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/018893e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/018893e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/018893e8

Branch: refs/heads/trunk
Commit: 018893e81ec1c43e6c79c77adec92c2edfb20cab
Parents: e537047
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 17 15:32:34 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 17 15:32:34 2015 +0530

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 ++
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 32 +---
 .../apache/hadoop/hdfs/TestFileCreation.java|  4 +--
 .../snapshot/TestRenameWithSnapshots.java   |  4 +--
 4 files changed, 35 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/018893e8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ad3e880..bbe1f02 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -327,6 +327,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+HDFS-5356. MiniDFSCluster should close all open FileSystems when shutdown()
+(Rakesh R via vinayakumarb)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/018893e8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 9208ed2..a6cc71f 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -60,6 +60,7 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Map;
 import java.util.Random;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -118,6 +119,7 @@ import org.apache.hadoop.util.ToolRunner;
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
 
 /**
  * This class creates a single-process DFS cluster for junit testing.
@@ -523,7 +525,8 @@ public class MiniDFSCluster {
   private boolean federation;
   private boolean checkExitOnShutdown = true;
   protected final int storagesPerDatanode;
-  
+  private Set<FileSystem> fileSystems = Sets.newHashSet();
+
   /**
* A unique instance identifier for the cluster. This
* is used to disambiguate HA filesystems in the case where
@@ -1705,6 +1708,13 @@ public class MiniDFSCluster {
* Shutdown all the nodes in the cluster.
*/
   public void shutdown(boolean deleteDfsDir) {
+shutdown(deleteDfsDir, true);
+  }
+
+  /**
+   * Shutdown all the nodes in the cluster.
+   */
+  public void shutdown(boolean deleteDfsDir, boolean closeFileSystem) {
 LOG.info("Shutting down the Mini HDFS Cluster");
 if (checkExitOnShutdown)  {
   if (ExitUtil.terminateCalled()) {
@@ -1714,6 +1724,16 @@ public class MiniDFSCluster {
 throw new AssertionError("Test resulted in an unexpected exit");
   }
 }
+if (closeFileSystem) {
+  for (FileSystem fs : fileSystems) {
+try {
+  fs.close();
+} catch (IOException ioe) {
+  LOG.warn("Exception while closing file system", ioe);
+}
+  }
+  fileSystems.clear();
+}
 shutdownDataNodes();
 for (NameNodeInfo nnInfo : nameNodes) {
   if (nnInfo == null) continue;
@@ -2144,8 +2164,10 @@ public class MiniDFSCluster {
* Get a client handle to the DFS cluster for the namenode at given index.
*/
   public DistributedFileSystem getFileSystem(int nnIndex) throws IOException {
-return (DistributedFileSystem)FileSystem.get(getURI(nnIndex),
-nameNodes[nnIndex].conf);
+DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
+getURI(nnIndex), nameNodes[nnIndex].conf);
+fileSystems.add(dfs);
+return dfs;
   }
 
   /**
@@ -2153,7 +2175,9 @@ public class MiniDFSCluster {
* This simulating different threads working on different FileSystem 
instances.
*/
   public FileSystem 

hadoop git commit: YARN-3197. Confusing log generated by CapacityScheduler. Contributed by Varun Saxena.

2015-03-17 Thread devaraj
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 6ddb1bc85 -> 895588b43


YARN-3197. Confusing log generated by CapacityScheduler. Contributed by
Varun Saxena.

(cherry picked from commit 7179f94f9d000fc52bd9ce5aa9741aba97ec3ee8)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/895588b4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/895588b4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/895588b4

Branch: refs/heads/branch-2
Commit: 895588b439bf6e3113a85583422484af4b89a890
Parents: 6ddb1bc
Author: Devaraj K deva...@apache.org
Authored: Tue Mar 17 15:57:57 2015 +0530
Committer: Devaraj K deva...@apache.org
Committed: Tue Mar 17 15:59:19 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java   | 5 +++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/895588b4/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 538b4dd..b2f25cd 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -15,6 +15,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+ YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
+ via devaraj)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/895588b4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 28ce264..756e537 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1279,7 +1279,8 @@ public class CapacityScheduler extends
   protected synchronized void completedContainer(RMContainer rmContainer,
   ContainerStatus containerStatus, RMContainerEventType event) {
 if (rmContainer == null) {
-  LOG.info("Null container completed...");
+  LOG.info("Container " + containerStatus.getContainerId() +
+   " completed with event " + event);
   return;
 }
 
@@ -1291,7 +1292,7 @@ public class CapacityScheduler extends
 ApplicationId appId =
 container.getId().getApplicationAttemptId().getApplicationId();
 if (application == null) {
-  LOG.info("Container " + container + " of" + " unknown application "
+  LOG.info("Container " + container + " of" + " finished application "
   + appId + " completed with event " + event);
   return;
 }



hadoop git commit: YARN-3197. Confusing log generated by CapacityScheduler. Contributed by Varun Saxena.

2015-03-17 Thread devaraj
Repository: hadoop
Updated Branches:
  refs/heads/trunk 018893e81 -> 7179f94f9


YARN-3197. Confusing log generated by CapacityScheduler. Contributed by
Varun Saxena.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7179f94f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7179f94f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7179f94f

Branch: refs/heads/trunk
Commit: 7179f94f9d000fc52bd9ce5aa9741aba97ec3ee8
Parents: 018893e
Author: Devaraj K deva...@apache.org
Authored: Tue Mar 17 15:57:57 2015 +0530
Committer: Devaraj K deva...@apache.org
Committed: Tue Mar 17 15:57:57 2015 +0530

--
 hadoop-yarn-project/CHANGES.txt | 3 +++
 .../resourcemanager/scheduler/capacity/CapacityScheduler.java   | 5 +++--
 2 files changed, 6 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7179f94f/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cb68480..82934ad 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -63,6 +63,9 @@ Release 2.8.0 - UNRELEASED
 
   BUG FIXES
 
+ YARN-3197. Confusing log generated by CapacityScheduler. (Varun Saxena 
+ via devaraj)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7179f94f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 28ce264..756e537 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -1279,7 +1279,8 @@ public class CapacityScheduler extends
   protected synchronized void completedContainer(RMContainer rmContainer,
   ContainerStatus containerStatus, RMContainerEventType event) {
 if (rmContainer == null) {
-  LOG.info("Null container completed...");
+  LOG.info("Container " + containerStatus.getContainerId() +
+    " completed with event " + event);
   return;
 }
 
@@ -1291,7 +1292,7 @@ public class CapacityScheduler extends
 ApplicationId appId =
 container.getId().getApplicationAttemptId().getApplicationId();
 if (application == null) {
-  LOG.info("Container " + container + " of" + " unknown application "
+  LOG.info("Container " + container + " of" + " finished application "
    + appId + " completed with event " + event);
   return;
 }



[2/2] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan. (cherry picked from com

2015-03-17 Thread jianhe
YARN-3243. CapacityScheduler should pass headroom from parent to children to 
make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan.
(cherry picked from commit 487374b7fe0c92fc7eb1406c568952722b5d5b15)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1c601e49
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1c601e49
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1c601e49

Branch: refs/heads/branch-2
Commit: 1c601e492f4cd80e012aa78b796383ee9de161fd
Parents: 895588b
Author: Jian He jia...@apache.org
Authored: Tue Mar 17 10:22:15 2015 -0700
Committer: Jian He jia...@apache.org
Committed: Tue Mar 17 10:25:07 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |   3 +
 .../scheduler/capacity/AbstractCSQueue.java | 112 ++-
 .../scheduler/capacity/CSQueue.java |   4 +-
 .../scheduler/capacity/CapacityScheduler.java   |  33 ++-
 .../scheduler/capacity/LeafQueue.java   | 292 +++
 .../scheduler/capacity/ParentQueue.java | 140 +++--
 .../scheduler/common/fica/FiCaSchedulerApp.java |  16 +-
 .../capacity/TestApplicationLimits.java |   8 +-
 .../capacity/TestCapacityScheduler.java |  59 
 .../scheduler/capacity/TestChildQueueOrder.java |  25 +-
 .../scheduler/capacity/TestLeafQueue.java   | 142 -
 .../scheduler/capacity/TestParentQueue.java |  97 +++---
 .../scheduler/capacity/TestReservations.java| 147 +-
 13 files changed, 561 insertions(+), 517 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c601e49/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b2f25cd..e15fdf2 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -8,6 +8,9 @@ Release 2.8.0 - UNRELEASED
 
   IMPROVEMENTS
 
+YARN-3243. CapacityScheduler should pass headroom from parent to children
+to make sure ParentQueue obey its capacity limits. (Wangda Tan via jianhe)
+
   OPTIMIZATIONS
 
 YARN-3339. TestDockerContainerExecutor should pull a single image and not

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c601e49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index d800709..4e53060 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -20,10 +20,13 @@ package 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
 
 import java.io.IOException;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;
 
 import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AccessControlList;
@@ -34,6 +37,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
+import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
 import org.apache.hadoop.yarn.security.AccessType;
 import org.apache.hadoop.yarn.security.PrivilegedEntity;
 import org.apache.hadoop.yarn.security.PrivilegedEntity.EntityType;
@@ -49,6 +53,7 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import com.google.common.collect.Sets;
 
 public abstract class AbstractCSQueue implements CSQueue {
+  private static final Log LOG = LogFactory.getLog(AbstractCSQueue.class);
   
   CSQueue parent;
   final String queueName;
@@ -406,21 +411,102 @@ public abstract class AbstractCSQueue implements CSQueue {
  

hadoop git commit: HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin P. McCabe)

2015-03-17 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 1c601e492 -> 455d4aa8a


HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin 
P. McCabe)

(cherry picked from commit d8846707c58c5c3ec542128df13a82ddc05fb347)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/455d4aa8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/455d4aa8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/455d4aa8

Branch: refs/heads/branch-2
Commit: 455d4aa8a12920fccad1bcde715f6fb6d9a63561
Parents: 1c601e4
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 17 10:47:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 17 11:01:42 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/455d4aa8/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index f9d2d32..08d58b1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -451,6 +451,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
 (Allen Wittenauer via shv)
 
+HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via
+Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/455d4aa8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 3336077..658cccf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3088,6 +3088,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw new IllegalArgumentException("Don't support Quota for storage type : "
 + type.toString());
 }
+TraceScope scope = getPathTraceScope("setQuotaByStorageType", src);
 try {
   namenode.setQuota(src, HdfsConstants.QUOTA_DONT_SET, quota, type);
 } catch (RemoteException re) {
@@ -3096,6 +3097,8 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 QuotaByStorageTypeExceededException.class,
 UnresolvedPathException.class,
 SnapshotAccessControlException.class);
+} finally {
+  scope.close();
 }
   }
   /**
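
The patch above wraps the quota RPC in a trace scope that is closed in a `finally` block. A minimal sketch of that pattern follows; the `Scope` class here is a hypothetical stand-in for the htrace `TraceScope` returned by `DFSClient#getPathTraceScope`, and `tracedCall` is our own illustrative name:

```java
// Sketch of the tracing pattern this patch applies: open a scope before the
// RPC and close it in finally, so the trace span ends even when the call
// throws. Scope is a stand-in for htrace's TraceScope (assumption, not the
// real API).
public class TraceSketch {

    public static final class Scope implements AutoCloseable {
        public final String description;
        public boolean closed;
        public Scope(String description) { this.description = description; }
        @Override public void close() { closed = true; }
    }

    // Mirrors the shape of the patched setQuotaByStorageType: RPC in try,
    // exception translation in catch, scope.close() guaranteed in finally.
    public static Scope tracedCall(String description, Runnable rpc) {
        Scope scope = new Scope(description);
        try {
            rpc.run();
        } catch (RuntimeException re) {
            // the real code unwraps RemoteException into typed exceptions here
        } finally {
            scope.close();   // span is closed on both success and failure
        }
        return scope;
    }
}
```

The `finally` placement is the point of the patch: without it, a failed RPC would leak an open trace span.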



hadoop git commit: HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin P. McCabe)

2015-03-17 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 ef9d46dcb -> d2dad7442


HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin 
P. McCabe)

(cherry picked from commit d8846707c58c5c3ec542128df13a82ddc05fb347)
(cherry picked from commit 455d4aa8a12920fccad1bcde715f6fb6d9a63561)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2dad744
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2dad744
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2dad744

Branch: refs/heads/branch-2.7
Commit: d2dad744215fd028405f5b57abcd002915827787
Parents: ef9d46d
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 17 10:47:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 17 11:02:05 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2dad744/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 3f5da9b..2aff3b3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -434,6 +434,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
 (Allen Wittenauer via shv)
 
+HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via
+Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2dad744/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 3336077..658cccf 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3088,6 +3088,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw new IllegalArgumentException("Don't support Quota for storage type : "
 + type.toString());
 }
+TraceScope scope = getPathTraceScope("setQuotaByStorageType", src);
 try {
   namenode.setQuota(src, HdfsConstants.QUOTA_DONT_SET, quota, type);
 } catch (RemoteException re) {
@@ -3096,6 +3097,8 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 QuotaByStorageTypeExceededException.class,
 UnresolvedPathException.class,
 SnapshotAccessControlException.class);
+} finally {
+  scope.close();
 }
   }
   /**



hadoop git commit: HDFS-7912. Erasure Coding: track BlockInfo instead of Block in UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.

2015-03-17 Thread jing9
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 ae53715e0 -> 050018b11


HDFS-7912. Erasure Coding: track BlockInfo instead of Block in 
UnderReplicatedBlocks and PendingReplicationBlocks. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/050018b1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/050018b1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/050018b1

Branch: refs/heads/HDFS-7285
Commit: 050018b115774672bd0f747e97316bf64f38ec4b
Parents: ae53715
Author: Jing Zhao ji...@apache.org
Authored: Tue Mar 17 10:18:50 2015 -0700
Committer: Jing Zhao ji...@apache.org
Committed: Tue Mar 17 10:18:50 2015 -0700

--
 .../server/blockmanagement/BlockManager.java| 47 -
 .../PendingReplicationBlocks.java   | 51 +--
 .../blockmanagement/UnderReplicatedBlocks.java  | 49 +-
 .../hdfs/server/namenode/FSDirAttrOp.java   | 10 ++--
 .../hdfs/server/namenode/FSNamesystem.java  | 21 
 .../hadoop/hdfs/server/namenode/INode.java  | 12 ++---
 .../hadoop/hdfs/server/namenode/INodeFile.java  |  4 +-
 .../hdfs/server/namenode/NamenodeFsck.java  | 10 ++--
 .../hadoop/hdfs/server/namenode/SafeMode.java   |  3 +-
 .../blockmanagement/BlockManagerTestUtil.java   |  5 +-
 .../blockmanagement/TestBlockManager.java   |  8 +--
 .../server/blockmanagement/TestNodeCount.java   |  3 +-
 .../TestOverReplicatedBlocks.java   |  5 +-
 .../blockmanagement/TestPendingReplication.java | 19 ---
 .../TestRBWBlockInvalidation.java   |  4 +-
 .../blockmanagement/TestReplicationPolicy.java  | 53 +++-
 .../TestUnderReplicatedBlockQueues.java | 16 +++---
 .../datanode/TestReadOnlySharedStorage.java |  9 ++--
 .../namenode/TestProcessCorruptBlocks.java  |  5 +-
 19 files changed, 180 insertions(+), 154 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/050018b1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index f22e9f4..bb28343 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1339,7 +1339,7 @@ public class BlockManager {
* @return number of blocks scheduled for replication during this iteration.
*/
   int computeReplicationWork(int blocksToProcess) {
-List<List<Block>> blocksToReplicate = null;
+List<List<BlockInfo>> blocksToReplicate = null;
 namesystem.writeLock();
 try {
   // Choose the blocks to be replicated
@@ -1357,7 +1357,7 @@ public class BlockManager {
* @return the number of blocks scheduled for replication
*/
   @VisibleForTesting
-  int computeReplicationWorkForBlocks(List<List<Block>> blocksToReplicate) {
+  int computeReplicationWorkForBlocks(List<List<BlockInfo>> blocksToReplicate) {
 int requiredReplication, numEffectiveReplicas;
 List<DatanodeDescriptor> containingNodes;
 DatanodeDescriptor srcNode;
@@ -1371,7 +1371,7 @@ public class BlockManager {
 try {
   synchronized (neededReplications) {
 for (int priority = 0; priority < blocksToReplicate.size(); priority++) {
-  for (Block block : blocksToReplicate.get(priority)) {
+  for (BlockInfo block : blocksToReplicate.get(priority)) {
 // block should belong to a file
 bc = blocksMap.getBlockCollection(block);
 // abandoned block or block reopened for append
@@ -1455,7 +1455,7 @@ public class BlockManager {
 }
 
 synchronized (neededReplications) {
-  Block block = rw.block;
+  BlockInfo block = rw.block;
   int priority = rw.priority;
   // Recheck since global lock was released
   // block should belong to a file
@@ -1711,7 +1711,7 @@ public class BlockManager {
* and put them back into the neededReplication queue
*/
   private void processPendingReplications() {
-Block[] timedOutItems = pendingReplications.getTimedOutBlocks();
+BlockInfo[] timedOutItems = pendingReplications.getTimedOutBlocks();
 if (timedOutItems != null) {
   namesystem.writeLock();
   try {
@@ -2796,13 +2796,13 @@ public class BlockManager {
   
   /** Set replication for the blocks. */
   public void setReplication(final short oldRepl, final short 

hadoop git commit: HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin P. McCabe)

2015-03-17 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/trunk 487374b7f -> d8846707c


HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via Colin 
P. McCabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d8846707
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d8846707
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d8846707

Branch: refs/heads/trunk
Commit: d8846707c58c5c3ec542128df13a82ddc05fb347
Parents: 487374b
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Tue Mar 17 10:47:21 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Tue Mar 17 10:47:21 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt   | 3 +++
 .../src/main/java/org/apache/hadoop/hdfs/DFSClient.java   | 3 +++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8846707/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index bbe1f02..3e11356 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -756,6 +756,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
 (Allen Wittenauer via shv)
 
+HDFS-7940. Add tracing to DFSClient#setQuotaByStorageType (Rakesh R via
+Colin P. McCabe)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d8846707/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index f970fef..3c8fd31 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -3089,6 +3089,7 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
   throw new IllegalArgumentException("Don't support Quota for storage type : "
 + type.toString());
 }
+TraceScope scope = getPathTraceScope("setQuotaByStorageType", src);
 try {
   namenode.setQuota(src, HdfsConstants.QUOTA_DONT_SET, quota, type);
 } catch (RemoteException re) {
@@ -3097,6 +3098,8 @@ public class DFSClient implements java.io.Closeable, 
RemotePeerFactory,
 QuotaByStorageTypeExceededException.class,
 UnresolvedPathException.class,
 SnapshotAccessControlException.class);
+} finally {
+  scope.close();
 }
   }
   /**



[1/2] hadoop git commit: HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager go down when old token cannot be deleted. Contributed by Arun Suresh.

2015-03-17 Thread atm
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 ab34e6975 -> 85473cd61
  refs/heads/trunk 968425e9f -> fc90bf7b2


HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager 
go down when old token cannot be deleted. Contributed by Arun Suresh.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fc90bf7b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fc90bf7b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fc90bf7b

Branch: refs/heads/trunk
Commit: fc90bf7b27cc20486f2806670a14fd7d654b0a31
Parents: 968425e
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 17 19:41:36 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 17 19:41:36 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../ZKDelegationTokenSecretManager.java | 21 ++--
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3817054..a6bd68d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -,6 +,10 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
 tags in hadoop-tools. (Akira AJISAKA via ozawa)
 
+HADOOP-11722. Some Instances of Services using
+ZKDelegationTokenSecretManager go down when old token cannot be deleted.
+(Arun Suresh via atm)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/fc90bf7b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ec522dcf..73c3ab8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.client.ZooKeeperSaslClient;
 import org.apache.zookeeper.data.ACL;
@@ -709,7 +710,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
 LOG.debug("Attempted to delete a non-existing znode " + nodeRemovePath);
@@ -761,7 +770,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
 LOG.debug("Attempted to remove a non-existing znode " + nodeRemovePath);
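
The race this patch tolerates is that `checkExists()` followed by `delete()` is not atomic: a peer can delete the znode in between, and before the fix the resulting `NoNodeException` could take the service down. A self-contained model of the fixed loop follows; the names are ours, and the real code goes through Curator's `zkClient.delete().guaranteed()` rather than an in-memory set:

```java
import java.util.Set;

// Model of the patched delete loop: treat "node already gone" as success
// instead of letting the exception propagate and kill the daemon.
public class ZkDeleteSketch {

    public static class NoNodeException extends Exception {}

    // Throws when the path is already gone, like ZooKeeper's delete().
    static void delete(Set<String> znodes, String path) throws NoNodeException {
        if (!znodes.remove(path)) {
            throw new NoNodeException();
        }
    }

    public static void deleteIfExists(Set<String> znodes, String path) {
        while (znodes.contains(path)) {       // models zkClient.checkExists()
            try {
                delete(znodes, path);
            } catch (NoNodeException nne) {
                // deleted by a peer between the check and the delete:
                // log at debug level and keep going instead of crashing
            }
        }
    }
}
```

Calling `deleteIfExists` on an already-deleted path is a no-op, which is exactly the behavior the patch wants for concurrent secret-manager instances.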



hadoop git commit: YARN-3205. FileSystemRMStateStore should disable FileSystem Cache to avoid get a Filesystem with an old configuration. Contributed by Zhihai Xu.

2015-03-17 Thread ozawa
Repository: hadoop
Updated Branches:
  refs/heads/trunk fc90bf7b2 -> 3bc72cc16


YARN-3205. FileSystemRMStateStore should disable FileSystem Cache to avoid get 
a Filesystem with an old configuration. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3bc72cc1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3bc72cc1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3bc72cc1

Branch: refs/heads/trunk
Commit: 3bc72cc16d8c7b8addd8f565523001dfcc32b891
Parents: fc90bf7
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Wed Mar 18 11:53:14 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Wed Mar 18 11:53:19 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt |  3 +++
 .../recovery/FileSystemRMStateStore.java| 22 +++-
 .../recovery/TestFSRMStateStore.java|  5 +
 3 files changed, 25 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index bb752ab..c869113 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -72,6 +72,9 @@ Release 2.8.0 - UNRELEASED
 YARN-3305. Normalize AM resource request on app submission. (Rohith 
Sharmaks
 via jianhe)
 
+YARN-3205. FileSystemRMStateStore should disable FileSystem Cache to avoid
+get a Filesystem with an old configuration. (Zhihai Xu via ozawa)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
index 8147597..7652a07 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/FileSystemRMStateStore.java
@@ -84,7 +84,10 @@ public class FileSystemRMStateStore extends RMStateStore {
   protected static final String AMRMTOKEN_SECRET_MANAGER_NODE =
   AMRMTokenSecretManagerNode;
 
+  @VisibleForTesting
   protected FileSystem fs;
+  @VisibleForTesting
+  protected Configuration fsConf;
 
   private Path rootDirPath;
   @Private
@@ -121,14 +124,23 @@ public class FileSystemRMStateStore extends RMStateStore {
 // create filesystem only now, as part of service-start. By this time, RM 
is
 // authenticated with kerberos so we are good to create a file-system
 // handle.
-Configuration conf = new Configuration(getConfig());
-conf.setBoolean("dfs.client.retry.policy.enabled", true);
+fsConf = new Configuration(getConfig());
+fsConf.setBoolean("dfs.client.retry.policy.enabled", true);
 String retryPolicy =
-conf.get(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
+fsConf.get(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
   YarnConfiguration.DEFAULT_FS_RM_STATE_STORE_RETRY_POLICY_SPEC);
-conf.set("dfs.client.retry.policy.spec", retryPolicy);
+fsConf.set("dfs.client.retry.policy.spec", retryPolicy);
+
+String scheme = fsWorkingPath.toUri().getScheme();
+if (scheme == null) {
+  scheme = FileSystem.getDefaultUri(fsConf).getScheme();
+}
+if (scheme != null) {
+  String disableCacheName = String.format("fs.%s.impl.disable.cache", scheme);
+  fsConf.setBoolean(disableCacheName, true);
+}
 
-fs = fsWorkingPath.getFileSystem(conf);
+fs = fsWorkingPath.getFileSystem(fsConf);
 mkdirsWithRetries(rmDTSecretManagerRoot);
 mkdirsWithRetries(rmAppRoot);
 mkdirsWithRetries(amrmTokenSecretManagerRoot);
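
The scheme resolution added above can be sketched as a small pure function: take the scheme of the state-store URI, fall back to the default filesystem's scheme, and build the per-scheme cache-disable key (`fs.<scheme>.impl.disable.cache`). The helper name is ours; the real code inlines this logic in `FileSystemRMStateStore` and then sets the resulting key to `true` so `getFileSystem` returns a fresh, uncached instance:

```java
import java.net.URI;

// Hypothetical helper mirroring the patch's scheme fallback and key
// construction for disabling Hadoop's FileSystem cache.
public class DisableCacheSketch {

    public static String disableCacheKey(URI workingPath, URI defaultFs) {
        String scheme = workingPath.getScheme();
        if (scheme == null && defaultFs != null) {
            // models the FileSystem.getDefaultUri(fsConf) fallback
            scheme = defaultFs.getScheme();
        }
        if (scheme == null) {
            return null;  // no scheme resolvable; the patch skips the set()
        }
        return String.format("fs.%s.impl.disable.cache", scheme);
    }
}
```

Disabling the cache matters here because the cached `FileSystem` would have been created with an older `Configuration`, ignoring the retry-policy settings the store just applied.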

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3bc72cc1/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
--
diff --git 

[2/2] hadoop git commit: HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager go down when old token cannot be deleted. Contributed by Arun Suresh. (cherry picked from commit

2015-03-17 Thread atm
HADOOP-11722. Some Instances of Services using ZKDelegationTokenSecretManager 
go down when old token cannot be deleted. Contributed by Arun Suresh.
(cherry picked from commit fc90bf7b27cc20486f2806670a14fd7d654b0a31)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/85473cd6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/85473cd6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/85473cd6

Branch: refs/heads/branch-2
Commit: 85473cd61a37a9b7614805bd83507cabe85eaeb0
Parents: ab34e69
Author: Aaron T. Myers a...@apache.org
Authored: Tue Mar 17 19:41:36 2015 -0700
Committer: Aaron T. Myers a...@apache.org
Committed: Tue Mar 17 19:42:31 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt |  4 
 .../ZKDelegationTokenSecretManager.java | 21 ++--
 2 files changed, 23 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/85473cd6/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index 7f47197..0d1ffce 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -700,6 +700,10 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11720. [JDK8] Fix javadoc errors caused by incorrect or illegal
 tags in hadoop-tools. (Akira AJISAKA via ozawa)
 
+HADOOP-11722. Some Instances of Services using
+ZKDelegationTokenSecretManager go down when old token cannot be deleted.
+(Arun Suresh via atm)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/85473cd6/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
index ec522dcf..73c3ab8 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/ZKDelegationTokenSecretManager.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenManager;
 import org.apache.zookeeper.CreateMode;
 import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
 import org.apache.zookeeper.ZooDefs.Perms;
 import org.apache.zookeeper.client.ZooKeeperSaslClient;
 import org.apache.zookeeper.data.ACL;
@@ -709,7 +710,15 @@ public abstract class ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
 LOG.debug("Attempted to delete a non-existing znode " + 
nodeRemovePath);
@@ -761,7 +770,15 @@ public abstract class 
ZKDelegationTokenSecretManager<TokenIdent extends Abstract
 try {
   if (zkClient.checkExists().forPath(nodeRemovePath) != null) {
 while(zkClient.checkExists().forPath(nodeRemovePath) != null){
-  zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  try {
+zkClient.delete().guaranteed().forPath(nodeRemovePath);
+  } catch (NoNodeException nne) {
+// It is possible that the node might be deleted between the
+// check and the actual delete.. which might lead to an
+// exception that can bring down the daemon running this
+// SecretManager
+LOG.debug("Node already deleted by peer " + nodeRemovePath);
+  }
 }
   } else {
 LOG.debug("Attempted to remove a non-existing znode " + 
nodeRemovePath);
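The two hunks above wrap each `zkClient.delete()` in a try/catch for `NoNodeException` because a peer instance can delete the znode between the existence check and the delete itself. A minimal plain-Java sketch of the same "check, delete, tolerate not-found" pattern, using a `ConcurrentHashMap` as a stand-in for the ZK-backed store (the `TolerantDelete` class and its not-found exception are invented for illustration, not Hadoop or Curator APIs):

```java
import java.util.NoSuchElementException;
import java.util.concurrent.ConcurrentHashMap;

public class TolerantDelete {
  // Stand-in for the ZK-backed token store; delete throws if the node vanished.
  static final ConcurrentHashMap<String, byte[]> store = new ConcurrentHashMap<>();

  static void delete(String path) {
    if (store.remove(path) == null) {
      // Plays the role of ZooKeeper's KeeperException.NoNodeException.
      throw new NoSuchElementException("no node " + path);
    }
  }

  /** Delete path until it is gone; a concurrent delete by a peer is not an error. */
  static void deleteIfExists(String path) {
    while (store.containsKey(path)) {
      try {
        delete(path);
      } catch (NoSuchElementException e) {
        // Node vanished between the check and the delete: a peer removed it.
        // Swallowing this keeps the daemon alive instead of crashing it.
      }
    }
  }

  public static void main(String[] args) {
    store.put("/tokens/t1", new byte[0]);
    deleteIfExists("/tokens/t1");  // normal path
    deleteIfExists("/tokens/t1");  // already gone: must not throw
    System.out.println("ok");
  }
}
```

The point of the fix is exactly this narrowing: only the not-found case is swallowed, so any other failure still propagates.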



[1/2] hadoop git commit: HADOOP-11706 Refine a little bit erasure coder API

2015-03-17 Thread drankye
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7285 050018b11 -> 7d6043869


HADOOP-11706 Refine a little bit erasure coder API


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/902c9a73
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/902c9a73
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/902c9a73

Branch: refs/heads/HDFS-7285
Commit: 902c9a73f593337c8c87c8434fa167d3076f6453
Parents: 050018b
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:21:37 2015 +0800
Committer: Kai Zheng kai.zh...@intel.com
Committed: Wed Mar 18 19:21:37 2015 +0800

--
 .../io/erasurecode/coder/ErasureCoder.java  |  4 +++-
 .../erasurecode/rawcoder/RawErasureCoder.java   |  4 +++-
 .../hadoop/io/erasurecode/TestCoderBase.java| 17 +---
 .../erasurecode/coder/TestErasureCoderBase.java | 21 +++-
 .../erasurecode/rawcoder/TestJRSRawCoder.java   | 12 +--
 .../erasurecode/rawcoder/TestRawCoderBase.java  |  2 ++
 6 files changed, 31 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/902c9a73/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
index 68875c0..c5922f3 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/ErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.coder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * An erasure coder to perform encoding or decoding given a group. Generally it
  * involves calculating necessary internal steps according to codec logic. For
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.coder;
  * of multiple coding steps.
  *
  */
-public interface ErasureCoder {
+public interface ErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/902c9a73/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
index 91a9abf..9af5b6c 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.io.erasurecode.rawcoder;
 
+import org.apache.hadoop.conf.Configurable;
+
 /**
  * RawErasureCoder is a common interface for {@link RawErasureEncoder} and
  * {@link RawErasureDecoder} as both encoder and decoder share some properties.
@@ -31,7 +33,7 @@ package org.apache.hadoop.io.erasurecode.rawcoder;
  * low level constructs, since it only takes care of the math calculation with
  * a group of byte buffers.
  */
-public interface RawErasureCoder {
+public interface RawErasureCoder extends Configurable {
 
   /**
* Initialize with the important parameters for the code.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/902c9a73/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
index 194413a..22fd98d 100644
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCoderBase.java
@@ -17,11 +17,12 @@
  */
 package org.apache.hadoop.io.erasurecode;
 
+import org.apache.hadoop.conf.Configuration;
+
 import java.nio.ByteBuffer;
 import java.util.Arrays;
 import java.util.Random;
 
-import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertTrue;
 
 /**
@@ -31,6 +32,7 @@ import static org.junit.Assert.assertTrue;
 public abstract 

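HADOOP-11706 makes both `ErasureCoder` and `RawErasureCoder` extend `Configurable`, so every coder can receive its settings through one uniform `setConf`/`getConf` pair instead of ad-hoc initialization parameters. A hedged, Hadoop-free sketch of that idea (the `Conf`, `Configurable`, `RawCoder`, and `XorRawCoder` names below are invented stand-ins, not the real Hadoop types):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigurableCoderSketch {
  // Minimal stand-ins for org.apache.hadoop.conf.{Configuration, Configurable}.
  static class Conf {
    private final Map<String, String> kv = new HashMap<>();
    void set(String k, String v) { kv.put(k, v); }
    int getInt(String k, int dflt) {
      return kv.containsKey(k) ? Integer.parseInt(kv.get(k)) : dflt;
    }
  }
  interface Configurable { void setConf(Conf conf); Conf getConf(); }

  // Because the coder interface extends Configurable, callers can push
  // settings (e.g. chunk size) into any coder implementation the same way.
  interface RawCoder extends Configurable { int chunkSize(); }

  static class XorRawCoder implements RawCoder {
    private Conf conf = new Conf();
    public void setConf(Conf c) { this.conf = c; }
    public Conf getConf() { return conf; }
    public int chunkSize() {
      return conf.getInt("io.erasurecode.chunk.size", 64 * 1024);
    }
  }

  public static void main(String[] args) {
    RawCoder coder = new XorRawCoder();
    Conf conf = new Conf();
    conf.set("io.erasurecode.chunk.size", "1024");
    coder.setConf(conf);
    System.out.println(coder.chunkSize());  // picked up from configuration
  }
}
```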
[2/2] hadoop git commit: Updated CHANGES-HDFS-EC-7285.txt accordingly

2015-03-17 Thread drankye
Updated CHANGES-HDFS-EC-7285.txt accordingly


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7d604386
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7d604386
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7d604386

Branch: refs/heads/HDFS-7285
Commit: 7d6043869a970f8be6bf56ce0fbe14d4956a35b3
Parents: 902c9a7
Author: Kai Zheng kai.zh...@intel.com
Authored: Wed Mar 18 19:24:24 2015 +0800
Committer: Kai Zheng kai.zh...@intel.com
Committed: Wed Mar 18 19:24:24 2015 +0800

--
 hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7d604386/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt 
b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
index a97dc34..e27ff5c 100644
--- a/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES-HDFS-EC-7285.txt
@@ -19,6 +19,9 @@
 ( Kai Zheng via vinayakumarb )
 
 HADOOP-11705. Make erasure coder configurable. Contributed by Kai Zheng
-( Kai Zheng )
+( Kai Zheng )
+
+HADOOP-11706. Refine a little bit erasure coder API. Contributed by Kai 
Zheng
+( Kai Zheng )
 
 



hadoop git commit: HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote NameNode (clamb)

2015-03-17 Thread clamb
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7836 087611208 -> 4e9e1cca6


HDFS-7847. Modify NNThroughputBenchmark to be able to operate on a remote 
NameNode (clamb)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e9e1cca
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e9e1cca
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e9e1cca

Branch: refs/heads/HDFS-7836
Commit: 4e9e1cca6488ebb6ef4c3b0dce9281d61c4de516
Parents: 0876112
Author: Charles Lamb cl...@cloudera.com
Authored: Tue Mar 17 15:50:30 2015 -0400
Committer: Charles Lamb cl...@cloudera.com
Committed: Tue Mar 17 15:50:30 2015 -0400

--
 .../org/apache/hadoop/hdfs/DFSTestUtil.java |  39 ++
 .../server/namenode/NNThroughputBenchmark.java  | 135 +--
 2 files changed, 132 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e9e1cca/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
index 5b391c5..cc6f3c9 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.hdfs;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Charsets;
 import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
@@ -78,12 +79,14 @@ import org.apache.hadoop.hdfs.server.namenode.ha
 .ConfiguredFailoverProxyProvider;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
 import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;
 import org.apache.hadoop.hdfs.tools.DFSAdmin;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.io.nativeio.NativeIO;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.net.unix.DomainSocket;
 import org.apache.hadoop.net.unix.TemporarySocketDirectory;
+import org.apache.hadoop.security.RefreshUserMappingsProtocol;
 import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
@@ -104,6 +107,7 @@ import java.security.NoSuchAlgorithmException;
 import java.security.PrivilegedExceptionAction;
 import java.util.*;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicBoolean;
 
 import static org.apache.hadoop.fs.CreateFlag.*;
 import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
@@ -1698,4 +1702,39 @@ public class DFSTestUtil {
 GenericTestUtils.setLogLevel(NameNode.stateChangeLog, level);
 GenericTestUtils.setLogLevel(NameNode.blockStateChangeLog, level);
   }
+
+  /**
+   * Get the NamenodeProtocol RPC proxy for the NN associated with this
+   * DFSClient object
+   *
+   * @param nameNodeUri the URI of the NN to get a proxy for.
+   *
+   * @return the Namenode RPC proxy associated with this DFSClient object
+   */
+  @VisibleForTesting
+  public static NamenodeProtocol getNamenodeProtocolProxy(Configuration conf,
+  URI nameNodeUri, UserGroupInformation ugi)
+  throws IOException {
+return NameNodeProxies.createNonHAProxy(conf,
+NameNode.getAddress(nameNodeUri), NamenodeProtocol.class, ugi, false).
+getProxy();
+  }
+
+  /**
+   * Get the RefreshUserMappingsProtocol RPC proxy for the NN associated with
+   * this DFSClient object
+   *
+   * @param nameNodeUri the URI of the NN to get a proxy for.
+   *
+   * @return the RefreshUserMappingsProtocol RPC proxy associated with this
+   * DFSClient object
+   */
+  @VisibleForTesting
+  public static RefreshUserMappingsProtocol 
getRefreshUserMappingsProtocolProxy(
+  Configuration conf, URI nameNodeUri) throws IOException {
+final AtomicBoolean nnFallbackToSimpleAuth = new AtomicBoolean(false);
+return NameNodeProxies.createProxy(conf,
+nameNodeUri, RefreshUserMappingsProtocol.class,
+nnFallbackToSimpleAuth).getProxy();
+  }
 }
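The new `DFSTestUtil` helpers hand a test a protocol proxy for a possibly remote NameNode rather than assuming an in-process one. As a rough illustration of the client-side-proxy idea only, here is a `java.lang.reflect.Proxy` sketch with an invented `PingProtocol` interface standing in for Hadoop's RPC machinery (this is not how `NameNodeProxies` is implemented):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProtocolProxySketch {
  // Hypothetical protocol interface; NamenodeProtocol plays this role in HDFS.
  interface PingProtocol { String ping(String msg); }

  /** Build a client-side proxy; the handler stands in for an RPC transport. */
  static PingProtocol createProxy(String nameNodeUri) {
    InvocationHandler rpc = (proxy, method, args) -> {
      // A real implementation would marshal the call over the wire here.
      return "pong:" + nameNodeUri + ":" + args[0];
    };
    return (PingProtocol) Proxy.newProxyInstance(
        PingProtocol.class.getClassLoader(),
        new Class<?>[] { PingProtocol.class }, rpc);
  }

  public static void main(String[] args) {
    PingProtocol nn = createProxy("hdfs://nn1:8020");
    System.out.println(nn.ping("hello"));  // prints pong:hdfs://nn1:8020:hello
  }
}
```

The benchmark-facing benefit is the same either way: callers program against the protocol interface and never care whether the implementation is local or remote.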

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e9e1cca/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
 

[21/50] [abbrv] hadoop git commit: HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail to tell the DFSClient about it because of a network error (cmccabe)

2015-03-17 Thread cmccabe
HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and fail 
to tell the DFSClient about it because of a network error (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bc9cb3e2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bc9cb3e2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bc9cb3e2

Branch: refs/heads/HDFS-7836
Commit: bc9cb3e271b22069a15ca110cd60c860250aaab2
Parents: 79426f3
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Sat Mar 14 22:36:46 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Sat Mar 14 22:36:46 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  3 +
 .../apache/hadoop/hdfs/BlockReaderFactory.java  | 23 -
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |  2 +
 .../datatransfer/DataTransferProtocol.java  |  5 +-
 .../hdfs/protocol/datatransfer/Receiver.java|  2 +-
 .../hdfs/protocol/datatransfer/Sender.java  |  4 +-
 .../hdfs/server/datanode/DataXceiver.java   | 95 
 .../server/datanode/ShortCircuitRegistry.java   | 13 ++-
 .../src/main/proto/datatransfer.proto   | 11 +++
 .../shortcircuit/TestShortCircuitCache.java | 63 +
 10 files changed, 178 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc9cb3e2/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index c3f9367..93237af 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1154,6 +1154,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7903. Cannot recover block after truncate and delete snapshot.
 (Plamen Jeliazkov via shv)
 
+HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
+fail to tell the DFSClient about it because of a network error (cmccabe)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bc9cb3e2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
index ba48c79..1e915b2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hdfs;
 
+import static 
org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ShortCircuitFdResponse.USE_RECEIPT_VERIFICATION;
+
 import java.io.BufferedOutputStream;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
@@ -69,6 +71,12 @@ import com.google.common.base.Preconditions;
 public class BlockReaderFactory implements ShortCircuitReplicaCreator {
   static final Log LOG = LogFactory.getLog(BlockReaderFactory.class);
 
+  public static class FailureInjector {
+public void injectRequestFileDescriptorsFailure() throws IOException {
+  // do nothing
+}
+  }
+
   @VisibleForTesting
   static ShortCircuitReplicaCreator
   createShortCircuitReplicaInfoCallback = null;
@@ -76,6 +84,11 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
   private final DFSClient.Conf conf;
 
   /**
+   * Injects failures into specific operations during unit tests.
+   */
+  private final FailureInjector failureInjector;
+
+  /**
* The file name, for logging and debugging purposes.
*/
   private String fileName;
@@ -169,6 +182,7 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 
   public BlockReaderFactory(DFSClient.Conf conf) {
 this.conf = conf;
+this.failureInjector = conf.brfFailureInjector;
 this.remainingCacheTries = conf.nCachedConnRetry;
   }
 
@@ -518,11 +532,12 @@ public class BlockReaderFactory implements 
ShortCircuitReplicaCreator {
 final DataOutputStream out =
 new DataOutputStream(new BufferedOutputStream(peer.getOutputStream()));
 SlotId slotId = slot == null ? null : slot.getSlotId();
-new Sender(out).requestShortCircuitFds(block, token, slotId, 1);
+new Sender(out).requestShortCircuitFds(block, token, slotId, 1, true);
 DataInputStream in = new DataInputStream(peer.getInputStream());
 BlockOpResponseProto resp = BlockOpResponseProto.parseFrom(
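`BlockReaderFactory` gains a `FailureInjector` whose production implementation is a no-op; unit tests subclass it to force an `IOException` at exactly the point where the DataNode's reply can be lost on the network. A condensed sketch of that hook pattern (class names other than `FailureInjector` are invented here):

```java
import java.io.IOException;

public class FailureInjectorSketch {
  /** No-op in production; tests override this to simulate faults. */
  public static class FailureInjector {
    public void injectRequestFileDescriptorsFailure() throws IOException {
      // do nothing
    }
  }

  static class Client {
    private final FailureInjector injector;
    Client(FailureInjector injector) { this.injector = injector; }

    /** Returns true if the (simulated) request round-trip succeeded. */
    boolean requestFileDescriptors() {
      try {
        injector.injectRequestFileDescriptorsFailure(); // fault hook
        return true;                                    // normal path
      } catch (IOException e) {
        return false;                                   // injected network error
      }
    }
  }

  public static void main(String[] args) {
    FailureInjector failing = new FailureInjector() {
      @Override public void injectRequestFileDescriptorsFailure()
          throws IOException {
        throw new IOException("simulated network error");
      }
    };
    System.out.println(new Client(failing).requestFileDescriptors()
        ? "succeeded" : "failed as injected");
  }
}
```

Keeping the hook as an overridable no-op means the production code path pays essentially nothing, while the test can hit the exact race the JIRA describes.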
 

[34/50] [abbrv] hadoop git commit: YARN-3349. Treat all exceptions as failure in TestFSRMStateStore#testFSRMStateStoreClientRetry. Contributed by Zhihai Xu.

2015-03-17 Thread cmccabe
YARN-3349. Treat all exceptions as failure in 
TestFSRMStateStore#testFSRMStateStoreClientRetry. Contributed by Zhihai Xu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7522a643
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7522a643
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7522a643

Branch: refs/heads/HDFS-7836
Commit: 7522a643faeea2d8a8e2c7409ae60e0973e7cf38
Parents: 2681ed9
Author: Tsuyoshi Ozawa oz...@apache.org
Authored: Tue Mar 17 08:09:55 2015 +0900
Committer: Tsuyoshi Ozawa oz...@apache.org
Committed: Tue Mar 17 08:09:55 2015 +0900

--
 hadoop-yarn-project/CHANGES.txt  |  3 +++
 .../resourcemanager/recovery/TestFSRMStateStore.java | 11 +++
 2 files changed, 6 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/7522a643/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 26ef7d3..b8e07a0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -772,6 +772,9 @@ Release 2.7.0 - UNRELEASED
 YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags 
in 
 doc comments. (Akira AJISAKA, Andrew Purtell, and Allen Wittenauer via 
ozawa)
 
+YARN-3349. Treat all exceptions as failure in
+TestFSRMStateStore#testFSRMStateStoreClientRetry. (Zhihai Xu via ozawa)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/7522a643/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
index 675d73c..d2eddd6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/recovery/TestFSRMStateStore.java
@@ -100,11 +100,11 @@ public class TestFSRMStateStore extends 
RMStateStoreTestBase {
   workingDirPathURI.toString());
   conf.set(YarnConfiguration.FS_RM_STATE_STORE_RETRY_POLICY_SPEC,
 "100,6000");
-  conf.setInt(YarnConfiguration.FS_RM_STATE_STORE_NUM_RETRIES, 5);
+  conf.setInt(YarnConfiguration.FS_RM_STATE_STORE_NUM_RETRIES, 8);
   conf.setLong(YarnConfiguration.FS_RM_STATE_STORE_RETRY_INTERVAL_MS,
   900L);
   this.store = new TestFileSystemRMStore(conf);
-  Assert.assertEquals(store.getNumRetries(), 5);
+  Assert.assertEquals(store.getNumRetries(), 8);
   Assert.assertEquals(store.getRetryInterval(), 900L);
   return store;
 }
@@ -277,12 +277,7 @@ public class TestFSRMStateStore extends 
RMStateStoreTestBase {
 ApplicationStateData.newInstance(111, 111, "user", null,
 RMAppState.ACCEPTED, "diagnostics", 333));
   } catch (Exception e) {
-// TODO 0 datanode exception will not be retried by dfs client, fix
-// that separately.
-if (!e.getMessage().contains("could only be replicated" +
-    " to 0 nodes instead of minReplication (=1)")) {
-  assertionFailedInThread.set(true);
-}
+assertionFailedInThread.set(true);
 e.printStackTrace();
   }
 }
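The fix above drops the special case, so any exception in the worker thread now flips `assertionFailedInThread`, which the main test thread checks after joining. The pattern, reduced to plain Java with no YARN types:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ThreadFailureFlag {
  /** Runs work on a thread; any exception is recorded, never swallowed. */
  static boolean runAndCheck(Runnable work) {
    final AtomicBoolean failedInThread = new AtomicBoolean(false);
    Thread t = new Thread(() -> {
      try {
        work.run();
      } catch (Exception e) {
        // Treat every exception as a failure; the main thread reads the flag.
        failedInThread.set(true);
      }
    });
    t.start();
    try {
      t.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      failedInThread.set(true);
    }
    return !failedInThread.get();
  }

  public static void main(String[] args) {
    System.out.println(runAndCheck(() -> { }));                       // clean run
    System.out.println(runAndCheck(() -> {                            // flagged
      throw new RuntimeException("boom");
    }));
  }
}
```

An `AtomicBoolean` (rather than a plain field) guarantees the main thread sees the write made by the worker thread; JUnit assertions thrown inside a spawned thread would otherwise be lost.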



[09/50] [abbrv] hadoop git commit: HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not idempotent. Contributed by Tsz Wo Nicholas Sze

2015-03-17 Thread cmccabe
HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
idempotent. Contributed by Tsz Wo Nicholas Sze


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f446669a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f446669a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f446669a

Branch: refs/heads/HDFS-7836
Commit: f446669afb5c3d31a00c65449f27088b39e11ae3
Parents: 8180e67
Author: Brandon Li brando...@apache.org
Authored: Fri Mar 13 10:42:22 2015 -0700
Committer: Brandon Li brando...@apache.org
Committed: Fri Mar 13 10:42:22 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt  |  3 +++
 .../BlockInfoContiguousUnderConstruction.java|  1 +
 .../hadoop/hdfs/server/namenode/FSNamesystem.java| 15 +++
 .../hdfs/server/namenode/TestFileTruncate.java   |  2 ++
 4 files changed, 21 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 153453c..909182b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1142,6 +1142,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-6833.  DirectoryScanner should not register a deleting block with
 memory of DataNode.  (Shinichi Yamashita via szetszwo)
 
+HDFS-7926. NameNode implementation of ClientProtocol.truncate(..) is not 
+idempotent (Tsz Wo Nicholas Sze via brandonli)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
index 91b76cc..ae809a5 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoContiguousUnderConstruction.java
@@ -383,6 +383,7 @@ public class BlockInfoContiguousUnderConstruction extends 
BlockInfoContiguous {
 
   private void appendUCParts(StringBuilder sb) {
 sb.append("{UCState=").append(blockUCState)
+  .append(", truncateBlock=" + truncateBlock)
   .append(", primaryNodeIndex=").append(primaryNodeIndex)
   .append(", replicas=[");
 if (replicas != null) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/f446669a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 77b4a27..b384ce6 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -1966,6 +1966,21 @@ public class FSNamesystem implements Namesystem, 
FSNamesystemMBean,
   throw new UnsupportedOperationException(
   "Cannot truncate lazy persist file " + src);
 }
+
+// Check if the file is already being truncated with the same length
+final BlockInfoContiguous last = file.getLastBlock();
+if (last != null && last.getBlockUCState() == BlockUCState.UNDER_RECOVERY) {
+  final Block truncateBlock
+  = ((BlockInfoContiguousUnderConstruction)last).getTruncateBlock();
+  if (truncateBlock != null) {
+final long truncateLength = file.computeFileSize(false, false)
++ truncateBlock.getNumBytes();
+if (newLength == truncateLength) {
+  return false;
+}
+  }
+}
+
 // Opening an existing file for truncate. May need lease recovery.
 recoverLeaseInternal(RecoverLeaseOp.TRUNCATE_FILE,
 iip, src, clientName, clientMachine, false);
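The added block makes `truncate(..)` idempotent: if the last block is already under recovery and its pending truncate would produce exactly `newLength`, the retried call returns `false` instead of failing. A toy model of that idempotency check (the `FileState` type below is invented; the real check computes the target length from the file size and the truncate block):

```java
public class IdempotentTruncate {
  /** Toy file: a committed length plus an optional in-progress truncate target. */
  static class FileState {
    long length;
    Long pendingTruncateTo;  // null when no truncate is in flight
    FileState(long length) { this.length = length; }
  }

  /**
   * Returns true if a new truncate was started, false if an identical one
   * is already in progress (the idempotent-retry case).
   */
  static boolean truncate(FileState f, long newLength) {
    if (f.pendingTruncateTo != null) {
      if (f.pendingTruncateTo == newLength) {
        return false;  // same request retried: nothing new to do
      }
      throw new IllegalStateException("different truncate already in progress");
    }
    f.pendingTruncateTo = newLength;  // start recovery toward the new length
    return true;
  }

  public static void main(String[] args) {
    FileState f = new FileState(100);
    System.out.println(truncate(f, 40));  // first call starts the truncate
    System.out.println(truncate(f, 40));  // retried RPC is a no-op, not an error
  }
}
```

This matters because truncate RPCs can be retried after a network failure; without the check, the retry would hit lease recovery already in progress and surface a spurious error to the client.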


[48/50] [abbrv] hadoop git commit: Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

2015-03-17 Thread cmccabe
Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

This reverts commit c2b185def846f5577a130003a533b9c377b58fab.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/32b43304
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/32b43304
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/32b43304

Branch: refs/heads/HDFS-7836
Commit: 32b43304563c2430c00bc3e142a962d2bc5f4d58
Parents: d884670
Author: Karthik Kambatla ka...@apache.org
Authored: Tue Mar 17 12:31:15 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Tue Mar 17 12:31:15 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 --
 .../dev-support/findbugs-exclude.xml| 27 
 .../scheduler/fair/AllocationConfiguration.java | 13 +++---
 .../fair/AllocationFileLoaderService.java   |  2 +-
 .../scheduler/fair/FSOpDurations.java   |  3 ---
 5 files changed, 31 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index f5b72d7..fee0ce0 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -320,8 +320,6 @@ Release 2.7.0 - UNRELEASED
 YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager
 restart. (Jason Lowe via junping_du) 
 
-YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)
-
 YARN-3124. Fixed CS LeafQueue/ParentQueue to use QueueCapacities to track
 capacities-by-label. (Wangda Tan via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a89884a..943ecb0 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -152,12 +152,22 @@
     <Class name="org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService" />
+    <Field name="allocFile" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
   <!-- Inconsistent sync warning - minimumAllocation is only initialized once and never changed -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler" />
     <Field name="minimumAllocation" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode" />
+    <Method name="reserveResource" />
+    <Bug pattern="BC_UNCONFIRMED_CAST" />
+  </Match>
   <!-- Inconsistent sync warning - reinitialize read from other queue does not need sync-->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue" />
@@ -215,6 +225,18 @@
     <Field name="scheduleAsynchronously" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Inconsistent sync warning - updateInterval is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="updateInterval" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <!-- Inconsistent sync warning - callDurationMetrics is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="fsOpDurations" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
 
   <!-- Inconsistent sync warning - numRetries is only initialized once and never changed -->
   <Match>
@@ -415,6 +437,11 @@
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
   <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="allocConf" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode" />
     <Field name="numContainers" />
     <Bug pattern="VO_VOLATILE_INCREMENT" />

http://git-wip-us.apache.org/repos/asf/hadoop/blob/32b43304/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationConfiguration.java

[30/50] [abbrv] hadoop git commit: HDFS-7886. Fix TestFileTruncate failures. Contributed by Plamen Jeliazkov and Konstantin Shvachko.

2015-03-17 Thread cmccabe
HDFS-7886. Fix TestFileTruncate failures. Contributed by Plamen Jeliazkov and 
Konstantin Shvachko.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ce5de93a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ce5de93a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ce5de93a

Branch: refs/heads/HDFS-7836
Commit: ce5de93a5837e115e1f0b7d3c5a67ace25385a63
Parents: 587d8be
Author: Konstantin V Shvachko s...@apache.org
Authored: Mon Mar 16 12:54:04 2015 -0700
Committer: Konstantin V Shvachko s...@apache.org
Committed: Mon Mar 16 12:54:04 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  | 44 ++--
 .../hdfs/server/namenode/TestFileTruncate.java  | 18 
 3 files changed, 51 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce5de93a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 93237af..d313b6c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1157,6 +1157,8 @@ Release 2.7.0 - UNRELEASED
 HDFS-7915. The DataNode can sometimes allocate a ShortCircuitShm slot and
 fail to tell the DFSClient about it because of a network error (cmccabe)
 
+HDFS-7886. Fix TestFileTruncate failures. (Plamen Jeliazkov and shv)
+
 BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
   HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ce5de93a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 834eb32..9208ed2 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -77,9 +77,12 @@ import org.apache.hadoop.hdfs.MiniDFSNNTopology.NNConf;
 import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
 import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
 import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
 import org.apache.hadoop.hdfs.server.common.Storage;
 import org.apache.hadoop.hdfs.server.common.Util;
@@ -1343,7 +1346,6 @@ public class MiniDFSCluster {
 }
 
 int curDatanodesNum = dataNodes.size();
-final int curDatanodesNumSaved = curDatanodesNum;
 // for mincluster's the default initialDelay for BRs is 0
 if (conf.get(DFS_BLOCKREPORT_INITIAL_DELAY_KEY) == null) {
   conf.setLong(DFS_BLOCKREPORT_INITIAL_DELAY_KEY, 0);
@@ -2022,7 +2024,23 @@ public class MiniDFSCluster {
*/
   public synchronized boolean restartDataNode(int i, boolean keepPort)
   throws IOException {
-DataNodeProperties dnprop = stopDataNode(i);
+return restartDataNode(i, keepPort, false);
+  }
+
+  /**
+   * Restart a particular DataNode.
+   * @param idn index of the DataNode
+   * @param keepPort true if should restart on the same port
+   * @param expireOnNN true if NameNode should expire the DataNode heartbeat
+   * @return true if restart is successful
+   * @throws IOException
+   */
+  public synchronized boolean restartDataNode(
+  int idn, boolean keepPort, boolean expireOnNN) throws IOException {
+DataNodeProperties dnprop = stopDataNode(idn);
+// stopDataNode returns null for an invalid index; check before
+// dereferencing it to expire the heartbeat on the NameNode.
+if (expireOnNN && dnprop != null) {
+  setDataNodeDead(dnprop.datanode.getDatanodeId());
+}
 if (dnprop == null) {
   return false;
 } else {
@@ -2030,6 +2048,24 @@ public class MiniDFSCluster {
 }
   }
 
+  /**
+   * Expire a DataNode heartbeat on the NameNode
+   * @param dnId the DatanodeID whose heartbeat should be expired
+   * @throws IOException
+   */
+  public void setDataNodeDead(DatanodeID dnId) throws IOException {
+DatanodeDescriptor dnd =
+NameNodeAdapter.getDatanode(getNamesystem(), dnId);
+dnd.setLastUpdate(0L);
+BlockManagerTestUtil.checkHeartbeat(getNamesystem().getBlockManager());
+  }
+
+  public void setDataNodesDead() throws IOException {

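The expiry path added here reduces to one rule: the NameNode declares a DataNode dead once `now - lastUpdate` exceeds the heartbeat expiry interval, so zeroing `lastUpdate` and forcing a heartbeat check (via `BlockManagerTestUtil.checkHeartbeat`) expires the node immediately. A minimal self-contained sketch of that bookkeeping (hypothetical `HeartbeatModel` class, not Hadoop's actual HeartbeatManager):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of NameNode heartbeat expiry: a node whose last heartbeat is
// older than expiryMs is reported dead on the next check.
public class HeartbeatModel {
    private final long expiryMs;
    private final Map<String, Long> lastUpdate = new HashMap<>();

    public HeartbeatModel(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    public void heartbeat(String nodeId, long nowMs) {
        lastUpdate.put(nodeId, nowMs);
    }

    // Equivalent of setLastUpdate(0L) in the diff above: guarantees the
    // next heartbeat check classifies the node as dead.
    public void forceExpire(String nodeId) {
        lastUpdate.put(nodeId, 0L);
    }

    public boolean isDead(String nodeId, long nowMs) {
        Long last = lastUpdate.get(nodeId);
        return last == null || nowMs - last > expiryMs;
    }
}
```

In the diff above, `setLastUpdate(0L)` plays the role of `forceExpire`, and the forced heartbeat check is what actually reports the node dead to the block manager.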
[23/50] [abbrv] hadoop git commit: YARN-1453. [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA, Andrew Purtell, and Allen Wittenauer.

2015-03-17 Thread cmccabe
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
index 9923806..bfe10d6 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java
@@ -349,7 +349,7 @@ public abstract class AMRMClient<T extends 
AMRMClient.ContainerRequest> extends
* Set the NM token cache for the <code>AMRMClient</code>. This cache must
* be shared with the {@link NMClient} used to manage containers for the
* <code>AMRMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
@@ -363,7 +363,7 @@ public abstract class AMRMClient<T extends 
AMRMClient.ContainerRequest> extends
* Get the NM token cache of the <code>AMRMClient</code>. This cache must be
* shared with the {@link NMClient} used to manage containers for the
* <code>AMRMClient</code>.
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
index 721728e..08b911b 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
@@ -125,7 +125,7 @@ public abstract class NMClient extends AbstractService {
* Set the NM Token cache of the <code>NMClient</code>. This cache must be
* shared with the {@link AMRMClient} that requested the containers managed
* by this <code>NMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*
@@ -139,7 +139,7 @@ public abstract class NMClient extends AbstractService {
* Get the NM token cache of the <code>NMClient</code>. This cache must be
* shared with the {@link AMRMClient} that requested the containers managed
* by this <code>NMClient</code>
-   * <p/>
+   * <p>
* If a NM token cache is not set, the {@link NMTokenCache#getSingleton()}
* singleton instance will be used.
*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/3da9a97c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
index 0e7356f..0c349cc 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMTokenCache.java
@@ -34,26 +34,26 @@ import com.google.common.annotations.VisibleForTesting;
 /**
  * NMTokenCache manages NMTokens required for an Application Master
  * communicating with individual NodeManagers.
- * <p/>
+ * <p>
  * By default Yarn client libraries {@link AMRMClient} and {@link NMClient} use
  * {@link #getSingleton()} instance of the cache.
  * <ul>
- * <li>Using the singleton instance of the cache is appropriate when running a
- * single ApplicationMaster in the same JVM.</li>
- * <li>When using the singleton, users don't need to do anything special,
- * {@link AMRMClient} and {@link NMClient} are already set up to use the 
default
- * singleton {@link NMTokenCache}</li>
+ *   <li>
+ *     Using the singleton instance of the cache is appropriate when running a
+ *     single ApplicationMaster in the same JVM.
+ *   </li>
+ *   <li>
+ *     When using the singleton, users don't need to do anything special,
+ *     {@link AMRMClient} and {@link NMClient} are already set up to use the
+ *     default singleton {@link NMTokenCache}
+ *   </li>
  * </ul>
- * <p/>
  

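The javadoc above distinguishes the process-wide singleton cache from per-client instances. A stripped-down sketch of that pattern (hypothetical `TokenCacheSketch` class, far simpler than YARN's real `NMTokenCache`):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simplified NM-token cache keyed by NodeManager address, mirroring the
// singleton-vs-instance choice described in the javadoc above.
public class TokenCacheSketch {
    private static final TokenCacheSketch SINGLETON = new TokenCacheSketch();

    private final ConcurrentMap<String, String> nmTokens = new ConcurrentHashMap<>();

    // Process-wide cache, appropriate when one ApplicationMaster runs per JVM.
    public static TokenCacheSketch getSingleton() {
        return SINGLETON;
    }

    public void setToken(String nodeAddr, String token) {
        nmTokens.put(nodeAddr, token);
    }

    public String getToken(String nodeAddr) {
        return nmTokens.get(nodeAddr);
    }

    public boolean containsToken(String nodeAddr) {
        return nmTokens.containsKey(nodeAddr);
    }
}
```

The key contract the javadoc fixes is that the AMRMClient that receives a token and the NMClient that uses it must share the same cache instance; the singleton gives that for free in the single-AM-per-JVM case.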
[01/50] [abbrv] hadoop git commit: YARN-3338. Exclude jline dependency from YARN. Contributed by Zhijie Shen

2015-03-17 Thread cmccabe
Repository: hadoop
Updated Branches:
  refs/heads/HDFS-7836 4e9e1cca6 -> 4f62b5ad7 (forced update)


YARN-3338. Exclude jline dependency from YARN. Contributed by Zhijie
Shen


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/06ce1d9a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/06ce1d9a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/06ce1d9a

Branch: refs/heads/HDFS-7836
Commit: 06ce1d9a6cd9bec25e2f478b98264caf96a3ea44
Parents: ff83ae7
Author: Xuan xg...@apache.org
Authored: Thu Mar 12 10:25:00 2015 -0700
Committer: Xuan xg...@apache.org
Committed: Thu Mar 12 10:25:00 2015 -0700

--
 hadoop-project/pom.xml  | 4 
 hadoop-yarn-project/CHANGES.txt | 2 ++
 2 files changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/06ce1d9a/hadoop-project/pom.xml
--
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index a6127c7..6c95cf0 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -833,6 +833,10 @@
<groupId>org.jboss.netty</groupId>
<artifactId>netty</artifactId>
  </exclusion>
+  <exclusion>
+    <groupId>jline</groupId>
+    <artifactId>jline</artifactId>
+  </exclusion>
 </exclusions>
   </dependency>
   <dependency>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/06ce1d9a/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 969c6a1..11d1cc9 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -755,6 +755,8 @@ Release 2.7.0 - UNRELEASED
 YARN-1884. Added nodeHttpAddress into ContainerReport and fixed the link 
to NM
 web page. (Xuan Gong via zjshen)
 
+YARN-3338. Exclude jline dependency from YARN. (Zhijie Shen via xgong)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES



[36/50] [abbrv] hadoop git commit: HDFS-7838. Expose truncate API for libhdfs. (yliu)

2015-03-17 Thread cmccabe
HDFS-7838. Expose truncate API for libhdfs. (yliu)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c2db34
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c2db34
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c2db34

Branch: refs/heads/HDFS-7836
Commit: 48c2db34eff376c0f3a72587a5540b1e3dffafd2
Parents: ef9946c
Author: yliu y...@apache.org
Authored: Tue Mar 17 07:22:17 2015 +0800
Committer: yliu y...@apache.org
Committed: Tue Mar 17 07:22:17 2015 +0800

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt |  2 ++
 .../src/contrib/libwebhdfs/src/hdfs_web.c   |  6 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.c  | 37 
 .../hadoop-hdfs/src/main/native/libhdfs/hdfs.h  | 15 
 4 files changed, 60 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 9339b97..ad3e880 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -364,6 +364,8 @@ Release 2.7.0 - UNRELEASED
 
 HDFS-6488. Support HDFS superuser in NFS gateway. (brandonli)
 
+HDFS-7838. Expose truncate API for libhdfs. (yliu)
+
   IMPROVEMENTS
 
 HDFS-7752. Improve description for

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
--
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
index deb11ef..86b4faf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/hdfs_web.c
@@ -1124,6 +1124,12 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+errno = ENOTSUP;
+return -1;
+}
+
 tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer, tSize length)
 {
 if (length == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
index 34a..504d47e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.c
@@ -1037,6 +1037,43 @@ done:
 return file;
 }
 
+int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)
+{
+jobject jFS = (jobject)fs;
+jthrowable jthr;
+jvalue jVal;
+jobject jPath = NULL;
+
+JNIEnv *env = getJNIEnv();
+
+if (!env) {
+errno = EINTERNAL;
+return -1;
+}
+
+/* Create an object of org.apache.hadoop.fs.Path */
+jthr = constructNewObjectOfPath(env, path, &jPath);
+if (jthr) {
+errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+"hdfsTruncateFile(%s): constructNewObjectOfPath", path);
+return -1;
+}
+
+jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
+"truncate", JMETHOD2(JPARAM(HADOOP_PATH), "J", "Z"),
+jPath, newlength);
+destroyLocalReference(env, jPath);
+if (jthr) {
+errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+"hdfsTruncateFile(%s): FileSystem#truncate", path);
+return -1;
+}
+if (jVal.z == JNI_TRUE) {
+return 1;
+}
+return 0;
+}
+
 int hdfsUnbufferFile(hdfsFile file)
 {
 int ret;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c2db34/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
index 64889ed..5b7bc1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/hdfs.h
@@ -396,6 +396,21 @@ extern "C" {
   int bufferSize, short replication, tSize blocksize);
 
 /**
+ * hdfsTruncateFile - Truncate an hdfs file to the given length.
+ * @param fs The configured filesystem handle.
+ * @param path The full path to the file.
+ * @param newlength The size the file is to be truncated to
+ * @return 1 if the file has been 

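libhdfs maps `FileSystem#truncate`'s boolean onto C conventions: 1 when the file is immediately at the new length, 0 when block recovery is still in progress, and -1 with `errno` set on error. A small sketch of that mapping and the caller-side wait a 0 return implies (hypothetical helper names, not part of libhdfs):

```java
import java.util.function.BooleanSupplier;

// Sketch of the return-code convention used by hdfsTruncateFile above:
// the Java boolean becomes 1/0, and a 0 means the caller must wait for
// lease/block recovery before reopening the file.
public class TruncateHelper {
    // Mirrors the jVal.z == JNI_TRUE ? 1 : 0 mapping in hdfs.c.
    public static int truncateReturnCode(boolean truncateCompleted) {
        return truncateCompleted ? 1 : 0;
    }

    // Caller-side pattern after a 0 return: poll until recovery finishes
    // (recoveryDone is a stand-in for re-checking file state).
    public static boolean awaitRecovery(BooleanSupplier recoveryDone, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            if (recoveryDone.getAsBoolean()) {
                return true;
            }
        }
        return false;
    }
}
```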
[12/50] [abbrv] hadoop git commit: HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt. Contributed by Allen Wittenauer.

2015-03-17 Thread cmccabe
HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt. 
Contributed by Allen Wittenauer.

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dfd32017
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dfd32017
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dfd32017

Branch: refs/heads/HDFS-7836
Commit: dfd32017001e6902829671dc8cc68afbca61e940
Parents: 6acb7f2
Author: Konstantin V Shvachko s...@apache.org
Authored: Fri Mar 13 13:32:45 2015 -0700
Committer: Konstantin V Shvachko s...@apache.org
Committed: Fri Mar 13 13:32:45 2015 -0700

--
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dfd32017/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
--
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt 
b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index a149f18..c3f9367 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -746,6 +746,9 @@ Release 2.7.0 - UNRELEASED
 HDFS-7435. PB encoding of block reports is very inefficient.
 (Daryn Sharp via kihwal)
 
+HDFS-2605. Remove redundant Release 0.21.1 section from CHANGES.txt.
+(Allen Wittenauer via shv)
+
   OPTIMIZATIONS
 
 HDFS-7454. Reduce memory footprint for AclEntries in NameNode.
@@ -10299,8 +10302,6 @@ Release 0.22.0 - 2011-11-29
 
 HDFS-2287. TestParallelRead has a small off-by-one bug. (todd)
 
-Release 0.21.1 - Unreleased
-
 HDFS-1466. TestFcHdfsSymlink relies on /tmp/test not existing. (eli)
 
 HDFS-874. TestHDFSFileContextMainOperations fails on weirdly 



[44/50] [abbrv] hadoop git commit: HADOOP-11721. switch jenkins patch tester to use git clean instead of mvn clean (temp commit)

2015-03-17 Thread cmccabe
HADOOP-11721. switch jenkins patch tester to use git clean instead of mvn clean 
(temp commit)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a89b087c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a89b087c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a89b087c

Branch: refs/heads/HDFS-7836
Commit: a89b087c45e549e1f5b5fc953de4657fcbb97195
Parents: 7179f94
Author: Vinayakumar B vinayakum...@apache.org
Authored: Tue Mar 17 21:39:14 2015 +0530
Committer: Vinayakumar B vinayakum...@apache.org
Committed: Tue Mar 17 21:39:14 2015 +0530

--
 dev-support/test-patch.sh | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/a89b087c/dev-support/test-patch.sh
--
diff --git a/dev-support/test-patch.sh b/dev-support/test-patch.sh
index b0fbb80..574a4fd 100755
--- a/dev-support/test-patch.sh
+++ b/dev-support/test-patch.sh
@@ -292,6 +292,10 @@ prebuildWithoutPatch () {
 cd -
   fi
   echo "Compiling $(pwd)"
+  if [[ -d $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs ]]; then
+    echo "Changing permission $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs to avoid broken builds"
+    chmod +x -R $(pwd)/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
+  fi
   echo "$MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1"
   $MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
   if [[ $? != 0 ]] ; then



[1/2] hadoop git commit: YARN-3243. CapacityScheduler should pass headroom from parent to children to make sure ParentQueue obey its capacity limits. Contributed by Wangda Tan. (cherry picked from com

2015-03-17 Thread jianhe
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 895588b43 -> 1c601e492


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1c601e49/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
--
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index a5a2e5f..972cabb 100644
--- 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -350,8 +350,8 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(
 (int)(node_0.getTotalResource().getMemory() * a.getCapacity()) - 
(1*GB),
 a.getMetrics().getAvailableMB());
@@ -486,7 +486,7 @@ public class TestLeafQueue {
 // Start testing...
 
 // Only 1 container
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(1*GB, a.getUsedResources().getMemory());
 assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
@@ -497,7 +497,7 @@ public class TestLeafQueue {
 
  // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
 // you can get one container more than user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -506,7 +506,7 @@ public class TestLeafQueue {
 assertEquals(2*GB, a.getMetrics().getAllocatedMB());
 
 // Can't allocate 3rd due to user-limit
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
@@ -516,7 +516,7 @@ public class TestLeafQueue {
 
 // Bump up user-limit-factor, now allocate should work
 a.setUserLimitFactor(10);
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(3*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -525,7 +525,7 @@ public class TestLeafQueue {
 assertEquals(3*GB, a.getMetrics().getAllocatedMB());
 
 // One more should work, for app_1, due to user-limit-factor
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
@@ -536,8 +536,8 @@ public class TestLeafQueue {
 // Test max-capacity
 // Now - no more allocs since we are at max-cap
 a.setMaxCapacity(0.5f);
-a.assignContainers(clusterResource, node_0, false,
-new ResourceLimits(clusterResource));
+a.assignContainers(clusterResource, node_0, new ResourceLimits(
+clusterResource));
 assertEquals(4*GB, a.getUsedResources().getMemory());
 assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -652,21 +652,21 @@ public class TestLeafQueue {
 //recordFactory)));
 
 // 1 container to user_0
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,
 new ResourceLimits(clusterResource));
 assertEquals(2*GB, a.getUsedResources().getMemory());
 assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
 assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
 // Again one to user_0 since he hasn't exceeded user limit yet
-a.assignContainers(clusterResource, node_0, false,
+a.assignContainers(clusterResource, node_0,

hadoop git commit: Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

2015-03-17 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2 455d4aa8a -> 1e77d92d6


Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

This reverts commit c2b185def846f5577a130003a533b9c377b58fab.

(cherry picked from commit 32b43304563c2430c00bc3e142a962d2bc5f4d58)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1e77d92d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1e77d92d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1e77d92d

Branch: refs/heads/branch-2
Commit: 1e77d92d622e3b9fb444982fac7566515532089b
Parents: 455d4aa
Author: Karthik Kambatla ka...@apache.org
Authored: Tue Mar 17 12:31:15 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Tue Mar 17 12:31:44 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 --
 .../dev-support/findbugs-exclude.xml| 27 
 .../scheduler/fair/AllocationConfiguration.java | 13 +++---
 .../fair/AllocationFileLoaderService.java   |  2 +-
 .../scheduler/fair/FSOpDurations.java   |  3 ---
 5 files changed, 31 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1e77d92d/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e15fdf2..2aa2fdd 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -272,8 +272,6 @@ Release 2.7.0 - UNRELEASED
 YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager
 restart. (Jason Lowe via junping_du) 
 
-YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)
-
 YARN-3124. Fixed CS LeafQueue/ParentQueue to use QueueCapacities to track
 capacities-by-label. (Wangda Tan via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1e77d92d/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a89884a..943ecb0 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -152,12 +152,22 @@
     <Class name="org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService" />
+    <Field name="allocFile" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
   <!-- Inconsistent sync warning - minimumAllocation is only initialized once and never changed -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler" />
     <Field name="minimumAllocation" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode" />
+    <Method name="reserveResource" />
+    <Bug pattern="BC_UNCONFIRMED_CAST" />
+  </Match>
   <!-- Inconsistent sync warning - reinitialize read from other queue does not need sync-->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue" />
@@ -215,6 +225,18 @@
     <Field name="scheduleAsynchronously" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Inconsistent sync warning - updateInterval is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="updateInterval" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <!-- Inconsistent sync warning - callDurationMetrics is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="fsOpDurations" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
 
   <!-- Inconsistent sync warning - numRetries is only initialized once and never changed -->
   <Match>
@@ -415,6 +437,11 @@
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
   <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="allocConf" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode" />
     <Field name="numContainers" />
     <Bug pattern="VO_VOLATILE_INCREMENT" />


hadoop git commit: Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

2015-03-17 Thread kasha
Repository: hadoop
Updated Branches:
  refs/heads/branch-2.7 d2dad7442 -> 47e6fc2bf


Revert YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)

This reverts commit c2b185def846f5577a130003a533b9c377b58fab.

(cherry picked from commit 32b43304563c2430c00bc3e142a962d2bc5f4d58)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/47e6fc2b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/47e6fc2b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/47e6fc2b

Branch: refs/heads/branch-2.7
Commit: 47e6fc2bf9c3731594a37c74089411ccc44a5221
Parents: d2dad74
Author: Karthik Kambatla ka...@apache.org
Authored: Tue Mar 17 12:31:15 2015 -0700
Committer: Karthik Kambatla ka...@apache.org
Committed: Tue Mar 17 12:32:14 2015 -0700

--
 hadoop-yarn-project/CHANGES.txt |  2 --
 .../dev-support/findbugs-exclude.xml| 27 
 .../scheduler/fair/AllocationConfiguration.java | 13 +++---
 .../fair/AllocationFileLoaderService.java   |  2 +-
 .../scheduler/fair/FSOpDurations.java   |  3 ---
 5 files changed, 31 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/47e6fc2b/hadoop-yarn-project/CHANGES.txt
--
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 1c46d0d..0cdc7c4 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -251,8 +251,6 @@ Release 2.7.0 - UNRELEASED
 YARN-2079. Recover NonAggregatingLogHandler state upon nodemanager
 restart. (Jason Lowe via junping_du) 
 
-YARN-3181. FairScheduler: Fix up outdated findbugs issues. (kasha)
-
 YARN-3124. Fixed CS LeafQueue/ParentQueue to use QueueCapacities to track
 capacities-by-label. (Wangda Tan via jianhe)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/47e6fc2b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
--
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml 
b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index a89884a..943ecb0 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -152,12 +152,22 @@
     <Class name="org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationFileLoaderService" />
+    <Field name="allocFile" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
   <!-- Inconsistent sync warning - minimumAllocation is only initialized once and never changed -->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler" />
     <Field name="minimumAllocation" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode" />
+    <Method name="reserveResource" />
+    <Bug pattern="BC_UNCONFIRMED_CAST" />
+  </Match>
   <!-- Inconsistent sync warning - reinitialize read from other queue does not need sync-->
   <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue" />
@@ -215,6 +225,18 @@
     <Field name="scheduleAsynchronously" />
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
+  <!-- Inconsistent sync warning - updateInterval is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="updateInterval" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <!-- Inconsistent sync warning - callDurationMetrics is only initialized once and never changed -->
+  <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="fsOpDurations" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
 
   <!-- Inconsistent sync warning - numRetries is only initialized once and never changed -->
   <Match>
@@ -415,6 +437,11 @@
     <Bug pattern="IS2_INCONSISTENT_SYNC" />
   </Match>
   <Match>
+    <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler" />
+    <Field name="allocConf" />
+    <Bug pattern="IS2_INCONSISTENT_SYNC" />
+  </Match>
+  <Match>
     <Class name="org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode" />
     <Field name="numContainers" />
     <Bug pattern="VO_VOLATILE_INCREMENT" />


[08/50] [abbrv] hadoop git commit: YARN-3267. Timelineserver applies the ACL rules after applying the limit on the number of records (Chang Li via jeagles)

2015-03-17 Thread cmccabe
YARN-3267. Timelineserver applies the ACL rules after applying the limit on the 
number of records (Chang Li via jeagles)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8180e676
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8180e676
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8180e676

Branch: refs/heads/HDFS-7836
Commit: 8180e676abb2bb500a48b3a0c0809d2a807ab235
Parents: 387f271
Author: Jonathan Eagles jeag...@gmail.com
Authored: Fri Mar 13 12:04:30 2015 -0500
Committer: Jonathan Eagles jeag...@gmail.com
Committed: Fri Mar 13 12:04:30 2015 -0500

--
 .../jobhistory/TestJobHistoryEventHandler.java  | 14 +++---
 .../mapred/TestMRTimelineEventHandling.java | 12 ++---
 hadoop-yarn-project/CHANGES.txt |  3 ++
 .../distributedshell/TestDistributedShell.java  |  4 +-
 .../server/timeline/LeveldbTimelineStore.java   | 18 +--
 .../server/timeline/MemoryTimelineStore.java| 12 -
 .../server/timeline/TimelineDataManager.java| 50 +++-
 .../yarn/server/timeline/TimelineReader.java|  3 +-
 .../timeline/TestLeveldbTimelineStore.java  | 16 +++
 .../timeline/TestTimelineDataManager.java   | 26 +-
 .../server/timeline/TimelineStoreTestUtils.java | 33 +
 11 files changed, 126 insertions(+), 65 deletions(-)
--
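The commit message above describes an ordering bug: the Timeline server applied the record limit before ACL filtering, so callers could receive fewer than `limit` visible records even when more existed. A minimal standalone sketch of why the ordering matters (all names here are illustrative, not the actual TimelineDataManager API):

```java
import java.util.ArrayList;
import java.util.List;

public class AclLimitOrder {
  // Stand-in ACL check: pretend only even-valued records are visible.
  static boolean isVisible(int v) { return v % 2 == 0; }

  // Buggy ordering: truncate to `limit` first, then filter by ACL.
  // The result can come up short even though more visible records exist.
  static List<Integer> buggy(List<Integer> all, int limit) {
    List<Integer> out = new ArrayList<>();
    for (int i = 0; i < all.size() && i < limit; i++) {
      if (isVisible(all.get(i))) out.add(all.get(i));
    }
    return out;
  }

  // Fixed ordering: filter by ACL first, stop once `limit` visible
  // records have been collected.
  static List<Integer> fixed(List<Integer> all, int limit) {
    List<Integer> out = new ArrayList<>();
    for (int v : all) {
      if (out.size() >= limit) break;
      if (isVisible(v)) out.add(v);
    }
    return out;
  }

  public static void main(String[] args) {
    List<Integer> all = List.of(1, 2, 3, 4, 5, 6);
    System.out.println(buggy(all, 3).size()); // 1: only "2" survives the cut
    System.out.println(fixed(all, 3).size()); // 3: collects 2, 4, 6
  }
}
```

This also explains the extra `null` argument threaded through the `getEntities` test calls in the diff below it: the store interface grew a parameter so filtering context reaches the store.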


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8180e676/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
--
diff --git 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
index de35d84..43e3dbe 100644
--- 
a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
+++ 
b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
@@ -464,7 +464,7 @@ public class TestJobHistoryEventHandler {
   t.appAttemptId, 200, t.containerId, nmhost, 3000, 4000),
   currentTime - 10));
   TimelineEntities entities = ts.getEntities(MAPREDUCE_JOB, null, null,
-  null, null, null, null, null, null);
+  null, null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   TimelineEntity tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -480,7 +480,7 @@ public class TestJobHistoryEventHandler {
  new HashMap<JobACL, AccessControlList>(), "default"),
   currentTime + 10));
   entities = ts.getEntities(MAPREDUCE_JOB, null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -498,7 +498,7 @@ public class TestJobHistoryEventHandler {
  new JobQueueChangeEvent(TypeConverter.fromYarn(t.jobId), "q2"),
   currentTime - 20));
   entities = ts.getEntities(MAPREDUCE_JOB, null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -520,7 +520,7 @@ public class TestJobHistoryEventHandler {
   new JobFinishedEvent(TypeConverter.fromYarn(t.jobId), 0, 0, 0, 0,
   0, new Counters(), new Counters(), new Counters()), 
currentTime));
   entities = ts.getEntities(MAPREDUCE_JOB, null, null, null,
-  null, null, null, null, null);
+  null, null, null, null, null, null);
   Assert.assertEquals(1, entities.getEntities().size());
   tEntity = entities.getEntities().get(0);
   Assert.assertEquals(t.jobId.toString(), tEntity.getEntityId());
@@ -546,7 +546,7 @@ public class TestJobHistoryEventHandler {
 new JobUnsuccessfulCompletionEvent(TypeConverter.fromYarn(t.jobId),
 0, 0, 0, JobStateInternal.KILLED.toString()), currentTime + 20));
   entities = ts.getEntities(MAPREDUCE_JOB, null, null, null,
-  null, 

[26/50] [abbrv] hadoop git commit: HADOOP-9477. Amendment to CHANGES.txt.

2015-03-17 Thread cmccabe
HADOOP-9477. Amendment to CHANGES.txt.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1eebd9c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1eebd9c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1eebd9c

Branch: refs/heads/HDFS-7836
Commit: d1eebd9c9c1fed5877ef2665959e9bd1485d080c
Parents: 03b77ed
Author: Yongjun Zhang yzh...@cloudera.com
Authored: Mon Mar 16 09:16:57 2015 -0700
Committer: Yongjun Zhang yzh...@cloudera.com
Committed: Mon Mar 16 09:16:57 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1eebd9c/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index e161d7d..a43a153 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -37,9 +37,6 @@ Trunk (Unreleased)
 
 HADOOP-11565. Add --slaves shell option (aw)
 
-HADOOP-9477. Add posixGroups support for LDAP groups mapping service.
-(Dapeng Sun via Yongjun Zhang)
-
   IMPROVEMENTS
 
 HADOOP-8017. Configure hadoop-main pom to get rid of M2E plugin execution
@@ -447,6 +444,9 @@ Release 2.8.0 - UNRELEASED
 HADOOP-11226. Add a configuration to set ipc.Client's traffic class with
 IPTOS_LOWDELAY|IPTOS_RELIABILITY. (Gopal V via ozawa)
 
+HADOOP-9477. Add posixGroups support for LDAP groups mapping service.
+(Dapeng Sun via Yongjun Zhang)
+
   IMPROVEMENTS
 
 HADOOP-11692. Improve authentication failure WARN message to avoid user



[28/50] [abbrv] hadoop git commit: HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)

2015-03-17 Thread cmccabe
HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/bf3275db
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/bf3275db
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/bf3275db

Branch: refs/heads/HDFS-7836
Commit: bf3275dbaa99105d49520e25f5a6eadd6fd5b7ed
Parents: ed4e72a
Author: Colin Patrick Mccabe cmcc...@cloudera.com
Authored: Mon Mar 16 12:02:10 2015 -0700
Committer: Colin Patrick Mccabe cmcc...@cloudera.com
Committed: Mon Mar 16 12:02:10 2015 -0700

--
 hadoop-common-project/hadoop-common/CHANGES.txt|  2 ++
 .../org/apache/hadoop/tracing/SpanReceiverHost.java| 13 ++---
 2 files changed, 12 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf3275db/hadoop-common-project/hadoop-common/CHANGES.txt
--
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt 
b/hadoop-common-project/hadoop-common/CHANGES.txt
index a43a153..aa17841 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -692,6 +692,8 @@ Release 2.7.0 - UNRELEASED
 HADOOP-11642. Upgrade azure sdk version from 0.6.0 to 2.0.0.
 (Shashank Khandelwal and Ivan Mitic via cnauroth)
 
+HADOOP-11714. Add more trace log4j messages to SpanReceiverHost (cmccabe)
+
   OPTIMIZATIONS
 
 HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/bf3275db/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
--
diff --git 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
index 01ba76d..f2de0a0 100644
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
@@ -134,20 +134,27 @@ public class SpanReceiverHost implements TraceAdminProtocol {
 String[] receiverNames =
 config.getTrimmedStrings(SPAN_RECEIVERS_CONF_KEY);
 if (receiverNames == null || receiverNames.length == 0) {
+  if (LOG.isTraceEnabled()) {
+LOG.trace("No span receiver names found in " +
+  SPAN_RECEIVERS_CONF_KEY + ".");
+  }
   return;
 }
 // It's convenient to have each daemon log to a random trace file when
 // testing.
 if (config.get(LOCAL_FILE_SPAN_RECEIVER_PATH) == null) {
-  config.set(LOCAL_FILE_SPAN_RECEIVER_PATH,
-  getUniqueLocalTraceFileName());
+  String uniqueFile = getUniqueLocalTraceFileName();
+  config.set(LOCAL_FILE_SPAN_RECEIVER_PATH, uniqueFile);
+  if (LOG.isTraceEnabled()) {
+LOG.trace("Set " + LOCAL_FILE_SPAN_RECEIVER_PATH + " to " + uniqueFile);
+  }
 }
 for (String className : receiverNames) {
   try {
 SpanReceiver rcvr = loadInstance(className, EMPTY);
 Trace.addReceiver(rcvr);
 receivers.put(highestId++, rcvr);
-LOG.info("SpanReceiver " + className + " was loaded successfully.");
+LOG.info("Loaded SpanReceiver " + className + " successfully.");
   } catch (IOException e) {
 LOG.error("Failed to load SpanReceiver", e);
   }
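The patch above wraps its new TRACE messages in `LOG.isTraceEnabled()` guards. A small self-contained sketch of why the guard matters (the flag and counter here are stand-ins for a real logging framework, not the commons-logging API):

```java
// Sketch of the isTraceEnabled() guard pattern: the guard skips the cost
// of building the log message (string concatenation, toString calls)
// entirely when TRACE logging is disabled, which is the common case.
public class TraceGuard {
  static boolean traceEnabled = false;   // stand-in for LOG.isTraceEnabled()
  static int buildCount = 0;             // counts expensive message builds

  static String buildMessage(String key) {
    buildCount++;
    return "No span receiver names found in " + key + ".";
  }

  static void maybeTrace(String key) {
    // Guarded: the message is only constructed when tracing is on.
    if (traceEnabled) {
      System.out.println(buildMessage(key));
    }
  }

  public static void main(String[] args) {
    maybeTrace("hadoop.htrace.spanreceiver.classes");
    System.out.println(buildCount); // 0: nothing built while tracing is off
    traceEnabled = true;
    maybeTrace("hadoop.htrace.spanreceiver.classes");
    System.out.println(buildCount); // 1
  }
}
```

Without the guard, an unconditional `LOG.trace("..." + value)` pays the concatenation cost on every call even when the message is discarded.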


